These answers draw in part from “Smarter Coaching: Using AI to Support RBT and Paraeducator Training” by Sarah Heiniger, PhD, BCBA-D (BehaviorLive), and extend it with peer-reviewed research from our library of 27,900+ ABA research articles. Clinical framing, BACB ethics code references, and cross-links below are synthesized by Behaviorist Book Club.
View the original presentation →

The highest-utility AI applications for RBT and paraeducator supervision include: generating individualized SMART goals aligned with BACB task list competency areas, based on specific RBT performance profiles; creating first-draft coaching scripts for specific behavioral procedures that the supervisor reviews and personalizes; producing self-study materials — procedure summaries, knowledge checks, case scenarios — that supplement direct coaching between observation visits; summarizing performance-data trends when the BCBA provides structured data inputs; and generating training outlines or competency checklists for new-staff orientation. Each of these applications reduces administrative preparation time, freeing the BCBA's clinical attention for the judgment-based dimensions of supervision that AI cannot assist with.
Effective SMART goal prompts provide the AI with sufficient behavioral specificity to generate targeted rather than generic outputs. A high-quality prompt includes: the supervisee's role and experience level, the specific skill or task area being targeted, current performance data in behavioral terms (e.g., 'correctly implements extinction procedure on 3 of 5 observed opportunities'), the clinical context and population, and any relevant constraints (supervision frequency, available practice opportunities). Including task list item numbers when relevant increases the precision of the output. After generation, review the AI output against your direct knowledge of the RBT's performance profile — AI-generated goals are starting points that require clinical refinement.
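The prompt elements above can be assembled mechanically. As a minimal illustrative sketch (not part of the original presentation — the field names, prompt wording, and the task list item number are all assumptions to adapt to your own supervision workflow):

```python
# Illustrative sketch of a structured SMART-goal prompt builder.
# Field names and prompt wording are assumptions, not a prescribed format.

def build_smart_goal_prompt(profile: dict) -> str:
    """Assemble a SMART-goal prompt from a structured supervisee profile."""
    lines = [
        f"Role and experience: {profile['role']}",
        f"Target skill area: {profile['skill_area']}",
        f"Current performance (behavioral terms): {profile['performance']}",
        f"Clinical context and population: {profile['context']}",
        f"Constraints: {profile['constraints']}",
    ]
    if profile.get("task_list_item"):  # optional, increases output precision
        lines.append(f"Relevant RBT task list item: {profile['task_list_item']}")
    lines.append(
        "Draft one SMART goal for this supervisee. State the behavior, "
        "criterion, measurement method, and timeline explicitly."
    )
    return "\n".join(lines)

prompt = build_smart_goal_prompt({
    "role": "RBT, 6 months experience",
    "skill_area": "extinction procedure implementation",
    "performance": "correctly implements extinction on 3 of 5 observed opportunities",
    "context": "clinic-based early intervention, ages 3-5",
    "constraints": "weekly 30-minute observation; daily practice opportunities",
    "task_list_item": "C-11",  # item number illustrative
})
print(prompt)
```

Keeping the inputs structured this way makes it easy to spot when a prompt is missing the behavioral specificity the answer above calls for; the generated goal still requires review against your direct knowledge of the RBT.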
The primary confidentiality risk is submitting identifiable client or supervisee information to AI systems that process data through cloud-based infrastructure without appropriate data handling protections. BACB Code 2.04 and Code 2.05 require protection of client information, and many AI platforms do not provide the data handling guarantees required for identifiable clinical information. BCBAs should de-identify all data before AI submission — removing client names, identifiers, location information, and any other elements that could identify an individual. Supervision applications should describe scenarios and performance patterns without identifying specific clients. BCBAs should review the data handling and privacy policies of any AI platform before using it for supervision-related tasks.
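As a minimal sketch of the mechanical part of the de-identification step described above — illustrative only, since simple find-and-replace is not sufficient de-identification on its own and every output still needs human review before submission (all names and placeholders below are invented):

```python
# Illustrative only: string substitution is one mechanical step inside a
# human de-identification review, NOT a complete safeguard by itself.
import re

def redact(text: str, identifiers: dict) -> str:
    """Replace known identifying strings with neutral placeholders."""
    for literal, placeholder in identifiers.items():
        # Word-boundary match so partial words are left alone.
        text = re.sub(rf"\b{re.escape(literal)}\b", placeholder, text)
    return text

note = "Jordan ran the extinction procedure with client Mia at the Elm Street clinic."
clean = redact(note, {
    "Jordan": "[RBT]",
    "Mia": "[CLIENT]",
    "Elm Street": "[LOCATION]",
})
print(clean)
# -> [RBT] ran the extinction procedure with client [CLIENT] at the [LOCATION] clinic.
```

The word-boundary match is a deliberate choice: it avoids mangling words that merely contain an identifier, but it also means misspellings, nicknames, and indirect identifiers (dates, diagnoses, unusual details) will pass through untouched, which is exactly why the BCBA's review remains the actual safeguard.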
This is an emerging ethical question in the field. BACB Code 1.01's integrity requirement and Code 4.04's genuine development obligations suggest that transparency about AI use in supervision is consistent with the spirit of the Ethics Code. Supervisees have a legitimate interest in understanding how their supervision content was developed, particularly if AI-generated content were to contain errors. Disclosure does not need to be formal or extensive — 'I used an AI tool to generate a first draft of these goals, which I reviewed and revised' is sufficient. What is not consistent with integrity is representing AI-generated content as entirely the product of the BCBA's own judgment, particularly for content that shapes supervisee development.
No. Direct observation is irreplaceable as a supervision tool, and no AI system currently available can substitute for it. Direct observation provides real-time information about implementation fidelity, staff-client interaction quality, environmental variables affecting performance, and the interpersonal dynamics that influence coaching effectiveness. AI tools can reduce the preparation time associated with observations, help analyze data collected during observations, and support the development of feedback following observations. But the observation itself — watching a human implement a behavioral procedure with a client, noting what is accurate and what needs correction — requires a trained human clinician. BACB supervision requirements specify direct observation minimums for this reason.
Evaluating AI-generated coaching content requires applying the same quality standards you would apply to content from any source. Assess whether behavioral procedures are described accurately and in sufficient procedural detail. Verify that the terminology is correct and consistent with your agency's established language for these procedures. Check that feedback suggestions are specific and behavioral rather than evaluative and vague. Identify any clinical inaccuracies — misapplied behavioral principles, incorrect procedure steps, overgeneralized recommendations — and correct them before use. Treat AI output as a capable but fallible first draft that requires expert clinical review, not as an authoritative clinical resource.
Paraeducators in school settings present a somewhat different supervision context than clinic-based RBTs. They typically have less formal training in behavioral principles, operate in more complex multi-student environments, receive even less direct coaching support, and have a different professional identity framework. AI applications for paraeducators are particularly valuable for producing highly visual, plain-language procedure summaries that can be referenced during implementation without extensive technical background. SMART goal frameworks for paraeducators should be adapted to their professional context — educational rather than clinical terminology — while maintaining behavioral specificity. The limited direct coaching time available for paraeducators makes AI-generated self-study materials especially high-value in this context.
Correct it and do not use the incorrect version. AI systems can produce confident-sounding content that contains factual errors, misapplied behavioral principles, or procedural inaccuracies. This is a known limitation of current AI technology. When you identify an error in AI-generated supervision content, correct it before use, note what type of error it was so you can watch for similar errors in future outputs, and adjust your review process if a particular type of content consistently requires correction. Under no circumstances should AI-generated clinical inaccuracies reach supervisees without correction — the BCBA bears full responsibility for the quality of all supervision content, regardless of its source.
Competence in ethical AI use for supervision requires developing several skill sets: understanding the technical capabilities and limitations of current AI tools, including their tendency to produce confident but sometimes inaccurate outputs; developing effective prompt-writing skills through deliberate practice; establishing personal and organizational policies for data privacy and confidentiality; and maintaining critical evaluation skills for AI-generated content. Professional development in this area is available through conference presentations, continuing education courses, and self-directed study of AI ethics literature from adjacent fields. BCBAs should approach AI tool adoption with the same evidence-based orientation they apply to any new clinical tool: learn the evidence, understand the limitations, establish explicit protocols, and monitor outcomes.
Heiniger's framework supports the judicious and ethically grounded adoption of AI tools for specific, well-defined supervision tasks where the efficiency benefit is real and the clinical oversight is robust. The recommendation is not to adopt AI wholesale or to avoid it entirely, but to assess each potential application against a clear ethical framework: What is the specific task? What is the confidentiality risk? What is the quality review process? What is the AI doing versus what is the BCBA doing? Applications that reduce administrative preparation time while maintaining BCBA clinical oversight and judgment are appropriate. Applications that substitute AI judgment for BCBA clinical responsibility are not. With clear policies and genuine oversight, AI can be a meaningful tool for sustaining and improving coaching quality in under-resourced supervision contexts.
The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.
Ready to go deeper? This course covers this topic with structured learning objectives and CEU credit.
Smarter Coaching: Using AI to Support RBT and Paraeducator Training — Sarah Heiniger · 1 BACB Supervision CEU · $20
Take This Course →

We extended these answers with research from our library — dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.
You earn CEUs from a dozen different places. Upload any certificate — from here, your employer, conferences, wherever — and always know exactly where you stand. Learning, Ethics, Supervision, all handled.
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research, individualized assessment, and obtained with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.