These answers draw in part from “Considerations for Ethically Implementing AI to Advance Clinical Skills” by Alexandra Tomei, M.Ed., BCBA, LBA (TX), LSSWB (BehaviorLive), and extend it with peer-reviewed research from our library of 27,900+ ABA research articles. Clinical framing, BACB ethics code references, and cross-links below are synthesized by Behaviorist Book Club.
View the original presentation →
Current AI tools for ABA fall into several categories: documentation assistants that help draft session notes and treatment plans, data visualization and analysis tools that identify patterns in behavioral data, scheduling optimization platforms, clinical decision support systems that suggest intervention modifications, and general-purpose AI assistants used informally for writing and research. The sophistication and validation of these tools vary enormously, from peer-reviewed clinical platforms to consumer chatbots repurposed for clinical use without formal evaluation.
Using consumer AI tools for clinical documentation carries significant risks. Client information shared with these tools may not be protected to HIPAA standards, as the AI provider may store, process, or use the data for model training. AI-generated treatment plans may contain plausible but inaccurate clinical content that misrepresents the client's situation. If you use AI for drafting assistance, never include identifiable client information, verify all clinical content against actual client data, and substantially revise the output to ensure it reflects your genuine clinical reasoning. Check your organization's policy before using any AI tool with clinical content.
The CASP Practice Parameters provide a framework for organizations to evaluate, pilot, and implement AI tools in ABA service delivery. They address product evaluation criteria, stakeholder engagement, pilot design, outcome monitoring, and fidelity assessment. The parameters emphasize that AI should enhance clinician skill rather than replace clinical judgment, that data security must be maintained, and that any AI implementation should be monitored for both benefits and unintended consequences. They represent the field's first organized guidance on AI integration.
The primary ethical risks include confidentiality breaches when client data is shared with AI systems, clinical errors when AI-generated recommendations are accepted without adequate human review, algorithmic bias when AI tools trained on biased data produce biased recommendations, deskilling when practitioners defer to AI rather than maintaining their own clinical competencies, and lack of transparency when AI systems make recommendations through processes that cannot be explained or audited. Each of these risks is manageable with appropriate safeguards but dangerous without them.
When AI tools process client data or contribute to clinical decisions, there is an ethical argument for disclosing this to families as part of the informed consent process. Families have a right to know how their information is being used and what role technology plays in their treatment. The specific disclosure requirements are not yet established in the Ethics Code, but transparency about AI use aligns with the code's principles. At minimum, organizations should be prepared to answer family questions about AI use honestly and completely.
Evaluate AI tools on multiple dimensions: clinical validity (does the tool produce accurate, clinically appropriate outputs?), data security (where is data stored and who has access?), transparency (is the tool's methodology documented?), evidence base (has the tool been validated through peer-reviewed research?), integration (does it fit within existing workflows?), and regulatory compliance (does it meet HIPAA and relevant standards?). Be skeptical of marketing claims and request evidence of validation. If a tool lacks published validation data, treat it as experimental and pilot it carefully.
Skill enhancement means the AI handles routine or computational tasks while the practitioner focuses on higher-order clinical reasoning. For example, an AI tool that generates a first draft of session notes based on data entries, which the practitioner then reviews, modifies, and supplements with clinical interpretation, enhances efficiency without replacing judgment. Skill replacement means the AI generates clinical outputs that the practitioner approves without substantive engagement. For example, approving an AI-generated treatment plan without reviewing it against the client's actual data replaces rather than enhances clinical thinking.
Effective pilots include: a defined objective with measurable success criteria, a small group of trained users, a time-limited implementation period, baseline data collected before the pilot begins, ongoing monitoring of both intended benefits and unintended consequences, a fidelity check to ensure the tool is being used as designed, client privacy protections, and a structured evaluation at the pilot's conclusion. Results should inform a data-driven decision about whether to expand, modify, or discontinue use of the tool.
Practitioners need training in several areas: understanding the specific AI tool's capabilities and limitations, proper procedures for data input that protect client privacy, how to evaluate AI-generated outputs for accuracy and appropriateness, when to override AI recommendations based on clinical judgment, the ethical standards governing AI use including confidentiality and informed consent, and how to maintain clinical reasoning skills alongside AI use. This training should be provided before practitioners begin using any AI tool in clinical contexts.
Current AI technology cannot replace the clinical judgment, ethical reasoning, therapeutic relationship, and contextual sensitivity that BCBAs bring to their work. AI excels at pattern recognition in large data sets, routine document generation, and optimization problems. It does not understand the human experience of having a child with autism, cannot navigate the ethical nuances of individual cases, and cannot provide the compassionate, responsive presence that therapeutic relationships require. AI will change what BCBAs do by automating routine tasks, but the core clinical competencies will remain uniquely human for the foreseeable future.
The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.
Ready to go deeper? The course below covers this topic with structured learning objectives and CEU credit.
Considerations for Ethically Implementing AI to Advance Clinical Skills — Alexandra Tomei · 1 BACB Ethics CEU · $20
Take This Course →
We extended these answers with research from our library — dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.
You earn CEUs from a dozen different places. Upload any certificate — from here, your employer, conferences, wherever — and always know exactly where you stand. Learning, Ethics, Supervision, all handled.
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.