This guide draws in part from “Applying AI to Clinical Practice: Considerations, Barriers, and Opportunities” by Alexandra Tomei, M.Ed., BCBA, LBA (TX), LSSWB (BehaviorLive), and extends it with peer-reviewed research from our library of 27,900+ ABA research articles. Citations, clinical framing, and cross-links below are synthesized by Behaviorist Book Club.
View the original presentation →

Artificial intelligence technologies are rapidly transforming healthcare delivery, and applied behavior analysis is no exception. This course, presented by Alexandra Tomei, addresses the ethical and practical considerations for implementing AI tools in ABA practice, guided by the CASP Practice Parameters for Artificial Intelligence. As AI tools become more accessible and more powerful, behavior analysts face critical decisions about when and how to integrate these technologies into their clinical work — decisions that carry significant implications for service quality, clinical fidelity, and ethical practice.
The clinical significance of AI in ABA is multifaceted. On the opportunity side, AI tools have the potential to enhance clinical decision-making through pattern recognition in large datasets, streamline administrative workflows that consume practitioner time, support data analysis and visualization, assist with treatment plan development and progress monitoring, and extend the reach of clinical expertise to underserved populations. These capabilities, if implemented effectively, could address some of the field's most pressing challenges — including workforce shortages, variability in service quality, and the administrative burden that reduces time available for direct clinical work.
On the risk side, poorly implemented AI tools could compromise clinical judgment, introduce systematic biases into treatment planning, undermine the individualized, function-based approach that characterizes high-quality ABA, reduce the development of clinical expertise by creating dependency on algorithmic recommendations, and raise privacy and data security concerns. The stakes are high because AI-generated recommendations, unlike human clinical judgments, may carry an unwarranted aura of objectivity that discourages critical evaluation.
The CASP Practice Parameters for Artificial Intelligence provide a framework for navigating these considerations. This course explores how organizations can ensure that AI implementation aligns with ethical standards, how to identify and address barriers to responsible implementation, and how to leverage AI technologies in ways that enhance rather than replace clinical skill.
The integration of AI into healthcare is a broad trend that has accelerated significantly in recent years. Large language models, machine learning algorithms, computer vision, and natural language processing are being applied across medical specialties for tasks ranging from diagnostic support to treatment planning to administrative automation. ABA has begun to engage with these technologies, though the field's adoption has been comparatively cautious — reflecting both the profession's commitment to evidence-based practice and the unique challenges of applying AI to behavioral services.
Several categories of AI application are relevant to ABA practice. Clinical decision support tools use machine learning to analyze client data and suggest assessment or treatment modifications. Natural language processing tools can assist with treatment plan writing, progress note documentation, and report generation. Data analysis tools can identify patterns in behavioral data that might not be apparent through visual analysis alone. Administrative AI can streamline scheduling, billing, and authorization processes.
The CASP Practice Parameters for Artificial Intelligence — referenced in this course — provide practice-specific guidance for how behavior analysts should evaluate and implement AI tools. These parameters address questions of clinical fidelity, data privacy, practitioner competence, and the boundaries between AI-assisted and AI-directed clinical practice. They represent an important step toward establishing professional standards for AI use in behavioral services.
The current landscape includes both commercially available AI tools marketed to ABA providers and custom applications developed by individual organizations. The quality, validation, and ethical compliance of these tools vary widely, and practitioners face the challenge of evaluating AI claims without an established framework for doing so. This course provides that framework, helping practitioners distinguish between AI tools that genuinely enhance clinical practice and those that may compromise it.
The clinical implications of AI in ABA practice depend heavily on how these tools are implemented. When AI is used as a clinical support tool — providing data analysis, identifying patterns, and generating suggestions that practitioners evaluate critically — it has the potential to enhance decision-making without compromising clinical autonomy. When AI is positioned as a replacement for clinical judgment — generating treatment recommendations that practitioners implement without critical evaluation — the risks to individualized, function-based practice are significant.
For clinical decision-making, AI tools that analyze behavioral data can potentially identify trends and patterns that support more timely intervention modifications. However, behavior analysts must evaluate whether the patterns identified by AI are clinically meaningful, whether the data inputs are accurate and representative, and whether the AI's recommendations are consistent with function-based reasoning. AI tools trained on aggregate data may produce recommendations that are statistically typical but inappropriate for a specific client whose behavior is maintained by atypical variables.
For treatment plan development, AI-generated content raises questions about individualization and clinical fidelity. A treatment plan that is largely generated by AI may be technically coherent but may not reflect the nuanced understanding of the client that comes from direct assessment and ongoing clinical observation. Practitioners who rely on AI-generated treatment plans must ensure that the final product reflects their own clinical judgment and knowledge of the individual client.
For data analysis, AI tools offer genuine opportunities to enhance the precision and efficiency of behavioral data interpretation. Machine learning algorithms can process larger datasets and detect subtler patterns than visual analysis alone. However, behavior analysts must understand the limitations of these tools — including the potential for false positives, the influence of data quality on analysis accuracy, and the importance of clinical interpretation of statistical outputs.
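To make the data-analysis point concrete, here is a minimal sketch of the kind of trend check an AI analysis tool might automate: fitting a least-squares slope to session-level rates and flagging series that exceed a review threshold. Everything here is illustrative — the function names, the 0.5-per-session threshold, and the sample data are hypothetical, not drawn from any real ABA platform — and, as the paragraph above notes, a statistical flag is a prompt for clinical interpretation, not a conclusion.

```python
# Hypothetical sketch: flag trends in per-session behavior rates.
# Threshold and naming are illustrative, not from any real tool.

def least_squares_slope(values):
    """Slope of the best-fit line through (0, v0), (1, v1), ..."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def flag_trend(rates, threshold=0.5):
    """Flag a series whose per-session slope crosses a (hypothetical)
    review threshold in either direction; otherwise report 'stable'."""
    slope = least_squares_slope(rates)
    if abs(slope) >= threshold:
        direction = "increasing" if slope > 0 else "decreasing"
        return f"review: {direction} trend ({slope:.2f}/session)"
    return "stable"

# Example: target-behavior rates across six sessions
print(flag_trend([12, 11, 9, 8, 6, 5]))  # steady decrease -> flagged
print(flag_trend([7, 8, 7, 8, 7, 8]))    # bounce, no trend -> stable
```

A flagged series still requires the clinician to ask whether the data are accurate and representative and whether the change is clinically meaningful — exactly the false-positive and data-quality caveats described above.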
For administrative functions, AI has perhaps the least controversial and most immediately beneficial applications. Streamlining documentation, scheduling, and billing processes frees practitioner time for clinical work and reduces the administrative burden that contributes to burnout.

The ethical considerations for AI in ABA practice are extensive and interconnected with multiple elements of the BACB Ethics Code. Code 2.01 (evidence-based practice) requires that the tools and methods used in clinical practice be supported by evidence. For AI tools, this means evaluating whether the specific tool being considered has been validated for use with the relevant population and clinical application. Many AI tools marketed to ABA providers have not been subjected to independent validation, and marketing claims should not be accepted as evidence of effectiveness.
Code 1.05 (competence) requires that practitioners understand the tools they use well enough to evaluate their outputs critically. For AI tools, this means understanding the general principles of how the tool works, what data it uses, what assumptions it makes, and what limitations it has. Practitioners do not need to understand the technical details of machine learning algorithms, but they do need to understand enough to evaluate whether the tool's outputs are clinically appropriate for a specific client.
The CASP Practice Parameters emphasize that AI should enhance clinical skill rather than replace it. This principle is critical. If AI tools are implemented in ways that allow practitioners to skip the analytical reasoning that develops clinical expertise — simply accepting AI recommendations without understanding why they are made — the field risks producing practitioners who are dependent on AI tools and unable to function effectively without them.
Data privacy and security are significant ethical concerns. AI tools typically require access to client data, which raises questions about how that data is stored, transmitted, processed, and protected. The BACB Ethics Code's provisions regarding confidentiality apply to data shared with AI platforms, and practitioners must ensure that their use of AI tools does not compromise client privacy.
Informed consent should include information about the use of AI tools in clinical practice. Clients and families have a right to know when AI is being used to inform treatment decisions, what data is being shared with AI platforms, and how AI outputs are integrated with the practitioner's clinical judgment.
Finally, the potential for algorithmic bias is an ethical concern that deserves attention. AI systems trained on biased data will produce biased outputs. If the training data for an ABA-focused AI tool underrepresents certain populations, the tool's recommendations may be less accurate or appropriate for those populations. Practitioners should be aware of this potential and should evaluate AI recommendations with particular care for clients from underrepresented groups.
Organizations and practitioners considering AI implementation should conduct a thorough assessment before adopting any AI tool. This assessment should evaluate the tool's evidence base — has it been independently validated for the intended application and population? It should examine the data requirements — what client data does the tool need, how is that data processed, and what privacy protections are in place? It should evaluate the tool's transparency — can the practitioner understand why the tool produces specific recommendations, or is it a black box that provides outputs without explanation?
Decision-making about AI implementation should follow a structured process. First, identify the specific clinical or administrative need that the AI tool is intended to address. AI should solve a defined problem, not be adopted because it is available or trendy. Second, evaluate whether the AI tool addresses that need more effectively than existing alternatives. Third, assess the ethical implications of implementation, including privacy, consent, bias, and impact on clinical skill development. Fourth, plan the implementation with appropriate training, monitoring, and feedback mechanisms.
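The four-step process above can be sketched as a simple pre-adoption checklist. This is purely illustrative — the question wording and gating logic are hypothetical examples for organizational use, not an official CASP or BACB instrument.

```python
# Hypothetical pre-adoption checklist mirroring the four-step process:
# defined need, comparison to alternatives, ethics review, rollout plan.

CHECKLIST = [
    ("need", "Does the tool address a specific, defined clinical or administrative need?"),
    ("alternatives", "Does it address that need better than existing alternatives?"),
    ("ethics", "Have privacy, consent, bias, and clinical-skill risks been assessed?"),
    ("rollout", "Is there a plan for training, monitoring, and feedback?"),
]

def evaluate_tool(answers):
    """Return the unresolved checklist questions; adoption proceeds
    only when the list is empty. `answers` maps keys to True/False."""
    return [q for key, q in CHECKLIST if not answers.get(key, False)]

gaps = evaluate_tool({"need": True, "alternatives": True,
                      "ethics": False, "rollout": False})
for question in gaps:
    print("Unresolved:", question)
```

The design choice worth noting is that every step gates adoption: a tool that meets a real need but fails the ethics review stays unresolved, matching the sequence described above.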
When evaluating AI outputs in clinical practice, practitioners should apply the same critical thinking they would apply to any clinical recommendation. Does the AI recommendation align with functional assessment data? Is it consistent with the research literature? Does it account for the client-specific variables that the practitioner knows from direct clinical observation? When AI recommendations conflict with clinical judgment, the practitioner should analyze the discrepancy carefully — the AI may have identified a pattern the practitioner missed, or it may be producing an inappropriate recommendation based on incomplete or unrepresentative data.
Implementation fidelity is a critical consideration. AI tools that are used inconsistently, applied to inappropriate cases, or used without adequate training may produce poor outcomes that are attributed to the technology rather than to the implementation. Organizations should establish clear protocols for AI use, train practitioners on both the capabilities and limitations of the tools, and monitor outcomes systematically.
AI is coming to ABA practice whether individual practitioners embrace it or resist it. The question is not whether you will encounter AI tools in your clinical work, but whether you will be prepared to evaluate and use them ethically. Begin by developing AI literacy — understanding the general capabilities and limitations of AI tools, the ethical considerations for their use in healthcare, and the specific guidance provided by the CASP Practice Parameters and the BACB Ethics Code.
When evaluating AI tools for your practice or organization, prioritize evidence over marketing. Ask whether the tool has been independently validated, what data it requires and how that data is protected, whether the tool's recommendations are transparent and explainable, and whether it is designed to enhance clinical skill or replace it. Be skeptical of tools that claim to automate clinical judgment — the individualized, function-based reasoning that characterizes good ABA practice is not easily replicated by algorithms.
If you adopt AI tools, implement them as clinical supports rather than clinical replacements. Use AI-generated data analysis, treatment suggestions, and documentation drafts as starting points for clinical reasoning, not as final products. Maintain your independent clinical judgment and document the rationale for your decisions, regardless of whether they align with or diverge from AI recommendations.
Ensure that your use of AI tools is covered by informed consent. Clients and families should know when AI is being used to inform their treatment, what data is being shared with AI platforms, and how AI outputs interact with clinical judgment. Transparency builds trust and supports the therapeutic relationship.
Contribute to the field's understanding of AI in ABA by documenting your implementation experiences, evaluating outcomes, and sharing both successes and challenges. The field needs practice-based evidence to complement the technical development of AI tools, and practitioners who implement AI thoughtfully and evaluate it rigorously are essential contributors to that evidence base.
Finally, invest in developing the clinical skills that AI cannot replace. The ability to build therapeutic relationships, conduct nuanced functional assessments, make context-sensitive clinical judgments, and navigate ethical complexities remains fundamentally human. AI tools are most valuable when they support practitioners who have strong clinical foundations — not when they compensate for practitioners who do not.
Ready to go deeper? This course covers this topic in detail with structured learning objectives and CEU credit.
Applying AI to Clinical Practice: Considerations, Barriers, and Opportunities — Alexandra Tomei · 1 BACB Ethics CEU · $25
Take This Course →

We extended this guide with research from our library — dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.