By Matt Harrington, BCBA · Behaviorist Book Club · Research-backed answers for behavior analysts
Several categories of AI tools are relevant to ABA. Clinical decision support tools use machine learning to analyze behavioral data and suggest treatment modifications. Natural language processing tools assist with treatment plan writing, progress note documentation, and report generation. Data analysis tools identify patterns in behavioral data using statistical and machine learning methods. Administrative AI streamlines scheduling, billing, authorization tracking, and other operational tasks. The maturity and validation of these tools vary widely. Administrative AI applications tend to be the most straightforward and least ethically complex. Clinical decision support tools raise the most significant ethical considerations because they directly influence treatment decisions.
The CASP (Council of Autism Service Providers) Practice Parameters for Artificial Intelligence provide guidelines for how ABA organizations should evaluate and implement AI tools. Key principles include that AI should enhance rather than replace clinical skill, that practitioners should maintain clinical oversight of AI-generated recommendations, that data privacy and security must be protected, and that AI tools should be validated for the specific populations and applications in which they are used. These parameters represent an important step toward establishing professional standards for AI use in ABA, though they are relatively new and the field's understanding of best practices for AI implementation continues to evolve.
BCBAs should evaluate AI tools using several criteria. Evidence: Has the tool been independently validated for the intended application and population? Transparency: Can the practitioner understand why the tool produces specific recommendations? Privacy: How is client data handled, stored, and protected? Bias: Has the tool been evaluated for algorithmic bias that could affect recommendations for certain populations? Clinical fit: Does the tool support individualized, function-based practice or does it promote standardized recommendations? Skill development: Does the tool enhance the practitioner's clinical reasoning or create dependency? Marketing claims should not be accepted as evidence. Look for peer-reviewed validation studies, independent reviews, and documented outcomes from organizations that have implemented the tool.
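The criteria above can be captured in a structured rubric so that evaluations are documented consistently across tools. The sketch below is purely illustrative: the 0–2 scoring scale, the `ToolEvaluation` class, and its field names are hypothetical conventions, not part of any published standard.

```python
# Illustrative sketch of a structured rubric for documenting an AI tool
# evaluation. The criteria mirror those discussed above; the 0-2 scoring
# scale and record structure are hypothetical.
from dataclasses import dataclass, field

CRITERIA = ("evidence", "transparency", "privacy",
            "bias", "clinical_fit", "skill_development")

@dataclass
class ToolEvaluation:
    tool_name: str
    scores: dict = field(default_factory=dict)  # criterion -> (score, note)

    def rate(self, criterion: str, score: int, note: str = "") -> None:
        if criterion not in CRITERIA or score not in (0, 1, 2):
            raise ValueError("unknown criterion or score outside 0-2")
        self.scores[criterion] = (score, note)

    def is_complete(self) -> bool:
        # Every criterion must be rated before an adoption decision.
        return all(c in self.scores for c in CRITERIA)

    def concerns(self) -> list:
        # Any criterion scored 0 warrants follow-up before adoption.
        return [c for c, (score, _) in self.scores.items() if score == 0]
```

For example, rating "evidence" as 0 when only marketing claims are available surfaces that gap through `concerns()`, prompting a search for peer-reviewed validation before the evaluation is considered complete.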
The risks are significant. AI tools trained on aggregate data may produce recommendations that are statistically typical but inappropriate for individual clients whose behavior is maintained by atypical variables. Practitioners who routinely defer to AI recommendations without critical evaluation may experience deskilling — a reduction in their capacity for independent clinical reasoning. The individualized, function-based approach that characterizes high-quality ABA requires nuanced judgment that considers variables (family dynamics, cultural context, client preferences, environmental subtleties) that AI tools may not capture. The CASP Practice Parameters emphasize that AI should be a tool in the clinician's hands, not a replacement for the clinician. This principle should guide all implementation decisions.
Several Ethics Code elements are directly relevant. Code 2.01 (evidence-based practice) requires that AI tools be evaluated for evidence of effectiveness before adoption. Code 1.05 (competence) requires that practitioners understand the tools they use well enough to evaluate their outputs critically. Confidentiality provisions require that client data shared with AI platforms be protected appropriately. Informed consent provisions require that clients be informed about the use of AI in their treatment. The Ethics Code's overarching emphasis on client welfare provides the framework for all AI implementation decisions: does this tool serve the client's interests, and does its implementation maintain the quality and individualization of care?
Informed consent should include disclosure that AI tools are being used as part of clinical practice, a general explanation of how the tools are used (e.g., data analysis, documentation support, treatment planning assistance), information about what client data is shared with the AI platform and how it is protected, clarification that AI recommendations are reviewed and evaluated by the practitioner before being implemented, and the client's or family's right to ask questions about AI use and to request that specific tools not be used in their care. The level of detail should be appropriate for the client's or family's understanding, and the practitioner should be prepared to answer questions about how AI influences treatment decisions.
Organizations should establish clear protocols that define when and how AI tools are used, train practitioners on both the capabilities and limitations of the tools, require clinical review of all AI-generated recommendations before implementation, monitor outcomes systematically to evaluate whether AI-assisted decisions produce better or different results than standard practice, and create feedback mechanisms that allow practitioners to report concerns about AI recommendations. Implementation fidelity should be assessed regularly — evaluating whether practitioners are using the tools as intended, whether they are maintaining their independent clinical judgment, and whether the tools are producing the expected benefits.
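One fidelity metric mentioned above, whether AI-generated recommendations actually receive clinical review before implementation, can be tracked with a simple calculation over the organization's recommendation log. The record fields below (`implemented`, `review_documented`) are hypothetical; any comparable audit fields would work.

```python
# Illustrative sketch: one simple implementation-fidelity metric, the
# proportion of implemented AI recommendations with a documented
# clinical review. The log entry fields are hypothetical.

def clinical_review_rate(recommendations: list) -> float:
    """Fraction of implemented AI recommendations that were reviewed."""
    implemented = [r for r in recommendations if r.get("implemented")]
    if not implemented:
        return 0.0
    reviewed = sum(1 for r in implemented if r.get("review_documented"))
    return reviewed / len(implemented)

# Hypothetical audit log
log = [
    {"implemented": True, "review_documented": True},
    {"implemented": True, "review_documented": False},
    {"implemented": False, "review_documented": False},
]
print(clinical_review_rate(log))  # 0.5
```

A rate well below 1.0 would indicate that the required-review protocol is not being followed as intended and that retraining or workflow changes are needed.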
Algorithmic bias occurs when AI systems produce systematically skewed outputs because of biases in their training data, design, or implementation. In ABA, this could manifest as AI tools that produce less accurate recommendations for clients from underrepresented populations, that reflect the biases present in the clinical records used to train them, or that prioritize outcomes valued by one cultural group over another. BCBAs should be concerned because algorithmic bias can perpetuate and amplify existing disparities in service quality. Practitioners should evaluate AI recommendations with particular care for clients from populations that may be underrepresented in the AI tool's training data, and should advocate for AI developers to address bias through diverse training data and rigorous validation across populations.
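One concrete way to act on this concern is a disaggregated audit: compare the tool's recommendations against clinically validated decisions separately for each client subgroup, rather than looking only at overall accuracy. The sketch below is a minimal illustration with hypothetical data and field names, not a complete fairness evaluation.

```python
# Illustrative sketch: auditing an AI tool's recommendations for
# subgroup performance gaps. All records and field names are
# hypothetical; a real audit would use the organization's own data.

def subgroup_accuracy(records, group_key):
    """Accuracy of AI recommendations, computed per subgroup.

    Each record holds a subgroup label, the AI tool's recommendation,
    and the clinically validated decision it is compared against.
    """
    totals, correct = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        if r["ai_recommendation"] == r["validated_decision"]:
            correct[g] = correct.get(g, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

# Hypothetical audit data for two subgroups
records = [
    {"group": "A", "ai_recommendation": "plan1", "validated_decision": "plan1"},
    {"group": "A", "ai_recommendation": "plan2", "validated_decision": "plan2"},
    {"group": "B", "ai_recommendation": "plan1", "validated_decision": "plan2"},
    {"group": "B", "ai_recommendation": "plan2", "validated_decision": "plan2"},
]
print(subgroup_accuracy(records, "group"))  # {'A': 1.0, 'B': 0.5}
```

A large gap between subgroups, as in this toy example, would be exactly the kind of signal that should trigger closer clinical scrutiny of the tool's recommendations for the lower-performing group.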
BCBAs can maintain and develop clinical skills alongside AI use by treating AI outputs as inputs into their own clinical reasoning rather than as final recommendations, by actively engaging in the analytical process even when AI tools provide shortcuts, by regularly practicing clinical skills (functional analysis, visual data analysis, treatment planning) without AI assistance, by seeking supervision and consultation that focuses on clinical reasoning development, and by documenting their own clinical rationale for decisions — not just the AI recommendation. The goal is to use AI as a tool that extends human capability rather than replaces it — similar to how a calculator extends mathematical ability without replacing the understanding of mathematical principles that makes it useful.
Responsible AI implementation involves several strategies. Start with low-risk applications (administrative tasks) before moving to clinical applications. Validate tools for your specific population and context before broad deployment. Train all users on the tool's capabilities, limitations, and ethical considerations. Maintain human oversight of all AI-influenced clinical decisions. Collect and analyze outcome data to evaluate whether the AI tool is producing the expected benefits. Establish clear escalation procedures for when AI recommendations seem inappropriate. Review and update AI-related policies regularly as the technology and understanding evolve. Organizations should also designate a point person or committee responsible for evaluating AI tools, monitoring implementation, and staying current with evolving professional standards and guidance.
The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.
Ready to go deeper? This course covers this topic with structured learning objectives and CEU credit.
Applying AI to Clinical Practice: Considerations, Barriers, and Opportunities — Alexandra Tomei · 1 BACB Ethics CEU · $25
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.