These answers draw in part from “Ethical and Effective Use of AI/ML in Autism Service Provision” by Adam Hahs, PhD, BCBA-D, LBA (BehaviorLive), and extend it with peer-reviewed research from our library of 27,900+ ABA research articles. Clinical framing, BACB ethics code references, and cross-links below are synthesized by Behaviorist Book Club.
View the original presentation →

Current applications span several domains: automated behavioral coding from video using computer vision, data analysis tools that identify patterns in behavioral data, treatment recommendation systems that suggest interventions based on client profiles, progress monitoring dashboards with automated trend detection, natural language processing for analyzing session notes or communication samples, screening and diagnostic support tools that analyze developmental data, and administrative tools for scheduling and resource optimization. The maturity and evidence base of these applications vary significantly: some have robust validation, while others are in early development. Practitioners should evaluate each application individually rather than assuming that all AI tools are equally reliable.
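To make "automated trend detection" concrete, here is a minimal Python sketch of one way such a feature might work. The session counts and flagging threshold are hypothetical, and production tools typically use more robust methods (e.g., Tau-U or interrupted time-series analysis) than a raw least-squares slope.

```python
# Minimal sketch of automated trend detection on per-session data.
# All data and the -0.5 threshold are hypothetical illustrations.
import statistics


def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den


sessions = list(range(1, 11))                 # session index
counts = [14, 12, 13, 11, 9, 10, 8, 7, 7, 5]  # target behavior per session

slope = ols_slope(sessions, counts)
if slope < -0.5:  # hypothetical decision threshold
    print(f"Decreasing trend flagged (slope = {slope:.2f} per session)")
else:
    print(f"No trend flagged (slope = {slope:.2f} per session)")
```

A dashboard would run this kind of check automatically and surface the flag to the clinician, who then decides whether the trend is clinically meaningful.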
The CASP Values Statement is a document that identifies key principles and considerations for the ethical integration of AI and ML into autism service provision. It establishes values including transparency, clinical engagement, individualization, protection, accountability, balanced approach, and commitment to excellence. It matters for behavior analysts because it provides a framework for making decisions about technology adoption that goes beyond the general ethical principles in the BACB Ethics Code. The statement acknowledges that AI and ML technologies present unique ethical challenges that require specific guidance, and it offers practical principles that can inform organizational policy development and individual clinical decision-making.
Informed consent for AI and ML use should be specific and comprehensive. Clients and families should be told which technologies are being used, what data they collect and process, how their outputs inform clinical decisions, who has access to the data (including any third-party technology vendors), how data are stored and protected, and the client's right to decline technology use without losing access to services. This information should be presented in accessible language and revisited when new technologies are introduced or existing ones are modified. Consent should be ongoing rather than one-time, consistent with Code 2.11. Practitioners should document that informed consent was obtained and what information was provided.
Algorithmic bias occurs when AI and ML systems produce systematically different results for different demographic groups due to biases in the training data or algorithm design. In autism services, this could manifest as screening tools that are less accurate for girls, individuals from minority backgrounds, or those with co-occurring conditions. Treatment recommendation systems trained primarily on data from one demographic may produce inappropriate recommendations for clients with different profiles. Bias can also arise from underrepresentation of certain populations in training datasets. Practitioners should inquire about how AI tools were validated and for which populations, and should be particularly cautious when using tools with clients whose demographics differ from the validation sample.
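As a concrete illustration of how a practitioner or organization might audit for this kind of bias, the sketch below computes a screening tool's sensitivity separately for each demographic group in a validation sample. The records, group labels, and field names are hypothetical; a real audit would use much larger samples and also report specificity and confidence intervals.

```python
# Minimal sketch of a subgroup validation check for algorithmic bias.
# tool_positive = the screener flagged the child;
# truth_positive = the condition was independently confirmed.
from collections import defaultdict

records = [
    # (group, tool_positive, truth_positive); hypothetical data
    ("boys", True, True), ("boys", True, True), ("boys", False, False),
    ("boys", True, True), ("girls", False, True), ("girls", True, True),
    ("girls", False, True), ("girls", False, False),
]

hits = defaultdict(int)       # true positives per group
positives = defaultdict(int)  # condition-positive cases per group
for group, tool_pos, truth_pos in records:
    if truth_pos:
        positives[group] += 1
        if tool_pos:
            hits[group] += 1

for group in sorted(positives):
    sensitivity = hits[group] / positives[group]
    print(f"{group}: sensitivity = {sensitivity:.0%} (n = {positives[group]})")
# A large gap between groups (here 100% vs. 33%) signals that the tool
# may be systematically less accurate for one population.
```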
No. AI and ML tools can augment clinical judgment by processing large amounts of data, identifying patterns, and generating recommendations, but they cannot replace the nuanced, individualized reasoning that characterizes skilled clinical practice. AI systems lack the ability to understand the full context of a client's situation, including family dynamics, cultural factors, personal history, and the subtleties of the therapeutic relationship. The CASP Values Statement explicitly emphasizes clinical engagement, meaning that human practitioners must remain actively involved in all clinical decisions. AI outputs should be treated as one input to the decision-making process, not as authoritative recommendations that override clinical judgment.
A comprehensive organizational policy should address several areas: criteria for evaluating and approving AI and ML tools before adoption, informed consent procedures specific to technology use, data governance including collection, storage, access, and retention protocols, staff training requirements for using AI and ML tools, supervision and monitoring procedures for technology-assisted clinical activities, protocols for addressing technology failures or data breaches, vendor evaluation criteria including data security and algorithmic transparency requirements, procedures for reporting and investigating technology-related concerns, and regular review schedules for evaluating the continued appropriateness of adopted technologies. The policy should be developed with input from clinical staff, technology specialists, and ethics consultants.
When AI recommendations conflict with clinical judgment, the practitioner's assessment should take precedence. AI systems generate recommendations based on patterns in data, but they cannot account for all the contextual factors that a skilled clinician considers. The practitioner should document the AI recommendation, their clinical assessment, the reasons for the discrepancy, and the rationale for their final clinical decision. This documentation protects the practitioner and creates a record that can be used to evaluate the AI system's accuracy over time. If conflicts occur frequently, this may indicate that the AI tool is not well-suited to the clinical context or population being served.
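One way to operationalize this documentation is a structured conflict log from which an agreement rate can be computed over time. The sketch below is a hypothetical illustration; the field names and records are invented, not any particular system's schema.

```python
# Minimal sketch of a log of AI recommendations vs. clinical decisions.
from dataclasses import dataclass


@dataclass
class DecisionRecord:
    date: str
    ai_recommendation: str
    clinical_decision: str
    rationale: str  # why the clinician's assessment prevailed

    @property
    def agreed(self) -> bool:
        return self.ai_recommendation == self.clinical_decision


log = [  # hypothetical entries
    DecisionRecord("2024-03-01", "increase prompts", "increase prompts", "consistent"),
    DecisionRecord("2024-03-08", "fade reinforcement", "hold schedule", "recent illness skewed data"),
    DecisionRecord("2024-03-15", "fade reinforcement", "fade reinforcement", "consistent"),
]

agreement = sum(r.agreed for r in log) / len(log)
print(f"AI/clinician agreement: {agreement:.0%} across {len(log)} decisions")
# A persistently low agreement rate suggests the tool may not fit
# this clinical context or population.
```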
Critical data security considerations include encryption of data both in transit and at rest, access controls limiting who can view client data within AI systems, vendor security certifications and compliance with relevant regulations such as HIPAA, data minimization practices that limit collection to what is necessary, clear data retention and deletion policies, protocols for responding to data breaches, restrictions on using client data for purposes beyond the agreed-upon clinical application (such as algorithm training), and regular security audits of all technology systems. Organizations should evaluate the security practices of technology vendors as thoroughly as they evaluate the clinical utility of their products.
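To ground "encryption at rest" in something runnable, here is a minimal sketch using Fernet (symmetric, authenticated encryption) from the third-party `cryptography` package. The record content is hypothetical, and key handling is simplified for illustration; in production the key would live in a managed secrets store, never next to the data it protects.

```python
# Minimal sketch of encrypting a client record at rest.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # illustration only; never hard-code or co-store keys
fernet = Fernet(key)

record = b'{"client_id": "C-1042", "session_notes": "..."}'  # hypothetical record
token = fernet.encrypt(record)          # ciphertext safe to persist to disk
assert fernet.decrypt(token) == record  # round-trips back to the original
print("Record encrypted and decrypted successfully")
```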
Evaluation should examine several dimensions: the peer-reviewed research supporting the tool's effectiveness for the intended application, the populations on which the tool was validated and whether they match your client population, the tool's accuracy metrics including sensitivity, specificity, and error rates, independent evaluations not conducted by the tool's developers, the transparency of the algorithm and the ability to understand how it generates outputs, real-world implementation data from similar clinical settings, and ongoing monitoring data showing continued performance over time. Tools that lack published, peer-reviewed evidence should be used with extreme caution and only with appropriate informed consent and monitoring.
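For readers less familiar with the accuracy metrics named above, the short sketch below computes sensitivity, specificity, and overall error rate from hypothetical confusion-matrix counts of the kind a validation study would report:

```python
# Metrics from hypothetical validation counts (not from any real study).
tp, fn = 88, 12  # condition-positive cases: correctly detected / missed
tn, fp = 90, 10  # condition-negative cases: correctly cleared / false alarms

sensitivity = tp / (tp + fn)                  # true-positive rate: 88%
specificity = tn / (tn + fp)                  # true-negative rate: 90%
error_rate = (fp + fn) / (tp + fn + tn + fp)  # overall: 11%

print(f"sensitivity = {sensitivity:.0%}")
print(f"specificity = {specificity:.0%}")
print(f"error rate  = {error_rate:.0%}")
```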
Clients and families should be informed partners in decisions about AI and ML use in their care. This includes receiving clear information about what technologies are being considered, having the opportunity to ask questions and express concerns, providing informed consent that specifically addresses technology use, having the right to decline specific technologies without consequence to their services, receiving ongoing updates about how technology is being used and what it reveals about their care, and having access to mechanisms for reporting concerns about technology-related issues. For clients who cannot directly participate in these decisions, families and legal representatives should be fully engaged, and the client's known preferences and best interests should guide decision-making.
The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.
Ready to go deeper? The course below covers this topic with structured learning objectives and CEU credit.

Ethical and Effective Use of AI/ML in Autism Service Provision — Adam Hahs · 1 BACB Ethics CEU · $30
Take This Course →

We extended these answers with research from our library: dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.