These answers draw in part from “Pause Before Proceeding: Ethical Considerations Around the Clinical Use of Artificial Intelligence (AI) and Machine Learning (ML)” by Rebecca Womack, MS, BCBA, LBA (BehaviorLive), and extend it with peer-reviewed research from our library of 27,900+ ABA research articles. Clinical framing, BACB ethics code references, and cross-links below are synthesized by Behaviorist Book Club.
The recommendation to pause reflects a precautionary approach to technologies that can directly influence clinical decisions affecting vulnerable populations. AI tools in ABA may affect treatment recommendations, data interpretation, and the therapeutic relationship. Unlike administrative tools that primarily affect efficiency, clinical AI tools create risks including algorithmic bias, erosion of clinical judgment, data privacy exposure, and accountability ambiguity. The BACB Ethics Code (2022) requires competent, evidence-based, and individually appropriate practice. Pausing allows practitioners to evaluate whether specific AI tools meet these requirements before client care is affected. This is not anti-technology; it is pro-responsibility.
The Ethics Code's principles are written broadly enough to apply to emerging technologies, including AI. Standard 1.05 (Practicing within Scope of Competence) applies to the practitioner's ability to evaluate AI tools. Standard 2.01 (Providing Effective Treatment) requires that treatment decisions be evidence-based, which extends to evaluating the evidence for AI tools. Standard 2.06 (Maintaining Confidentiality) applies to data processed by AI systems. Standard 2.13 (Accuracy in Billing and Reporting) applies to AI-generated documentation. The absence of explicit AI language does not mean the Code is silent; practitioners must apply general principles to specific technological contexts.
Machine learning models identify patterns in aggregate data and generate recommendations based on what has worked for similar cases. This stands in fundamental tension with ABA's commitment to individualized, single-subject design approaches. A model trained on thousands of cases may recommend an intervention that was effective on average but is inappropriate for a specific client due to unique cultural factors, co-occurring conditions, environmental variables, or family preferences that the model cannot capture. Practitioners who defer to ML recommendations without applying their individualized knowledge of the client risk providing population-level care to individuals who need personalized attention.
Look for peer-reviewed research conducted independently of the tool's developer. Examine whether the tool has been validated with populations similar to your clients in terms of age, diagnosis, cultural background, and functional level. Review the tool's documented error rates and determine whether those rates are acceptable for clinical use. Ask the vendor whether the tool has been tested in real clinical settings or only in controlled research environments. Be skeptical of marketing materials that cite only internal company data or anecdotal testimonials. If no independent validation exists, consider the tool experimental and proceed with heightened caution and additional safeguards.
Clinical judgment develops through the iterative process of analyzing data, forming hypotheses, testing interventions, observing outcomes, and refining understanding over thousands of clinical interactions. When AI tools perform data analysis and generate recommendations, less experienced practitioners may not develop the foundational reasoning skills this process builds. They may accept AI outputs uncritically because they lack the confidence or experience to question them. Over time, this creates practitioners who are dependent on technology for clinical reasoning rather than using technology to enhance reasoning they have independently developed. Supervisors should ensure trainees develop strong independent analysis skills before introducing AI assistance.
Essential questions include: Where is client data stored and in what jurisdiction? Who within the vendor organization has access to client data? Is client data used to train or improve the vendor's AI models? What encryption methods are used during data transmission and storage? What happens to client data if I discontinue the service? What happens to client data if the vendor is acquired, goes bankrupt, or changes ownership? Does the vendor have a HIPAA Business Associate Agreement? Has the vendor undergone independent security audits? Can data be deleted upon request? Unsatisfactory answers to any of these questions should give you serious pause about adoption.
Communication should be transparent, accessible, and ongoing. Explain in non-technical language what AI tools you use, what they do, what data they process, and how their outputs influence clinical decisions. Emphasize that a qualified human professional reviews all AI outputs and makes final clinical decisions. Provide families with the opportunity to ask questions and to decline AI involvement in their child's care. Include AI-related information in your informed consent documents and revisit the conversation when new tools are introduced or existing tools change. Families have a right to understand every component of their child's care, including the technological components.
Algorithmic bias occurs when an AI system produces systematically unfair outputs for certain demographic groups, typically because the training data reflects existing societal or clinical biases. In ABA, this could manifest as treatment recommendations that are less effective for clients from underrepresented racial or cultural groups, diagnostic predictions that are less accurate for certain populations, or documentation tools that mischaracterize culturally influenced behaviors. Because these biases are embedded in the algorithm rather than expressed explicitly, they can be difficult to detect without deliberate monitoring. Standard 1.07 (Cultural Responsiveness and Diversity) of the BACB Ethics Code (2022) creates an obligation to evaluate tools for equitable performance across populations.
Yes. The BACB Ethics Code (2022) places responsibility for clinical decisions and documentation on the behavior analyst, regardless of what tools were used to inform those decisions. If an AI system produces an erroneous treatment recommendation that you implement, you bear professional and ethical responsibility for the outcome. If an AI system generates inaccurate documentation that you sign, you bear responsibility for the inaccuracies. There is no ethical or legal framework that transfers clinical responsibility from a licensed practitioner to a technology tool. This makes careful review of all AI outputs a non-negotiable requirement of ethical practice.
The course recommends several practices: conduct a thorough ethical evaluation before adopting any AI tool, including evidence review, data privacy assessment, and stakeholder impact analysis. Update informed consent to address AI use. Maintain independent clinical skills by regularly analyzing data and making decisions without AI assistance. Monitor AI outputs for signs of bias or inaccuracy. Establish review protocols that ensure human oversight of all AI-generated recommendations and documentation. Seek ongoing professional development in AI literacy. Engage with professional organizations to contribute to the development of field-specific AI guidelines. Above all, maintain the disposition of pausing to think ethically before acting technologically.
The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.
Ready to go deeper? This course covers this topic with structured learning objectives and CEU credit.
Pause Before Proceeding: Ethical Considerations Around the Clinical Use of Artificial Intelligence (AI) and Machine Learning (ML) — Rebecca Womack · 1 BACB Ethics CEU · $20
We extended these answers with research from our library — dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.
You earn CEUs from a dozen different places. Upload any certificate — from here, your employer, conferences, wherever — and always know exactly where you stand. Learning, Ethics, Supervision, all handled.
No credit card required. Cancel anytime.
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.