These answers draw in part from “Exploring the Use and Implications of Artificial Intelligence in the Practice of Behavior Analysis” by Sara Gershfeld, BCBA (BehaviorLive), and extend it with peer-reviewed research from our library of 27,900+ ABA research articles. Clinical framing, BACB ethics code references, and cross-links below are synthesized by Behaviorist Book Club.
View the original presentation →

The BACB Ethics Code for Behavior Analysts (2022) does not explicitly address artificial intelligence or machine learning. However, its general principles and standards provide a framework for navigating AI-related ethical issues. Standard 2.01 (Providing Effective Treatment) requires reliance on scientific evidence, which extends to evaluating the evidence base for AI tools. Standard 2.06 (Maintaining Confidentiality) applies to data processed by AI systems. Standard 1.05 (Practicing within Scope of Competence) requires that practitioners understand the tools they use. Until more specific guidance is issued, behavior analysts must extrapolate from these general provisions to develop their own ethical framework for AI use.
AI tools in ABA span several categories. Data analysis tools use machine learning to identify patterns in behavioral data and predict skill acquisition trajectories. Documentation tools use natural language processing to generate session notes from audio recordings or structured inputs. Treatment recommendation engines suggest goals or interventions based on population-level outcome data. Computer vision systems are being explored for automated behavior coding from video. Administrative tools automate scheduling, insurance authorization, and billing processes. The evidence base for these tools varies significantly, and practitioners should evaluate each tool individually rather than assuming all AI products deliver on their marketing claims.
Behavior analysts must conduct due diligence on any AI tool that processes client data. This includes understanding where data is transmitted and stored, who has access, whether data is used to train the vendor's AI models, and what data retention and deletion policies apply. HIPAA compliance is a minimum requirement but may not address all AI-specific risks. Practitioners should review vendor privacy policies and terms of service, ask vendors direct questions about their data practices, and consider whether clients need to provide specific consent for AI processing of their data. The BACB Ethics Code (2022) section 2.06 on confidentiality applies to all methods of data handling, including AI systems.
Algorithmic bias occurs when an AI system produces systematically unfair or inaccurate outputs for certain groups of people. This typically arises when the data used to train the AI reflects existing disparities in service delivery, such as under-representation of certain racial, linguistic, or socioeconomic groups. In ABA, algorithmic bias could result in treatment recommendations that are less effective for underrepresented populations, documentation that mischaracterizes culturally influenced behaviors, or progress predictions that are systematically less accurate for some clients. The BACB Ethics Code (2022) requirement for cultural responsiveness (1.07) extends to evaluating whether the tools you use produce equitable results.
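The mechanism described above can be sketched with a toy, fully invented example. Here, a hypothetical screening "model" (a single score cutoff) is fit only on data from one group, then applied to a second group whose members express the same underlying classes at shifted score levels. All numbers and group definitions are made up for illustration; no real assessment instrument or dataset is implied.

```python
# Hypothetical illustration of algorithmic bias: a cutoff learned from
# one group's data is systematically less accurate for a second group
# whose scores are distributed differently.

def fit_cutoff(scores, labels):
    """Learn a threshold as the midpoint between the two class means."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(scores, labels, cutoff):
    """Fraction of cases the cutoff classifies correctly."""
    preds = [1 if s >= cutoff else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Group A dominates the training data.
group_a_scores = [2, 3, 4, 8, 9, 10]
group_a_labels = [0, 0, 0, 1, 1, 1]

# Group B has the same true labels, but its scores are shifted upward
# (standing in for, e.g., culturally influenced differences in how a
# behavior presents or is scored).
group_b_scores = [5, 6, 7, 11, 12, 13]
group_b_labels = [0, 0, 0, 1, 1, 1]

cutoff = fit_cutoff(group_a_scores, group_a_labels)  # fit on Group A only

acc_a = accuracy(group_a_scores, group_a_labels, cutoff)
acc_b = accuracy(group_b_scores, group_b_labels, cutoff)
print(f"cutoff={cutoff}, Group A accuracy={acc_a:.2f}, Group B accuracy={acc_b:.2f}")
# The model is perfect on the group it was trained on and systematically
# worse on the group it never saw.
```

Real AI products are far more complex than a single cutoff, but the failure mode is the same: when training data under-represents a population, error rates concentrate in that population, which is why vendor validation evidence should be examined group by group rather than as a single overall accuracy figure.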
No. AI can augment clinical judgment by processing data more quickly, identifying patterns humans might miss, and automating routine tasks. However, clinical judgment involves integrating quantitative data with contextual knowledge about the individual client, their family, their culture, their environment, and their values. AI systems lack this contextual understanding. The BACB Ethics Code (2022) places responsibility for clinical decisions squarely on the behavior analyst. Regardless of what an AI system recommends, the practitioner who acts on that recommendation bears professional and ethical responsibility for the outcome. AI should be treated as one input among many, not as an authoritative source of clinical decisions.
Informed consent documents and conversations should be updated to include specific information about any AI tools used in assessment, treatment planning, data analysis, or documentation. Clients and families should be informed about what tools are being used, what data those tools process, how AI-generated outputs influence clinical decisions, and their right to ask questions or opt out. This information should be presented in accessible language appropriate to the family's linguistic and educational background. Consent should be revisited when new AI tools are introduced or when existing tools are updated in ways that affect how client data is handled.
Trust your clinical judgment. AI recommendations are generated from population-level data and algorithms that cannot account for the unique contextual factors you know about your client. When an AI recommendation conflicts with your assessment of the situation, document your reasoning for the alternative course of action, noting the AI recommendation and your rationale for departing from it. This documentation protects both you and your client. If the discrepancy is significant or recurring, consider whether the AI tool is appropriate for this client population or whether the tool needs recalibration. Consult with colleagues or supervisors when the decision is particularly complex.
AI-generated documentation can be ethical if the behavior analyst reviews all outputs for accuracy before signing or submitting them. The BACB Ethics Code (2022) Standard 2.13 requires accuracy in billing and reporting. If a clinician signs AI-generated notes without careful review, they bear responsibility for any inaccuracies, including factual errors, mischaracterized behaviors, or fabricated details. Establish a review protocol that includes checking every AI-generated document against your own observations and session data before approval. Never sign documentation you have not verified. The time savings from AI documentation are only beneficial if the documentation accurately represents what occurred.
Building AI competence does not require becoming a computer scientist. Start with foundational concepts such as how machine learning models are trained, what training data is, how algorithms can produce biased outputs, and what the limitations of AI predictions are. Professional development workshops, webinars, and continuing education courses focused on AI in healthcare or behavioral science are increasingly available. Read critically about AI implementations in related fields such as psychology and education. When evaluating a specific tool, ask vendors for technical documentation, independent validation studies, and transparent descriptions of their algorithms. Consult with colleagues who have technical backgrounds when needed.
Over-dependence on AI tools carries several risks. Clinical judgment skills may atrophy if practitioners routinely defer to algorithmic recommendations rather than engaging in independent analysis. New practitioners may fail to develop strong data analysis and critical thinking skills if AI systems handle these tasks from the start. Reliance on AI-generated documentation may reduce practitioners' ability to write accurate, nuanced clinical narratives. If an AI system goes offline, is discontinued by its vendor, or produces inaccurate outputs, practitioners who are dependent on it may be unable to maintain quality of care. Maintaining your independent clinical skills alongside AI tool use is essential for resilient, ethical practice.
The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.
Ready to go deeper? This course covers the topic with structured learning objectives and CEU credit.
Exploring the Use and Implications of Artificial Intelligence in the Practice of Behavior Analysis — Sara Gershfeld · 1 BACB Ethics CEU · $20
Take This Course →

We extended these answers with research from our library — dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.
You earn CEUs from a dozen different places. Upload any certificate — from here, your employer, conferences, wherever — and always know exactly where you stand. Learning, Ethics, Supervision, all handled.
No credit card required. Cancel anytime.
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.