By Matt Harrington, BCBA · Behaviorist Book Club · Research-backed answers for behavior analysts
Current AI applications in ABA include automated session note generators that convert structured data into narrative documentation, data analysis tools that identify trends and calculate effect sizes, treatment plan drafting assistants that suggest goals and interventions based on client profiles, prior authorization support tools that help generate and track authorization requests, scheduling optimization systems, and training content generators. Some practice management platforms are integrating AI features directly into their existing tools. The range of available tools is expanding rapidly, though the evidence base for their clinical validity varies significantly across products.
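To make the data-analysis category concrete, one common single-case effect size such tools report is Nonoverlap of All Pairs (NAP), the proportion of baseline/treatment data pairs showing improvement. This is a minimal illustrative sketch of the calculation a tool might run, not any vendor's actual implementation; it assumes higher values indicate improvement.

```python
from itertools import product

def nap(baseline, treatment):
    """Nonoverlap of All Pairs: compare every baseline point to every
    treatment point; count improvements as 1 and ties as 0.5, then
    divide by the total number of pairs. Ranges from 0 to 1."""
    pairs = list(product(baseline, treatment))
    wins = sum(1 for b, t in pairs if t > b)
    ties = sum(1 for b, t in pairs if t == b)
    return (wins + 0.5 * ties) / len(pairs)

# Hypothetical session data: responses per session across phases
baseline = [2, 3, 2, 4]
treatment = [5, 6, 5, 7, 8]
print(round(nap(baseline, treatment), 2))  # 1.0 — complete nonoverlap
```

Even when a tool automates this arithmetic, the behavior analyst remains responsible for interpreting the result in context (phase lengths, trend, variability).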
The BACB Ethics Code (2022) does not specifically mention artificial intelligence. However, its principles are directly applicable to AI use. Code 1.05 (Practicing Within Scope of Competence) requires understanding the tools you use. Code 2.01 (Providing Effective Treatment) requires that clinical decisions be evidence-based regardless of whether AI is involved. Code 2.06 (Maintaining Confidentiality) applies to data shared with AI systems. Code 2.11 (Obtaining Informed Consent) implies that families should be informed about AI involvement in their care. Behavior analysts should apply these existing ethical standards to AI-related decisions until more specific guidance is developed.
Primary risks include clinical errors from AI-generated content that is not adequately reviewed, confidentiality breaches from data shared with AI vendors, algorithmic bias that produces inequitable recommendations for certain populations, over-reliance on AI that erodes practitioner clinical judgment, loss of the personal therapeutic relationship when AI automates communication, and regulatory compliance issues when AI data practices do not align with HIPAA or state privacy requirements. AI tools may also produce outputs that appear authoritative but rest on flawed or limited training data, leading to treatment decisions that are not grounded in the best available evidence.
Evaluate the AI vendor's data practices by reviewing their privacy policy, terms of service, and HIPAA compliance documentation. Key questions include whether the vendor signs a Business Associate Agreement under HIPAA, where client data is stored and whether it is encrypted, whether the vendor uses client data to train its AI models, who has access to the data and under what conditions, what happens to data if you discontinue the service, and whether the vendor has experienced data breaches. Consult with your organization's compliance officer or legal counsel if you are unsure about a tool's data practices. When in doubt, do not use an AI tool with identifiable client data.
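The vendor questions above can be treated as a pass/fail checklist where any unresolved item triggers escalation. This is a hypothetical sketch for organizing a review; the field names are illustrative and do not come from any regulation or standard.

```python
# Illustrative vendor-review checklist mirroring the key questions;
# answer keys and wording are assumptions, not a compliance standard.
VENDOR_QUESTIONS = {
    "signs_baa": "Will the vendor sign a HIPAA Business Associate Agreement?",
    "data_encrypted": "Is client data encrypted at rest and in transit?",
    "no_training_on_phi": "Is client data excluded from AI model training?",
    "access_controls": "Is data access limited and documented?",
    "deletion_on_exit": "Is data deleted if the service is discontinued?",
    "breach_history_clean": "Has the vendor disclosed any prior breaches?",
}

def review_vendor(answers):
    """Return the questions still unresolved; any 'no' or missing
    answer is a reason to consult compliance or legal counsel."""
    return [q for key, q in VENDOR_QUESTIONS.items() if not answers.get(key)]

# Example: a vendor with a signed BAA and encryption, but nothing else verified
unresolved = review_vendor({"signs_baa": True, "data_encrypted": True})
print(len(unresolved))  # 4 unresolved questions remain
```

A structured checklist like this makes it easy to document the review itself, which supports the "when in doubt, do not use it" rule above.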
Yes. Transparency about AI use is both an ethical obligation and a practical necessity for maintaining family trust. Families should be informed about which aspects of their child's care involve AI, what role AI plays, how their data is handled by AI systems, and what safeguards are in place to ensure accuracy and privacy. This disclosure should be incorporated into the informed consent process. Families should also have the option to request that AI not be used in their child's care, and practitioners should be prepared to accommodate this request. Transparent communication about AI demonstrates respect for families' autonomy and builds trust.
Algorithmic bias occurs when an AI system produces systematically unfair or inaccurate outputs for certain groups, typically because the training data did not adequately represent those groups. In ABA, this could manifest as treatment recommendations that are less appropriate for clients from underrepresented racial, ethnic, or socioeconomic backgrounds, assessment tools that perform differently across populations, or documentation tools that use language reflecting implicit biases. Behavior analysts should be aware that AI outputs may contain biases and should critically evaluate all AI-generated content for fairness and appropriateness across their diverse client populations.
No. AI cannot replace clinical judgment because treatment planning requires integrating behavioral data with contextual knowledge about the client, their family, their cultural background, their trauma history, their preferences, and dozens of other factors that AI systems cannot fully capture. AI can provide useful information to support treatment planning, such as data trends, literature summaries, or draft recommendations, but the behavior analyst must evaluate this information through the lens of their clinical expertise and their knowledge of the individual client. The BACB Ethics Code (2022) places responsibility for clinical decisions on the behavior analyst, not on any tool or system.
Essential safeguards include requiring human review and approval of all AI-generated clinical content before it is used in practice, establishing clear data governance policies that ensure HIPAA compliance, training all staff who interact with AI tools on proper use and limitations, creating protocols that define when AI can and cannot be used, implementing regular auditing of AI outputs for accuracy and bias, maintaining the ability to function without AI tools in case of system failures, providing families with transparent information about AI use and the option to opt out, and assigning responsibility for AI oversight to specific individuals or committees within the organization.
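The first safeguard (human review before any AI-generated clinical content is used) can be enforced in software rather than by policy alone. This is a minimal sketch under assumed names (`DraftNote`, `release_note`); it is not a vendor API, only an illustration of a review gate.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftNote:
    """Hypothetical record for an AI-drafted session note."""
    text: str
    ai_generated: bool = True
    approved_by: Optional[str] = None  # name of the reviewing clinician

def release_note(note: DraftNote) -> str:
    """Refuse to release AI-generated content without a named reviewer."""
    if note.ai_generated and note.approved_by is None:
        raise PermissionError("AI-generated note requires clinician review")
    return note.text

note = DraftNote("Client met 4 of 5 targets for manding.")
note.approved_by = "J. Smith, BCBA"  # review gate satisfied
print(release_note(note))
```

Storing the reviewer's name alongside the note also produces the audit trail that the regular-auditing safeguard depends on.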
AI can affect the therapeutic relationship both positively and negatively. On the positive side, AI that reduces administrative burden frees behavior analysts to spend more time in direct interaction with families, potentially strengthening relationships. On the negative side, if families perceive that AI is replacing the personal attention of their behavior analyst, trust may be eroded. AI-generated communications that lack the warmth and personalization of human-written messages may feel impersonal. The key to preserving therapeutic relationships is transparency about AI use, ensuring that high-value interactions such as progress discussions and treatment planning remain personally delivered, and maintaining the human elements of care that families value most.
If you have concerns about an AI tool your organization has adopted, document your specific concerns including the ethical principles you believe are at risk. Communicate these concerns to your supervisor or the appropriate organizational leader, framing them in terms of the BACB Ethics Code (2022) and client welfare. Propose alternatives or modifications that would address your concerns, such as additional safeguards, training, or client disclosure protocols. If your concerns are not addressed through internal channels, consult with colleagues, professional organizations, or the BACB for guidance. Your ethical obligation to your clients takes precedence over organizational directives when client welfare is at stake.
The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.
Ready to go deeper? The course below covers this topic with structured learning objectives and CEU credit.
Smart Tech, Smarter Care: Empowering Clinicians, Elevating Care, and Shaping Ethical Practices — Tim Fuller · 1 BACB Ethics CEU · $30 · BehaviorLive
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.