By Matt Harrington, BCBA · Behaviorist Book Club · Research-backed answers for behavior analysts
Using AI tools can be ethical when appropriate safeguards are in place, but it requires careful consideration of confidentiality, accuracy, and individualization. The BACB Ethics Code (2022) does not prohibit AI use but establishes standards that constrain how it should be used. Key requirements include protecting client confidentiality by not entering identifying information into non-HIPAA-compliant platforms, verifying all AI-generated content for accuracy, ensuring treatment recommendations reflect individualized assessment data, and maintaining transparency about AI use. In practice, ethical AI use means treating it as an efficiency tool while preserving professional judgment and clinical integrity.
The primary confidentiality risk is that client-identifying information entered into AI platforms may be stored, processed, or used by the platform provider in ways that violate HIPAA and the BACB Ethics Code. Many commercial AI platforms retain submitted data, potentially use it for model training, and may be subject to data breaches. Even platforms that claim not to store data may have temporary storage practices. The risk extends beyond obvious identifiers like names to include any combination of details that could identify a client, such as diagnosis, age, location, and specific behavioral descriptions. BCBAs must either use HIPAA-compliant platforms or completely de-identify all information before submission.
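For readers who script parts of their documentation workflow, the sketch below shows one way a pre-submission screen might look. It is a hypothetical Python example, not a de-identification tool: the pattern set and the flag_possible_identifiers helper are invented for illustration, simple regexes cannot catch names, locations, or the quasi-identifier combinations described above, and nothing here substitutes for a HIPAA-compliant platform or careful human review.

```python
import re

# Illustrative patterns for a few common direct identifiers. This is a
# minimal sketch only: regexes cannot detect names, locations, or the
# combinations of quasi-identifiers (e.g., diagnosis + age + town) that
# HIPAA and the BACB Ethics Code also treat as identifying.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def flag_possible_identifiers(text: str) -> dict[str, list[str]]:
    """Return any pattern matches so a human can review and remove them
    before the text goes anywhere near an AI platform."""
    hits = {label: pat.findall(text) for label, pat in PATTERNS.items()}
    return {label: matches for label, matches in hits.items() if matches}

if __name__ == "__main__":
    draft = "Session note 3/14/2024: caregiver reachable at 555-123-4567."
    print(flag_possible_identifiers(draft))
    # {'phone': ['555-123-4567'], 'date': ['3/14/2024']}
```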
AI can assist with drafting treatment plans and progress reports, but the final product must reflect individualized clinical judgment based on actual client data. Using AI to generate a first draft of a report structure or template language for standard procedures is relatively low risk if confidentiality safeguards are in place. However, the BCBA must review every element of the document against actual session data, assessment results, and clinical observations. AI-generated clinical descriptions, progress summaries, and treatment recommendations that have not been verified against real client records may contain fabricated or erroneous information, which amounts to inaccurate reporting.
A comprehensive AI use policy should address approved AI platforms and their compliance status, prohibited uses such as entering identifiable client information into non-compliant systems, and required verification procedures for AI-generated content. It should also cover transparency expectations for clients and families, documentation requirements for AI-assisted work products, supervisory oversight of AI use by supervisees and RBTs, training requirements for staff using AI tools, and procedures for updating the policy as technology and guidance evolve. The policy should be reviewed and updated at least annually and should be developed with input from clinical, administrative, and legal perspectives.
Verifying AI-generated clinical content requires comparing every assertion against your actual knowledge and records. Check that described behaviors match actual client presentations, that assessment results cited are real and accurate, that intervention recommendations are supported by the professional literature, and that progress descriptions align with actual data. Be particularly alert for AI hallucinations, which are fabricated details presented with apparent authority, including invented assessment scores, non-existent research citations, and clinical observations that never occurred. If you cannot verify a specific claim, remove or replace it with verified information.
Using AI for supervision raises several concerns. AI-generated feedback cannot replace direct observation of supervisee performance. Supervision sessions built around AI-generated competency assessments rather than actual observed behavior fail to provide individualized guidance. AI may generate advice that sounds reasonable but does not address the specific supervisee's learning needs. The supervisory relationship, which involves mentoring, modeling, and professional development, cannot be replicated by technology. Supervisors should use AI to assist with administrative aspects of supervision such as organizing data and preparing agendas while maintaining direct clinical oversight and relationship-based mentoring.
While current BACB guidance does not explicitly require AI disclosure, the principles of informed consent and transparent communication strongly support it. Families may have preferences about whether AI is used in their child's care, and they deserve the opportunity to express those preferences. Disclosure builds trust and respects the family's right to make informed decisions about services. For most situations, a simple, honest explanation is sufficient: for example, noting that you use AI tools to assist with administrative tasks while all clinical decisions remain yours and are based on your professional assessment.
AI is likely to increasingly handle routine administrative tasks, data processing, and template-based documentation, potentially freeing BCBAs to focus more time on direct clinical activities, relationship building, and complex clinical decision-making. However, this shift will require BCBAs to develop new competencies in AI literacy, data verification, and ethical technology use. The core BCBA competencies of individualized assessment, clinical judgment, empathic engagement, and ethical reasoning will remain essential and may become more valued as AI handles routine tasks. BCBAs who adapt thoughtfully to AI will likely find their roles enhanced rather than diminished.
Over-reliance on AI for clinical decisions can lead to generic treatment approaches that fail to account for individual client variables, undetected inaccuracies in AI-generated recommendations, erosion of clinical reasoning skills through disuse, reduced engagement with the assessment data that should drive decisions, and a false sense of confidence in AI-generated output. Perhaps most concerning, AI over-reliance can create a practice pattern where the BCBA functions as an editor of AI output rather than an independent clinical thinker. This fundamentally changes the role from professional practitioner to content reviewer and undermines the individualized, data-driven approach that defines competent behavior analysis.
Before adopting any AI tool, BCBAs should evaluate its data handling practices including where data is stored, who has access, and whether the platform is HIPAA-compliant. Review the terms of service for provisions about data use and retention. Assess the tool's accuracy by testing it with known information and checking outputs carefully. Consider whether the tool addresses a genuine need in your practice or merely adds novelty. Consult with colleagues who have experience with the tool and seek organizational approval before implementation. Start with low-risk, non-clinical applications and expand use only after building confidence in the tool's reliability and your verification procedures.
The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.
Ready to go deeper? The course below covers this topic with structured learning objectives and CEU credit.
Integrating Artificial Intelligence and ABA Services: Ethical Considerations for Today's Provider — Rebecca Womack · 1 BACB Ethics CEU · $20
Take This Course → BehaviorLive
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.