By Matt Harrington, BCBA · Behaviorist Book Club · Research-backed answers for behavior analysts
Using AI to assist with treatment plan drafting is not inherently unethical, but the manner of use determines whether it meets ethical standards. The BACB Ethics Code requires that treatment be individualized, evidence-based, and reflective of the practitioner's clinical reasoning. If a BCBA uses AI to generate a draft that is then thoroughly reviewed, modified to reflect the individual client's assessment data and needs, and signed only after the practitioner can stand behind every recommendation, this can be an ethical use of the technology. However, if the practitioner signs an AI-generated treatment plan without meaningful review and modification — essentially rubber-stamping the AI's output — this likely violates multiple ethical standards including those related to individualized treatment, evidence-based practice, and professional integrity. The key question is whether the final document reflects the practitioner's genuine clinical reasoning or whether it represents the AI's statistical pattern matching with the practitioner's signature attached.
Yes, disclosure is warranted. The BACB Ethics Code's informed consent requirements mean that families should be made aware of the tools and methods used in their family member's treatment. AI tools that process client data, generate clinical documents, or contribute to treatment decisions are part of the service delivery process and should be disclosed. This disclosure should include what AI tools are being used, what they do, how the practitioner reviews and modifies their outputs, and how client data is handled by the AI system. Transparency about AI use also serves a practical purpose: it builds trust with families and demonstrates that the practitioner is thoughtfully integrating technology rather than hiding behind it. Families who understand the role AI plays in their treatment are better positioned to ask informed questions and to participate meaningfully in the clinical process.
The most significant risks include loss of clinical individualization when AI-generated content is used without adequate modification; documentation inaccuracy when AI-generated notes do not reflect actual clinical events; clinical deskilling when practitioners rely on AI rather than developing their own reasoning abilities; data privacy breaches when client information is processed by third-party AI systems; algorithmic bias that may systematically disadvantage certain client populations; and erosion of accountability when the source of clinical decisions becomes unclear. These risks are not hypothetical — they are already manifesting in healthcare settings where AI adoption has outpaced ethical and regulatory frameworks. Behavior analysts can learn from these experiences and proactively establish safeguards rather than waiting for harm to occur.
A structured evaluation should address five domains: clinical validity (has the tool been empirically validated for your specific use case?), data security (is the tool HIPAA-compliant, and where is client data stored and who can access it?), bias and fairness (has the tool been tested for equitable performance across demographic groups?), workflow integration (does the tool genuinely improve clinical practice or merely add complexity?), and accountability (can you maintain sufficient understanding and oversight of the tool's outputs to take genuine responsibility for them?). If a tool's developers cannot answer these questions satisfactorily, that is a significant red flag. The burden of proof should be on the tool to demonstrate its value and safety, not on the practitioner to prove it is harmful.
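To make the "burden of proof" stance concrete, here is a minimal Python sketch of one way such a rubric could be encoded. Everything in it is hypothetical (the ToolEvaluation class, the domain keys, the ExampleNoteDrafter vendor); the only design point is that an unanswered domain blocks approval rather than defaulting to acceptance.

```python
from dataclasses import dataclass, field

# Hypothetical rubric: the five domains named above, scored during vendor review.
DOMAINS = (
    "clinical_validity",     # empirically validated for your specific use case?
    "data_security",         # HIPAA compliance, data storage, access controls
    "bias_and_fairness",     # tested for equitable performance across groups?
    "workflow_integration",  # improves practice rather than adding complexity?
    "accountability",        # can you understand and oversee its outputs?
)

@dataclass
class ToolEvaluation:
    tool_name: str
    # Map each domain to the vendor's documented answer, or omit if unanswered.
    answers: dict = field(default_factory=dict)

    def red_flags(self) -> list[str]:
        """Domains the vendor could not answer satisfactorily."""
        return [d for d in DOMAINS if not self.answers.get(d)]

    def approve(self) -> bool:
        # Burden of proof is on the tool: any unanswered domain blocks approval.
        return not self.red_flags()

# Example: a vendor that documents security but not bias testing fails review.
review = ToolEvaluation(
    tool_name="ExampleNoteDrafter",
    answers={
        "clinical_validity": "Peer-reviewed validation study on session notes",
        "data_security": "HIPAA-compliant; BAA signed; encryption at rest",
        "workflow_integration": "Pilot showed reduced documentation time",
        "accountability": "Outputs are editable and logged for review",
    },
)
print(review.red_flags())  # ['bias_and_fairness']
print(review.approve())    # False
```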
AI tools have genuine potential to reduce the administrative burden that contributes significantly to BCBA burnout. Documentation, scheduling, data entry, and insurance authorization processes consume substantial practitioner time that could otherwise be spent in direct clinical work. When AI tools handle these administrative tasks effectively and the practitioner maintains oversight, the result can be a better allocation of professional time without compromising care quality. The critical safeguard is maintaining the distinction between administrative tasks (where AI can handle much of the work with practitioner review) and clinical tasks (where AI should support but not replace practitioner judgment). A BCBA who uses AI to draft session notes and then reviews them for accuracy is using the technology well. A BCBA who uses AI to determine clinical priorities without independent analysis is not.
As of this writing, the BACB has not issued specific guidance on AI use in behavior-analytic practice. However, the existing Ethics Code contains principles that apply directly. Code 1.05 (Boundaries of Competence) makes clear that AI tools do not expand a practitioner's competence: using a tool does not qualify a practitioner to provide services outside their training. Code 2.01 (Providing Effective Treatment) requires evidence-based practice, which means AI-generated recommendations should be validated against the published literature before they inform treatment. Code 2.04 (Third-Party Involvement) addresses situations where external parties, including AI vendors, have access to client data. Code 3.01 (Behavior-Analytic Assessment) requires that assessment be conducted by qualified professionals, not delegated to algorithms. Practitioners should apply these existing principles to AI use while monitoring for any specific guidance the BACB may issue in the future. The absence of specific AI guidance does not create an ethical vacuum; the existing code provides a robust framework for responsible technology use.
AI bias in ABA is a concern because AI tools learn from historical data that may reflect existing disparities in service delivery. If an AI tool is trained on treatment plans from practices that historically provided less intensive services to certain demographic groups, the tool may perpetuate those disparities in its recommendations. Similarly, if the training data overrepresents certain diagnoses, age groups, or treatment approaches, the tool's outputs will reflect those imbalances. Behavior analysts should ask AI tool developers about their training data, bias testing procedures, and fairness metrics. In the absence of satisfactory answers, practitioners should monitor AI outputs for signs of differential performance across client populations and be prepared to override or discontinue tools that produce biased recommendations.
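As one illustration of what "monitoring for differential performance" could look like in practice, the sketch below averages a hypothetical AI tool's recommended service intensity by demographic group and flags large gaps for human review. The records, field names, and the 1.25 ratio threshold are invented for illustration; a flag is a prompt for clinical and statistical scrutiny, not proof of bias.

```python
from collections import defaultdict

# Hypothetical audit records: one entry per AI-generated recommendation.
# "group" is whatever dimension you are auditing (e.g., payer, language);
# "recommended_hours" is the tool's output for that client.
recommendations = [
    {"group": "A", "recommended_hours": 25},
    {"group": "A", "recommended_hours": 30},
    {"group": "B", "recommended_hours": 15},
    {"group": "B", "recommended_hours": 12},
]

def mean_output_by_group(records):
    """Average the tool's recommended intensity within each group."""
    by_group = defaultdict(list)
    for r in records:
        by_group[r["group"]].append(r["recommended_hours"])
    return {g: sum(v) / len(v) for g, v in by_group.items()}

def flag_disparity(records, ratio_threshold=1.25):
    """Flag when the highest group mean exceeds the lowest by the threshold."""
    means = mean_output_by_group(records)
    hi, lo = max(means.values()), min(means.values())
    return (hi / lo) > ratio_threshold, means

flagged, means = flag_disparity(recommendations)
print(means)    # {'A': 27.5, 'B': 13.5}
print(flagged)  # True: group means differ by more than 25%, warranting review
```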
Yes, supervision models should evolve to address AI literacy and critical evaluation skills. Supervisors should teach supervisees how to evaluate AI tools critically, how to review AI-generated documents against actual clinical data, and how to maintain clinical reasoning skills even when AI tools are available. Supervision should include explicit discussion of when AI use is appropriate and when it is not, and supervisors should model the critical evaluation process during supervision sessions. There is also a risk that supervisees may use AI tools to complete supervision requirements without developing the underlying competencies those requirements are designed to build. Supervisors should be aware of this risk and structure supervision activities to ensure genuine skill development rather than AI-assisted task completion.
Social validity — the extent to which the goals, procedures, and outcomes of an intervention are acceptable to stakeholders — is directly relevant to AI adoption. Families, clients, and other stakeholders may have strong opinions about AI involvement in treatment. Some may appreciate the efficiency gains, while others may have concerns about privacy, impersonality, or the adequacy of AI-assisted clinical decisions. Behavior analysts should systematically assess stakeholder attitudes toward AI use, rather than assuming acceptance. This might involve discussing AI tools during the informed consent process, soliciting feedback after implementation, and being responsive to concerns. The field's commitment to social validity requires that stakeholder perspectives be actively sought and genuinely incorporated into decisions about AI adoption.
Organizations should develop written AI use policies that specify which AI tools are approved for use, what purposes they may serve, what review and oversight procedures are required, how client data privacy is protected, and how the organization will evaluate the ongoing appropriateness of AI tool use. These policies should be developed collaboratively with clinical leadership, informed by the BACB Ethics Code, and reviewed regularly as the technology and regulatory landscape evolve. Key policy elements should include mandatory practitioner review of all AI-generated clinical documents, informed consent procedures that disclose AI tool use to families, data security requirements for any AI tool that processes client information, training requirements for practitioners using AI tools, and accountability structures that make clear who is responsible when AI-assisted processes produce errors.
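As a sketch of how the "mandatory practitioner review" and accountability elements might be operationalized in an organization's systems, the hypothetical Python below refuses to finalize an AI-generated document without a named human reviewer and records who reviewed it, when, and whether they modified it. The ClinicalDocument structure and finalize function are illustrative assumptions, not an existing product's API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical record structure backing an organizational AI-use policy:
# no AI-drafted clinical document is finalized without a named reviewer.
@dataclass
class ClinicalDocument:
    client_id: str
    body: str
    ai_generated: bool
    reviewed_by: str | None = None
    reviewed_at: str | None = None
    modifications_noted: bool = False

def finalize(doc: ClinicalDocument, reviewer: str, modified: bool) -> ClinicalDocument:
    """Record who reviewed the draft and whether they changed it; refuse
    to finalize an AI draft that lacks a named human reviewer."""
    if doc.ai_generated and not reviewer:
        raise ValueError("Policy: AI-generated documents require a named reviewer.")
    doc.reviewed_by = reviewer
    doc.reviewed_at = datetime.now(timezone.utc).isoformat()
    doc.modifications_noted = modified
    return doc

draft = ClinicalDocument(client_id="C-001",
                         body="(AI draft of session note)",
                         ai_generated=True)
signed = finalize(draft, reviewer="J. Smith, BCBA", modified=True)
print(signed.reviewed_by, signed.reviewed_at)
```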
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.