By Matt Harrington, BCBA · Behaviorist Book Club · April 2026 · 12 min read
Artificial intelligence is entering the field of applied behavior analysis at an accelerating pace, and behavior analysts must grapple with the ethical implications before adoption outpaces thoughtful evaluation. AI tools are being used or proposed for treatment plan generation, data analysis and visualization, insurance authorization documentation, session note writing, caregiver training content, and even direct clinical decision support. The speed of this adoption demands that the field establish clear ethical guardrails rather than reacting to problems after they emerge.
AI tools have the potential to reduce administrative burden, improve consistency in treatment planning, and free practitioners to spend more time in direct clinical work. These are genuine benefits that could improve outcomes for clients and reduce burnout among practitioners. However, these potential benefits must be weighed against real risks: AI-generated treatment plans that lack individualization, automated documentation that misrepresents the clinical process, algorithmic bias that systematically disadvantages certain populations, and the erosion of clinical reasoning skills when practitioners defer to AI outputs without critical evaluation.
The field of ABA is particularly well-positioned to evaluate AI tools critically because of its foundational commitment to empiricism. Behavior analysts are trained to demand evidence before adopting new interventions, to measure outcomes objectively, and to make data-based decisions. These same principles should guide the evaluation of AI tools. The question is not whether AI will be used in ABA — it already is — but whether the field will apply its own scientific standards to evaluating how, when, and under what conditions AI use is appropriate, effective, and ethical.
The insurance-funded context in which much ABA is delivered adds another layer of complexity. Treatment plans, progress notes, and authorization requests are legal documents that attest to the clinical reasoning of a specific practitioner. When AI generates these documents, questions arise about authorship, accuracy, and accountability. If an AI-generated treatment plan contains an inappropriate recommendation and harm results, the supervising BCBA bears full ethical and legal responsibility regardless of the AI's involvement in the drafting process.
Perhaps most importantly, the clients served by ABA — many of whom are children with developmental disabilities — represent a vulnerable population that deserves the highest standard of ethical protection. Any technology adopted in service delivery to this population must demonstrably improve or at minimum not compromise the quality of care. The novelty and efficiency of AI tools do not exempt them from this standard.
The integration of AI into healthcare and behavioral health is part of a broader technological transformation that has already reshaped fields such as radiology, pathology, and mental health counseling. In healthcare broadly, AI tools are used for diagnostic imaging analysis, clinical decision support, electronic health record optimization, and predictive modeling. The lessons learned in these fields — both successes and failures — provide valuable context for behavior analysts considering AI adoption.
Within ABA specifically, several categories of AI tools have emerged. Documentation tools use large language models to generate session notes, treatment plans, and insurance authorization narratives based on clinical data inputs. These tools promise to reduce the significant administrative burden that contributes to practitioner burnout — a real and pressing problem in the field. Data analysis tools offer automated graphing, trend detection, and phase change recommendations based on client data. Clinical decision support tools suggest intervention strategies based on client characteristics, target behaviors, and available evidence.
The rapid proliferation of these tools has outpaced the development of regulatory frameworks and professional guidelines. The BACB has not yet issued specific guidance on AI use in behavior analytic practice, though the existing Ethics Code contains principles that are directly applicable. Other professional organizations, including the American Psychological Association and the American Medical Association, have begun developing AI-specific guidelines that may serve as reference points for the behavior analysis community.
The social validity dimension of AI in ABA is an important but underexplored area. How do clients and families feel about AI involvement in their treatment? Preliminary surveys suggest that stakeholder attitudes are mixed — some families appreciate the potential for more comprehensive treatment plans and reduced wait times for documentation, while others express concern about the impersonal nature of AI-generated clinical content and the potential for reduced practitioner engagement. The field's commitment to social validity requires that these stakeholder perspectives be systematically assessed rather than assumed.
The venture capital investment flowing into ABA technology companies adds a commercial dimension to this conversation. Companies developing AI tools for ABA are incentivized to market their products as transformative and to minimize discussion of limitations. Behavior analysts evaluating these tools must look beyond marketing claims and apply the same critical evaluation they would apply to any clinical tool or intervention.
The clinical implications of AI in ABA span every aspect of service delivery, from assessment through treatment planning, implementation, and evaluation. Practitioners must understand both the capabilities and limitations of current AI tools to use them responsibly.
Treatment plan generation is perhaps the most common current application. Large language models can produce grammatically polished, comprehensive-sounding treatment plans in minutes. However, these documents are generated through pattern matching and statistical prediction, not clinical reasoning. An AI-generated treatment plan may include appropriate-sounding goals and procedures that are not actually tailored to the individual client's assessment results, functional analysis outcomes, or family priorities. The practitioner must review every AI-generated document against the actual clinical data and modify it to reflect genuinely individualized clinical reasoning.
Documentation assistance is another common application. Session notes, progress reports, and authorization narratives generated by AI can save practitioners significant time. However, the practitioner must ensure that AI-generated documentation accurately reflects what occurred during the session, the clinical observations made, and the reasoning behind any decisions. Signing an AI-generated document without thorough review constitutes a misrepresentation of the clinical process and could expose the practitioner to ethical and legal liability.
Data analysis is an area where AI tools may offer genuine value. Automated trend detection, outlier identification, and graphing can help practitioners identify patterns in large datasets that might be missed through visual inspection alone. However, reliance on automated analysis without understanding the underlying methodology can lead to erroneous clinical conclusions. Practitioners should understand how the AI tool analyzes data, what assumptions it makes, and where its recommendations diverge from what the practitioner's own analysis would suggest.
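To make the "understand the underlying methodology" point concrete, here is a minimal sketch of the kind of trend detection such tools automate: an ordinary least-squares slope computed over session-by-session data. The function names and the slope threshold are illustrative assumptions, not features of any specific product or a clinical standard; a practitioner should know what rule a real tool applies and whether it matches their own visual analysis.

```python
# Minimal sketch of automated trend detection on session data using an
# ordinary least-squares slope. The 0.5 threshold is an illustrative
# assumption, not a clinical standard: a real tool (and a real
# practitioner) should justify its own decision rule.

def ols_slope(values: list[float]) -> float:
    """Least-squares slope of values against session index (0, 1, 2, ...)."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def classify_trend(values: list[float], threshold: float = 0.5) -> str:
    """Label a data path as accelerating, decelerating, or flat."""
    slope = ols_slope(values)
    if slope > threshold:
        return "accelerating"
    if slope < -threshold:
        return "decelerating"
    return "flat"

# Example: rate of a target response across eight sessions.
sessions = [2.0, 3.0, 2.5, 4.0, 5.0, 4.5, 6.0, 6.5]
print(classify_trend(sessions))  # "accelerating"
```

Even this toy example exposes a methodological assumption (a linear trend over equally spaced sessions) that a commercial tool may bury; if the practitioner's visual inspection of the same data path would reach a different conclusion, that divergence is exactly what should prompt scrutiny.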
The risk of clinical deskilling is a concern that the field should take seriously. When practitioners consistently defer to AI for treatment planning, documentation, and data analysis, the clinical reasoning skills that these activities develop and maintain may atrophy. New BCBAs who enter the field with AI tools already in place may never fully develop these skills. The field should consider how supervision and training models need to evolve to ensure that AI tools supplement rather than replace the development of clinical competence.
Bias in AI systems is a well-documented concern across healthcare. AI tools trained on historical data may perpetuate existing biases in service delivery — for example, recommending less intensive or less individualized treatment for clients from certain demographic groups if the training data reflects historical disparities in service provision. Behavior analysts should evaluate AI tools for potential bias and avoid using tools that cannot demonstrate fairness across the populations they serve.
The BACB Ethics Code, while not written specifically to address AI, contains several principles that provide clear guidance for behavior analysts navigating this terrain.
Code 1.05 (Boundaries of Competence) requires that behavior analysts practice only within their areas of competence. Using an AI tool does not expand a practitioner's competence. If a BCBA uses an AI tool to generate a treatment plan that includes strategies outside their training or experience, the practitioner is still responsible for ensuring they can competently implement and supervise those strategies. The tool's ability to generate the text does not confer clinical competence on the user.
Code 2.01 (Providing Effective Treatment) requires evidence-based practice. AI tools themselves have not been subjected to the kind of rigorous empirical evaluation that behavior analysts expect of clinical interventions. The fact that an AI tool produces output that looks professional does not constitute evidence that it produces output that improves client outcomes. Practitioners should seek evidence that specific AI tools have been validated for their intended use — and should be cautious when such evidence does not exist.
Code 2.04 (Third-Party Involvement in Services) is relevant when AI tools are developed and maintained by commercial third parties. These companies have access to client data, which raises confidentiality concerns. Practitioners must ensure that any AI tool they use complies with applicable privacy laws and that clients and families are informed about how their data will be processed by the AI system.
Code 2.18 (Providing Appropriate Credit) raises questions about intellectual honesty in AI-assisted documentation. When a treatment plan is substantially generated by AI, does the practitioner's signature accurately represent the authorship? This is not merely an academic question — it has implications for accountability, insurance billing, and professional integrity.
The concept of informed consent deserves particular attention. Families have the right to know when AI tools are being used in their family member's treatment. This includes understanding what AI tools are being used, what role they play in the clinical process, how the practitioner reviews and modifies AI output, and how client data is handled by the AI system. Transparency about AI use builds trust and respects the family's right to make informed decisions about their care.
Finally, behavior analysts have an ethical obligation to evaluate new tools and technologies critically rather than adopting them based on marketing claims, peer pressure, or the assumption that newer technology is inherently better. The field's scientific foundation demands that AI tools be subjected to the same empirical scrutiny as any other tool in the practitioner's repertoire.
Before adopting any AI tool in clinical practice, behavior analysts should conduct a structured evaluation that addresses several domains. The first is clinical validity: Has the tool been evaluated for accuracy in the specific clinical context where it will be used? An AI tool that generates accurate session notes for discrete trial training may perform poorly for naturalistic developmental behavioral interventions. Domain-specific validation is essential.
The second domain is data security and privacy. What data does the tool collect? Where is it stored? Who has access? Is the data used to train or improve the AI model? Are the security practices compliant with HIPAA and any applicable state privacy laws? These questions should be answered before any client data enters the system, and the answers should be documented.
The third domain is bias and fairness. Has the tool been evaluated for bias across demographic groups? Does it perform equally well for clients of different ages, diagnoses, cultural backgrounds, and service settings? If the tool's developers cannot provide evidence of fairness testing, practitioners should proceed with caution and monitor for differential performance.
The fourth domain is clinical workflow integration. How does the tool fit into existing clinical processes? Does it create efficiencies that genuinely free up practitioner time for direct clinical work, or does it add complexity that offsets the time savings? Does it support clinical reasoning or replace it? The best AI tools are those that present information and options while leaving clinical decisions firmly in the hands of the practitioner.
The fifth domain is accountability and liability. If an AI tool contributes to a clinical error, who is responsible? The answer, both ethically and legally, is the supervising BCBA. This means practitioners must maintain sufficient understanding of their AI tools to take genuine ownership of the outputs. If a practitioner cannot explain why a particular recommendation appears in an AI-generated treatment plan, they should not include it.
Ongoing evaluation after adoption is equally important. Practitioners should monitor the accuracy of AI outputs over time, track whether AI-assisted processes produce better or equivalent client outcomes compared to non-AI processes, and be prepared to discontinue use of a tool that does not meet expectations. The same data-based decision-making that guides clinical practice should guide technology adoption.
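The five-domain evaluation described above can be operationalized as a simple structured checklist that blocks adoption until every domain has a documented, satisfied review. The sketch below is illustrative only: the field names and the all-domains-pass rule are assumptions for this example, not a BACB-endorsed instrument.

```python
# Illustrative sketch of a structured pre-adoption review covering the
# five domains discussed above. Domain names and the pass rule are
# assumptions for illustration, not a BACB-endorsed instrument.

from dataclasses import dataclass, field

DOMAINS = (
    "clinical_validity",     # validated in the intended clinical context?
    "data_security",         # HIPAA / state privacy compliance documented?
    "bias_fairness",         # fairness testing across served populations?
    "workflow_integration",  # supports rather than replaces clinical reasoning?
    "accountability",        # can the BCBA explain and own every output?
)

@dataclass
class ToolReview:
    tool_name: str
    findings: dict[str, bool] = field(default_factory=dict)

    def record(self, domain: str, satisfied: bool) -> None:
        """Document the outcome of reviewing one domain."""
        if domain not in DOMAINS:
            raise ValueError(f"unknown domain: {domain}")
        self.findings[domain] = satisfied

    def unresolved(self) -> list[str]:
        """Domains not yet reviewed, or reviewed and found unsatisfactory."""
        return [d for d in DOMAINS if not self.findings.get(d, False)]

    def ready_to_adopt(self) -> bool:
        """Adopt only when every domain has a documented, satisfied review."""
        return not self.unresolved()
```

In use, the checklist makes the gap visible: a tool with strong documentation features but no fairness testing would show `bias_fairness` among its unresolved domains, and no client data would enter the system until that review is completed and documented.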
AI tools in ABA should be evaluated using the same empirical standards the field applies to clinical interventions — demand evidence of validity, measure outcomes, and make data-based decisions about continued use. Every AI-generated document — treatment plans, session notes, authorization narratives — must be thoroughly reviewed by the responsible BCBA against actual clinical data before signing, as the practitioner bears full ethical and legal responsibility for the content regardless of how it was drafted.
Informed consent should include transparent disclosure to families about what AI tools are used in their family member's treatment, what role they play, and how client data is handled. Data security and privacy must be verified before any client information enters an AI system — HIPAA compliance, data storage practices, and third-party access should be documented. Clinical reasoning skills must be actively maintained and developed, particularly in newer practitioners, to prevent the deskilling that can occur when AI tools handle tasks that would otherwise build clinical competence.
Bias evaluation should be conducted before and during AI tool use, with particular attention to whether the tool performs equitably across the diverse populations served in ABA. The administrative burden reduction that AI tools promise is a genuine benefit that should be pursued — but not at the cost of clinical individualization, ethical integrity, or accountability. Practitioners should advocate within their organizations for thoughtful, policy-guided AI adoption rather than unregulated use driven by marketing claims or competitive pressure.
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.