By Matt Harrington, BCBA · Behaviorist Book Club · April 2026 · 12 min read
Artificial intelligence has rapidly moved from a futuristic concept to a present-day reality in healthcare and human services. For behavior analysts delivering applied behavior analysis services, AI tools offer substantial potential to improve efficiency, reduce administrative burden, and enhance certain aspects of clinical decision-making. However, the integration of AI into ABA practice raises critical ethical questions that every practitioner must understand and address.
The clinical significance of this topic lies in the speed at which AI adoption is occurring. Behavior analysts are already using AI tools for tasks ranging from report writing and treatment plan development to data visualization and literature reviews. Many of these uses are informal, unstructured, and occurring without organizational policies or ethical frameworks to guide them. This unregulated adoption creates risks that the field must address proactively.
AI tools like large language models can dramatically improve work-related efficiencies. A BCBA who spends hours drafting progress reports can use AI to generate first drafts in minutes. Treatment plan templates can be produced quickly, literature on specific intervention approaches can be summarized rapidly, and parent-friendly explanations of behavioral concepts can be generated with minimal effort. These time savings are real and meaningful, potentially freeing BCBAs to spend more time in direct clinical activities.
However, the same tools that offer these benefits also present significant risks. AI-generated content may contain inaccurate information presented with confident authority. Treatment recommendations produced by AI may not reflect the individualized assessment data that should drive clinical decision-making. Client information entered into AI systems may be stored, processed, or shared in ways that violate confidentiality obligations. The boundary between AI assistance and AI replacement of clinical judgment is not always clear, and crossing that boundary compromises both ethical obligations and client outcomes.
The core challenge for behavior analysts is not whether to use AI but how to use it ethically. Complete avoidance of AI tools may become impractical as these technologies become embedded in clinical software, documentation systems, and organizational workflows. The question is how to establish safeguards that preserve the clinical integrity, individualization, and confidentiality that effective ABA services require.
Rebecca Womack's presentation addresses these concerns by helping behavior analysts identify the ways AI can improve efficiency while maintaining clinical quality, establishing ethical safeguards for AI integration, and evaluating the risks of over-reliance on AI automation. These competencies are increasingly essential as AI tools become more accessible and more powerful.
The rapid advancement of AI technology has outpaced the development of professional guidelines in most healthcare fields, and behavior analysis is no exception. While the BACB Ethics Code (2022) does not mention artificial intelligence specifically, its principles provide a robust framework for evaluating AI use. Understanding both the technology and the ethical framework is necessary for responsible integration.
AI in its current form encompasses a range of technologies with different capabilities and limitations. Large language models, the technology behind tools like ChatGPT and similar platforms, generate text by predicting likely word sequences based on training data. They are remarkably adept at producing coherent, professional-sounding text but have no understanding of the content they produce. They cannot verify facts, assess clinical appropriateness, or weigh the ethical implications of their output. This fundamental limitation is critical for behavior analysts to understand.
Other AI applications relevant to ABA include machine learning algorithms for data analysis, natural language processing for session notes, automated scheduling systems, and computer vision applications for behavioral observation. Each presents different opportunities and risks. Data analysis tools may identify patterns in treatment data that human reviewers miss, but they may also generate spurious correlations. Automated session notes may save time but may also fail to capture clinically important nuances.
The BACB Ethics Code (2022) provides several directly relevant standards. Code 2.01 (Providing Effective Treatment) requires that treatment decisions be based on the professional literature and individualized assessment. AI-generated treatment recommendations that are not grounded in the client's specific data fail this standard. Code 2.04 (Disclosing Confidential Information) requires protecting client information, which is potentially compromised when client data are entered into AI systems with unclear data handling practices.
Code 2.13 (Accuracy in Billing and Reporting) is relevant when AI is used to generate documentation. If AI produces inaccurate descriptions of services provided, session activities, or client responses, submitting these documents constitutes inaccurate reporting regardless of whether the inaccuracy was intentional. Code 1.05 (Practicing Within Scope of Competence) requires that behavior analysts understand the tools they use, which extends to understanding the capabilities and limitations of AI systems employed in their practice.
The legal landscape surrounding AI in healthcare is evolving rapidly. HIPAA regulations in the United States govern the handling of protected health information, and entering client data into AI platforms that are not HIPAA-compliant potentially constitutes a violation. Different AI platforms have different data handling practices, and the terms of service may grant the platform rights to use submitted data for training purposes, effectively sharing confidential information with the technology provider.
Organizational readiness for AI integration varies widely. Some ABA organizations have developed formal AI use policies, while many have no guidance at all. Individual practitioners may be using AI tools informally without organizational knowledge or oversight. This creates a fragmented landscape where AI use is widespread but poorly regulated, increasing the risk of ethical violations and clinical errors.
The clinical implications of AI integration in ABA services span the full range of professional activities, from assessment and treatment planning through documentation and supervision. Understanding where AI can add value and where it introduces risk is essential for responsible use.
Report writing and documentation represent one of the highest-value, lowest-risk applications of AI for behavior analysts. BCBAs spend substantial time on administrative documentation, and AI can significantly reduce this burden by generating first drafts of progress reports, treatment plans, and session summaries. However, AI-generated documentation must be thoroughly reviewed and edited to ensure accuracy, individualization, and clinical appropriateness. A first draft is not a final product, and the time savings from AI should be reinvested in careful review rather than simply accepted as complete.
Treatment planning assistance is a more complex application. AI can help BCBAs review literature on specific interventions, generate template treatment plan components, and organize assessment data into structured formats. However, treatment planning fundamentally requires individualized clinical judgment based on the specific client's assessment data, preferences, history, and context. AI tools cannot replicate this judgment, and over-reliance on AI-generated treatment recommendations risks producing generic plans that fail to address each client's unique needs.
Data analysis and visualization are areas where AI may offer genuine clinical benefits. Machine learning algorithms can process large datasets, identify patterns, and generate visualizations that support clinical decision-making. A BCBA monitoring data across multiple clients could use AI tools to flag unusual patterns or potential trends that warrant closer examination. However, these tools should inform rather than replace professional data analysis, as automated pattern detection may generate false positives or miss clinically meaningful patterns that require contextual understanding.
Supervision and training present both opportunities and risks. AI can help supervisors prepare for supervision meetings by summarizing supervisee performance data, generate competency-based training materials, and create practice scenarios for skill development. The risk lies in using AI as a substitute for direct observation, nuanced feedback, and the relationship-based mentoring that effective supervision requires. A supervision session built around AI-generated feedback rather than the supervisor's direct clinical observations fails to provide the individualized guidance supervisees need.
Client communication is an area requiring particular caution. AI can help draft parent-friendly explanations of behavioral concepts, create visual supports, or generate social stories. However, communications with clients and families must accurately represent the clinician's professional opinion and be tailored to the specific family. AI-generated communications that the BCBA has not carefully reviewed and personalized may contain inaccuracies, use inappropriate language for the audience, or fail to address the family's specific concerns.
The temptation to use AI for tasks that require professional judgment is likely to increase as the technology improves and time pressures mount. BCBAs must maintain clear boundaries between AI-assisted efficiency and AI-replaced clinical thinking. The distinction is not always obvious, making ongoing reflection and professional dialogue essential.
The ethical considerations surrounding AI use in ABA practice are numerous and consequential. The BACB Ethics Code (2022) provides the framework for evaluating these concerns, even though it was written before the current generation of AI tools became widely available.
Confidentiality represents perhaps the most immediate ethical concern. Code 2.04 (Disclosing Confidential Information) requires behavior analysts to protect confidential client information. When a BCBA enters client names, identifying information, assessment data, or session details into an AI platform, they may be sharing confidential information with a third party. Most commercial AI platforms include terms of service that allow the company to process, store, and potentially use submitted data for model improvement. Even platforms that claim not to use submitted data for training may store it temporarily, and data breaches are always possible.
The practical implication is that BCBAs must either use AI platforms that are explicitly HIPAA-compliant and covered by appropriate business associate agreements, or ensure that all client-identifying information is removed from any content submitted to AI systems. This includes not just names and dates of birth but any combination of information that could identify a specific individual.
Accuracy and integrity in documentation are governed by Code 2.13 (Accuracy in Billing and Reporting). AI-generated reports, progress notes, and treatment plans may contain fabricated information, including invented assessment scores, treatment history, or clinical observations that never occurred. The AI produces text that sounds authoritative and specific, making fabrications difficult to detect without careful review against actual client records. A BCBA who submits AI-generated documentation without thorough verification risks filing inaccurate reports, which is an ethical violation regardless of intent.
The obligation to provide individualized treatment under Code 2.01 is challenged by AI tools that generate generic recommendations. When a BCBA asks an AI to draft a treatment plan for a child with autism who engages in self-injurious behavior, the AI produces a plan based on general patterns in its training data, not on the specific client's functional assessment, reinforcement history, or contextual variables. Using such output without substantial individualization based on actual client data fails the ethical standard of individualized treatment.
Code 2.15 (Interrupting or Discontinuing Services) has relevance in scenarios where AI tools replace human clinical oversight. If an organization uses AI to generate treatment modifications or reduce supervision hours, the resulting decrease in clinical oversight may constitute a reduction in service quality that requires disclosure to clients and stakeholders.
Transparency with clients and families about AI use is an ethical consideration that is not explicitly addressed in the current Ethics Code but follows from principles of informed consent and honest communication. Families may not know that their child's treatment plan was partially generated by AI, and they may have opinions about this use of technology in their child's care. Ethical practice suggests disclosing AI use and allowing families to provide informed consent.
The supervisory obligation under Code 2.08 extends to monitoring and guiding supervisees' use of AI. Supervisors who are unaware of how their supervisees are using AI cannot fulfill their oversight obligations. Developing organizational policies and supervisory practices that address AI use is an emerging supervisory responsibility.
Professional integrity under Code 1.04 is implicated when AI-generated work is presented as the practitioner's own without attribution. While the norms around AI attribution in clinical documentation are still developing, presenting AI-generated clinical opinions as one's own professional judgment raises questions about honesty and professional integrity.
Developing a systematic framework for evaluating whether and how to use AI in specific clinical contexts helps BCBAs make thoughtful decisions rather than defaulting to convenience or anxiety. The following decision-making process can guide practitioners through the key considerations.
The first question is whether the task is administrative or clinical. Administrative tasks like formatting documents, summarizing meeting notes, generating template language for standard procedures, and organizing data are generally lower-risk applications for AI assistance. Clinical tasks like interpreting assessment results, making diagnostic determinations, selecting intervention strategies, and evaluating treatment effectiveness require professional judgment that AI cannot replicate. The distinction is not always clean, as many tasks blend administrative and clinical elements, but it provides a useful starting framework.
The second question concerns confidentiality. Will using AI for this task require submitting any client-identifying information? If yes, is the AI platform HIPAA-compliant with an appropriate business associate agreement in place? If not, can the task be completed after removing all identifying information? If identifying information cannot be removed without compromising the task, AI use may not be appropriate regardless of the efficiency gains.
The third question addresses accuracy verification. Can the output be thoroughly checked against actual client records and professional knowledge? For report writing, this means comparing every AI-generated statement against session notes, assessment data, and treatment records. For literature summaries, this means verifying that cited sources actually exist and that the summary accurately represents their findings. If the output cannot be adequately verified, the risk of submitting inaccurate information is too high.
The fourth question concerns individualization. Does the task require individualized clinical judgment based on specific client data? If yes, AI should serve as a starting point rather than a finished product. The BCBA must substantially modify AI output to reflect the individual client's assessment results, preferences, history, and context. If the final product is not meaningfully different from what AI produced, the individualization standard has not been met.
The fifth question addresses organizational policy. Does your organization have an AI use policy? If yes, does the proposed use comply with it? If no policy exists, this is an opportunity to advocate for one. Using AI in the absence of organizational guidance increases personal liability and may create inconsistencies across the organization.
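For readers who think procedurally, the five questions above can be sketched as a simple checklist. This is an illustrative outline only, not an official BACB tool; the function and field names are hypothetical, and a real decision still requires professional judgment at every step.

```python
from dataclasses import dataclass


@dataclass
class AIUseCheck:
    """Hypothetical checklist mirroring the five questions above."""
    is_administrative: bool         # Q1: administrative rather than clinical task?
    needs_client_data: bool         # Q2: would identifying info be submitted?
    platform_hipaa_compliant: bool  # Q2: HIPAA-compliant platform with a BAA?
    output_verifiable: bool         # Q3: can output be checked against records?
    will_individualize: bool        # Q4: will output be substantially tailored?
    policy_permits: bool            # Q5: does organizational policy allow it?


def ai_use_advisable(check: AIUseCheck) -> bool:
    # Confidentiality is a hard stop: no identifying data on
    # non-compliant platforms, regardless of efficiency gains.
    if check.needs_client_data and not check.platform_hipaa_compliant:
        return False
    # Unverifiable output risks inaccurate reporting.
    if not check.output_verifiable:
        return False
    # Clinical tasks demand substantial individualization.
    if not check.is_administrative and not check.will_individualize:
        return False
    return check.policy_permits


# Example: drafting a de-identified template for a standard procedure
ok = ai_use_advisable(AIUseCheck(
    is_administrative=True, needs_client_data=False,
    platform_hipaa_compliant=False, output_verifiable=True,
    will_individualize=True, policy_permits=True))
print(ok)  # True: no client data is submitted and output will be verified
```

Note that the confidentiality check comes first by design: in the framework above, no amount of efficiency justifies submitting identifiable client data to a non-compliant platform.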
Practical decision-making also involves evaluating the net benefit of AI use for each specific application. If using AI to draft a report saves 30 minutes but requires 25 minutes of verification and editing, the net benefit is modest. If AI generates a literature summary that requires checking every citation for accuracy, the verification process may take longer than doing the search manually. Honest assessment of actual time savings prevents the illusion of efficiency.
Finally, consider the professional development implications. Tasks that AI can perform are often tasks that develop clinical writing, critical thinking, and professional communication skills. Supervisees who rely heavily on AI for report writing may not develop the documentation skills they need for independent practice. Balancing AI efficiency with professional growth requires intentional decisions about when AI assistance supports development and when it replaces it.
Integrating AI into your ABA practice ethically and effectively requires a proactive, structured approach. The following recommendations provide a starting point for practitioners at any stage of AI adoption.
Develop a personal AI use policy if your organization does not have one. Define which tasks you will and will not use AI for, what safeguards you will implement to protect confidentiality, and how you will verify the accuracy of AI-generated content. Write this policy down and review it periodically as the technology and your understanding of it evolve.
Never enter client-identifying information into AI platforms that are not covered by HIPAA-compliant business associate agreements. This includes names, dates of birth, locations, provider names, and any combination of details that could identify a specific individual. When using AI for clinical writing, create de-identified versions of the information you need to process, and re-identify the output only after it has been generated.
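A de-identification step like the one described above can be as simple as replacing known identifiers with placeholders before any text leaves your system. The sketch below is a hypothetical illustration, not a complete PHI de-identification solution: a real workflow must also catch dates of birth, locations, provider names, and indirect identifiers, and every submission should still be reviewed by a human.

```python
import re


def redact(text: str, identifiers: list[str]) -> str:
    """Replace each known client identifier with a numbered placeholder.

    Illustrative only: pattern-based redaction will miss identifiers
    you have not listed, so it supplements rather than replaces a
    careful human review before anything is sent to an AI platform.
    """
    for i, ident in enumerate(identifiers, start=1):
        text = re.sub(re.escape(ident), f"[CLIENT-{i}]", text,
                      flags=re.IGNORECASE)
    return text


# Hypothetical session note with the client's name removed before submission
note = "Jordan engaged in 3 instances of SIB during the session."
print(redact(note, ["Jordan"]))
# [CLIENT-1] engaged in 3 instances of SIB during the session.
```

The placeholder scheme also makes re-identification straightforward: after the AI output is generated, the same mapping can be reversed locally, so the identifying information never leaves your own records.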
Treat all AI output as a first draft that requires professional review. Read every word of AI-generated content against your actual knowledge and records. Check facts, verify that clinical descriptions match actual client presentations, and ensure that recommendations align with assessment data. Never submit AI-generated documentation without this review process.
Be transparent about AI use with supervisors, colleagues, and, when appropriate, clients and families. The norms around AI disclosure are still developing, but erring on the side of transparency protects your professional integrity and supports informed consent.
Stay informed about both AI technology and the evolving ethical guidance from the BACB and other professional organizations. The field is moving quickly, and the guidance available today will continue to develop. Engaging with this evolving landscape is part of your ongoing professional responsibility.
Finally, maintain perspective on what AI can and cannot do. AI is a tool that can improve efficiency when used appropriately, but it cannot replace the clinical judgment, empathic engagement, and ethical reasoning that define competent behavior-analytic practice. Your value as a BCBA lies in your ability to integrate data, context, relationships, and professional expertise in ways that no algorithm can replicate.
Ready to go deeper? This course covers this topic in detail with structured learning objectives and CEU credit.
Integrating Artificial Intelligence and ABA Services: Ethical Considerations for Today's Provider — Rebecca Womack · 1 BACB Ethics CEU · $20
Take This Course →

All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.