This guide draws in part from “AI and ABA: Friends or Foes?” by Cynthia Anderson, PhD (BehaviorLive), and extends it with peer-reviewed research from our library of 27,900+ ABA research articles. Citations, clinical framing, and cross-links below are synthesized by Behaviorist Book Club.
Artificial intelligence has arrived in behavior analysis, not as a distant possibility but as a present reality that practitioners are already encountering in their daily work. Whether through AI-powered note generation tools, machine learning algorithms in assessment platforms, or generative AI chatbots that families consult before calling their BCBA, the technology is reshaping the landscape in which behavior analytic services are delivered.
The distinction between generative AI and other forms of artificial intelligence is fundamental to making informed decisions about its role in clinical practice. Traditional AI systems, including the machine learning models that power speech recognition or image classification, are designed to perform specific tasks based on patterns in training data. Generative AI, exemplified by large language models, produces novel content: text, images, code, and plans that did not exist in their training data. This generative capacity is what makes the technology simultaneously powerful and concerning for behavior analysts.
Powerful because generative AI can draft clinical reports, generate behavior intervention plan templates, summarize research literature, and produce parent-friendly explanations of behavioral concepts in seconds rather than hours. Concerning because the output can be plausible, well-formatted, and completely wrong. Generative AI does not understand behavior analysis. It predicts likely word sequences based on statistical patterns. When those patterns align with accurate behavioral content, the output is useful. When they do not, the output can contain fabricated references, inaccurate procedural descriptions, and recommendations that violate basic principles of the science.
For behavior analysts, the clinical significance of AI lies not in whether to use it but in developing the critical evaluation skills needed to use it responsibly. The practitioner who blindly adopts AI-generated clinical content risks compromising treatment quality. The practitioner who categorically refuses to engage with AI risks falling behind in efficiency while colleagues leverage the technology effectively. The informed practitioner who understands what AI can and cannot do, who treats its output as a first draft requiring expert review rather than a finished product, occupies the most defensible and productive position.
This presentation addresses all three orientations by grounding the discussion in what AI actually is, distinguishing realistic from unrealistic concerns, and identifying specific workflows where AI can enhance a BCBA's effectiveness without compromising clinical integrity.
While public awareness of AI surged with the release of ChatGPT in late 2022, the underlying technology has been developing for decades. The field of artificial intelligence began in the 1950s with ambitious goals about creating machines that could think, reason, and learn. Progress was uneven, marked by periods of enthusiasm and funding followed by disappointments and withdrawal, the so-called AI winters.
The current era of AI capability is powered by a specific technological approach: deep learning using neural networks trained on massive datasets. These models identify statistical patterns in data and use those patterns to make predictions or generate content. The key insight is that these systems do not possess understanding, intention, or clinical judgment. They are extraordinarily sophisticated pattern-matching engines.
For behavior analysts, this distinction is both professionally and philosophically significant. Our science is rooted in the functional analysis of behavior, in understanding the environmental variables that maintain responding. AI systems operate through an entirely different mechanism, one that is statistical rather than functional. When an AI generates a behavior intervention plan, it is producing text that statistically resembles behavior intervention plans in its training data. It is not conducting a functional analysis, considering the client's reinforcement history, or weighing the ethical implications of its recommendations.
The ABA field's engagement with AI is unfolding against a backdrop of broader healthcare adoption. Electronic health record systems increasingly incorporate AI features. Insurance companies are using machine learning to review authorization requests. Research institutions are deploying AI to accelerate literature reviews and data analysis. These developments mean that behavior analysts will encounter AI not only as a tool they choose to adopt but as a feature of the systems within which they work.
Several specific AI applications have gained traction in ABA practice. Note generation tools that convert session data into narrative progress notes save significant clinical time. Literature search tools that summarize research findings can accelerate evidence-based decision-making. Administrative AI that handles scheduling, billing, and communication tasks reduces the non-clinical burden on practitioners. Each of these applications carries its own risk-benefit profile that behavior analysts need to evaluate on a case-by-case basis.
The regulatory landscape surrounding AI in healthcare is evolving rapidly but remains largely undefined for behavioral health specifically. HIPAA considerations apply when AI tools process protected health information. State licensing boards are beginning to issue guidance on AI use in clinical practice. The BACB has not yet issued comprehensive guidance on AI, leaving practitioners to navigate the intersection of AI capabilities and professional ethics largely on their own.
The practical question for most behavior analysts is not whether AI will affect their work but where it can add value without introducing unacceptable risk. Mapping AI capabilities onto the typical BCBA workflow reveals a spectrum from high-value, low-risk applications to high-risk territory that demands extreme caution.
At the low-risk end, AI excels at administrative and organizational tasks. Generating scheduling templates, formatting data displays, drafting routine correspondence, and organizing reference materials are tasks where AI output is easily verified and the consequences of error are minimal. A BCBA who uses AI to generate a first draft of a parent-friendly handout about token economies, then reviews and edits that draft for accuracy and appropriateness, has saved time without compromising quality.
In the middle of the spectrum, AI can assist with clinical documentation. Progress note generation, treatment plan formatting, and data summary preparation are tasks where AI can accelerate workflow but where the output must be carefully reviewed. The risk here is not that AI will generate obviously wrong content but that it will generate subtly inaccurate content that a busy practitioner approves without adequate review. A progress note that slightly mischaracterizes a client's response pattern or an assessment summary that overstates treatment progress could have significant downstream consequences.
At the high-risk end, AI-generated clinical recommendations should be treated with extreme skepticism. When asked to suggest intervention strategies, generate functional hypotheses, or recommend assessment instruments, AI draws on statistical patterns in its training data rather than the specific clinical context of the case. The output may sound authoritative and use appropriate terminology while recommending approaches that are inappropriate for the specific client, contraindicated given the functional analysis, or unsupported by the current evidence base.
For BCBAs working with families, AI introduces an additional consideration: families are using it too. Parents are increasingly turning to AI chatbots for information about their child's diagnosis, treatment options, and their rights within the service system. The information they receive varies wildly in quality. BCBAs who understand what AI tells families about ABA can proactively address misconceptions and position themselves as trusted interpreters of the information families encounter.
Data privacy represents a non-negotiable concern. Any AI tool that processes client information must comply with HIPAA regulations and organizational privacy policies. Many popular AI tools send user input to remote servers for processing, which means that entering client names, identifying information, or clinical details into a general-purpose AI chatbot may constitute a privacy violation. BCBAs must verify the data handling practices of any AI tool before using it with client information, and many organizations are developing policies that specify which AI tools are approved for use with protected health information.
The intersection of AI and the BACB Ethics Code raises questions that the code's authors could not have fully anticipated. Nevertheless, existing ethical principles provide substantial guidance for navigating this emerging territory.
Competence boundaries (Code 1.05) are immediately relevant. A BCBA who uses AI to generate content in areas where they lack expertise, and then presents that content as their own professional work, is effectively practicing outside their competence. The AI may produce a plausible behavioral feeding assessment protocol, but if the BCBA has no training in feeding disorders, they cannot evaluate whether the protocol is appropriate, safe, or complete. Using AI output without the expertise to critically evaluate it represents a competence violation regardless of how polished the output appears.
The responsibility for clinical decisions remains with the practitioner, a principle that becomes critical as AI tools become more sophisticated. When a BCBA signs a treatment plan, they are attesting that the plan reflects their professional judgment. If that plan was generated by AI and approved without thorough clinical review, the BCBA has attached their professional credential to work that may not meet professional standards. No AI tool can assume professional liability, and no amount of AI assistance diminishes the practitioner's responsibility for the quality and appropriateness of services provided.
Transparency obligations arise when considering whether to disclose AI use to clients and families. The ethics code's emphasis on honest, transparent communication (Code 3.01) suggests that when AI plays a meaningful role in generating clinical content, families have a right to know. This does not mean that every spell-checked email requires a disclosure. But when AI generates a significant portion of a treatment plan, assessment report, or clinical recommendation, transparency requires acknowledgment.
Intellectual honesty intersects with AI use in research and professional communication. If a behavior analyst uses AI to draft a conference presentation, literature review, or published article, professional norms require disclosure. Presenting AI-generated work as one's own original scholarship is a form of misrepresentation that undermines the integrity of professional communication.
Supervision and training contexts add another ethical dimension. Supervisors using AI to generate feedback, training materials, or competency assessments should ensure that the AI output accurately reflects the supervisee's performance and the supervisor's professional judgment. A supervisor who relies on AI-generated feedback without personally reviewing the supervisee's work is not providing the individualized oversight that effective supervision requires.
Beyond individual ethics, organizational responsibilities include developing clear AI use policies, providing training on appropriate use, and establishing review processes for AI-generated clinical content. Organizations that adopt AI tools without corresponding governance structures create conditions for systematic ethical risk.
Deciding whether and how to incorporate AI into your practice requires a structured evaluation process rather than blanket adoption or rejection. A useful framework examines each potential AI application along four dimensions: task suitability, output verifiability, privacy implications, and professional responsibility.
Task suitability asks whether the task is one where AI's pattern-matching capabilities align with the work being done. Formatting, summarizing, drafting routine language, and organizing information are tasks where AI's strengths match the demands. Functional analysis, individualized treatment planning, ethical reasoning, and clinical judgment are tasks where AI's limitations make it unsuitable as a primary tool. Many tasks fall between these extremes and require case-by-case evaluation.
Output verifiability asks whether you can reliably determine if the AI's output is correct. For a data summary, you can check the numbers against the raw data. For a literature search result, you can verify that cited studies actually exist and say what the AI claims they say. For a clinical recommendation, verifiability depends on your expertise in the relevant area. If you cannot independently evaluate whether an AI-generated recommendation is appropriate, you should not rely on it.
Privacy implications require explicit assessment before any client-related information is entered into an AI system. Key questions include where the data is processed and stored, whether the AI provider retains input data for model training, whether the tool has a business associate agreement for HIPAA compliance, and whether your organization's privacy policies address AI tool use. When in doubt, anonymize or use synthetic data.
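One practical expression of "when in doubt, anonymize" is a redaction pass that strips obvious identifiers before any text is pasted into a general-purpose AI tool. The sketch below is illustrative only, with hypothetical patterns and placeholder labels of our choosing; real de-identification under HIPAA's Safe Harbor method covers 18 identifier categories and requires far more than three regular expressions.

```python
import re

# Illustrative patterns only -- NOT sufficient for HIPAA de-identification,
# which covers 18 identifier categories (names, dates, geography, IDs, etc.).
PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "NAME": re.compile(r"\b(?:Mr|Mrs|Ms|Dr)\.\s+[A-Z][a-z]+\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Session on 3/14/2024 with Mr. Smith; mother reachable at 555-867-5309."
print(redact(note))
# -> Session on [DATE] with [NAME]; mother reachable at [PHONE].
```

Even with a pass like this, free-text clinical narrative can re-identify a client through context alone, which is why organizational approval and synthetic data remain the safer defaults.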
Professional responsibility clarifies who is accountable for outcomes when AI is involved. The answer is always the practitioner. This means you must review, verify, and take ownership of any AI-generated content before it enters the clinical record or reaches a client. If a task is important enough to affect client care, it is important enough for human expert review of any AI contribution.
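The four dimensions above can be read as a sequential screen: privacy questions are resolved first, then task suitability, then whether expert verification is possible. The sketch below is one hypothetical encoding of that framework, with field and return values we invented for illustration; it is a thinking aid, not a substitute for the judgment the framework describes.

```python
from dataclasses import dataclass

@dataclass
class AITaskScreen:
    """Hypothetical encoding of the four-dimension evaluation framework."""
    suitable_for_pattern_matching: bool    # task suitability
    output_independently_verifiable: bool  # output verifiability
    phi_involved: bool                     # privacy implications
    reviewer_has_domain_expertise: bool    # professional responsibility

    def decision(self) -> str:
        # Privacy is non-negotiable, so it gates everything else.
        if self.phi_involved:
            return "blocked: resolve privacy and BAA questions first"
        if not self.suitable_for_pattern_matching:
            return "avoid: task requires clinical judgment, not pattern matching"
        if not (self.output_independently_verifiable
                and self.reviewer_has_domain_expertise):
            return "avoid: output cannot be expertly verified"
        return "proceed, with human review of all output"

# A low-risk example: drafting a parent handout with no client information.
screen = AITaskScreen(True, True, False, True)
print(screen.decision())  # -> proceed, with human review of all output
```

Note that every path through the screen ends either in avoidance or in human review; no combination of answers delegates final responsibility to the tool.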
Beyond individual decision-making, teams and organizations should develop shared protocols for AI use. These protocols should specify which tools are approved, which tasks AI may assist with, what review processes are required, how AI use is documented, and how policies will be updated as the technology evolves. A shared protocol prevents the inconsistency that arises when individual practitioners make independent judgments about AI appropriateness.
For practitioners evaluating specific AI tools, a trial period with non-clinical content is advisable. Use the tool to generate content about topics where you can easily evaluate accuracy, observe its tendencies and limitations, and develop a calibrated sense of when its output is reliable and when it is not. This calibration process is essential because AI tools vary significantly in quality across different domains and tasks.
The behavior analysts best positioned for the AI era are those who develop two complementary skills: the technical literacy to understand what AI tools can and cannot do, and the clinical judgment to know when each tool is appropriate.
Begin by auditing your current workflow for tasks where AI could save significant time with minimal risk. Clinical note formatting, email drafting, scheduling optimization, and resource compilation are common starting points. For each task, identify a specific AI tool, test it with non-clinical content, evaluate its output quality, and develop a review checklist before deploying it in your clinical workflow.
Establish firm boundaries around high-stakes clinical content. Treatment plans, functional analyses, ethical decisions, and any content that directly shapes a client's intervention should always originate from your professional judgment. AI may assist with formatting or organization of this content, but the substantive clinical reasoning must be yours.
Stay current with your organization's AI policies and advocate for policy development if none exists. As the technology evolves rapidly, organizations that fail to develop governance structures will find themselves managing problems reactively rather than preventing them proactively.
Engage with AI-related professional development. The BACB and state associations will increasingly offer guidance on AI use. Professional listservs and conferences are beginning to address these topics. Your understanding of AI's role in behavior analysis should develop alongside the technology itself, ensuring that your practice remains both efficient and ethically grounded as the tools available to you continue to change.
Ready to go deeper? This course covers this topic in detail with structured learning objectives and CEU credit.
AI and ABA: Friends or Foes? — Cynthia Anderson · 1 BACB Ethics CEU · $10
We extended this guide with research from our library — dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.