These answers draw in part from “AI and ABA: Friends or Foes?” by Cynthia Anderson, PhD (BehaviorLive), and extend that material with peer-reviewed research from our library of 27,900+ ABA research articles. Clinical framing, BACB ethics code references, and cross-links below are synthesized by Behaviorist Book Club.
Traditional AI systems perform specific tasks based on patterns in training data, such as classifying images, recognizing speech, or predicting numerical outcomes. Generative AI produces novel content, including text, images, and code, that did not exist in its training data. It works by predicting statistically likely sequences based on patterns learned during training. For behavior analysts, this distinction matters because generative AI can produce plausible-sounding clinical content that is factually incorrect, while traditional AI systems tend to produce outputs that are easier to verify against objective criteria.
No. AI operates through statistical pattern matching, not functional analysis or clinical reasoning. It cannot assess the environmental variables maintaining a client's behavior, weigh the ethical implications of intervention choices, or adapt in real time to the dynamic complexities of a clinical session. AI can assist with tasks that are well-defined and verifiable, but the individualized clinical judgment that defines competent behavior analytic practice remains a distinctly human professional responsibility.
Most general-purpose AI tools process user input on remote servers and may retain that input for model training, which creates significant privacy risks when client information is involved. Before using any AI tool with protected health information, verify that the tool has a HIPAA-compliant business associate agreement, that data is encrypted in transit and at rest, that client data is not used for model training, and that your organization's privacy policies address AI tool use. When in doubt, anonymize all identifying information before entering it into any AI system.
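The anonymization step above can be made systematic rather than ad hoc. The sketch below is a minimal, illustrative Python approach: the practitioner supplies a mapping of known identifiers (names, dates of birth, and similar) to neutral labels, and each identifier is replaced with a placeholder token before the text leaves the organization. The `scrub` function and the example note are hypothetical. Note that this only removes identifiers you explicitly list; it does not detect protected health information automatically, so it supplements, rather than replaces, organizational privacy review.

```python
import re

def scrub(text: str, identifiers: dict[str, str]) -> str:
    """Replace each known identifier in `text` with a bracketed placeholder.

    `identifiers` maps a real value (e.g., a client's name) to a label
    such as "CLIENT" or "DOB". Longer values are replaced first so that a
    short identifier contained inside a longer one does not match early
    and corrupt the longer replacement.
    """
    for value in sorted(identifiers, key=len, reverse=True):
        token = f"[{identifiers[value]}]"
        # Case-insensitive literal replacement of the identifier.
        text = re.sub(re.escape(value), token, text, flags=re.IGNORECASE)
    return text

# Hypothetical session note with identifiers the practitioner has listed.
note = "Annabel Smith (DOB 03/14/2016) engaged in elopement during the session."
clean = scrub(note, {"Annabel Smith": "CLIENT", "03/14/2016": "DOB"})
print(clean)
# → [CLIENT] (DOB [DOB]) engaged in elopement during the session.
```

A manual mapping like this is deliberately conservative: it fails loudly (identifiers you forgot to list pass through unchanged), which is why the scrubbed output should still be read before it is pasted into any external tool.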
Realistic concerns include the generation of inaccurate clinical content that practitioners may not catch during review, privacy risks when client data is processed by AI systems, the potential for AI-generated content to replace rather than supplement professional judgment, deskilling effects if practitioners rely on AI for tasks that build clinical competence, and the risk that AI-driven efficiency pressures lead to reduced time for the relational aspects of clinical work that families value. These concerns can be mitigated through informed use, clear policies, and robust review processes.
Unrealistic concerns include the fear that AI will fully automate the BCBA role, that AI will autonomously make treatment decisions without practitioner involvement, or that AI will make behavior analysis obsolete. The science of behavior analysis addresses questions about functional relationships between environment and behavior that AI is not designed to answer. AI may change how BCBAs spend their time by automating routine tasks, but it does not replicate the observational, analytical, and relational competencies that define the profession.
AI can accelerate several time-intensive tasks without compromising clinical quality when appropriate review processes are in place. These include drafting and formatting clinical documentation, summarizing session data into narrative progress notes, generating parent-friendly educational materials, organizing literature search results, drafting routine correspondence, and creating data visualization templates. Each application saves clinical time that can be redirected to direct client services, family collaboration, and supervision.
When AI plays a meaningful role in generating clinical content such as treatment plans, assessment reports, or recommendations, transparency supports informed consent and professional trust. Routine use for administrative tasks like email drafting or scheduling likely does not require specific disclosure. The threshold for disclosure should be whether the AI's involvement materially affects the clinical content that families rely on for decision-making. Developing a clear organizational disclosure policy helps practitioners navigate this judgment consistently.
Effective AI policies should specify which tools are approved for use, which tasks AI may assist with and which it may not, what review processes are required for AI-generated content, how AI use is documented in clinical records, privacy requirements for AI tools that process client information, and how policies will be updated as technology and guidance evolve. Policies should be developed collaboratively with clinical staff, compliance officers, and legal counsel, and should be revisited at least annually given the pace of technological change.
Treat it as you would information from any unverified source: listen respectfully, evaluate the content for accuracy, and provide corrections where needed with clear explanations of why the information is incorrect or incomplete. Rather than dismissing the family's research, use it as an opportunity to strengthen your role as a trusted information source. Understanding what AI commonly tells families about ABA helps you proactively address misconceptions and demonstrate the value of professional expertise.
Start by using AI to generate content in areas where you have deep expertise, then systematically evaluate the output for accuracy. Note the types of errors AI tends to make, such as fabricated citations, overgeneralized recommendations, or confident statements about topics it handles poorly. Practice with progressively more complex content to calibrate your sense of when AI is reliable and when it is not. This calibration is the essential skill that separates responsible AI users from those who are at risk of propagating errors.
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.