By Matt Harrington, BCBA · Behaviorist Book Club · April 2026 · 12 min read
Generative artificial intelligence has rapidly moved from a niche technology to a ubiquitous tool available to scientists, practitioners, and the general public. For behavior analysts, AI applications like ChatGPT and image generators represent both powerful tools and potential ethical minefields. This tutorial, presented by Kaitlynn Gokey, provides an accessible explanation of how generative AI works, the clinical applications and pitfalls for behavior analysts, and a framework for ethical use.
The clinical significance of understanding generative AI extends well beyond technological curiosity. Behavior analysts are already using these tools, whether they acknowledge it or not. AI is being used to draft behavior intervention plans, generate session notes, create training materials, analyze data, and produce visual supports. The question is not whether behavior analysts will use AI but whether they will use it with the same rigor and ethical awareness they bring to other clinical tools.
From a behavior-analytic perspective, generative AI raises fascinating conceptual questions as well. The title of this course references the classic question of whether machines can think, a question that behavior analysts are uniquely positioned to address. Skinner's analysis of verbal behavior provides a framework for understanding AI-generated text not as thought but as behavior shaped by contingencies, in this case the statistical patterns in the training data. This conceptual clarity is valuable because it helps practitioners understand both the capabilities and the limitations of AI-generated output.
The practical significance is immediate. Behavior analysts who understand how AI works can use it more effectively and avoid the common pitfalls that arise from treating AI output as authoritative. AI-generated text can sound confident and professional while being factually incorrect. AI-generated behavior plans can include plausible-sounding but inappropriate recommendations. AI-generated data analyses can produce misleading results. Understanding the mechanisms behind AI output is essential for using these tools safely in clinical practice.
This course also addresses the regulatory and professional landscape around AI use in healthcare and behavioral services. As funding bodies, regulatory agencies, and professional organizations develop policies around AI use, behavior analysts need to be prepared to comply with evolving standards while continuing to leverage AI tools where they add genuine value.
Generative AI refers to artificial intelligence systems that create new content, whether text, images, code, or other outputs, based on patterns learned from large datasets. The most widely known text-based AI systems, such as ChatGPT, are built on large language models (LLMs) that have been trained on vast amounts of text data from the internet, books, and other sources.
The fundamental mechanism of these models is statistical prediction. Given a sequence of text, the model predicts what text is most likely to come next, based on the patterns in its training data. This prediction is refined through a process of training that involves billions of parameters being adjusted to minimize prediction error. The result is a system that can generate fluent, contextually appropriate text on a remarkably wide range of topics.
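The next-word mechanism can be illustrated with a deliberately tiny sketch: a bigram model that counts which word follows which in a small corpus, then "generates" by always choosing the most frequent continuation. This is a drastic simplification (real LLMs use neural networks with billions of parameters and sample from probability distributions), and the three-sentence corpus below is invented for illustration.

```python
from collections import defaultdict, Counter

# A tiny invented corpus standing in for "training data".
corpus = (
    "the client engaged in the target behavior "
    "the client engaged in the replacement behavior "
    "the analyst recorded the target behavior"
).split()

# Count which word follows which: the simplest possible version of
# "predict the next token from statistical patterns in the data".
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def generate(start, length=6):
    """Generate text by repeatedly picking the most likely next word."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```

Even this toy model produces locally fluent phrases ("the client engaged in..."), yet it clearly has no knowledge of clients or behavior; it only reproduces frequency patterns. The same point, scaled up enormously, applies to LLM output.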
From a behavior-analytic perspective, this process is analogous to verbal behavior shaped by a complex set of contingencies, except that the shaping occurs through mathematical optimization rather than environmental reinforcement in the traditional sense. The model does not understand the meaning of its output in the way that humans understand language. It does not have knowledge, beliefs, or intentions. It produces sequences of text that are statistically likely given the input it receives.
This distinction is critical for clinical applications. When a behavior analyst asks an AI to generate a behavior intervention plan, the output is not based on an understanding of the client, the behavior, or the principles of behavior analysis. It is based on statistical patterns in the text that the model has been trained on. If the training data includes well-written, evidence-based behavior plans, the output may be useful as a starting point. If the training data includes poorly written or inaccurate plans, the output may be misleading.
Image generation models, such as DALL-E 2, operate on similar principles but with visual data. These models learn to generate images that match text descriptions by training on large datasets of image-text pairs. For behavior analysts, image generators can be useful for creating visual supports, social stories, and educational materials. However, the same caution applies: the output reflects statistical patterns, not clinical judgment.
The rapid evolution of AI technology means that the capabilities and limitations described in any course will change over time. What remains constant is the need for behavior analysts to approach AI with the same scientific skepticism and ethical awareness they bring to any clinical tool. Understanding the mechanism behind AI output is the foundation for using it responsibly.
The clinical implications of generative AI for behavior analysts span several domains of practice, including documentation, assessment, intervention planning, data analysis, and client communication.
Documentation is perhaps the most common clinical application of AI for behavior analysts. Writing session notes, progress reports, behavior intervention plans, and assessment summaries is time-consuming, and AI can significantly accelerate the process. However, AI-generated documentation carries significant risks. The model may include information that is not accurate for the specific client, use language that does not match the practitioner's observations, or include recommendations that are not supported by the assessment data. Every piece of AI-generated clinical documentation must be carefully reviewed and edited by the behavior analyst before it is finalized.
Assessment support is another area where AI has potential utility. AI can assist in reviewing research literature related to a client's presenting concerns, generating assessment questions, or analyzing patterns in data. However, AI should never be used as a substitute for direct assessment. A functional behavior assessment requires direct observation, environmental analysis, and clinical judgment that AI cannot provide. Using AI to generate a functional analysis summary without conducting the actual assessment would be a serious ethical violation.
Intervention planning can benefit from AI as a brainstorming tool. When a behavior analyst is designing an intervention for a novel or complex case, AI can suggest approaches, identify relevant procedures, or generate intervention components that the practitioner might not have considered. The key is treating AI output as suggestions to be evaluated, not as recommendations to be implemented. The behavior analyst retains full responsibility for the clinical appropriateness of any intervention plan.
Data analysis is an area where AI shows particular promise and particular risk. AI tools can assist with data visualization, statistical analysis, and pattern identification. However, the behavior analyst must understand the analysis being performed and be able to evaluate whether the results are valid. AI can produce convincing-looking graphs and statistics that are based on inappropriate methods or incorrect assumptions. Data-based decision-making requires that the practitioner understand the data, not just the AI's summary of it.
Client communication materials, such as parent handouts, visual schedules, and social stories, can be generated or enhanced with AI assistance. This can improve the quality and accessibility of materials that support generalization and family involvement. However, materials must be reviewed for accuracy, cultural appropriateness, and alignment with the specific intervention plan.
Training and supervision represent emerging applications. AI can be used to create training scenarios, generate quiz questions, or simulate clinical situations for supervisees. These applications have the potential to enhance supervision quality, but they should supplement rather than replace direct supervision interactions.
The overarching clinical implication is that AI is a tool, not a clinician. It can enhance the efficiency and quality of behavior-analytic practice when used appropriately, but it cannot replace the clinical judgment, ethical reasoning, and human relationships that are central to effective service delivery.
The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.
The ethical implications of AI use in behavior-analytic practice are substantial and touch on multiple elements of the BACB Ethics Code.
Code 2.01 (Providing Effective Treatment) requires that behavior analysts use the best available evidence to inform their practice. When AI is used to generate clinical content, the behavior analyst must verify that the content is consistent with the evidence base. AI-generated text can include plausible-sounding statements that are not supported by research or that misrepresent the evidence. Using AI-generated content without verification is inconsistent with the commitment to evidence-based practice.
Code 1.01 (Being Truthful) has direct implications for AI use. If a behavior analyst submits AI-generated documentation, reports, or communications as their own work without disclosure, this raises questions of truthfulness. The evolving professional norm is toward transparency about AI use. Behavior analysts should be prepared to disclose when AI has been used in the preparation of clinical materials and should never represent AI-generated content as the product of their own original analysis when it is not.
Code 3.10 (Documenting Professional Work and Research) requires that documentation be accurate and reflect the services actually provided. AI-generated session notes that include information the behavior analyst did not observe, or that describe procedures that were not actually implemented, violate this requirement. The speed advantage of AI-generated documentation can tempt practitioners to accept output without adequate review, creating significant documentation accuracy risks.
Code 2.18 (Confidentiality) raises critical concerns about data privacy. Many AI tools operate through cloud-based systems that process and potentially store the data they receive. If a behavior analyst inputs client-identifying information into an AI system, that information may be transmitted, stored, or used in ways that violate confidentiality requirements. Behavior analysts must understand the data handling practices of any AI tool they use and ensure that client confidentiality is maintained. This may require using AI tools that offer enhanced privacy protections, de-identifying data before input, or avoiding AI use altogether for certain types of clinical information.
Code 1.04 (Integrity) encompasses the broader obligation to practice with honesty and transparency. The use of AI to produce clinical work product raises questions about professional integrity when the contribution of AI is not disclosed. As the profession develops norms around AI use, behavior analysts should err on the side of transparency.
Code 1.02 (Boundaries of Competence) applies to the use of AI itself as a clinical tool. A behavior analyst who uses AI without understanding its capabilities and limitations is using a tool outside their competence. This course addresses this concern by providing the foundational knowledge needed to use AI responsibly.
The ethical landscape around AI in healthcare is evolving rapidly. Behavior analysts should stay informed about developing guidelines from the BACB, state licensing boards, and regulatory agencies regarding the use of AI in clinical practice.
Deciding when and how to use generative AI in behavior-analytic practice requires a systematic evaluation process. Not every task benefits from AI assistance, and some tasks carry ethical risks that outweigh the efficiency gains.
The first decision point is whether AI use is appropriate for the specific task. Tasks that involve routine language generation, such as drafting initial templates, formatting reports, or creating educational materials, are generally lower risk for AI assistance. Tasks that require clinical judgment, such as functional assessment interpretation, intervention selection, or progress evaluation, are higher risk because AI output may be accepted uncritically when the content sounds authoritative.
The second decision point concerns data privacy. Before using any AI tool, assess whether the task requires inputting client-identifying information. If it does, evaluate the data handling practices of the AI tool. Is the data transmitted to external servers? Is it stored? Is it used to train future models? If client data cannot be adequately protected, do not use the tool for that task. Consider using AI only with de-identified information or using locally hosted AI models that do not transmit data externally.
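As a minimal sketch of what "de-identifying before input" might look like in practice, the snippet below replaces known identifiers and date-like strings before any text reaches an external tool. All names, patterns, and the example note are hypothetical, and real clinical de-identification requires covering far more identifier types (addresses, record numbers, and so on) than this illustration does.

```python
import re

# Hypothetical mapping of client-identifying strings to neutral codes.
# In practice this list must be built per document and must cover every
# identifier present (names, schools, addresses, record numbers, ...).
REPLACEMENTS = {
    "Jordan Smith": "[CLIENT]",
    "Lincoln Elementary": "[SCHOOL]",
}

# Matches common numeric date formats such as 3/14/2026.
DATE_PATTERN = re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b")

def deidentify(text: str) -> str:
    """Replace known identifiers and date-like strings before any AI input."""
    for identifier, code in REPLACEMENTS.items():
        text = text.replace(identifier, code)
    return DATE_PATTERN.sub("[DATE]", text)

note = "Jordan Smith engaged in elopement at Lincoln Elementary on 3/14/2026."
print(deidentify(note))
```

The design point is that redaction happens on the practitioner's side, before transmission, so the privacy guarantee does not depend on the AI vendor's data handling practices.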
The third decision point is verification feasibility. Can you effectively verify the accuracy and appropriateness of the AI output? For tasks where you have deep expertise, verification is straightforward because you can quickly identify errors or inappropriate content. For tasks where your expertise is limited, AI output may contain errors you cannot detect. Use AI only for tasks where you can provide competent quality assurance.
A practical decision-making framework for AI use in clinical practice includes the following steps: (1) identify the task and its clinical significance, (2) assess whether AI can add value in terms of efficiency or quality, (3) evaluate data privacy risks and implement protections, (4) generate AI output with clear, specific prompts, (5) review all output for accuracy, clinical appropriateness, and consistency with the evidence base, (6) edit the output to reflect your professional judgment and the specific clinical context, and (7) document any AI use as appropriate.
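The seven-step framework above can be operationalized as a simple pre-finalization checklist. This is an illustrative sketch, not an official BACB or regulatory instrument; the item wording and function names are invented for the example.

```python
# Hypothetical checklist mirroring the seven-step framework.
CHECKLIST = [
    "Task and its clinical significance identified",
    "AI adds genuine value (efficiency or quality)",
    "Data privacy risks evaluated and protections implemented",
    "Output generated with a clear, specific prompt",
    "Output reviewed for accuracy and consistency with the evidence base",
    "Output edited to reflect professional judgment and the clinical context",
    "AI use documented as appropriate",
]

def cleared_to_finalize(answers: list[bool]) -> bool:
    """AI-assisted work is finalized only when every step is satisfied."""
    return len(answers) == len(CHECKLIST) and all(answers)

print(cleared_to_finalize([True] * 7))          # every step satisfied
print(cleared_to_finalize([True] * 6 + [False]))  # documentation step skipped
```

The all-or-nothing rule reflects the framework's logic: skipping any single step (for example, the review in step 5) is enough to make the output clinically unsafe, regardless of how well the other steps were handled.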
For specific clinical applications, consider these guidelines. For documentation, use AI to generate initial drafts but always review and edit to ensure accuracy. For intervention planning, use AI to brainstorm ideas but evaluate each suggestion against the clinical evidence and the specific client's needs. For data analysis, use AI to assist with visualization or computation but verify the methods and results independently. For research support, use AI to identify potential sources but verify all citations, as AI frequently generates plausible but fabricated references.
The decision to use AI should be revisited regularly as the technology evolves, professional norms develop, and your own competence with AI tools changes. What is appropriate today may need to be reassessed as new capabilities and risks emerge.
Generative AI is already part of the behavior-analytic landscape, and its role will only grow. This course provides the foundational understanding you need to engage with these tools responsibly rather than either embracing them uncritically or avoiding them entirely.
Start by understanding how AI works at a conceptual level. You do not need to understand the mathematics of neural networks, but you do need to understand that AI generates output based on statistical patterns, not understanding. This knowledge will help you maintain appropriate skepticism about AI output and avoid the common error of treating AI-generated content as authoritative.
Develop a personal policy for AI use in your practice. Decide which tasks you will use AI for, what safeguards you will implement, and how you will handle data privacy. Share this policy with your supervisees and colleagues, and be transparent with clients about your use of AI when appropriate.
Invest in learning to use AI tools effectively. The quality of AI output depends significantly on the quality of the input. Learning to write clear, specific prompts that guide AI toward useful output is a skill that improves with practice. Experiment with different approaches to prompting and evaluate the results critically.
Stay informed about the evolving ethical and regulatory landscape. Professional organizations, state licensing boards, and regulatory agencies are developing policies around AI use in healthcare. Monitor these developments and update your practice accordingly.
Finally, maintain the behavior-analytic perspective. AI output is behavior shaped by contingencies, just like any other behavior. Understanding those contingencies (the training data, the optimization process, and the limitations of statistical prediction) empowers you to use the tool wisely rather than being used by it.
Ready to go deeper? This course covers this topic in detail with structured learning objectives and CEU credit.
Sometimes the Question IS Whether Machines Think: An AI Tutorial for Behavior Analysts — Kaitlynn Gokey · 1 BACB Ethics CEU · $20
Take This Course →

All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.