Pause Before Proceeding: Ethics of AI and Machine Learning in Behavior Analytic Practice

Source & Transformation

This guide draws in part from “Pause Before Proceeding: Ethical Considerations Around the Clinical Use of Artificial Intelligence (AI) and Machine Learning (ML)” by Rebecca Womack, MS, BCBA, LBA (BehaviorLive), and extends it with peer-reviewed research from our library of 27,900+ ABA research articles. Citations, clinical framing, and cross-links below are synthesized by Behaviorist Book Club.

View the original presentation →
In This Guide
  1. Overview & Clinical Significance
  2. Background & Context
  3. Clinical Implications
  4. Ethical Considerations
  5. Assessment & Decision-Making
  6. What This Means for Your Practice

Overview & Clinical Significance

The rapid advancement of artificial intelligence and machine learning technologies presents behavior analysts with a challenge that the field has not previously faced at this scale: how to integrate powerful new tools into clinical practice while maintaining the ethical standards that define the profession. This course, presented by Rebecca Womack, takes a deliberately cautious approach, urging practitioners to pause and carefully examine the ethical terrain before adopting AI and ML technologies in their clinical work.

The clinical significance of this cautious approach cannot be overstated. Unlike many technological innovations that affect administrative processes, AI and ML tools have the potential to directly influence clinical decision-making, alter the therapeutic relationship, and reshape how behavior analysts conceptualize and respond to client behavior. When an AI system analyzes session data and recommends a treatment modification, it is not merely saving time. It is inserting an algorithmic intermediary between the practitioner's clinical observations and their clinical actions.

The BACB Ethics Code (2022) does not explicitly mention artificial intelligence or machine learning, but its principles and standards provide a comprehensive ethical framework for evaluating these technologies. Standard 1.01 (Being Truthful) requires honesty about the capabilities and limitations of the tools we use. Standard 1.05 (Practicing within Scope of Competence) raises questions about whether behavior analysts who lack training in computer science can competently evaluate and implement AI tools. Standard 2.01 (Providing Effective Treatment) requires that treatment decisions be informed by the best available evidence, which means evaluating whether AI tools have been validated for the specific populations and clinical contexts in which they are being used.

The phrase "pause before proceeding" captures a disposition that is particularly important in a field driven by data and efficiency. The enthusiasm for tools that promise faster data analysis, automated documentation, and enhanced treatment outcomes can lead practitioners to adopt technologies before fully understanding their implications. This course argues that the ethical practitioner's first response to a new AI tool should not be excitement about its capabilities but careful examination of its ethical implications for all stakeholders.

Stakeholders in this context include not only clients and their families but also technicians, supervisors, organizations, insurance companies, and the broader field of behavior analysis. Each stakeholder group is affected differently by AI integration, and the ethical behavior analyst must consider all of these perspectives before proceeding.


Background & Context

The integration of AI and ML into healthcare is not new. Medical imaging, genomics, drug discovery, and clinical decision support systems have been using these technologies for years, with varying degrees of success and controversy. What is relatively new is the availability of AI tools specifically designed for or marketed to behavioral health providers, including ABA practitioners.

The healthcare AI landscape provides important cautionary lessons for behavior analysis. Multiple studies have documented cases where AI systems in healthcare perpetuated racial bias, produced inaccurate predictions for underrepresented populations, and created a false sense of certainty that led clinicians to override their own clinical judgment inappropriately. These are not hypothetical risks. They are documented outcomes that have affected real patients and should inform how behavior analysts approach AI adoption.

Machine learning, a subset of AI, is particularly relevant to behavior analysis because it excels at pattern recognition in large datasets, which is precisely what behavioral data analysis involves. ML algorithms can identify correlations between environmental variables and behavioral outcomes, predict skill acquisition trajectories, and detect subtle changes in behavioral patterns that might escape human notice. These capabilities are genuinely useful, but they come with significant limitations.

ML models are only as good as the data they are trained on. If the training data is biased, incomplete, or unrepresentative of the population being served, the model's outputs will reflect those limitations. In ABA, where clients span an enormous range of diagnoses, ages, cultural backgrounds, and functional levels, ensuring that AI tools have been validated across this diversity is a substantial challenge that most AI vendors have not adequately addressed.
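The point that a model inherits the limitations of its training data can be made concrete with a deliberately simplified sketch. This is a toy illustration only, not any real clinical tool; the profile names, function labels, and data below are all hypothetical. A naive predictor trained mostly on one client profile simply applies that profile's dominant pattern to everyone:

```python
# Toy illustration only -- NOT a real clinical model. All profiles,
# labels, and data below are hypothetical, invented for this sketch.
from collections import Counter

def train_majority_predictor(training_data):
    """'Train' by memorizing the most common outcome label overall."""
    labels = [outcome for _, outcome in training_data]
    return Counter(labels).most_common(1)[0][0]

# Unrepresentative sample: three records from "profile_a" clients,
# only one from "profile_b".
training_data = [
    ("profile_a", "attention"),
    ("profile_a", "attention"),
    ("profile_a", "attention"),
    ("profile_b", "escape"),  # underrepresented profile
]

model_prediction = train_majority_predictor(training_data)
# The model now predicts "attention" for every new client -- including
# profile_b clients whose one record showed escape-maintained behavior.
```

Real ML systems are far more sophisticated than this, but the failure mode is the same in kind: whatever the training sample under-represents, the model will tend to get wrong, and nothing in the output itself signals the error.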

The regulatory environment for AI in healthcare is evolving but remains fragmented. The FDA has begun to regulate certain AI tools as medical devices, but the criteria for regulation are inconsistent, and many tools used in ABA practice fall outside current regulatory frameworks. State licensing boards have been largely silent on AI use. The BACB has not issued specific guidance. This regulatory gap means that practitioners are largely on their own in determining how to use AI ethically.

Rebecca Womack's approach to this topic reflects a philosophy of informed caution. Rather than either embracing or rejecting AI wholesale, the course encourages practitioners to develop a systematic framework for evaluating AI tools against the ethical principles they already hold. This is a mature, professional response to a technology that is neither inherently beneficial nor inherently harmful but requires careful consideration to implement responsibly.

Clinical Implications

The clinical implications of AI and ML in behavior analysis extend across the entire service delivery spectrum, and each area requires careful ethical consideration. This section examines the implications for clinical judgment development, data interpretation, treatment individualization, and the therapeutic relationship.

Clinical judgment is perhaps the competency most at risk from premature or poorly managed AI integration. Clinical judgment in behavior analysis is developed through years of supervised experience analyzing behavioral data, formulating hypotheses, testing those hypotheses through intervention, observing outcomes, and refining one's understanding. This iterative process produces practitioners who can make nuanced decisions in ambiguous situations. When AI systems short-circuit this process by providing pre-analyzed data and pre-formed recommendations, less experienced practitioners may not develop the foundational reasoning skills they need.

Consider a new BCBA who has always used an AI tool to analyze functional assessment data. When the tool identifies attention as the maintaining function, the BCBA implements an attention-based intervention. But what if the tool's analysis is wrong? A practitioner who developed their clinical judgment through manual data analysis would have the skills to question the result, examine the data independently, and consider alternative interpretations. A practitioner who has relied on AI from the beginning may lack the confidence or competence to challenge the algorithm.

Data interpretation is another area of concern. AI tools can generate impressive visualizations and statistical analyses, but the clinical meaning of those analyses depends on the practitioner's ability to contextualize them. A graph showing a downward trend in challenging behavior may look encouraging, but the practitioner needs to consider whether the measurement system is capturing all relevant behaviors, whether the environment has changed in ways that affect the data, and whether the reduction is clinically meaningful or merely statistically detectable. AI tools cannot make these contextual judgments.

Treatment individualization is a core principle of ABA and a requirement of the Ethics Code (2022). AI recommendations are inherently based on patterns identified across multiple cases, which means they reflect what has worked for similar clients on average. But ABA's commitment to single-subject design and individualized treatment plans exists precisely because individual clients often do not behave like the average. Practitioners must be prepared to override AI recommendations when their knowledge of the individual client suggests a different course of action.

The therapeutic relationship may also be affected by AI integration. When a parent learns that an AI system rather than a human clinician analyzed their child's data and generated treatment recommendations, their trust in the service may be enhanced, diminished, or complicated depending on their attitudes toward technology, their understanding of how the AI works, and how the practitioner communicates about the AI's role. Maintaining transparency about AI involvement is both an ethical obligation and a relationship management necessity.


Ethical Considerations

The ethical considerations surrounding AI and ML in behavior analysis are numerous and interconnected. This section examines the most critical ethical dimensions through the lens of the BACB Ethics Code (2022).

Competence boundaries are a fundamental concern. Standard 1.05 requires behavior analysts to practice within their scope of competence. Most behavior analysts have received no formal training in computer science, machine learning, statistics beyond basic descriptive methods, or algorithmic bias. Using an AI tool without understanding how it generates its outputs may constitute practicing beyond one's competence. This creates an obligation for professional development in AI literacy and for honest self-assessment of whether you understand the tools you are using well enough to evaluate their outputs critically.

Informed consent requires updating. Standard 2.04 requires that clients and stakeholders receive adequate information about the nature of services. When AI tools are involved in any aspect of assessment, treatment planning, data analysis, or documentation, clients have a right to know. This means explaining in accessible terms what AI tools are being used, what data they process, how their outputs influence clinical decisions, and what safeguards are in place to ensure accuracy. Consent should include the option for clients to decline AI involvement in their care.

Data privacy and confidentiality present heightened risks. Standard 2.06 requires protection of client information. Many AI tools require that client data be transmitted to external servers for processing. This creates exposure that does not exist with traditional data analysis methods. Practitioners must understand where their data goes, how it is stored, who can access it, whether it is used to train the vendor's models, and what happens to it when the vendor relationship ends. Many practitioners lack the technical knowledge to evaluate these factors, which creates a competence gap that must be addressed.

Algorithmic bias has implications for equity and fairness. If an AI tool produces systematically different recommendations for clients from different demographic backgrounds, and the practitioner implements those recommendations without recognizing the bias, they may be contributing to disparities in treatment quality. Standard 1.07 (Cultural Responsiveness and Diversity) requires behavior analysts to be attentive to how their practices affect diverse populations. This extends to the tools they use.

Accountability and responsibility cannot be delegated to technology. Regardless of what an AI system recommends, the behavior analyst who implements the recommendation bears full professional and ethical responsibility for the outcome. This principle must be clearly understood and consistently applied. If an AI-recommended intervention causes harm, the defense that "the AI told me to do it" is both ethically and legally insufficient.

Transparency in reporting is also at stake. Standard 2.13 requires accuracy in billing and reporting. If AI-generated documentation contains errors, and the behavior analyst signs off on those errors without catching them, the behavior analyst is responsible. This requires that every AI-generated document be reviewed carefully before it becomes part of the clinical or billing record.

Assessment & Decision-Making

This course recommends a structured ethical decision-making framework for evaluating AI and ML tools before adoption. This framework should be applied to each specific tool in each specific clinical context, because the ethical considerations may differ significantly depending on the tool's capabilities, the population being served, and the clinical setting.

Step one is to identify all stakeholders who will be affected by the AI tool's use. This includes clients and their families, direct care staff, supervisors, the organization, insurance companies, and the broader field. For each stakeholder group, consider both the potential benefits and the potential risks of AI adoption.

Step two is to map the AI tool's functions onto specific provisions of the BACB Ethics Code (2022). Which ethical principles are implicated by this tool's use? Standard 2.01 is relevant if the tool influences treatment decisions. Standard 2.06 is relevant if the tool processes confidential data. Standard 1.05 is relevant if the tool requires competencies the practitioner does not possess. This mapping exercise ensures that the ethical analysis is comprehensive rather than focused on a single concern.

Step three is to evaluate the evidence base. Has the tool been validated in peer-reviewed research? Has it been tested with populations similar to your clients? Are the validation studies independent of the tool's developer? What are the tool's documented error rates, and are those error rates acceptable for clinical use? If the evidence base is thin or nonexistent, the ethical risk of adoption is higher.

Step four is to conduct a data governance assessment. Where does client data go? Who has access? How is it protected? Is it used to train models? What happens if the vendor is acquired, goes bankrupt, or changes its terms of service? Can you ensure compliance with HIPAA and applicable state privacy laws? These questions must be answered before any client data enters an AI system.

Step five is to develop safeguard protocols. If you decide to adopt an AI tool, what checks will you put in place to ensure that its outputs are reviewed before influencing clinical decisions? How will you monitor for algorithmic bias? How will you maintain your independent clinical judgment? How will you update informed consent? These protocols should be documented and consistently followed.

Step six is ongoing evaluation. AI tools change over time as vendors update their algorithms and incorporate new training data. A tool that was appropriate six months ago may no longer be. Establish a regular schedule for reassessing the ethical appropriateness of any AI tools you use, and be prepared to discontinue tools that no longer meet your standards.
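The six steps above can be sketched as a simple pre-adoption gate. This is an illustrative sketch only; the step identifiers are my own shorthand for the framework above, not terminology from the course:

```python
# Illustrative sketch of the six-step evaluation as a pre-adoption gate.
# Step identifiers are this sketch's shorthand, not official BACB terms.

EVALUATION_STEPS = [
    "identify_stakeholders",        # step 1
    "map_to_ethics_code",           # step 2
    "evaluate_evidence_base",       # step 3
    "assess_data_governance",       # step 4
    "develop_safeguard_protocols",  # step 5
    "schedule_ongoing_evaluation",  # step 6
]

def ready_to_adopt(completed_steps):
    """Adoption is defensible only when no step remains incomplete."""
    missing = [step for step in EVALUATION_STEPS if step not in completed_steps]
    return (len(missing) == 0, missing)

ok, missing = ready_to_adopt({"identify_stakeholders", "evaluate_evidence_base"})
# ok is False here; four steps remain before adoption would be defensible.
```

Because step six is ongoing evaluation, the same gate should be rerun on a regular schedule, not just once before adoption: a tool that passed last quarter may fail after a vendor update.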

What This Means for Your Practice

The practical takeaway from this course is not that you should avoid AI entirely but that you should approach it with the same rigor you apply to any clinical decision. The "pause" in "pause before proceeding" is the ethical pause: the deliberate slowing down that allows you to think carefully before acting.

Before adopting any AI tool, complete the six-step ethical evaluation described above. Do not skip steps because a tool seems obviously beneficial or because colleagues are already using it. The ethical evaluation must be conducted for each tool in your specific clinical context.

Update your informed consent process now, even if you are not currently using AI tools. Technology adoption in healthcare accelerates rapidly, and having consent language that addresses technology use positions you to make ethical transitions when the time comes.

Invest in your own AI literacy. You do not need to become a data scientist, but you need to understand enough about how ML works to evaluate vendor claims, recognize potential biases, and make informed decisions about tool adoption. This is an emerging competency that all behavior analysts will need.

Maintain your independent clinical skills. Continue to analyze data manually, even if AI tools offer faster alternatives. Practice making clinical judgments without algorithmic assistance. These skills are your professional insurance against tool failures, vendor shutdowns, and algorithmic errors.

Finally, engage with your professional community on these issues. The field needs thoughtful practitioners who can contribute to the development of professional guidelines for AI use. Share your experiences, raise concerns, and participate in the conversations that will shape how behavior analysis navigates this technological transition.



Clinical Disclaimer

All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.
