Frequently Asked Questions About Ethical AI Use in Behavior Analysis

Source & Transformation

These answers draw in part from “Pause Before Proceeding: Ethical Considerations Around the Clinical Use of Artificial Intelligence (AI) and Machine Learning (ML)” by Rebecca Womack, MS, BCBA, LBA (BehaviorLive), and extend it with peer-reviewed research from our library of 27,900+ ABA research articles. Clinical framing, BACB ethics code references, and cross-links below are synthesized by Behaviorist Book Club.

View the original presentation →
Questions Covered
  1. Why does this course recommend pausing before adopting AI tools rather than embracing them quickly?
  2. How does the BACB Ethics Code (2022) apply to AI even though it does not mention it explicitly?
  3. What specific risks does machine learning pose for individualized treatment in ABA?
  4. What should I look for when evaluating the evidence base of an AI tool for ABA?
  5. How can AI affect the development of clinical judgment in new behavior analysts?
  6. What data privacy questions should I ask an AI vendor before using their tool?
  7. How should I communicate with families about AI use in their child's care?
  8. What is algorithmic bias and how would it affect ABA services?
  9. Can I be held responsible for errors made by an AI system I use?
  10. What ethical recommendations does this course offer for behavior analysts using technology?

1. Why does this course recommend pausing before adopting AI tools rather than embracing them quickly?

The recommendation to pause reflects a precautionary approach to technologies that can directly influence clinical decisions affecting vulnerable populations. AI tools in ABA may affect treatment recommendations, data interpretation, and the therapeutic relationship. Unlike administrative tools that primarily affect efficiency, clinical AI tools create risks including algorithmic bias, erosion of clinical judgment, data privacy exposure, and accountability ambiguity. The BACB Ethics Code (2022) requires competent, evidence-based, and individually appropriate practice. Pausing allows practitioners to evaluate whether specific AI tools meet these requirements before client care is affected. This is not anti-technology; it is pro-responsibility.

2. How does the BACB Ethics Code (2022) apply to AI even though it does not mention it explicitly?

The Ethics Code's standards are written broadly enough to apply to emerging technologies, including AI. Standard 1.05 (Practicing within Scope of Competence) applies to the practitioner's ability to evaluate AI tools. Standard 2.01 (Providing Effective Treatment) requires that treatment decisions be evidence-based, which extends to evaluating the evidence for AI tools. Standard 2.06 (Maintaining Confidentiality) applies to data processed by AI systems. Standard 2.13 (Accuracy in Billing and Reporting) applies to AI-generated documentation. The absence of explicit AI language does not mean the Code is silent; practitioners must apply general standards to specific technological contexts.

3. What specific risks does machine learning pose for individualized treatment in ABA?

Machine learning models identify patterns in aggregate data and generate recommendations based on what has worked for similar cases. This is in fundamental tension with ABA's commitment to individualized, single-subject design approaches. A model trained on thousands of cases may recommend an intervention that was effective on average but is inappropriate for a specific client due to unique cultural factors, co-occurring conditions, environmental variables, or family preferences that the model cannot capture. Practitioners who defer to ML recommendations without applying their individualized knowledge of the client risk providing population-level care to individuals who need personalized attention.

4. What should I look for when evaluating the evidence base of an AI tool for ABA?

Look for peer-reviewed research conducted independently of the tool's developer. Examine whether the tool has been validated with populations similar to your clients in terms of age, diagnosis, cultural background, and functional level. Review the tool's documented error rates and determine whether those rates are acceptable for clinical use. Ask the vendor whether the tool has been tested in real clinical settings or only in controlled research environments. Be skeptical of marketing materials that cite only internal company data or anecdotal testimonials. If no independent validation exists, consider the tool experimental and proceed with heightened caution and additional safeguards.

5. How can AI affect the development of clinical judgment in new behavior analysts?

Clinical judgment develops through the iterative process of analyzing data, forming hypotheses, testing interventions, observing outcomes, and refining understanding over thousands of clinical interactions. When AI tools perform data analysis and generate recommendations, less experienced practitioners may not develop the foundational reasoning skills this process builds. They may accept AI outputs uncritically because they lack the confidence or experience to question them. Over time, this creates practitioners who are dependent on technology for clinical reasoning rather than using technology to enhance reasoning they have independently developed. Supervisors should ensure trainees develop strong independent analysis skills before introducing AI assistance.

6. What data privacy questions should I ask an AI vendor before using their tool?

Essential questions include: Where is client data stored and in what jurisdiction? Who within the vendor organization has access to client data? Is client data used to train or improve the vendor's AI models? What encryption methods are used during data transmission and storage? What happens to client data if I discontinue the service? What happens to client data if the vendor is acquired, goes bankrupt, or changes ownership? Does the vendor have a HIPAA Business Associate Agreement? Has the vendor undergone independent security audits? Can data be deleted upon request? Unsatisfactory answers to any of these questions should give you serious pause about adoption.

7. How should I communicate with families about AI use in their child's care?

Communication should be transparent, accessible, and ongoing. Explain in non-technical language what AI tools you use, what they do, what data they process, and how their outputs influence clinical decisions. Emphasize that a qualified human professional reviews all AI outputs and makes final clinical decisions. Provide families with the opportunity to ask questions and to decline AI involvement in their child's care. Include AI-related information in your informed consent documents and revisit the conversation when new tools are introduced or existing tools change. Families have a right to understand every component of their child's care, including the technological components.

8. What is algorithmic bias and how would it affect ABA services?

Algorithmic bias occurs when an AI system produces systematically unfair outputs for certain demographic groups, typically because the training data reflects existing societal or clinical biases. In ABA, this could manifest as treatment recommendations that are less effective for clients from underrepresented racial or cultural groups, diagnostic predictions that are less accurate for certain populations, or documentation tools that mischaracterize culturally influenced behaviors. Because these biases are embedded in the algorithm rather than expressed explicitly, they can be difficult to detect without deliberate monitoring. The BACB Ethics Code (2022) requirement for cultural responsiveness (1.07) creates an obligation to evaluate tools for equitable performance across populations.
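The "deliberate monitoring" described above can be made concrete. A minimal sketch, assuming you keep records of a tool's predictions alongside observed outcomes, is to compute the tool's accuracy separately for each demographic group and compare. The group labels, record format, and audit data below are hypothetical, for illustration only:

```python
# Hypothetical audit of an AI tool's accuracy across demographic groups.
# Records are (group, tool_prediction, observed_outcome) tuples.
from collections import defaultdict

def accuracy_by_group(records):
    """Return {group: proportion of predictions that matched outcomes}."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical audit data for two client groups.
audit = [
    ("group_a", "effective", "effective"),
    ("group_a", "effective", "effective"),
    ("group_a", "ineffective", "ineffective"),
    ("group_a", "effective", "effective"),
    ("group_b", "effective", "ineffective"),
    ("group_b", "effective", "effective"),
    ("group_b", "effective", "ineffective"),
    ("group_b", "ineffective", "ineffective"),
]

rates = accuracy_by_group(audit)
# A large gap between groups (here 1.00 vs. 0.50) would flag the tool
# for closer review before continued clinical use.
```

This kind of audit does not prove a tool is unbiased, but a persistent accuracy gap across groups is exactly the signal that is otherwise invisible without deliberate monitoring.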

9. Can I be held responsible for errors made by an AI system I use?

Yes. The BACB Ethics Code (2022) places responsibility for clinical decisions and documentation on the behavior analyst, regardless of what tools were used to inform those decisions. If an AI system produces an erroneous treatment recommendation that you implement, you bear professional and ethical responsibility for the outcome. If an AI system generates inaccurate documentation that you sign, you bear responsibility for the inaccuracies. There is no ethical or legal framework that transfers clinical responsibility from a licensed practitioner to a technology tool. This makes careful review of all AI outputs a non-negotiable requirement of ethical practice.

10. What ethical recommendations does this course offer for behavior analysts using technology?

The course recommends several practices: conduct a thorough ethical evaluation before adopting any AI tool, including evidence review, data privacy assessment, and stakeholder impact analysis. Update informed consent to address AI use. Maintain independent clinical skills by regularly analyzing data and making decisions without AI assistance. Monitor AI outputs for signs of bias or inaccuracy. Establish review protocols that ensure human oversight of all AI-generated recommendations and documentation. Seek ongoing professional development in AI literacy. Engage with professional organizations to contribute to the development of field-specific AI guidelines. Above all, maintain the disposition of pausing to think ethically before acting technologically.

Get CEUs on This Topic — Free

The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.

60+ on-demand CEUs (ethics, supervision, general)
New live CEU every Wednesday
Community of 500+ BCBAs
100% free to join
Join The ABA Clubhouse — Free →

Earn CEU Credit on This Topic

Ready to go deeper? This course covers this topic with structured learning objectives and CEU credit.

Pause Before Proceeding: Ethical Considerations Around the Clinical Use of Artificial Intelligence (AI) and Machine Learning (ML) — Rebecca Womack · 1 BACB Ethics CEU · $20

Take This Course →
📚 Browse All 60+ Free CEUs — ethics, supervision & clinical topics in The ABA Clubhouse

Research: Explore the Evidence

We extended these answers with research from our library — dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.

Measurement and Evidence Quality

279 research articles with practitioner takeaways

View Research →

Brief Behavior Assessment and Treatment Matching

252 research articles with practitioner takeaways

View Research →

Brief Functional Analysis Methods

239 research articles with practitioner takeaways

View Research →

Related Topics

CEU Course: Pause Before Proceeding: Ethical Considerations Around the Clinical Use of Artificial Intelligence (AI) and Machine Learning (ML)

1 BACB Ethics CEU · $20 · BehaviorLive

Guide: Pause Before Proceeding: Ethical Considerations Around the Clinical Use of Artificial Intelligence (AI) and Machine Learning (ML) — What Every BCBA Needs to Know

Research-backed educational guide with practice recommendations

Decision Guide: Comparing Approaches

Side-by-side comparison with clinical decision framework

CEU Buddy

No scramble. No surprises.

You earn CEUs from a dozen different places. Upload any certificate — from here, your employer, conferences, wherever — and always know exactly where you stand. Learning, Ethics, Supervision, all handled.

Upload a certificate, everything else is automatic
Works with any ACE provider
$7/mo to protect $1,000+ in earned CEUs
Try It Free for 30 Days →

No credit card required. Cancel anytime.

Clinical Disclaimer

All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.
