Frequently Asked Questions About AI in Ethical ABA Practice

Source & Transformation

These answers draw in part from “Can Artificial Intelligence be used in the ethical application of Applied Behavior Analysis” by Laurie Bonavita, PhD, LABA, BCBA-D (BehaviorLive), and extend it with peer-reviewed research from our library of 27,900+ ABA research articles. Clinical framing, BACB ethics code references, and cross-links below are synthesized by Behaviorist Book Club.

View the original presentation →
Questions Covered
  1. What types of AI applications are currently being used or developed for ABA therapy?
  2. What ethical safeguards should be in place before an ABA organization adopts AI tools?
  3. How should behavior analysts approach informed consent when AI is used in clinical services?
  4. What is algorithmic bias and why is it particularly concerning for ABA services?
  5. Can AI replace the clinical judgment of a BCBA in assessment and treatment planning?
  6. What are the data privacy risks of AI tools in ABA and how should they be managed?
  7. How should behavior analysts maintain their clinical competence in the age of AI?
  8. What questions should behavior analysts ask AI technology vendors before adoption?
  9. How might AI change the role of the behavior analyst in the future?
  10. What is the behavior analyst's responsibility when an AI system produces a recommendation they disagree with?

1. What types of AI applications are currently being used or developed for ABA therapy?

Current AI applications in ABA span several categories: data analysis tools that use machine learning to identify patterns in behavioral data and predict outcomes; natural language processing applications that analyze session notes and generate documentation; computer vision systems that assess treatment integrity through video analysis; decision-support tools that suggest treatment modifications based on outcome databases; administrative AI for scheduling, authorization management, and reporting; and chatbot-based parent training support systems. The maturity and validation of these applications vary significantly: administrative tools are the most established, while ABA-specific clinical decision-support tools are still in early development.

2. What ethical safeguards should be in place before an ABA organization adopts AI tools?

Essential safeguards include: validated effectiveness evidence showing the AI tool performs accurately for the specific populations and settings where it will be used; clear informed consent procedures that explain AI involvement to clients and families in understandable terms; robust data governance policies addressing ownership, storage, access, sharing, and deletion of client data; human oversight mechanisms ensuring qualified clinicians review AI outputs before they influence clinical decisions; bias assessment processes evaluating whether the AI produces equitable results across demographic groups; error reporting and correction systems; staff training on appropriate use and limitations of the tool; and a clear accountability framework establishing who is responsible when AI-assisted decisions result in adverse outcomes.

3. How should behavior analysts approach informed consent when AI is used in clinical services?

Informed consent for AI-assisted services should include, at minimum: a plain-language explanation of what AI tools are being used and what they do; how AI influences clinical decisions, including the distinction between AI as a tool and AI as a decision-maker; what data is collected and processed by the AI system; who has access to that data, including the technology company; how data is stored, protected, and for how long; the client's right to refuse AI involvement if feasible; and the limitations and potential risks of AI in their specific services. This information should be provided in addition to standard informed consent elements and updated when AI tools change. Consent is an ongoing process (Code 2.03), not a one-time form signing.

4. What is algorithmic bias and why is it particularly concerning for ABA services?

Algorithmic bias occurs when AI systems produce systematically different or unfair outcomes for different groups, typically because the data used to train the system reflects existing disparities. This is particularly concerning for ABA because: the field serves diverse populations including racial and ethnic minorities, individuals with various disabilities, and families across socioeconomic strata; training datasets may over-represent certain demographic groups; outcome data may reflect disparities in service quality or access rather than genuine differences in response to treatment; and biased AI recommendations could perpetuate or amplify existing inequities in service delivery. Behavior analysts must evaluate AI tools for bias before implementation and monitor for differential outcomes across demographic groups.
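The monitoring step described above can be made concrete. As an illustrative sketch only (the field names, sample data, and the 0.10 disparity threshold are hypothetical, and a real bias audit would use larger samples and appropriate statistical tests rather than a single rate comparison), a minimal check for differential outcomes across demographic groups might look like this:

```python
# Illustrative sketch: compare goal-attainment rates across demographic
# groups and flag pairs whose rates differ by more than a set threshold.
# Field names ("group", "goal_met") and the threshold are hypothetical.

def outcome_rates_by_group(records):
    """Return the proportion of successful outcomes for each group."""
    totals, successes = {}, {}
    for r in records:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        successes[g] = successes.get(g, 0) + (1 if r["goal_met"] else 0)
    return {g: successes[g] / totals[g] for g in totals}

def flag_disparities(rates, threshold=0.10):
    """List group pairs whose outcome rates differ by more than threshold."""
    groups = sorted(rates)
    return [(a, b, round(abs(rates[a] - rates[b]), 2))
            for i, a in enumerate(groups)
            for b in groups[i + 1:]
            if abs(rates[a] - rates[b]) > threshold]
```

A flagged pair is a prompt for investigation, not a verdict: the disparity may reflect differences in service access or data quality rather than the algorithm itself, which is exactly why human review is required.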

5. Can AI replace the clinical judgment of a BCBA in assessment and treatment planning?

No. AI can augment clinical judgment by processing data more efficiently, identifying patterns that might be missed, and providing decision support, but it cannot replace the individualized, contextual clinical reasoning that behavior analytic practice requires. Functional behavior assessment involves understanding the unique relationships between an individual's behavior and their specific environment, including cultural context, family values, and setting-specific constraints that AI systems cannot fully capture. Treatment planning requires balancing clinical data with client preferences, family priorities, and practical feasibility in ways that require human judgment. The BACB Ethics Code holds individual practitioners responsible for clinical decisions (Code 2.01), and that responsibility cannot be delegated to an algorithm.

6. What are the data privacy risks of AI tools in ABA and how should they be managed?

AI tools in ABA process some of the most sensitive data in healthcare: detailed behavioral records, video recordings of therapy sessions, information about challenging behavior, and family dynamics. Privacy risks include data breaches exposing sensitive behavioral information, unauthorized access by technology company employees, use of client data for purposes beyond clinical care (product development, research, marketing), inadequate de-identification that allows re-identification, and data persistence after the clinical relationship ends. These risks should be managed through contractual data governance agreements with AI vendors, encryption and access controls, minimization of data collection to what is clinically necessary, regular security audits, clear data retention and deletion policies, and compliance with applicable privacy regulations.
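One of the management strategies above, data minimization, can be sketched in a few lines. This is an illustrative example only: the approved field list is hypothetical, and an actual minimization policy must come from your organization's data governance agreement, not from code defaults.

```python
# Illustrative sketch: strip any field not on the clinically-necessary
# list before a record is exported to an AI vendor. The field names in
# CLINICALLY_NECESSARY are hypothetical examples.

CLINICALLY_NECESSARY = {"session_date", "target_behavior",
                        "frequency", "duration_min"}

def minimize_record(record, allowed=CLINICALLY_NECESSARY):
    """Return a copy of the record containing only approved fields."""
    return {k: v for k, v in record.items() if k in allowed}
```

The design choice here is an allow-list rather than a block-list: new fields added to a record are excluded by default until someone deliberately approves them, which fails safe when data schemas change.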

7. How should behavior analysts maintain their clinical competence in the age of AI?

Maintaining clinical competence alongside AI integration requires deliberate effort to prevent skill atrophy. Continue practicing independent data analysis even when AI tools are available, so that your visual analysis and clinical reasoning skills remain sharp. Regularly evaluate AI outputs critically rather than accepting them automatically. Seek cases and situations that challenge your clinical reasoning beyond what AI can support. Stay current with both the behavior analytic literature and the AI technology literature relevant to your practice. Develop sufficient understanding of AI systems to know their limitations and failure modes. The Ethics Code (Code 1.06) requires competence that includes the ability to evaluate and appropriately use or override technological tools.

8. What questions should behavior analysts ask AI technology vendors before adoption?

Critical questions include: What evidence demonstrates effectiveness with ABA populations specifically? What data do you collect and how is it stored, protected, and used? Who owns the data? Can it be shared with or sold to third parties? How does the algorithm work and can you explain its outputs in clinically meaningful terms? What are the known limitations and failure modes? Has the system been evaluated for bias across demographic groups? What happens to data when the contract ends? How is the system maintained and updated? What is the liability framework when AI-assisted decisions result in adverse outcomes? How does the system comply with HIPAA and state privacy regulations? Behavior analysts should be as rigorous in evaluating technology vendors as they are in evaluating intervention evidence.

9. How might AI change the role of the behavior analyst in the future?

AI will likely shift the behavior analyst's role from routine data processing and documentation toward higher-level clinical reasoning, ethical decision-making, and relationship management. Tasks like data entry, basic trend analysis, and documentation may become increasingly automated, freeing practitioners for the complex clinical work that requires human judgment. Supervision may evolve to include AI-assisted monitoring and flagging alongside human mentorship and clinical development. The core competencies of functional analysis, individualized treatment design, ethical reasoning, and therapeutic relationship management will likely become more rather than less important as AI handles routine functions. Practitioners who develop strong clinical reasoning skills alongside technological literacy will be best positioned for this evolving landscape.

10. What is the behavior analyst's responsibility when an AI system produces a recommendation they disagree with?

The behavior analyst's clinical judgment takes precedence over AI recommendations. When you disagree with an AI output, you should: document your clinical reasoning for the alternative decision, evaluate whether the disagreement reveals a limitation of the AI system that should be reported, consider whether additional data collection could resolve the discrepancy, and consult with colleagues if the situation is complex. The Ethics Code holds individual practitioners responsible for clinical decisions (Code 2.01), not the AI system. Using an AI recommendation that contradicts your clinical judgment because it is easier than overriding the system does not absolve you of responsibility for the outcome. AI is a tool to inform, not an authority to defer to.

FREE CEUs

Get CEUs on This Topic — Free

The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.

60+ on-demand CEUs (ethics, supervision, general)
New live CEU every Wednesday
Community of 500+ BCBAs
100% free to join
Join The ABA Clubhouse — Free →

Earn CEU Credit on This Topic

Ready to go deeper? This course covers this topic with structured learning objectives and CEU credit.

Can Artificial Intelligence be used in the ethical application of Applied Behavior Analysis — Laurie Bonavita · 1 BACB Ethics CEU · $20

Take This Course →
📚 Browse All 60+ Free CEUs — ethics, supervision & clinical topics in The ABA Clubhouse

Research: Explore the Evidence

We extended these answers with research from our library — dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.

Measurement and Evidence Quality

279 research articles with practitioner takeaways

View Research →

Symptom Screening and Profile Matching

258 research articles with practitioner takeaways

View Research →

Brief Functional Analysis Methods

239 research articles with practitioner takeaways

View Research →

Related Topics

CEU Course: Can Artificial Intelligence be used in the ethical application of Applied Behavior Analysis

1 BACB Ethics CEU · $20 · BehaviorLive

Guide: Can Artificial Intelligence be used in the ethical application of Applied Behavior Analysis — What Every BCBA Needs to Know

Research-backed educational guide with practice recommendations

Decision Guide: Comparing Approaches

Side-by-side comparison with clinical decision framework

CEU Buddy

No scramble. No surprises.

You earn CEUs from a dozen different places. Upload any certificate — from here, your employer, conferences, wherever — and always know exactly where you stand. Learning, Ethics, Supervision, all handled.

Upload a certificate; everything else is automatic
Works with any ACE provider
$7/mo to protect $1,000+ in earned CEUs
Try It Free for 30 Days →

No credit card required. Cancel anytime.

Clinical Disclaimer

All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.
