The Ethics of AI Inaction in ABA: Frequently Asked Questions for BCBAs

Source & Transformation

These answers draw in part from “The Ethics of Inaction: Why NOT Using AI Could Violate Our Ethics Code” by Adam Ventura, PhD, BCBA (BehaviorLive), and extend it with peer-reviewed research from our library of 27,900+ ABA research articles. The clinical framing, BACB ethics code references, and cross-links below are synthesized by Behaviorist Book Club.

View the original presentation →
Questions Covered
  1. How could NOT using AI be considered an ethical violation?
  2. Does this mean every BCBA must use AI in their practice?
  3. What specific AI applications are most relevant to ABA practice right now?
  4. How do you balance the risks of AI adoption against the risks of AI avoidance?
  5. What does the BACB Ethics Code say about staying current with technology?
  6. How can BCBAs develop AI competence without becoming technology experts?
  7. What about practitioners who work in settings where AI use is prohibited by organizational policy?
  8. Could AI eventually replace behavior analysts?
  9. How should the field evaluate AI tools for use in ABA practice?
  10. What is the strongest counterargument to the 'ethics of inaction' position?

1. How could NOT using AI be considered an ethical violation?

The argument rests on the Ethics Code's affirmative obligations — the requirements to benefit those we serve (Core Principle 1), to practice using the best available evidence (Code Section 2.01), and to maintain competence with developments relevant to professional practice (Core Principle 4). If specific AI applications can demonstrably improve service quality, efficiency, or access, then a categorical refusal to consider them may fall short of these affirmative obligations. The parallel to other clinical decisions is instructive. If a new evidence-based assessment tool or intervention strategy emerged and a practitioner refused to consider it without evaluation, that refusal would be recognized as potentially problematic. The same logic applies to AI tools. The ethical requirement is not to adopt every tool but to evaluate each one thoughtfully and adopt those that genuinely serve client interests.

2. Does this mean every BCBA must use AI in their practice?

No. The course argues for informed engagement, not mandatory adoption. A BCBA who evaluates specific AI tools, determines that they do not meet acceptable standards for their practice context, and documents that reasoning has fulfilled their ethical obligation. The concern is with practitioners who refuse to consider AI at all — who make blanket decisions based on discomfort or unfamiliarity rather than thoughtful evaluation. The diversity of ABA practice settings and client populations means that AI tools will be more relevant in some contexts than others. A solo practitioner serving a small caseload may find that AI tools add little value beyond what their current systems provide. A large organization managing hundreds of clients may find that AI-assisted documentation, data analysis, and workflow management produce significant improvements in efficiency and quality. The evaluation should be context-specific.

3. What specific AI applications are most relevant to ABA practice right now?

The most immediately relevant applications include documentation assistance (drafting session notes, progress reports, and treatment plan language that practitioners review and finalize), data visualization and trend analysis (supplementing visual analysis with algorithmic pattern detection), administrative efficiency (scheduling optimization, authorization tracking, and billing support), research synthesis (summarizing relevant literature to support evidence-based decision-making), and training content development (creating staff training materials, parent education resources, and social stories). These applications share a common characteristic: they handle tasks that are computationally intensive, repetitive, or organizational in nature, freeing practitioners to focus on the clinical judgment, interpersonal engagement, and ethical decision-making that require human expertise. The key is that each application is implemented with appropriate oversight and quality controls.

4. How do you balance the risks of AI adoption against the risks of AI avoidance?

The balance is achieved through a risk-stratified approach that matches the level of oversight and caution to the stakes involved. Low-risk applications (generating template language, brainstorming activity ideas, summarizing non-client-specific information) can be adopted with minimal safeguards. Moderate-risk applications (drafting client-specific documentation, analyzing clinical data) require practitioner review and approval before outputs are used clinically. High-risk applications (interpreting assessment data, recommending specific interventions) should be approached with extreme caution and may not be appropriate with current technology. This stratified approach addresses the risks of adoption (by implementing appropriate safeguards) while also addressing the risks of avoidance (by allowing beneficial tools to be used where they add genuine value). The goal is not zero risk — an impossible standard — but appropriately managed risk that reflects the ethical imperative to serve clients effectively.

5. What does the BACB Ethics Code say about staying current with technology?

The Ethics Code does not mention technology specifically, but Core Principle 4 on competence includes the obligation to 'keep current with the evolving field of behavior analysis.' As AI becomes increasingly relevant to healthcare and human services, staying current with AI developments arguably falls within this competence obligation. A practitioner who is completely unfamiliar with AI capabilities and limitations may be unable to make informed decisions about technology tools that affect their practice. Code Section 2.01 on evidence-based practice also applies: as evidence accumulates regarding the effectiveness of AI applications in ABA-related tasks, practitioners have an obligation to consider that evidence in their practice decisions. Ignoring an entire category of evidence because it relates to technology rather than clinical intervention would be inconsistent with the spirit of evidence-based practice.

6. How can BCBAs develop AI competence without becoming technology experts?

AI competence for BCBAs does not require technical expertise in machine learning or computer science. It requires enough conceptual understanding to make informed decisions about AI use in professional contexts. This includes understanding that AI generates outputs based on statistical patterns rather than reasoning or understanding, that AI can produce confidently stated but incorrect information, that AI outputs reflect biases present in training data, that different AI tools have different capabilities and limitations, and that appropriate use requires human oversight and clinical judgment. Practical competence can be developed by experimenting with AI tools using non-clinical content, attending professional development sessions on AI in healthcare, reading accessible summaries of AI applications in ABA and related fields, and consulting with colleagues who have experience with AI tools. The goal is informed consumer competence, not technical expertise.

7. What about practitioners who work in settings where AI use is prohibited by organizational policy?

Practitioners working under organizational AI prohibitions should understand the rationale behind the policy and evaluate whether it reflects thoughtful risk assessment or blanket risk avoidance. If the policy is well-reasoned — based on specific security concerns, regulatory constraints, or resource limitations — the practitioner can comply while advocating for periodic policy review as technology evolves. If the policy appears to reflect unfamiliarity or general resistance rather than specific analysis, the practitioner may have an ethical basis for advocating for a more nuanced approach. Advocacy might include presenting information about specific AI applications and their potential benefits, proposing a pilot program with defined safeguards and evaluation criteria, sharing examples from other ABA organizations that have implemented AI tools successfully, and offering to lead or participate in an organizational AI evaluation committee.

8. Could AI eventually replace behavior analysts?

Current AI technology cannot replace the clinical judgment, ethical reasoning, interpersonal engagement, and contextual understanding that behavior analysts bring to their work. AI lacks the ability to build therapeutic relationships, observe and interpret the nuances of client behavior in real time, make ethical judgments that balance competing interests, adapt creatively to unexpected situations, and exercise the kind of professional judgment that develops through supervised clinical experience. The more likely trajectory — consistent with patterns in other healthcare disciplines — is that AI will increasingly handle computational, organizational, and administrative tasks while practitioners focus on the clinical, ethical, and relational aspects of care that require human expertise. Practitioners who develop fluency with AI tools will be positioned to provide higher-quality services more efficiently, not to be replaced by the tools they use.

9. How should the field evaluate AI tools for use in ABA practice?

The field should apply the same evaluation standards to AI tools that it applies to any clinical tool or intervention: empirical evaluation of effectiveness, systematic assessment of risks and benefits, transparent reporting of outcomes, and peer review of claims. Specific evaluation criteria for AI tools in ABA should include accuracy of outputs in behavior analytic contexts, privacy and security protections for client data, transparency about how the tool processes information and generates outputs, impact on practitioner behavior (including both positive effects and potential de-skilling), and client outcomes when AI-assisted services are compared to traditional services. Professional organizations, including the BACB and ABAI, should consider developing guidelines for AI evaluation that provide practitioners with a framework for making these assessments. Until such guidelines exist, individual practitioners and organizations must apply their professional judgment to evaluate AI tools against the general standards of evidence-based practice and client welfare.

10. What is the strongest counterargument to the 'ethics of inaction' position?

The strongest counterargument is that the evidence base for AI applications in ABA practice is still nascent, and that premature adoption of insufficiently validated tools could cause harm. This argument holds that the precautionary principle — the ethical obligation to avoid harm — should take precedence over the aspirational obligation to pursue improvements, particularly when the improvements are not yet well-established. This counterargument has merit, and the ethics of inaction position does not dismiss it. Rather, the position argues that the precautionary principle should be applied proportionally — with greater caution for higher-risk applications and greater openness for lower-risk applications — rather than used as a blanket justification for avoiding all AI engagement. The ethical resolution is not inaction or uncritical adoption but informed, stratified engagement that matches the level of caution to the level of risk.

FREE CEUs

Get CEUs on This Topic — Free

The ABA Clubhouse has 60+ on-demand CEUs including ethics, supervision, and clinical topics like this one. Plus a new live CEU every Wednesday.

60+ on-demand CEUs (ethics, supervision, general)
New live CEU every Wednesday
Community of 500+ BCBAs
100% free to join
Join The ABA Clubhouse — Free →

Earn CEU Credit on This Topic

Ready to go deeper? This course covers this topic with structured learning objectives and CEU credit.

The Ethics of Inaction: Why NOT Using AI Could Violate Our Ethics Code — Adam Ventura · 1 BACB Ethics CEU · $20

Take This Course →
📚 Browse All 60+ Free CEUs — ethics, supervision & clinical topics in The ABA Clubhouse

Research: Explore the Evidence

We extended these answers with research from our library — dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.

Symptom Screening and Profile Matching

258 research articles with practitioner takeaways

View Research →

Brief Functional Analysis Methods

239 research articles with practitioner takeaways

View Research →

Social Communication Screening Tools

239 research articles with practitioner takeaways

View Research →

Related Topics

CEU Course: The Ethics of Inaction: Why NOT Using AI Could Violate Our Ethics Code

1 BACB Ethics CEU · $20 · BehaviorLive

Guide: The Ethics of Inaction: Why NOT Using AI Could Violate Our Ethics Code — What Every BCBA Needs to Know

Research-backed educational guide with practice recommendations

Decision Guide: Comparing Approaches

Side-by-side comparison with clinical decision framework

CEU Buddy

No scramble. No surprises.

You earn CEUs from a dozen different places. Upload any certificate — from here, your employer, conferences, wherever — and always know exactly where you stand. Learning, Ethics, Supervision, all handled.

Upload a certificate; everything else is automatic
Works with any ACE provider
$7/mo to protect $1,000+ in earned CEUs
Try It Free for 30 Days →

No credit card required. Cancel anytime.

Clinical Disclaimer

All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.
