
The Ethics of Inaction in ABA: Why Refusing Beneficial AI Tools May Violate the BACB Ethics Code

Source & Transformation

This guide draws in part from “The Ethics of Inaction: Why NOT Using AI Could Violate Our Ethics Code” by Adam Ventura, PhD, BCBA (BehaviorLive), and extends it with peer-reviewed research from our library of 27,900+ ABA research articles. Citations, clinical framing, and cross-links below are synthesized by Behaviorist Book Club.

In This Guide
  1. Overview & Clinical Significance
  2. Background & Context
  3. Clinical Implications
  4. Ethical Considerations
  5. Assessment & Decision-Making
  6. What This Means for Your Practice

Overview & Clinical Significance

The ethical debate around artificial intelligence in applied behavior analysis has largely focused on the risks of adoption — concerns about client confidentiality, clinical accuracy, over-reliance on automated tools, and the potential erosion of professional competence. These concerns are legitimate and important. However, this course presents a complementary and equally important argument: the ethical risks of refusing to adopt beneficial technologies may be just as significant as the risks of adopting them irresponsibly.

The clinical significance of this argument centers on a straightforward observation: if AI tools can genuinely improve the quality, efficiency, or accessibility of ABA services — and emerging evidence suggests that they can in specific applications — then practitioners who categorically refuse to consider these tools may be denying their clients the benefits of available improvements. In a field defined by its commitment to data-driven practice and evidence-based intervention, blanket rejection of any tool based on unfamiliarity or philosophical discomfort rather than empirical evaluation is inconsistent with the profession's core values.

This course challenges behavior analysts to examine their technology-related decisions through the same ethical lens they apply to clinical decisions. Just as a practitioner would be expected to justify a decision not to use a well-supported intervention strategy, they should be prepared to justify a decision not to use technology tools that could improve client outcomes. This does not mean that every AI tool should be adopted uncritically — far from it. The argument is for informed engagement rather than reflexive avoidance: evaluate each tool on its merits, implement those that genuinely serve clients, and reject those that do not meet appropriate standards for safety, accuracy, and clinical value.

The presenter, Adam Ventura, frames the discussion around the concept of the “ethics of inaction” — the recognition that failing to act can be just as ethically problematic as acting wrongly. In the context of AI and ABA, inaction means continuing to rely exclusively on manual processes when automated alternatives could improve documentation quality, enhance data analysis, reduce wait times for services, or expand access to underserved populations. Each of these improvements has direct implications for client welfare, making the decision to forego them ethically relevant.


Background & Context

The concept of the ethics of inaction has deep roots in moral philosophy and professional ethics. In healthcare, the parallel is well-established: a physician who fails to prescribe a beneficial medication because they are unfamiliar with it is not acting in the patient's best interest, even though they have not actively caused harm. The same principle applies when behavior analysts fail to consider beneficial technologies that could improve the services they provide.

The current state of AI in ABA practice represents a critical inflection point. Early adopters have begun demonstrating concrete applications — AI-assisted session note generation that reduces documentation time by 30-50%, data visualization tools that identify trends more quickly than manual visual analysis, natural language processing systems that help clinicians interpret assessment data, and scheduling optimization algorithms that reduce gaps in service delivery. While the evidence base for these specific applications is still developing, the trajectory is clear: AI tools are becoming increasingly capable and increasingly relevant to behavior analytic practice.

The resistance to AI adoption within behavior analysis takes several forms. Some practitioners express principled concerns about privacy, accuracy, and the potential for over-reliance — concerns that deserve serious attention and that responsible AI adoption must address. Others, however, express a more categorical skepticism that frames AI as fundamentally incompatible with the individualized, relational nature of ABA practice. This latter position, the course argues, conflates the misuse of AI with AI itself and risks depriving clients of genuine benefits.

The broader healthcare landscape provides instructive parallels. Radiology, pathology, dermatology, and other medical specialties have integrated AI tools that augment clinical judgment while maintaining physician oversight. The pattern in these fields suggests that AI integration does not replace clinical expertise but enhances it — helping practitioners process information more efficiently, identify patterns they might otherwise miss, and allocate more time to the aspects of care that require human judgment and interpersonal engagement.

The BACB Ethics Code does not mention artificial intelligence specifically, but its principles create an implicit framework for evaluating technology adoption decisions. The emphasis on competence, evidence-based practice, client welfare, and professional responsibility all bear directly on how practitioners should approach emerging technologies that have the potential to improve service delivery.

Clinical Implications

The clinical case for AI engagement in ABA rests on several concrete applications where technology can demonstrably improve service delivery. Documentation efficiency represents perhaps the most immediately impactful application. BCBAs and behavior technicians spend substantial proportions of their working time on documentation tasks — session notes, progress reports, treatment plan updates, and authorization requests. AI tools that assist with these tasks can reduce documentation time significantly, freeing practitioners to spend more time in direct clinical activities, supervision, and professional development.

The implications for client access are also substantial. When practitioners are more efficient, they can serve more clients without sacrificing service quality. In a field that faces persistent workforce shortages and long wait lists for services, any technology that expands practitioner capacity without compromising care quality addresses a genuine ethical concern — the concern that individuals who need ABA services cannot access them because there are not enough practitioners to serve them.

Data analysis represents another clinically significant application. While visual analysis of graphed data remains a core behavior analytic skill, AI tools can supplement this skill by analyzing large datasets across multiple clients, identifying subtle trends that might be missed in routine visual inspection, and generating alerts when client progress deviates from expected patterns. For supervisors managing large caseloads, these tools provide an additional layer of quality assurance that enhances rather than replaces clinical judgment.

Assessment support is a third area of clinical relevance. AI can help organize and synthesize assessment data, generate preliminary reports that practitioners can review and refine, and identify patterns across assessment results that inform diagnostic and treatment decisions. These applications do not replace the practitioner's clinical judgment but provide a starting point that accelerates the assessment process and ensures that relevant information is not overlooked.

The course emphasizes an important distinction: AI should augment clinical judgment, not replace it. The most effective applications are those where AI handles the computational, organizational, and pattern-recognition tasks that it does well, while the practitioner retains responsibility for clinical interpretation, ethical decision-making, and the interpersonal aspects of care that require human engagement. This division of labor leverages the strengths of both human expertise and artificial intelligence.

Practitioners who refuse to engage with these tools may find themselves at a competitive disadvantage in terms of efficiency, but more importantly, they may be providing a lower standard of care than their AI-augmented peers — a gap that will likely widen as AI tools become more capable and more widely adopted.


Ethical Considerations

The ethical argument for AI engagement draws on several BACB Ethics Code provisions that are typically cited in support of AI caution but that cut in both directions. Code Section 2.01 on evidence-based practice requires practitioners to use the best available evidence to inform their work. If evidence demonstrates that specific AI applications improve documentation quality, enhance data analysis, or expand service access, then failing to consider these applications may itself represent a departure from evidence-based practice.

Core Principle 1 — benefiting those we serve — creates an affirmative obligation to pursue improvements in service quality and access. When AI tools offer genuine improvements in these areas, practitioners who refuse to consider them may be falling short of this affirmative obligation. The ethical standard is not merely to avoid harm but to actively seek better outcomes for clients. This aspirational dimension of the Ethics Code supports thoughtful technology adoption rather than technological conservatism.

Core Principle 4 — ensuring one's own competence — includes an obligation to stay current with developments relevant to professional practice. As AI becomes increasingly integrated into healthcare and human services, competence in evaluating and appropriately using AI tools becomes part of the professional skill set that the Ethics Code envisions. Practitioners who refuse to develop any understanding of AI may be allowing a competence gap that will increasingly affect their ability to serve clients effectively.

The ethical argument for AI engagement does not override the legitimate concerns about privacy, accuracy, and over-reliance. These concerns must be addressed through appropriate safeguards — HIPAA-compliant platforms, rigorous review of AI-generated content, and maintenance of independent clinical reasoning skills. The point is that these concerns can be addressed through careful implementation rather than categorical avoidance.

The ethics of inaction framework also applies to organizational decision-making. Organizations that prohibit all AI use without evaluating specific applications may be denying their practitioners access to tools that could improve service delivery. Ethical organizational leadership involves evaluating AI tools systematically, implementing those that meet safety and quality standards, and providing staff with the training needed to use approved tools effectively.

Practitioners should also consider the equity implications of technology adoption. If AI tools can reduce the cost of service delivery or expand service capacity, refusing to adopt them may disproportionately affect underserved populations who face the greatest barriers to accessing ABA services. The ethical dimension of technology adoption extends beyond individual practitioner decisions to encompass broader questions about service access and equity.

Assessment & Decision-Making

Evaluating whether a specific AI tool should be adopted requires a structured assessment framework that balances potential benefits against potential risks. The first assessment question is whether the tool addresses a genuine need — a documentation burden, a data analysis gap, a service access barrier, or another concrete challenge in the practitioner's or organization's workflow. AI tools adopted because they are trendy rather than because they solve real problems are unlikely to produce meaningful improvements.

The second assessment question involves accuracy and reliability. Practitioners should evaluate AI tools against established standards: Does the tool produce accurate outputs in the specific context of behavior analytic practice? What error rate is observed? How do errors manifest, and what are the consequences of those errors? Tools used for low-stakes applications (generating template language for common forms) require less stringent accuracy standards than tools used for high-stakes applications (interpreting assessment data or recommending interventions).

The third assessment question addresses privacy and security. Any AI tool that will process client information must meet applicable privacy standards, including HIPAA compliance where relevant. Practitioners should understand how the tool processes, stores, and uses input data, whether data is shared with third parties, and what protections exist against unauthorized access. Tools that cannot meet these standards should not be used with client data regardless of their clinical benefits.

The fourth question involves the practitioner's ability to evaluate outputs. A fundamental requirement for ethical AI use is that the practitioner can assess whether the AI's output is correct and appropriate. Using AI to generate content in areas outside the practitioner's competence — regardless of the tool's capability — violates competence boundaries. The practitioner must remain the quality control mechanism for all AI-generated clinical content.

Cost-benefit analysis should consider both direct and indirect factors. Direct benefits include time savings, improved accuracy, and expanded capacity. Indirect benefits include reduced practitioner burnout, improved documentation quality, and enhanced client outcomes. Direct costs include subscription fees and training time. Indirect costs include the risk of over-reliance, the need for ongoing quality monitoring, and the potential for workflow disruption during implementation.
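The direct side of this cost-benefit analysis can be reduced to simple arithmetic. The sketch below uses entirely hypothetical numbers (minutes saved per note, note volume, hourly rate, subscription cost are illustrations, not benchmarks from the course or any vendor) to show how a practitioner might estimate the monthly break-even point for a documentation tool:

```python
def monthly_net_benefit(minutes_saved_per_note: float,
                        notes_per_month: int,
                        hourly_rate: float,
                        subscription_cost: float) -> float:
    """Estimate direct monthly net benefit of a documentation tool.

    All inputs are hypothetical planning figures supplied by the
    practitioner; indirect factors (burnout, over-reliance risk,
    quality monitoring) are deliberately excluded here.
    """
    hours_saved = (minutes_saved_per_note / 60) * notes_per_month
    time_value = hours_saved * hourly_rate
    return time_value - subscription_cost


# Hypothetical example: 10 minutes saved per note, 80 notes/month,
# $50/hour billable value, $30/month subscription.
net = monthly_net_benefit(10, 80, 50, 30)
print(f"Estimated direct net benefit: ${net:.2f}/month")
```

A positive result only establishes that the direct math works out; the indirect costs and benefits listed above still require qualitative judgment.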

A phased approach to AI adoption often works best: start with low-risk applications (administrative tasks, general content generation), evaluate outcomes, build organizational competence and comfort, and gradually expand to higher-value applications as confidence and safeguards develop.
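The four assessment questions above can be captured in a simple checklist structure. The following is an illustrative sketch only — the field names, gates, and recommendation strings are my own framing of the questions, not an official BACB or course-provided rubric. Note that privacy (question three) and the practitioner's ability to evaluate outputs (question four) are modeled as hard gates, while accuracy requirements scale with stakes:

```python
from dataclasses import dataclass, field


@dataclass
class AIToolAssessment:
    """Hypothetical checklist mirroring the four assessment questions."""
    tool_name: str
    addresses_genuine_need: bool     # Q1: solves a concrete workflow problem?
    accuracy_verified: bool          # Q2: outputs validated in an ABA context?
    high_stakes: bool                # Q2: informs assessment/intervention decisions?
    hipaa_compliant: bool            # Q3: meets applicable privacy standards?
    practitioner_can_evaluate: bool  # Q4: reviewer competent to judge outputs?
    notes: list = field(default_factory=list)

    def recommendation(self) -> str:
        # Q3 is a hard gate: no client data without privacy compliance.
        if not self.hipaa_compliant:
            return "reject for client data"
        # Q4 is also a hard gate: the practitioner must remain quality control.
        if not self.practitioner_can_evaluate:
            return "reject"
        if not self.addresses_genuine_need:
            return "defer"
        # High-stakes uses demand verified accuracy; low-stakes uses may pilot.
        if self.high_stakes and not self.accuracy_verified:
            return "pilot in low-stakes use only"
        return "adopt with review procedures"
```

For example, a note-drafting assistant that is HIPAA-compliant and reviewable but not yet accuracy-validated would come back as adoptable for low-stakes drafting, while the same tool applied to assessment interpretation would be restricted until its accuracy is verified.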

What This Means for Your Practice

The ethics of inaction framework asks you to evaluate your current stance on AI with the same rigor you apply to any clinical decision. If you are currently avoiding AI entirely, ask yourself: Is this avoidance based on a careful evaluation of specific tools and their risks, or is it based on general discomfort with technology? If the latter, you may be allowing a bias to override your professional obligation to consider all available means of improving client care.

Begin with education. Develop a basic understanding of what generative AI can and cannot do, how it works at a conceptual level, and what applications are currently being used in healthcare and ABA practice. This education does not need to be technical — you do not need to understand neural network architectures — but you should understand enough to evaluate claims about AI capabilities and limitations critically.

Identify one or two low-risk applications in your current workflow where AI might add value. Common starting points include drafting template language for session notes, generating parent-friendly explanations of behavioral concepts, brainstorming reinforcement ideas, or summarizing research articles. Experiment with these applications using non-client-specific information, evaluate the quality of the outputs, and assess whether the tool saves time or improves quality.

Develop a personal or organizational AI policy that reflects thoughtful evaluation rather than blanket acceptance or rejection. This policy should specify which tools are approved for use, what types of information can be processed through AI systems, what review procedures must be followed for AI-generated content, and what training is required before staff use AI tools.

Stay engaged with the evolving conversation about AI in behavior analysis. The professional landscape is changing rapidly, and practitioners who remain informed about developments — including both the benefits and the risks — are best positioned to make ethical decisions about technology adoption. The goal is not to become an AI enthusiast but to become an informed, thoughtful consumer who can evaluate technology tools with the same rigor applied to any other aspect of clinical practice.

Ultimately, the ethical practitioner is neither the one who adopts every new technology uncritically nor the one who refuses to consider any technology on principle. The ethical practitioner evaluates each tool against the standards of client welfare, professional competence, and evidence-based practice, and makes decisions that serve the best interests of the individuals they serve.

Earn CEU Credit on This Topic

Ready to go deeper? This course covers this topic in detail with structured learning objectives and CEU credit.

The Ethics of Inaction: Why NOT Using AI Could Violate Our Ethics Code — Adam Ventura · 1 BACB Ethics CEU · $20

Take This Course →

Explore the Evidence

We extended this guide with research from our library — dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.

Symptom Screening and Profile Matching · 258 research articles with practitioner takeaways
Brief Functional Analysis Methods · 239 research articles with practitioner takeaways
Social Communication Screening Tools · 239 research articles with practitioner takeaways
CEU Buddy

No scramble. No surprises.

You earn CEUs from a dozen different places. Upload any certificate — from here, your employer, conferences, wherever — and always know exactly where you stand. Learning, Ethics, Supervision, all handled.

Upload a certificate, everything else is automatic
Works with any ACE provider
$7/mo to protect $1,000+ in earned CEUs
Try It Free for 30 Days →

No credit card required. Cancel anytime.

Clinical Disclaimer

All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.
