Decision-Making Algorithms in ABA: Ethical Considerations for Behavior Analysts

Source & Transformation

This guide draws in part from “Ethical Issues in Using Standardized Decision-making to Inform Professional Practice” by Matt Brodhead, Ph.D., BCBA-D (BehaviorLive), and extends it with peer-reviewed research from our library of 27,900+ ABA research articles. Citations, clinical framing, and cross-links below are synthesized by Behaviorist Book Club.

View the original presentation →
In This Guide
  1. Overview & Clinical Significance
  2. Background & Context
  3. Clinical Implications
  4. Ethical Considerations
  5. Assessment & Decision-Making
  6. What This Means for Your Practice

Overview & Clinical Significance

Decision-making algorithms (DMAs) have emerged as a significant area of conceptual and applied development within behavior analysis. Typically presented as flowcharts or decision trees, these tools guide practitioners through a structured series of questions to arrive at a recommended course of action. The appeal is obvious: standardized decision-making promises consistency across practitioners, reduced variability in clinical judgments, and a systematic pathway from assessment data to intervention selection. For organizations managing large caseloads with practitioners of varying experience levels, DMAs offer the possibility of ensuring that critical clinical decisions meet a minimum quality threshold.
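To make the structure concrete, the flowchart logic described above can be sketched as a short function. This is a minimal, hypothetical illustration only — the questions, labels, and recommendations below are invented for this example and do not reproduce any published behavior-analytic algorithm.

```python
# Hypothetical sketch of a DMA as a decision tree: a fixed sequence of
# yes/no questions maps assessment inputs to a recommended next step.
# All question names and recommendations are invented for illustration.

def recommend_next_step(answers: dict) -> str:
    """Walk a simple decision tree from assessment answers to a recommendation."""
    if not answers.get("function_identified"):
        return "Conduct (or repeat) a functional assessment before selecting treatment."
    if answers.get("behavior_decreasing"):
        return "Continue current intervention; review at next scheduled probe."
    if answers.get("treatment_integrity_high"):
        return "Modify intervention parameters in consultation with a supervisor."
    return "Retrain implementers to criterion, then re-evaluate."

# Example walk-through of one hypothetical case:
case = {"function_identified": True,
        "behavior_decreasing": False,
        "treatment_integrity_high": False}
print(recommend_next_step(case))
# -> "Retrain implementers to criterion, then re-evaluate."
```

Note what the sketch makes visible: every variable the tree does not ask about (family preference, cultural context, setting constraints) is invisible to the recommendation — a point the course returns to when discussing the limits of standardization.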

However, the rapid adoption of decision-making algorithms in behavior analytic practice has outpaced the empirical evaluation of their validity, reliability, and ethical implications. This course, presented by Matt Brodhead, addresses that gap directly by examining outcomes from a series of empirical studies evaluating DMAs and surfacing the ethical concerns that practitioners and organizations must consider before implementing these tools. The clinical significance of this topic extends to every BCBA who uses or is asked to use algorithmic decision-making tools in their practice — which, given current trends in the field, is an increasingly large proportion of the workforce.

The stakes are considerable. When a DMA produces a recommendation, that recommendation influences the treatment a client receives. If the algorithm is poorly designed, insufficiently validated, or applied outside its intended scope, the resulting clinical decisions may be suboptimal or actively harmful. Conversely, if a well-designed DMA is implemented with appropriate training and oversight, it can genuinely improve decision-making quality across an organization. The ethical challenge lies in distinguishing between these scenarios and establishing the safeguards needed to ensure that algorithmic tools serve clients rather than merely streamlining organizational processes.

Behavior analysts are uniquely positioned to evaluate DMAs critically because the field's emphasis on empirical validation and functional analysis provides the conceptual tools needed to assess whether a given algorithm actually produces better outcomes than professional judgment alone. This course encourages practitioners to apply the same scientific rigor to evaluating decision-making tools that they would apply to evaluating any clinical intervention.


Background & Context

The development of decision-making algorithms in behavior analysis reflects broader trends in healthcare toward standardization and evidence-based practice guidelines. In medicine, clinical practice guidelines and decision support tools have been developed for decades, with a substantial literature examining both their benefits and limitations. Behavior analysis has been comparatively late to this conversation, but the field's recent growth — particularly in autism services — has created organizational pressures that make algorithmic decision-making increasingly attractive.

As ABA organizations scale, they face a fundamental tension: the need for individualized, function-based treatment planning versus the practical reality of managing hundreds or thousands of cases with finite supervisory resources. DMAs represent one attempt to resolve this tension by encoding expert decision-making into a replicable format that less experienced practitioners can follow. Examples include algorithms for selecting reinforcement strategies, determining when to modify intervention parameters, choosing between functional analysis methodologies, and deciding when to transition between treatment phases.

The conceptual development of these tools has been documented in the behavior analytic literature, but the empirical evaluation has lagged behind. Several published DMAs have been based on expert consensus or logical analysis rather than systematic outcome data. While these approaches have face validity, they do not meet the empirical standards that the field typically demands of its interventions. Matt Brodhead's research program has begun to address this gap by conducting controlled studies examining how practitioners interact with DMAs, whether DMAs produce the intended decisions, and what variables moderate their effectiveness.

The organizational context in which DMAs are deployed also matters significantly. In some settings, DMAs are presented as optional tools that supplement clinical judgment. In others, they function as mandatory protocols that practitioners are required to follow. These different implementation models carry very different ethical implications, particularly regarding practitioner autonomy and professional responsibility for clinical outcomes.

Clinical Implications

The clinical implications of decision-making algorithms in ABA practice are substantial and multifaceted. When DMAs function as intended, they can improve consistency in clinical decision-making, reduce errors attributable to cognitive biases, and ensure that important variables are considered systematically rather than overlooked. For newer practitioners who may lack the clinical experience to weigh multiple factors simultaneously, a well-designed DMA can serve as a scaffold that guides their reasoning process and highlights considerations they might otherwise miss.

However, research findings presented in this course reveal several concerns about how DMAs function in practice. One significant finding is that practitioners sometimes follow algorithmic recommendations even when those recommendations conflict with their own clinical judgment or with observable client data. This tendency toward algorithmic compliance raises questions about whether DMAs enhance clinical decision-making or potentially undermine it by discouraging critical thinking. When practitioners defer to an algorithm rather than integrating algorithmic output with their own assessment of the clinical situation, the result may be decisions that are technically consistent but clinically inappropriate for a specific client.

Another clinical implication concerns the scope of variables that DMAs can capture. By design, algorithms simplify complex decision spaces into a manageable sequence of binary or categorical questions. This simplification necessarily excludes some variables that may be clinically relevant. For example, a DMA designed to guide reinforcement selection may not account for cultural factors, family preferences, or contextual variables that a skilled clinician would incorporate into their reasoning. The practitioner who follows the DMA without considering these additional factors may arrive at a technically defensible but practically suboptimal recommendation.

Finally, the use of DMAs in supervision contexts deserves attention. When supervisors rely on algorithmic tools as the basis for their supervisory recommendations, there is a risk that the supervisory relationship becomes centered on algorithmic compliance rather than clinical reasoning development. Effective supervision should build the supervisee's capacity for independent clinical judgment, which may be undermined if the supervisory focus shifts to following decision trees rather than understanding the principles that underlie them.


Ethical Considerations

The ethical issues surrounding DMAs in behavior analysis are numerous and interconnected. The BACB Ethics Code provides the framework for analyzing these concerns, with several code elements being directly relevant.

First, the Ethics Code's requirement for evidence-based practice (Code 2.01) demands that behavior analysts use procedures whose effectiveness has been demonstrated in the peer-reviewed literature. When a DMA has not been empirically validated — or when it has been validated only in specific contexts — its use in clinical practice may not meet this standard. Practitioners and organizations should ask whether the DMA they are using has been subjected to empirical testing, whether the outcomes of that testing support its use, and whether the population and context in which it was tested are sufficiently similar to the population and context in which it is being applied.

Second, the Ethics Code's provisions regarding competence (Code 1.05) require that practitioners exercise independent professional judgment. If a DMA is implemented in a way that discourages or prohibits practitioners from deviating from algorithmic recommendations — even when their clinical judgment suggests a different course of action — the organization may be creating conditions that conflict with this ethical requirement. Practitioners have an obligation to exercise their professional judgment and cannot ethically delegate that responsibility to an algorithm.

Third, the development process for DMAs raises ethical questions about whose expertise is encoded in the algorithm and whether the development process adequately represents the diversity of clients, settings, and cultural contexts in which the tool will be used. An algorithm developed based on the clinical experience of a small group of experts working in a specific setting may not generalize well to other populations or contexts, and its use in those settings may produce systematically biased recommendations.

Fourth, informed consent requires that clients and their families understand how treatment decisions are being made. If algorithmic tools are playing a significant role in clinical decision-making, clients have a right to know this. The Ethics Code's emphasis on transparency in the therapeutic relationship extends to the tools and processes that inform treatment planning.

Organizations implementing DMAs also bear ethical responsibility for monitoring outcomes. If algorithmic recommendations are systematically producing suboptimal results for certain client populations or in certain contexts, the organization has an obligation to identify and correct these patterns.

Assessment & Decision-Making

Evaluating decision-making algorithms requires a systematic approach that mirrors the empirical standards behavior analysts apply to clinical interventions. Before adopting a DMA, practitioners and organizations should conduct a thorough assessment of the tool's psychometric properties, including its reliability (do different users arrive at the same decision given the same inputs?), validity (does following the algorithm produce better client outcomes than alternative decision-making approaches?), and sensitivity (does the algorithm appropriately differentiate between cases that require different courses of action?).
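The reliability question above — do different users arrive at the same decision given the same inputs? — is usually summarized with percent agreement and a chance-corrected index such as Cohen's kappa. The sketch below shows both computations on invented data; the decision labels and the two users' responses are hypothetical, not drawn from any actual DMA study.

```python
# Reliability check for a DMA: given identical case inputs, do two users
# reach the same decision? Percent agreement and Cohen's kappa are
# standard summaries. The labels and data below are hypothetical.
from collections import Counter

def percent_agreement(a, b):
    """Proportion of cases on which the two users chose the same option."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for chance: kappa = (po - pe) / (1 - pe)."""
    n = len(a)
    po = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    # Expected chance agreement: product of each user's marginal
    # proportions, summed over all decision categories.
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (po - pe) / (1 - pe)

# Decisions two hypothetical users reached on the same six cases:
user1 = ["continue", "modify", "retrain", "modify", "continue", "retrain"]
user2 = ["continue", "modify", "modify",  "modify", "continue", "retrain"]
print(round(percent_agreement(user1, user2), 2))  # 0.83 (5 of 6 cases)
print(round(cohens_kappa(user1, user2), 2))       # 0.75
```

High agreement between users is only the reliability half of the evaluation; as the course emphasizes, it says nothing by itself about whether the agreed-upon decisions produce good client outcomes.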

The assessment of DMA effectiveness should include both process measures and outcome measures. Process measures examine whether the DMA is being used as intended — whether practitioners are answering the algorithmic questions accurately, whether they are following the recommended decision pathway, and whether they are completing the algorithm in a reasonable timeframe. Outcome measures examine whether the decisions produced by the DMA lead to effective interventions — whether clients are making progress, whether treatment goals are being achieved, and whether adverse events are occurring at acceptable rates.

Practitioners should also assess the conditions under which they are being asked to use DMAs. Key questions include: Was the DMA developed for the population I serve? Has it been validated in settings similar to mine? Am I permitted to deviate from algorithmic recommendations when my clinical judgment suggests a different approach? Is there a mechanism for reporting concerns about algorithmic recommendations? These questions help practitioners determine whether a DMA can be used ethically in their specific context.

When evaluating the research supporting a particular DMA, practitioners should look for studies that include meaningful outcome data — not just agreement rates between the algorithm and expert judgment, but actual client outcomes resulting from algorithmically guided decisions. Agreement with expert opinion is a necessary but not sufficient condition for establishing that a DMA produces good clinical outcomes, since expert opinion itself may be subject to biases and limitations.

What This Means for Your Practice

Decision-making algorithms are increasingly common in ABA organizations, and practitioners need a framework for evaluating and using these tools ethically. Before implementing or following a DMA, verify that it has been empirically validated for your population and setting — face validity and expert consensus alone are insufficient. Maintain your obligation to exercise independent professional judgment. An algorithm can inform your decision-making, but it cannot replace your professional responsibility for the clinical decisions you make. When a DMA recommendation conflicts with your clinical assessment, document your reasoning and discuss the discrepancy with your supervisor or colleagues rather than defaulting to either the algorithm or your initial judgment without critical analysis.

Organizations developing DMAs should invest in empirical validation before mandating their use. This includes testing whether the algorithm produces reliable decisions across users, whether those decisions lead to positive client outcomes, and whether the tool performs equitably across different client populations and cultural contexts. Organizations should also establish clear policies about when and how practitioners may deviate from algorithmic recommendations, ensuring that professional autonomy is preserved.

In supervision, use DMAs as teaching tools rather than compliance tools. The goal of supervision is to develop the supervisee's clinical reasoning capacity, and algorithmic tools can support this goal when used to structure discussions about decision-making processes rather than to dictate decisions. Ask supervisees to work through the algorithm and then explain the reasoning behind each step, identify variables the algorithm may not capture, and articulate when and why deviation from the recommendation might be appropriate.

Finally, contribute to the empirical literature on DMAs. The field needs more data on whether these tools actually improve outcomes, and practitioners who use DMAs in their daily practice are well-positioned to collect and report that data. Single-subject designs comparing outcomes under algorithmic versus standard decision-making conditions would be particularly valuable.

Earn CEU Credit on This Topic

Ready to go deeper? This course covers this topic in detail with structured learning objectives and CEU credit.

Ethical Issues in Using Standardized Decision-making to Inform Professional Practice — Matt Brodhead · 1 BACB Ethics CEU · $25

Take This Course →

Research: Explore the Evidence

We extended this guide with research from our library — dig into the peer-reviewed studies behind the topic, in plain-English summaries written for BCBAs.

Symptom Screening and Profile Matching

258 research articles with practitioner takeaways

View Research →

Brief Behavior Assessment and Treatment Matching

252 research articles with practitioner takeaways

View Research →

Brief Functional Analysis Methods

239 research articles with practitioner takeaways

View Research →
CEU Buddy

No scramble. No surprises.

You earn CEUs from a dozen different places. Upload any certificate — from here, your employer, conferences, wherever — and always know exactly where you stand. Learning, Ethics, Supervision, all handled.

Upload a certificate, everything else is automatic
Works with any ACE provider
$7/mo to protect $1,000+ in earned CEUs
Try It Free for 30 Days →

No credit card required. Cancel anytime.

Clinical Disclaimer

All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.
