By Matt Harrington, BCBA · Behaviorist Book Club · Research-backed answers for behavior analysts
The most common biases include confirmatory bias (seeking evidence that supports existing hypotheses), pathology bias (over-interpreting behavior as symptomatic), hindsight bias (believing outcomes were predictable after learning them), misestimation of covariance (perceiving relationships between variables that do not actually covary), and anchoring bias (over-relying on initial information). These biases are universal features of human cognition and affect all practitioners regardless of training or experience. They are particularly dangerous in clinical settings because they can lead to ineffective treatment decisions.
Confirmatory bias in FBA manifests when a practitioner forms an early hypothesis about the function of a behavior and then selectively attends to observations that confirm that hypothesis while discounting contradictory evidence. For example, a practitioner who hypothesizes escape-maintained behavior may focus on instances where problem behavior occurs during demands while overlooking instances during free time. This can lead to incorrect function identification and ineffective intervention. Structured assessment protocols with standardized recording reduce this risk by ensuring systematic data collection across all conditions.
Yes, though adaptations may be necessary. Full experimental designs (reversal, multiple baseline) may not always be feasible due to ethical or practical constraints, but the core principles can be applied. Collecting stable baseline data before intervention, using repeated measurement during treatment, and implementing staggered starts across behaviors or settings provide meaningful experimental control. Even simple AB designs with repeated measurement are superior to uncontrolled clinical judgment. The key is adopting the mindset of systematic evaluation rather than viewing single-subject design as exclusively a research tool.
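Repeated measurement in an AB design lends itself to a simple numeric summary. As a minimal sketch with hypothetical session data (the function name and numbers are illustrative, not from any published protocol), a change in mean level between phases can be computed directly:

```python
from statistics import mean

def level_change(baseline, treatment):
    """Difference in mean level between the baseline (A) and
    treatment (B) phases of a simple AB design."""
    return mean(treatment) - mean(baseline)

# Hypothetical counts of problem behavior per session
baseline = [12, 11, 13, 12, 12]   # stable baseline before intervening
treatment = [9, 8, 7, 6, 6, 5]    # repeated measurement during treatment

print(level_change(baseline, treatment))  # negative value = reduction
```

A single summary number never replaces inspection of the full graphed series, but pairing it with visual analysis keeps the evaluation systematic rather than impressionistic.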
Pathology bias is the tendency to interpret ambiguous behavior in pathological or disordered terms. In ABA practice, this can manifest as interpreting developmentally typical behavior as problematic (for example, labeling normal toddler noncompliance as a behavioral disorder), over-identifying the need for treatment, or setting intervention goals that reflect the clinician's bias rather than the client's actual needs. Behavior analysts working primarily with clinical populations may develop skewed base rates that make pathology seem more prevalent than it is, reinforcing this bias.
Visual analysis provides a structured, criterion-based method for evaluating treatment effects that is less susceptible to bias than subjective impression. By examining level, trend, variability, immediacy of effect, overlap between phases, and consistency across similar phases, practitioners apply objective criteria rather than relying on their overall sense of whether things are improving. Visual analysis requires the practitioner to confront the full data set rather than selectively attending to favorable data points. This systematic approach constrains the influence of confirmatory and hindsight biases.
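One common quantitative supplement to visual analysis of overlap is the percentage of non-overlapping data (PND): the share of treatment-phase points that fall beyond the most extreme baseline point. A minimal sketch with hypothetical data (function and values are illustrative):

```python
def pnd(baseline, treatment, decrease_expected=True):
    """Percentage of non-overlapping data: the proportion of treatment
    points that exceed the most extreme baseline point, in the
    therapeutic direction."""
    if decrease_expected:
        extreme = min(baseline)
        non_overlapping = [x for x in treatment if x < extreme]
    else:
        extreme = max(baseline)
        non_overlapping = [x for x in treatment if x > extreme]
    return 100 * len(non_overlapping) / len(treatment)

# Hypothetical per-session counts; a decrease is the therapeutic direction
baseline = [9, 8, 10, 9, 11]
treatment = [7, 5, 4, 3, 2, 2]
print(pnd(baseline, treatment))  # prints 100.0: no overlap with baseline
```

Because PND is computed over every treatment point, it forces the analyst to confront the full data set rather than the handful of sessions that best fit a favored hypothesis.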
Misestimation of covariance occurs when a practitioner perceives a relationship between two variables that does not exist in the data. In functional assessment, this might mean concluding that a particular antecedent reliably evokes problem behavior based on a few memorable instances while overlooking many instances where the antecedent was present without problem behavior. Without systematic data collection that tracks both the presence and absence of the antecedent across occurrences and non-occurrences of the behavior, these illusory correlations can lead to incorrect functional hypotheses and misguided interventions.
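Checking for an illusory correlation requires tallying all four cells of the antecedent-by-behavior contingency, not just the memorable antecedent-plus-behavior cell. A minimal sketch with hypothetical ABC-style records (the record format and numbers are assumptions for illustration):

```python
from collections import Counter

# Hypothetical observation records: (antecedent_present, behavior_occurred)
records = [
    (True, True), (True, False), (True, False), (True, False),
    (False, True), (False, False), (False, True), (False, False),
]

counts = Counter(records)

def rate(antecedent_present):
    """Conditional probability of the behavior given antecedent status."""
    hits = counts[(antecedent_present, True)]
    total = hits + counts[(antecedent_present, False)]
    return hits / total if total else 0.0

print(f"P(behavior | antecedent present): {rate(True):.2f}")   # 0.25
print(f"P(behavior | antecedent absent):  {rate(False):.2f}")  # 0.50
```

In this toy data set the behavior is actually *less* likely when the antecedent is present, the opposite of what a few salient co-occurrences might suggest. Only a comparison of both conditional rates can reveal whether the antecedent and behavior truly covary.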
Supervisors can model bias awareness by explicitly discussing potential biases when reviewing clinical cases, asking supervisees to generate alternative hypotheses, and requiring data-based justification for treatment decisions. Regular review of graphed data together, with discussion of what patterns are and are not supported by the data, helps supervisees develop visual analysis skills and recognize when their impressions diverge from the data. Supervisors can also have supervisees document their predictions before reviewing outcomes, which makes hindsight bias visible when predictions do not match results.
Hindsight bias is the tendency to believe, after learning an outcome, that one would have predicted it. In clinical practice, this is problematic because it reduces the perceived need for systematic assessment and data collection. If a practitioner believes they already knew that the intervention would work (or fail), they may see less value in the methodological safeguards that single-subject designs provide. This bias also distorts self-evaluation, leading practitioners to overestimate their judgment accuracy and underestimate the need for continued professional development in data-based decision-making.
Actively seeking disconfirming evidence is the most direct antidote to confirmatory bias. Instead of looking for data that support your hypothesis, you deliberately look for data that contradict it. In practice, this means asking questions like: Under what conditions does the behavior not occur even when the hypothesized antecedent is present? Are there occasions when the behavior occurs without the hypothesized reinforcer? If you cannot find disconfirming evidence despite genuine effort, your hypothesis is strengthened. If you find it easily, you need to revise your hypothesis.
The BACB Ethics Code (2022) contains several provisions requiring data-based decision-making. Code 2.18 requires continual evaluation of behavior-change programs using data. Code 2.01 requires providing effective treatment, which depends on accurate assessment. Code 3.01 requires that assessments be consistent with best available scientific evidence. Together, these codes establish that relying on subjective clinical judgment when systematic data-based methods are available falls short of ethical practice standards. Single-subject design provides the framework for meeting these obligations.
The ABA Clubhouse has 60+ on-demand CEUs covering ethics, supervision, and clinical topics like this one, plus a new live CEU every Wednesday.
Ready to go deeper? The course below covers this topic with structured learning objectives and CEU credit.
Reducing Biases in Clinical Judgment with Single-Subject Treatment Design — CEUniverse · 1 BACB Ethics CEU · $0
All behavior-analytic intervention is individualized. The information on this page is for educational purposes and does not constitute clinical advice. Treatment decisions should be informed by the best available published research and individualized assessment, and made with the informed consent of the client or their legal guardian. Behavior analysts are responsible for practicing within the boundaries of their competence and adhering to the BACB Ethics Code for Behavior Analysts.