Practitioner Development

A critique of the usefulness of inferential statistics in applied behavior analysis.

Hopkins et al. (1998) · The Behavior Analyst
★ The Verdict

Inferential statistics are unnecessary—your graph already tells you if the intervention works.

✓ Read this if you are a BCBA who writes reports with p-values or defends visual analysis to outside teams.
✗ Skip if you are an RBT who does not run data analysis or write summaries.

01 Research in Context

01

What this study did

Hopkins et al. (1998) wrote a think-piece, not an experiment. They examined how behavior analysts use p-values and t-tests and asked, 'Do we actually need these numbers to decide whether a client is improving?'

The authors said no. They argued that visual analysis—looking at the graph—already gives a clear answer. Adding inferential statistics only clouds the picture.

02

What they found

The paper claims inferential statistics are redundant. Visual inspection of single-case graphs shows level, trend, and variability. That is enough to make a data-based decision.

Extra tests can mislead: with enough observations, even a tiny, clinically meaningless change can produce a significant p-value. The authors urge BCBAs to stick with what they can see.

03

How this fits with other research

Branch (2019) backs the same anti-p-value view twenty-one years later. Branch says behavior-analytic replication habits can rescue mainstream science from its 'reproducibility crisis.'

Franck et al. (2019) and Killeen (2019) partly agree, but they offer replacements—Bayesian credible intervals and predictive metrics—instead of flat rejection. They update the debate with concrete tools.
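To make the "interval estimation" alternative concrete, here is a minimal sketch of a normal-approximation 95% interval around a phase mean. It stands in for the Bayesian credible intervals Franck et al. discuss (under a flat prior and known variance the two coincide numerically); the data are hypothetical:

```python
# Sketch: an interval estimate around a phase mean, the kind of
# replacement for a bare p-value discussed by Franck et al. (2019).
# Data values are hypothetical session counts.
from statistics import mean, stdev
from math import sqrt

intervention = [9, 7, 6, 4, 3, 2]
m = mean(intervention)
se = stdev(intervention) / sqrt(len(intervention))  # standard error
lo, hi = m - 1.96 * se, m + 1.96 * se               # 95% interval

print(f"mean = {m:.2f}, 95% interval = ({lo:.2f}, {hi:.2f})")
```

Note the abstract's counterpoint: Hopkins et al. argue interval estimation conflicts with the behavior-analytic commitment to the individual, so these tools remain contested, not settled.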

Lyons (1995) and Powell et al. (2020) seem to contradict the target article: they praise statistical process control charts for team decisions. The disagreement is smaller than it looks, though. Control charts supplement visual trends rather than override them, so the papers actually align on keeping the graph primary.

04

Why it matters

You already graph behavior every day. This paper gives you permission to trust your eyes. Skip the extra t-test homework. Instead, teach supervisors, parents, and teachers to read the same visual cues. You will save time and keep the focus on meaningful change, not on chasing p-values.

→ Action: try this Monday

Remove any t-test or ANOVA line from your next progress report—let the graph speak.

02 At a glance

Intervention: not applicable
Design: theoretical
Finding: not reported

03 Original abstract

Researchers continue to recommend that applied behavior analysts use inferential statistics in making decisions about effects of independent variables on dependent variables. In many other approaches to behavioral science, inferential statistics are the primary means for deciding the importance of effects. Several possible uses of inferential statistics are considered. Rather than being an objective means for making decisions about effects, as is often claimed, inferential statistics are shown to be subjective. It is argued that the use of inferential statistics adds nothing to the complex and admittedly subjective nonstatistical methods that are often employed in applied behavior analysis. Attacks on inferential statistics that are being made, perhaps with increasing frequency, by those who are not behavior analysts, are discussed. These attackers are calling for banning the use of inferential statistics in research publications and commonly recommend that behavioral scientists should switch to using statistics aimed at interval estimation or the method of confidence intervals. Interval estimation is shown to be contrary to the fundamental assumption of behavior analysis that only individuals behave. It is recommended that authors who wish to publish the results of inferential statistics be asked to justify them as a means for helping us to identify any ways in which they may be useful.

The Behavior Analyst, 1998 · doi:10.1007/BF03392787