Assessment & Research

Statistical inference in behavior analysis: Experimental control is better.

Perone (1999) · The Behavior Analyst, 1999
★ The Verdict

Skip p-values in single-subject work—your eyes plus replication give stronger proof.

✓ Read this if you're a BCBA who runs or reviews single-subject studies.
✗ Skip if you only work with large-group designs.

01 Research in Context

01

What this study did

Perone (1999) wrote a short conceptual paper. It argued against using p-values in single-subject work.

The author argued that visual analysis plus direct replication gives clearer answers than statistical tests.

No new data were collected; the piece is pure theory.

02

What they found

The paper claims that graphs you can see beat numbers you must trust.

If the line jumps right after you start treatment, and the jump repeats when you withdraw and reinstate the treatment, that is strong proof.

Statistical tests can add noise of their own: they can miss real effects, and they can flag behaviorally trivial ones as significant.
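The second half of that claim can be illustrated with a quick simulation (this sketch is not from the paper; the numbers are invented): with a large enough sample, a conventional null-hypothesis test will call a behaviorally negligible shift "significant."

```python
# Illustration only: a tiny 0.5-unit shift against a spread of 10 units
# is behaviorally trivial, yet with enough data the t statistic soars
# past the conventional ~1.96 cutoff.
import math
import random

random.seed(0)
n = 100_000
baseline  = [random.gauss(50.0, 10.0) for _ in range(n)]
treatment = [random.gauss(50.5, 10.0) for _ in range(n)]  # negligible shift

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Welch's t statistic for two independent samples.
t = (mean(treatment) - mean(baseline)) / math.sqrt(
    var(baseline) / n + var(treatment) / n
)
print(f"t = {t:.2f}")  # far beyond 1.96, despite a negligible effect
```

Statistical significance here says nothing about whether the change would matter to a client, which is exactly the kind of disconnect the paper highlights.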

03

How this fits with other research

Catania (2021) also doubts common lab rules. He shows that one reinforcer rarely boosts the exact same response next trial. Both papers push you to watch data, not assume.

Gallistel (2025) looks like a clash but is not. Gallistel wants new math for learning curves. Perone (1999) says skip the math and look at the picture. The gap is method, not goal: both want clear proof.

Malone (1999) roots today’s view in Thorndike. Perone (1999) uses that same history to say visual control has always been the gold standard.

04

Why it matters

Next time you run an ABAB or multiple baseline, trust your eyes first. Plot the data, draw the phase lines, and ask: can I see the change? If yes, replicate. If not, tweak the program. Skip the t-test.

Free CEUs

Want CEUs on This Topic?

The ABA Clubhouse has 60+ free CEUs — live every Wednesday. Ethics, supervision & clinical topics.

Join Free →
→ Action — try this Monday

Graph your last client’s data, draw phase lines, and decide if you can see the effect before you open any stats software.
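Before reaching for stats software, the within-subject replication logic behind those phase lines can also be checked numerically. A minimal sketch with made-up ABAB data (nothing below comes from Perone, 1999):

```python
# Hypothetical ABAB data: responses per session in each phase.
# All names and numbers are illustrative.
phases = {
    "A1": [12, 14, 13, 12],   # baseline
    "B1": [25, 28, 27, 29],   # treatment introduced
    "A2": [15, 13, 14, 12],   # return to baseline
    "B2": [27, 30, 28, 31],   # treatment reinstated
}

def mean(xs):
    return sum(xs) / len(xs)

levels = {name: mean(data) for name, data in phases.items()}

# The replication logic: the effect is convincing only if the level
# shifts at BOTH introductions of treatment (A1->B1 and A2->B2) AND
# reverses when treatment is withdrawn (B1->A2).
replicated = (
    levels["B1"] > levels["A1"]
    and levels["A2"] < levels["B1"]
    and levels["B2"] > levels["A2"]
)
print({k: round(v, 1) for k, v in levels.items()})
print("effect replicated within subject:", replicated)
```

If the pattern holds on the graph and in the phase levels, you already have the within-subject replication the paper treats as the real evidence.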

02 At a glance

Intervention: not applicable
Design: theoretical
Finding: not reported

03 Original abstract

Statistical inference promises automatic, objective, reliable assessments of data, independent of the skills or biases of the investigator, whereas the single-subject methods favored by behavior analysts often are said to rely too much on the investigator's subjective impressions, particularly in the visual analysis of data. In fact, conventional statistical methods are difficult to apply correctly, even by experts, and the underlying logic of null-hypothesis testing has drawn criticism since its inception. By comparison, single-subject methods foster direct, continuous interaction between investigator and subject and development of strong forms of experimental control that obviate the need for statistical inference. Treatment effects are demonstrated in experimental designs that incorporate replication within and between subjects, and the visual analysis of data is adequate when integrated into such designs. Thus, single-subject methods are ideal for shaping-and maintaining-the kind of experimental practices that will ensure the continued success of behavior analysis.

The Behavior Analyst, 1999 · doi:10.1007/BF03391988