Assessment & Research

Statistical inference in behavior analysis: Environmental determinants?

Ator (1999) · The Behavior Analyst, 1999
★ The Verdict

Clear visual control beats p-values in single-subject work, but keep a small stat tool in your back pocket for reviewers.

✓ Read this if: you're a BCBA who writes or reviews single-case studies.
✗ Skip if: you're a group-design researcher who lives on ANOVA.

01 Research in Context

01

What this study did

Ator (1999) wrote a position paper, not an experiment. The author asked one question: do single-subject studies need p-values?

The paper says no. If your reversal or multiple-baseline graph shows a clear jump, that is enough. You only learn stats so you can tell reviewers why the jump beats a t-test.

02

What they found

The finding is a stance, not a data set. Ator (1999) argues that visual analysis already gives the answer. Adding inferential stats is 'usually superfluous.'

03

How this fits with other research

Manolov et al. (2025) is the direct successor. They built a free web app that computes NAP and other overlap indices. Where Ator (1999) said 'skip the stats,' the 2025 team says 'use these quick numbers to make your visual story clearer.' The field moved from 'no stats' to 'helpful stats.'

Holburn (1997) is a close cousin. Both 1990s papers distrust mainstream statistics. Holburn (1997) pushes conditional probability instead of p-values, while Ator (1999) pushes pure visual judgment. They agree: traditional tests miss behavioral contingencies.

Cullinan et al. (2001) shows the middle path. Their brief computer task uses simple counts of time allocation, not p-values, to spot reinforcer effects. It is the kind of low-inference quantification Ator (1999) concedes can be useful.

04

Why it matters

You can stop apologizing for not running t-tests on your ABAB graph. When a journal reviewer demands stats, point to the stable baseline, the immediate shift, and the replicated return. If you want extra ammo, run the free NAP tool from Manolov et al. (2025) and drop one clean number next to your graph. You keep the visual story and give the reviewer a numeric crumb.

→ Action — try this Monday

Open your last reversal graph and run the free NAP calculator from Manolov et al. (2025); paste the index under your figure before you submit.
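If you want to see what the NAP index actually measures before running the web tool, here is a minimal sketch of the Nonoverlap of All Pairs calculation (Parker & Vannest's overlap index), assuming higher values in the treatment phase indicate improvement. The function name and example data are illustrative, not taken from Manolov et al. (2025):

```python
# Nonoverlap of All Pairs (NAP) for a single AB phase comparison.
# Assumption: an increase in the target behavior is the desired direction.

def nap(baseline, treatment):
    """Proportion of all baseline/treatment data-point pairs in which
    the treatment point exceeds the baseline point; ties count as half.
    1.0 = complete nonoverlap, 0.5 = chance-level overlap."""
    pairs = len(baseline) * len(treatment)
    score = sum(
        1.0 if b > a else 0.5 if b == a else 0.0
        for a in baseline
        for b in treatment
    )
    return score / pairs

# Example: a stable baseline followed by an immediate, sustained shift.
print(nap([2, 3, 2, 3], [8, 9, 7, 8]))  # complete nonoverlap -> 1.0
```

A NAP near 1.0 is the numeric twin of the visual judgment the article defends: when every treatment point clears every baseline point, the graph and the index tell the same story.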

02 At a glance

Intervention
Not applicable
Design
Position paper (theoretical)
Finding
Not reported (conceptual argument)

03 Original abstract

Use of inferential statistics should be based on the experimental question, the nature of the design, and the nature of the data. A hallmark of single-subject designs is that such statistics should not be required to determine whether the data answer the experimental question. Yet inferential statistics are being included more often in papers that purport to present data relevant to the behavior of individual organisms. The reasons for this too often seem to be extrinsic to the experimental analysis of behavior. They include lapses in experimental design and social pressure from colleagues who are unfamiliar with single-subject research. Regardless of whether inferential statistics are used, behavior analysts need to be sophisticated about experimental design and inferential statistics. Such sophistication not only will enhance design and analysis of behavioral experiments, but also will make behavior analysts more persuasive in presenting rationales for the use or nonuse of inferential statistics to the larger scientific community.

The Behavior Analyst, 1999 · doi:10.1007/BF03391985