Statistical inference in behavior analysis: Having my cake and eating it?
Add a quick nonparametric test to your visual inspection—eyes plus stats beat eyes alone.
01 Research in Context
What this study did
Bell (1999) wrote a think-piece, not an experiment.
He asked one question: can we keep behavior analysis pure while still using simple stats?
His answer was yes—swap eyeball rules for easy nonparametric tests like randomization checks.
What they found
The paper reports no new data.
It argues that quick, assumption-free stats give clearer yes/no answers than staring at graphs.
That clarity, he says, helps both researchers and journal reviewers.
How this fits with other research
Branch (1999), published the same year, says the opposite: drop p-values, trust replication instead.
The clash looks huge, but the two papers talk past each other. Bell wants better tools; Branch wants a new rulebook.
Taylor et al. (2022) later tested the idea. Objective methods matched visual calls about 84% of the time, showing stats can back up, not replace, your eyes.
Lancioni et al. (2008) gave the reason: different people looking at the same FA graph often disagree, so a numeric backup is wise.
Why it matters
Next time you run a single-case study, add one small nonparametric test to your visual check. Free online calculators make it a five-minute step that shields your decision from “it looks flat to me” critiques and meets today’s publication bar.
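As a concrete illustration of such a quick nonparametric check, here is a minimal permutation-test sketch in Python. The data values are hypothetical, and shuffling individual observations assumes they are exchangeable; a full randomization test for a single-case design would instead permute the possible intervention start points, so treat this as a simplified sketch of the idea, not the definitive procedure.

```python
import random

def permutation_test(baseline, treatment, n_perm=5000, seed=42):
    """How often does shuffling the phase labels produce a mean
    difference at least as large as the observed one?
    Assumes exchangeable observations (a simplification)."""
    rng = random.Random(seed)
    observed = abs(sum(treatment) / len(treatment)
                   - sum(baseline) / len(baseline))
    pooled = list(baseline) + list(treatment)
    n_a = len(baseline)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[n_a:]) / (len(pooled) - n_a)
                   - sum(pooled[:n_a]) / n_a)
        if diff >= observed:
            hits += 1
    # Add-one correction keeps the estimated p-value above zero.
    return (hits + 1) / (n_perm + 1)

# Hypothetical AB data: responses per minute, baseline vs. intervention
baseline = [4, 5, 3, 4, 5, 4]
treatment = [8, 9, 7, 10, 8, 9]
p = permutation_test(baseline, treatment)
print(f"p = {p:.4f}")
```

A small p-value here says the phase difference is larger than shuffling alone would typically produce, giving your visual call an explicit numeric backup.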
Pick one online randomization test, run it on your last AB graph, and compare the p-value to what you thought you saw.
02 At a glance
03 Original abstract
Using simple, nonparametric statistical procedures can formalize the process of letting data speak for themselves, and can eliminate the gratuitous dismissal of deviant data from subjects or conditions. These procedures can act as useful discriminative stimuli, both for behavior analysts and for those from other areas of psychology who occasionally sample our journals. I also argue that publication policies must change if behavior analysts are to accurately discriminate between real, reliable effects (hits) and false alarms.
The Behavior Analyst, 1999 · doi:10.1007/BF03391986