Strategic and tactical limits of comparison studies.
Head-to-head studies leave you empty-handed; break treatments into parts and test the pieces instead.
Research in Context
What this study did
The author examined head-to-head comparison studies in behavior analysis and asked one simple question: do these studies tell us how to make behavior change better?
His answer was no. Such studies, he argued, only tell us which package wins a race; they skip the step-by-step data we need to build better treatments.
What they found
Comparison studies give vague answers. You learn Treatment A beat Treatment B, but you still do not know which part worked or how strong each part was.
The paper calls this a tactical error. We trade rich, parametric data for a thin yes-or-no verdict.
How this fits with other research
Israel (1978) set the stage. That paper said most fights in our field are really theory-versus-technology mix-ups. Hinson (1988) keeps the same lens but aims it squarely at comparison designs.
Ivancic et al. (2019) update the warning for today. They argue that rule-governed talk stops scientists from seeing what actually happens with clients. Both papers push the same fix: look at your data, not your labels.
Ferron et al. (2017) show the positive path. They built masked visual analysis, a method that gives clear, step-by-step feedback without ever pitting two full treatments against each other (a sketch of its chance-level logic follows below). It is the kind of tool Hinson (1988) says we should chase.
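Masked visual analysis pairs randomized intervention start points with a rater who cannot see the condition labels: if the masked rater still locates the start points, chance alone is an unlikely explanation. Here is a minimal sketch of that chance-level logic, assuming independent guesses among equally likely start points per series; the function name and numbers are hypothetical illustrations, not Ferron et al.'s exact procedure:

```python
from math import comb

def mva_p_value(n_series: int, n_candidate_starts: int, n_correct: int) -> float:
    """Chance probability that a masked rater correctly locates the randomized
    intervention start point in at least n_correct of n_series data series,
    assuming independent guesses among n_candidate_starts equally likely
    start points per series (a binomial tail probability)."""
    p = 1.0 / n_candidate_starts  # per-series chance of a correct pick
    return sum(comb(n_series, k) * p**k * (1 - p)**(n_series - k)
               for k in range(n_correct, n_series + 1))

# Example: 4 participants, 5 possible start sessions each, all 4 located correctly.
print(mva_p_value(4, 5, 4))  # 0.0016: the effect, not chance, likely guided the rater
```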
Why it matters
Next time you plan a study or review one for supervision, skip the horse race. Ask which dose, which component, and which condition change the curve. Swap comparison questions for parametric ones and you will leave each project with a recipe, not a trophy.
Pick one active component in your current intervention and run a mini parametric test on it this week.
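A minimal sketch of what that mini test might look like, assuming you log each session as a (parameter value, responses per minute) pair; the data, the prompt-count parameter, and the numbers below are invented placeholders, not a clinical recommendation:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical session log: (parameter level, responses per minute).
sessions = [
    (1, 4.2), (1, 3.8), (1, 4.5),  # e.g., 1 prompt per trial
    (2, 6.1), (2, 5.9), (2, 6.4),  # 2 prompts per trial
    (3, 6.3), (3, 6.0), (3, 6.2),  # 3 prompts: the curve flattens here
]

# Group sessions by parameter level and summarize the dose-response curve.
by_level = defaultdict(list)
for level, rate in sessions:
    by_level[level].append(rate)

for level in sorted(by_level):
    print(f"parameter={level}: mean rate={mean(by_level[level]):.1f}/min")

# A flattening curve shows where extra "dose" stops paying off --
# the parametric recipe a win/lose comparison never gives you.
```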
Original abstract
A comparison study is an experiment whose primary purpose is to compare directly (regardless of experimental design) at least two different procedures for changing behavior or two or more components of such a procedure. This paper argues that, in spite of their popularity, such studies typically lead to inappropriate inferences with poor generality based on improper evidence gathered in support of the wrong question, thus wasting the limited experimental resources. The discussion considers problems concerning the functions of comparison studies, the nature of the comparisons that are attempted, the generality of their findings, and the limited role that they can play in technological research.
The Behavior Analyst, 1988 · doi:10.1007/BF03392448