A survey of publication practices of single‐case design researchers when treatments have small or large effects
Single-case researchers admit they’d rather publish big effects—and some would trim data to get there.
Research in Context
What this study did
Shadish and team emailed a survey to 166 single-case researchers. It asked what they would do with a given set of results: submit the study, recommend acceptance as a reviewer, or drop cases to make the treatment look stronger.
The survey presented simulated study summaries with effects ranging from small to large. Everyone picked what they would do next.
What they found
Three out of four researchers said they would rather submit the big-effect study. A nontrivial minority (4% to 15%) admitted they would drop one or two cases to make a small effect look bigger before submitting.
Only 8% said they would submit the small-effect study. The rest would shelve it or keep collecting data.
How this fits with other research
Falakfarsa et al. (2022) audited 215 Behavior Analysis in Practice (BAP) papers and found that about half omitted treatment integrity data. Shadish's survey suggests part of the reason: unflattering numbers may never reach the editor's desk.
Kranak et al. (2021) tracked real JABA submissions and found veteran authors were 2.5 times more likely to get accepted. Shadish's results hint that veterans may know how to polish big effects before they hit submit.
A van der Miesen et al. (2024) meta-analysis found large reductions in self-injurious behavior (SIB). Its headline Tau-U of -0.90 may look extra impressive partly because small or messy cases stayed in the file drawer.
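For readers who have not met Tau-U: it is a nonoverlap effect size for single-case data. Below is a minimal sketch of the basic Tau calculation it builds on (Tau-U adds a correction for baseline trend, which this sketch omits); the phase data are hypothetical, and sign conventions vary by software.

```python
# Minimal sketch of the basic Tau nonoverlap statistic for one A-B
# comparison. Tau-U additionally corrects for baseline trend; that
# correction is omitted here. All data are hypothetical.

def tau_nonoverlap(baseline, treatment):
    """Tau = (pairs where B > A minus pairs where B < A) / all pairs.

    For a behavior we want to reduce (like SIB), treatment values below
    baseline drive Tau negative, so Tau near -1 means a strong reduction.
    """
    pairs = [(a, b) for a in baseline for b in treatment]
    increased = sum(b > a for a, b in pairs)  # behavior went up
    decreased = sum(b < a for a, b in pairs)  # behavior went down
    return (increased - decreased) / len(pairs)

# Hypothetical SIB counts per session: baseline (A), then treatment (B).
print(tau_nonoverlap([9, 8, 10, 9, 11], [3, 2, 4, 1, 2]))  # -1.0
```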
Why it matters
Your evidence base is tilted. Meta-analyses, clinical guidelines, and your own CEU courses mostly cite winners. Next time you search for “what works,” hunt for dissertations, conference posters, and grey literature too. If you run a single-case study, pre-register your protocol and publish every data path—even the flat ones. Our field grows faster when we see the whole picture, not just the highlight reel.
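To make that "tilted evidence base" concrete, here is a toy simulation (all numbers hypothetical, not from the Shadish survey): it draws simulated effects spanning the full small-to-large range, then "publishes" only the large ones and compares the averages.

```python
# Toy file-drawer simulation with hypothetical numbers: when only large
# effects get submitted, the average published effect overstates the truth.
import random

random.seed(1)

def simulated_tau():
    # True effects span the full range (negative = behavior reduction),
    # plus some sampling noise, clamped to Tau's valid [-1, 1] range.
    true_effect = random.uniform(-1.0, 0.0)
    noise = random.gauss(0, 0.15)
    return max(-1.0, min(1.0, true_effect + noise))

taus = [simulated_tau() for _ in range(1000)]
published = [t for t in taus if t <= -0.5]  # only big effects get submitted

print(f"All studies:      mean Tau = {sum(taus) / len(taus):.2f}")
print(f"'Published' only: mean Tau = {sum(published) / len(published):.2f}")
# The published-only average looks noticeably stronger than the truth.
```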
Add one negative or flat data path to your next team presentation to model full transparency.
Original abstract
The published literature often underrepresents studies that do not find evidence for a treatment effect; this is often called publication bias. Literature reviews that fail to include such studies may overestimate the size of an effect. Only a few studies have examined publication bias in single-case design (SCD) research, but those studies suggest that publication bias may occur. This study surveyed SCD researchers about publication preferences in response to simulated SCD results that show a range of small to large effects. Results suggest that SCD researchers are more likely to submit manuscripts that show large effects for publication and are more likely to recommend acceptance of manuscripts that show large effects when they act as a reviewer. A nontrivial minority of SCD researchers (4% to 15%) would drop 1 or 2 cases from the study if the effect size is small and then submit for publication. This article ends with a discussion of implications for publication practices in SCD research.
Journal of Applied Behavior Analysis, 2016 · doi:10.1002/jaba.308