Assessment & Research

A survey of publication practices of single‐case design researchers when treatments have small or large effects

Shadish et al. (2016) · Journal of Applied Behavior Analysis
★ The Verdict

Single-case researchers admit they’d rather publish big effects—and some would trim data to get there.

✓ Read this if you're a BCBA who writes, reviews, or relies on single-case research
✗ Skip if you're a practitioner who only reads summary flyers and never opens a journal article

01 Research in Context

01

What this study did

Shadish and team emailed 166 single-case researchers and asked what they would do in each of three roles: submit a result as an author, recommend acceptance as a reviewer, or drop cases to make a treatment look stronger.

The survey presented simulated study summaries showing a range of effects, from large to small. Each respondent chose what they would do next.

02

What they found

Three out of four researchers said they would rather submit the big-effect study. A nontrivial minority (4% to 15%, depending on the scenario) admitted they would drop one or two weak cases to inflate a small effect before submitting.

Only 8% said they would submit the small-effect study as-is. The rest would shelve it or keep collecting data.

03

How this fits with other research

Falakfarsa et al. (2022) audited 215 BAP papers and found that half omit treatment integrity data. Shadish's survey suggests why: weak numbers rarely reach the editor's desk.

Kranak et al. (2021) tracked real JABA submissions. Veteran authors were 2.5 times more likely to get accepted. Shadish hints that veterans may know how to polish big effects before they hit submit.

A van der Miesen et al. (2024) meta-analysis found huge SIB reductions. Its headline Tau-U of -0.90 may look extra impressive partly because small or messy cases stayed in the file drawer.
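The Tau-U cited above belongs to a family of nonoverlap effect sizes for single-case data. Here is a minimal sketch of the basic Tau calculation (pairwise comparison of every baseline point against every treatment point, without the baseline-trend correction that full Tau-U adds). The session data are hypothetical, invented for illustration:

```python
def tau(baseline, treatment):
    """Signed pairwise nonoverlap: -1.0 when every treatment value falls
    below every baseline value, +1.0 when every one rises above."""
    higher = sum(b > a for a in baseline for b in treatment)
    lower = sum(b < a for a in baseline for b in treatment)
    return (higher - lower) / (len(baseline) * len(treatment))


# Hypothetical SIB rates per session (made up, not from the meta-analysis):
baseline = [12, 10, 11, 13]
treatment = [3, 2, 4, 1, 2]
print(tau(baseline, treatment))  # -1.0: complete separation, full reduction
```

A strongly negative Tau, like the -0.90 reported for SIB, means nearly every treatment session fell below nearly every baseline session; values near zero mean the phases overlap heavily.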

04

Why it matters

Your evidence base is tilted. Meta-analyses, clinical guidelines, and your own CEU courses mostly cite winners. Next time you search for “what works,” hunt for dissertations, conference posters, and grey literature too. If you run a single-case study, pre-register your protocol and publish every data path—even the flat ones. Our field grows faster when we see the whole picture, not just the highlight reel.
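That tilt can be demonstrated with a toy simulation (illustrative only; the numbers, threshold, and effect distribution are assumptions, not data from the paper). Draw modest true effects, "publish" only those above a cutoff, and the published average overstates reality:

```python
import random

# Toy file-drawer simulation. Assumed values: true effects centered at 0.3
# (SD 0.2), and an arbitrary submission threshold of 0.4.
random.seed(0)
true_effects = [random.gauss(0.3, 0.2) for _ in range(1000)]
published = [e for e in true_effects if e > 0.4]  # only big effects submitted

overall_mean = sum(true_effects) / len(true_effects)
published_mean = sum(published) / len(published)
print(f"all studies: {overall_mean:.2f} | published only: {published_mean:.2f}")
```

The published mean always exceeds the true mean under this selection rule, which is exactly why meta-analyses built on published studies alone drift upward.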

→ Action — try this Monday

Add one negative or flat data path to your next team presentation to model full transparency

02 At a glance

Intervention
not applicable (survey of researcher behavior)
Design
survey of 166 single-case design researchers
Population
single-case design researchers
Finding
researchers prefer to submit and accept large-effect studies; 4%–15% would drop cases when effects are small

03 Original abstract

The published literature often underrepresents studies that do not find evidence for a treatment effect; this is often called publication bias. Literature reviews that fail to include such studies may overestimate the size of an effect. Only a few studies have examined publication bias in single-case design (SCD) research, but those studies suggest that publication bias may occur. This study surveyed SCD researchers about publication preferences in response to simulated SCD results that show a range of small to large effects. Results suggest that SCD researchers are more likely to submit manuscripts that show large effects for publication and are more likely to recommend acceptance of manuscripts that show large effects when they act as a reviewer. A nontrivial minority of SCD researchers (4% to 15%) would drop 1 or 2 cases from the study if the effect size is small and then submit for publication. This article ends with a discussion of implications for publication practices in SCD research.

Journal of Applied Behavior Analysis, 2016 · doi:10.1002/jaba.308