Meta-Analytic Methods to Detect Publication Bias in Behavior Science Research
Run the free R scripts to check any single-case meta-analysis for missing studies before you submit.
01 Research in Context
What this study did
Dowdy et al. (2022) wrote a how-to guide. They give step-by-step R code for spotting publication bias in single-case meta-analyses.
The paper walks through two tools: funnel plots and p-curves. Both can be run on published studies alone, so they still work when grey literature is out of reach.
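Dowdy et al.'s scripts are in R; as a language-neutral illustration of what a funnel-plot asymmetry check computes, here is a minimal Python sketch of Egger's regression test (regress each standardized effect on its precision; an intercept far from zero hints at asymmetry). The effect sizes and standard errors below are invented for the example.

```python
# Sketch of Egger's regression test for funnel-plot asymmetry.
# Not Dowdy et al.'s code -- their paper supplies the real R scripts.

def egger_intercept(effects, ses):
    """Regress effect/SE on 1/SE with ordinary least squares.
    An intercept well away from zero suggests funnel asymmetry."""
    z = [e / s for e, s in zip(effects, ses)]   # standardized effects
    prec = [1 / s for s in ses]                  # precision (1/SE)
    n = len(z)
    mx, my = sum(prec) / n, sum(z) / n
    sxx = sum((x - mx) ** 2 for x in prec)
    sxy = sum((x - mx) * (y - my) for x, y in zip(prec, z))
    slope = sxy / sxx
    return my - slope * mx                       # the intercept

# Hypothetical effect sizes and standard errors:
effects = [0.80, 0.72, 0.65, 0.90, 0.40, 0.55]
ses     = [0.30, 0.25, 0.20, 0.35, 0.10, 0.15]
print(round(egger_intercept(effects, ses), 2))   # prints 2.07
```

A large positive intercept like this would be a cue to look harder for small, unpublished studies before trusting the pooled effect.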
What they found
This is a methods paper. There are no new data, just free scripts and worked examples.
How this fits with other research
Kaur et al. (2025) mapped 76 consecutive controlled case-series studies. These are exactly the kind of small-N papers that could hide bias. You can run Dowdy’s R code on that list.
Weber et al. (2024) looked back at 20 years of clinical functional analyses. Their big chart of undifferentiated vs. differentiated outcomes is a perfect test case for the funnel-plot tool.
Wallace et al. (2010) counted 174 RFT articles and found most used neurotypical adults. A p-curve on that set might show selective reporting of “sameness” frames. Dowdy’s scripts let you check.
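The p-curve logic mentioned above can be sketched in a few lines: among significant results, a true effect piles p-values near zero (right skew), while a flat or left-skewed curve is consistent with selective reporting. This toy Python version, with invented p-values, just splits the significant range in half; the actual analysis uses the R scripts from the paper.

```python
# Toy sketch of the p-curve idea -- not Dowdy et al.'s implementation.

def p_curve_shape(p_values, alpha=0.05):
    """Split significant p-values into the lower and upper halves of
    (0, alpha). More in the lower half (right skew) is consistent with
    a true effect; flat or left-skewed can signal selective reporting."""
    sig = [p for p in p_values if p < alpha]
    low = sum(p < alpha / 2 for p in sig)   # p < .025
    high = len(sig) - low                   # .025 <= p < .05
    return low, high

# Hypothetical p-values from a literature search:
print(p_curve_shape([0.001, 0.012, 0.031, 0.042, 0.048, 0.20]))
# prints (2, 3)
```

Here the upper half outweighs the lower, the kind of shape that would warrant a closer look at how those studies were selected.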
Why it matters
Before you write your next single-case meta-analysis, paste your effect sizes into Dowdy's R file. If the funnel plot looks lopsided, hunt for missing studies or widen your search. It takes five minutes and can save you from publishing a biased review.
Download the R code, open your last literature search, and make one funnel plot.
02 At a glance
03 Original abstract
Publication bias is an issue of great concern across a range of scientific fields. Although less documented in the behavior science fields, there is a need to explore viable methods for evaluating publication bias, in particular for studies based on single-case experimental design logic. Although publication bias is often detected by examining differences between meta-analytic effect sizes for published and grey studies, difficulties identifying the extent of grey studies within a particular research corpus present several challenges. We describe in this article several meta-analytic techniques for examining publication bias when published and grey literature are available as well as alternative meta-analytic techniques when grey literature is inaccessible. Although the majority of these methods have primarily been applied to meta-analyses of group design studies, our aim is to provide preliminary guidance for behavior scientists who might use or adapt these techniques for evaluating publication bias. We provide sample data sets and R scripts to follow along with the statistical analysis in hope that an increased understanding of publication bias and respective techniques will help researchers understand the extent to which it is a problem in behavior science research.
Perspectives on Behavior Science, 2022 · doi:10.1007/s40614-021-00303-0