Methodological quality of meta-analyses of single-case experimental studies.
Most single-case meta-analyses still skip bias checks and use shaky math—demand better before you bank on them.
01 Research in Context
What this study did
Jamshidi et al. (2018) reviewed 178 meta-analyses that pool single-case experiments, published between 1985 and 2015. They wanted to know how well these reviews follow quality rules.
The team used a modified checklist called R-AMSTAR (Revised Assessment of Multiple Systematic Reviews): 11 quality domains scored on a 0–44 scale. They graded 30 years of reviews on things like bias checks and how they combined numbers.
What they found
Quality has gone up since the 1980s, but most reviews still miss key steps. Over half skip tests for publication bias.
Many also pool data with outdated methods that can give shaky answers. The mean quality score was 15.57 out of 44, and 93.8% of the reviews never reached the midpoint of 22.
How this fits with other research
Baek et al. (2023) gives you the fix. Their multilevel-modeling guide shows step-by-step code for safer pooling. This approach replaces the shaky synthesis methods that Jamshidi et al. flagged.
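Baek et al.'s guide is written for R, but the core idea, letting true effects vary across studies via random intercepts, carries over to any mixed-model tool. Here is a minimal Python sketch with hypothetical effect sizes and study labels; a full SCED meta-analysis would also fix each case's known sampling variance (as R's metafor::rma.mv does), which a plain mixed model does not.

```python
# Minimal sketch: pooling single-case effect sizes with a random-intercept
# model (cases nested within studies). All data below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "study":  ["A", "A", "A", "B", "B", "B", "C", "C", "C"],  # level-2 unit
    "effect": [0.8, 1.1, 0.9, 0.4, 0.6, 0.5, 1.3, 0.9, 1.0],  # one per case
})

# Intercept-only mixed model: the fixed intercept estimates the pooled
# effect; the random intercept absorbs between-study heterogeneity.
model = smf.mixedlm("effect ~ 1", data, groups=data["study"])
fit = model.fit(reml=True)
print(fit.summary())  # read the pooled estimate off the Intercept row
```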
Moeyaert et al. (2023) adds the next chapter. They scanned 60 newer SCED meta-analyses and found most still report only 3–4 participant characteristics and lose 15–30% of them. The quality problem is still live.
Tanious (2022) and Elliffe et al. (2019) offer sharper single-case stats. Better randomization and trend tests can feed cleaner data into future meta-analyses, lifting the overall bar.
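As a concrete taste of the randomization logic Tanious (2022) covers, here is a minimal sketch of a start-point randomization test for an AB design. The data and the window of eligible start points are hypothetical; the test simply asks how extreme the observed phase shift is among all the shifts the design could have produced.

```python
# Minimal sketch of a randomization test for a single-case AB design,
# assuming the intervention start was randomly drawn from a window of
# eligible sessions. Data and window below are hypothetical.
import numpy as np

scores = np.array([3, 4, 3, 5, 4, 8, 9, 7, 9, 8, 10, 9])  # session data
actual_start = 5                 # phase B actually began at index 5
eligible_starts = range(3, 10)   # starts the randomization could have chosen

def phase_shift(start):
    """Mean difference between phases if B began at `start`."""
    return scores[start:].mean() - scores[:start].mean()

observed = phase_shift(actual_start)
# Reference distribution: the statistic under every start the design allowed.
reference = np.array([phase_shift(s) for s in eligible_starts])
p_value = np.mean(reference >= observed)  # one-sided p
print(f"observed shift = {observed:.2f}, p = {p_value:.3f}")
```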
Why it matters
Before you trust or cite a SCED meta-analysis, flip to the methods section. If you see no funnel plot, Egger test, or multilevel model, treat the claim as weak. Ask the authors for the bias check, or run it yourself if the data are shared. Push journals to demand these steps at review time.
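If the data are shared, a basic asymmetry check takes only a few lines. Below is a minimal sketch of Egger's regression test with hypothetical effect sizes and standard errors; whether Egger's test suits a given SCED effect-size metric is itself a judgment call, so treat this as a screening tool, not a verdict.

```python
# Minimal sketch of Egger's regression test for funnel-plot asymmetry,
# assuming you've extracted effect sizes and standard errors from the
# meta-analysis. Values below are hypothetical.
import numpy as np
import statsmodels.api as sm

effect = np.array([0.9, 1.2, 0.5, 1.6, 0.7, 1.4, 1.1, 0.3])
se     = np.array([0.20, 0.35, 0.15, 0.50, 0.18, 0.45, 0.30, 0.12])

# Egger's test: regress the standardized effect on precision.
# An intercept far from zero signals small-study (publication) bias.
precision = 1.0 / se
z = effect / se
X = sm.add_constant(precision)
fit = sm.OLS(z, X).fit()
print(f"Egger intercept = {fit.params[0]:.2f} (p = {fit.pvalues[0]:.3f})")
```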
Open the last SCED meta-analysis you bookmarked and check whether it reports a publication-bias test; if not, flag it as low trust.
02 At a glance
03 Original abstract
BACKGROUND: Methodological rigor is a fundamental factor in the validity and credibility of the results of a meta-analysis. AIM: Following an increasing interest in single-case experimental design (SCED) meta-analyses, the current study investigates the methodological quality of SCED meta-analyses. METHODS AND PROCEDURES: We assessed the methodological quality of 178 SCED meta-analyses published between 1985 and 2015 through the modified Revised-Assessment of Multiple Systematic Reviews (R-AMSTAR) checklist. OUTCOMES AND RESULTS: The main finding of the current review is that the methodological quality of the SCED meta-analyses has increased over time, but is still low according to the R-AMSTAR checklist. A remarkable percentage of the studies (93.80% of the included SCED meta-analyses) did not even reach the midpoint score (22, on a scale of 0-44). The mean and median methodological quality scores were 15.57 and 16, respectively. Relatively high scores were observed for "providing the characteristics of the included studies" and "doing comprehensive literature search". The key areas of deficiency were "reporting an assessment of the likelihood of publication bias" and "using the methods appropriately to combine the findings of studies". CONCLUSIONS AND IMPLICATIONS: Although the results of the current review reveal that the methodological quality of the SCED meta-analyses has increased over time, still more efforts are needed to improve their methodological quality.
Research in Developmental Disabilities, 2018 · doi:10.1016/j.ridd.2017.12.016