Assessment & Research

Single-case synthesis tools II: Comparing quantitative outcome measures.

Zimmerman et al. (2018) · Research in Developmental Disabilities
★ The Verdict

Your meta-analysis can flip from "works" to "uncertain" just by changing the effect-size formula—always run sensitivity checks.

✓ Read this if you're a BCBA who writes or consumes single-case meta-analyses on sensory, communication, or self-regulation interventions.
✗ Skip if you're a practitioner who only collects data for clinical decisions and never synthesizes across studies.

01 Research in Context

01 What this study did

The authors ran the same set of sensory-based single-case studies through several common effect-size formulas.

They wanted to see if every formula would tell the same story about how well the interventions worked.

02 What they found

Different formulas gave different answers. One index could call an intervention "highly effective" while another called the same data "uncertain."

Because of this, the overall meta-analytic verdict on sensory interventions flipped depending on the metric chosen.
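
To see how that can happen, here is a minimal illustrative sketch in Python (toy numbers, not the paper's data) using two of the indices the study compared: percentage of non-overlapping data (PND) and the log response ratio. The series is built so that every treatment point just clears the baseline floor, which maxes out PND even though the proportional change is tiny.

```python
import math

def pnd_decrease(baseline, treatment):
    """Percentage of non-overlapping data (PND) for a behavior
    targeted for DECREASE: the share of treatment points falling
    below the lowest baseline point."""
    floor = min(baseline)
    return 100 * sum(t < floor for t in treatment) / len(treatment)

def log_response_ratio(baseline, treatment):
    """Log response ratio (LRR): natural log of the ratio of phase
    means; negative values indicate a reduction from baseline."""
    avg = lambda xs: sum(xs) / len(xs)
    return math.log(avg(treatment) / avg(baseline))

# Toy series: every treatment point sits just under the baseline
# floor, so overlap is zero even though the change is tiny.
baseline  = [10.0, 10.1, 10.0, 10.2]   # e.g., responses per minute
treatment = [9.8, 9.9, 9.7, 9.8]

print(pnd_decrease(baseline, treatment))        # 100.0
print(log_response_ratio(baseline, treatment))  # about -0.03, a ~3% drop
```

By common PND benchmarks, anything above 90% reads as "highly effective," yet the log response ratio puts the reduction at roughly 3%: exactly the kind of split verdict the authors documented.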

03 How this fits with other research

Aydin et al. (2022) extend this worry: their new PCES index adds performance criteria and only weakly correlates with the older non-overlap formulas Zimmerman et al. checked.

King et al. (2025) show a parallel problem: correcting partial-interval recording for duration can also swing meta-analytic results.

Together these papers warn that both the effect-size formula and the recording method can reverse your conclusion.

04 Why it matters

Before you tell a team an intervention is "evidence-based," run the numbers with more than one effect-size index. If the answers disagree, report the range and plan more direct replication with cleaner measurement.

→ Action — try this Monday

Pick one of your recent SCED graphs and compute two different effect sizes; note any difference in interpretation.
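
If you want a starting point, here is a minimal sketch, assuming you type the values from your graph into two lists (the variable names and data below are placeholders). It computes Tau, one of the non-overlap measures from the paper, alongside a within-case standardized mean difference, scaled here by the baseline SD, which is one common variant rather than the paper's exact specification.

```python
from statistics import mean, stdev
from itertools import product

def tau(baseline, treatment):
    """Tau non-overlap: (improving pairs - worsening pairs) / all pairs.
    Written for a behavior you want to INCREASE; swap the comparisons
    for a decrease target."""
    pairs = list(product(baseline, treatment))
    improving = sum(t > b for b, t in pairs)
    worsening = sum(t < b for b, t in pairs)
    return (improving - worsening) / len(pairs)

def smd(baseline, treatment):
    """Within-case standardized mean difference, scaled by the
    baseline SD (one common variant; pooled-SD versions exist too)."""
    return (mean(treatment) - mean(baseline)) / stdev(baseline)

# Placeholder data: replace with the points read off your own graph.
baseline  = [2, 3, 2, 4, 3]
treatment = [4, 5, 4, 6, 5]

print(f"Tau = {tau(baseline, treatment):.2f}")   # 0.92
print(f"SMD = {smd(baseline, treatment):.2f}")   # ~2.39
```

If the two numbers suggest different verdicts, that is the variability Zimmerman et al. documented: report both rather than the more flattering one.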

02 At a glance

Intervention: not applicable
Design: methodology paper
Population: not specified
Finding: inconclusive

03 Original abstract

Varying methods for evaluating the outcomes of single case research designs (SCD) are currently used in reviews and meta-analyses of interventions. Quantitative effect size measures are often presented alongside visual analysis conclusions. Six measures across two classes – overlap measures (percentage non-overlapping data, improvement rate difference, and Tau) and parametric within-case effect sizes (standardized mean difference and log response ratio [increasing and decreasing]) – were compared to determine if choice of synthesis method within and across classes impacts conclusions regarding effectiveness. The effectiveness of sensory-based interventions (SBI), a commonly used class of treatments for young children, was evaluated. Separately from evaluations of rigor and quality, authors evaluated behavior change between baseline and SBI conditions. SBI were unlikely to result in positive behavior change across all measures except IRD. However, subgroup analyses resulted in variable conclusions, indicating that the choice of measures for SCD meta-analyses can impact conclusions. Suggestions for using the log response ratio in SCD meta-analyses and considerations for understanding variability in SCD meta-analysis conclusions are discussed.

Research in Developmental Disabilities, 2018 · doi:10.1016/j.ridd.2018.02.001