Assessment & Research

Meta-analyses and effect sizes in applied behavior analysis: A review and discussion

Dowdy et al. (2021) · Journal of Applied Behavior Analysis 2021
★ The Verdict

Meta-analysis of single-case designs is now common in ABA, but choose your effect-size index carefully and understand its assumptions.

✓ Read this if you are a BCBA who reads, writes, or supervises meta-analyses of single-case interventions.
✗ Skip if you only deliver direct therapy and never interpret research syntheses.

01 Research in Context

01

What this study did

Dowdy and colleagues walked readers through the growing pile of ABA meta-analyses that use single-case data. They did not run a new meta-analysis. Instead, they explained why each common effect-size index exists and when it can mislead you.

The paper is a roadmap. It compares Tau, Tau-U, the percentage of nonoverlapping data (PND), the percentage of data exceeding the median (PEM), nonoverlap of all pairs (NAP), the between-case standardized mean difference (BC-SMD), and multilevel modeling options. It flags the assumptions you must check before you pick one.
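
To make two of these indices concrete, here is a minimal Python sketch (not from the paper) computing PND and NAP for a hypothetical case where higher values mean improvement. The session data and function names are illustrative assumptions, not Dowdy et al.'s materials.

```python
# Minimal sketch of two nonoverlap effect-size indices for single-case data.
# Assumes higher values indicate improvement; data are hypothetical.

def pnd(baseline, intervention):
    """Percentage of Nonoverlapping Data: share of intervention points
    that exceed the highest baseline point."""
    ceiling = max(baseline)
    return 100 * sum(x > ceiling for x in intervention) / len(intervention)

def nap(baseline, intervention):
    """Nonoverlap of All Pairs: proportion of baseline-intervention pairs
    in which the intervention point is higher (ties count as 0.5)."""
    pairs = [(a, b) for a in baseline for b in intervention]
    wins = sum(1.0 if b > a else 0.5 if b == a else 0.0 for a, b in pairs)
    return wins / len(pairs)

baseline = [2, 3, 2, 4, 3]          # hypothetical baseline sessions
treatment = [5, 6, 4, 7, 8, 7]      # hypothetical intervention sessions

print(f"PND = {pnd(baseline, treatment):.0f}%")  # sensitive to one extreme baseline point
print(f"NAP = {nap(baseline, treatment):.2f}")   # uses every pairwise comparison
```

The contrast illustrates the trade-offs the paper describes: PND hinges on a single extreme baseline value, while NAP uses all pairwise comparisons, so the two can disagree on the same data set.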

02

What they found

No single index wins every race. Each has trade-offs between power, bias, and ease of use. The authors warn that many published meta-analyses ignore these trade-offs, so readers may over-trust small or shaky effects.

03

How this fits with other research

Wang et al. (2013) and Hutchins et al. (2020) show the upside: when you pick the right index, social-skills meta-analyses report strong, clear effects. Dowdy et al. (2021) caution that the clarity can vanish if you switch to a different index or skip assumption checks.

Costello et al. (2022) extend the warning. They ran head-to-head comparisons and found that visual inspection alone often misses real effects that Tau z and RD catch. Their data back Dowdy's advice to supplement graphs with effect sizes and statistical tests rather than relying on visual analysis alone.

Manolov et al. (2025) push further, giving step-by-step rules for choosing multilevel terms. Together, these papers form a timeline: Dowdy et al. sound the alarm, Costello et al. demonstrate the risk, and Manolov et al. hand you the fix.

04

Why it matters

If you read or write single-case meta-analyses, Dowdy et al. (2021) is your checklist. Before you believe a large Tau-U, ask whether baseline trend was corrected. Before you pool PEM across studies, check whether baseline variability differs. Share this paper with your next literature-review team so everyone picks the same index up front. One concrete habit: add a short table that lists the index, its assumption test, and the result. Reviewers appreciate it, and it safeguards your conclusions.
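
As one example of such an assumption check, here is a minimal sketch (an assumption on my part, not the paper's procedure) of screening for baseline trend before trusting an uncorrected nonoverlap index. It uses Kendall's tau between session number and baseline value; the data are hypothetical, and scipy is assumed to be available.

```python
# Minimal sketch: screen a baseline phase for monotonic trend before
# interpreting Tau, NAP, or PND at face value. Data are hypothetical.

from scipy.stats import kendalltau

baseline = [2, 3, 2, 4, 3, 5, 5, 6]  # hypothetical baseline sessions

tau, p = kendalltau(range(len(baseline)), baseline)
print(f"Baseline trend: tau = {tau:.2f}, p = {p:.3f}")

# A clearly positive tau means the behavior was already improving before
# treatment, so an uncorrected nonoverlap index would overstate the effect;
# use a trend-corrected option (e.g., Tau-U with baseline-trend adjustment)
# or report the baseline trend alongside the effect size.
```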

→ Action — try this Monday

Open your last single-case report and add one sentence that names the effect-size index and states its baseline-trend assumption.

02 At a glance

Intervention: not applicable
Design: narrative review
Finding: not reported

03 Original abstract

For more than four decades, researchers have used meta-analyses to synthesize data from multiple experimental studies often to draw conclusions that are not supported by individual studies. More recently, single-case experimental design (SCED) researchers have adopted meta-analysis techniques to answer research questions with data gleaned from SCED experiments. Meta-analyses enable researchers to answer questions regarding intervention efficacy, generality, and condition boundaries. Here we discuss meta-analysis techniques, the rationale for their adaptation with SCED studies, and current indices used to quantify the effect of SCED data in applied behavior analysis.

Journal of Applied Behavior Analysis, 2021 · doi:10.1002/jaba.862