Assessment & Research

Quantitative summaries of single-subject studies: What do group comparisons tell us about individual performances?

Baron et al. (2000) · The Behavior Analyst, 2000
★ The Verdict

Pooling single-case numbers creates a group average—always inspect the spread before buying the summary.

✓ Read this if: you're a BCBA who writes or consumes meta-analyses of single-case data.
✗ Skip if: you're a clinician who only reads single studies and never uses pooled stats.

01 Research in Context

01

What this study did

Baron et al. (2000) wrote a think-piece, not an experiment. They looked at how we pool single-case graphs into one number and asked: does that average still speak for each person?

The authors warn that any meta-average is really a group comparison. Group stats can hide the very individual patterns we care about.

02

What they found

The paper reports no new data. Instead it flags a trap: when you merge several AB designs, the pooled score may be driven by one outlier. You might then say the “average client” improved when, in truth, only one did.
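
To make the trap concrete, here is a minimal sketch with invented numbers; the client labels and effect sizes are hypothetical, not taken from the paper:

```python
# Hypothetical effect sizes from five pooled AB designs.
effect_sizes = {
    "Client A": 0.10,
    "Client B": 0.15,
    "Client C": 0.05,
    "Client D": 0.12,
    "Client E": 2.80,  # one outlier
}

pooled_mean = sum(effect_sizes.values()) / len(effect_sizes)
print(f"Pooled mean effect size: {pooled_mean:.2f}")  # 0.64

# The "average client" looks moderately improved, yet four of the
# five clients barely changed. Inspect the spread, not just the mean.
for client, es in sorted(effect_sizes.items(), key=lambda kv: kv[1]):
    print(f"{client}: {es:.2f}")
```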

They urge readers to treat pooled effect sizes as group data and to check the review procedures for confounds before trusting unexpected findings.

03

How this fits with other research

Kollins et al. (1999) disagreed one year earlier. That team said you can safely average single-case studies as long as you don’t call the result a classic meta-analysis. Baron et al. (2000) answer: “Fine, but it is still a group comparison; watch out.”

Gaily et al. (1998) had already endorsed PND-based meta-analysis. Baron et al. (2000) imply that such averages risk the same confound they describe.
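
For readers who have not met PND: it scores a single AB graph as the percentage of treatment-phase points that beat the highest baseline point. A minimal sketch, assuming a target behavior that should increase (the data are invented):

```python
def pnd(baseline, treatment):
    """Percentage of non-overlapping data: share of treatment-phase
    points above the highest baseline point, for a behavior expected
    to increase."""
    ceiling = max(baseline)
    above = sum(1 for point in treatment if point > ceiling)
    return 100 * above / len(treatment)

# Invented session data
baseline = [2, 3, 2, 4]
treatment = [5, 6, 4, 7, 8]
print(f"PND = {pnd(baseline, treatment):.0f}%")  # 80% (4 of 5 points beat 4)
```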

Carter (2013) later reframed the debate, arguing that overlap indices are fine if you use them to judge experimental control, not size of effect. This extends Baron et al.’s caution by showing a safe, narrow use for the numbers.

DeHart et al. (2019) offered a newer path: mixed-effects models keep each client’s data separate while still providing inferential statistics. Their method sidesteps the group-comparison trap that Baron et al. warned about.
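
As a rough illustration of that idea (a sketch in the same spirit, not DeHart et al.’s exact model), a random-intercept model estimates one phase effect while each client keeps their own baseline level. The data frame and column names below are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per session per client.
data = pd.DataFrame({
    "client":  ["A"] * 8 + ["B"] * 8 + ["C"] * 8,
    "phase":   ([0] * 4 + [1] * 4) * 3,       # 0 = baseline, 1 = treatment
    "outcome": [2, 3, 2, 4, 6, 7, 7, 8,       # A: clear change
                3, 2, 3, 3, 3, 4, 3, 3,       # B: little change
                1, 2, 1, 2, 2, 3, 2, 2],      # C: little change
})

# A random intercept per client keeps individual data separate while
# still yielding an inferential estimate of the phase effect.
model = smf.mixedlm("outcome ~ phase", data, groups=data["client"])
result = model.fit()
print(result.summary())
```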

04

Why it matters

Next time you read a review claiming “85% improvement across 30 single-case studies,” pause. Open the forest plot, scan for one huge bar, and ask whether the average is lying. If you run your own literature review, show individual and pooled results side by side. That quick check keeps your clinical decisions tied to real clients, not phantom averages.

→ Action: try this Monday

Pull the last review you cited, replot the effect sizes as a line graph, and see if one study drags the average.
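
A minimal sketch of that Monday check; the study labels and effect sizes below are placeholders:

```python
import matplotlib.pyplot as plt

# Placeholder effect sizes from a hypothetical review.
studies = ["S1", "S2", "S3", "S4", "S5", "S6"]
effects = [0.20, 0.10, 0.30, 0.15, 2.50, 0.25]
pooled = sum(effects) / len(effects)

plt.plot(studies, effects, marker="o", label="individual studies")
plt.axhline(pooled, linestyle="--", label=f"pooled mean = {pooled:.2f}")
plt.ylabel("effect size")
plt.title("Does one study drag the average?")
plt.legend()
plt.show()
```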

02 At a glance

Intervention: not applicable
Design: theoretical
Finding: not reported

03 Original abstract

Kollins, Newland, and Critchfield (1999) responded to our comments about their review by arguing that their quantitative summary was not a meta-analysis and should not be criticized in these terms. We reply that regardless of what they call their review, it included confounding effects that make interpretations of the results problematic. Kollins et al. also argued that unexpected findings of the sort they reported can serve as a spur for further research. We reply that the understanding of findings that deviate from existing knowledge may well require empirical investigation. Such endeavors, however, should begin with an evaluation of the review procedures that suggested the existence of the differences. Finally, we emphasize that quantitative summaries of individual data are, in the end, a form of group comparison. The implications of using group methods to clarify individual data deserve frank recognition in discussions of the outcomes.

The Behavior Analyst, 2000 · doi:10.1007/BF03392004