Reconsidering overlap-based measures for quantitative synthesis of single-subject data: what they tell us and what they don't.
Overlap numbers show control, not size—treat them that way.
Research in Context
What this study did
Carter (2013) wrote a think-piece, not an experiment.
He looked at overlap tools like PND (percentage of non-overlapping data) and PEM (percentage of data exceeding the median).
These count how many treatment points sit outside the baseline range.
Critics say these tools miss big effects whose data still overlap with baseline.
Carter asked: what are these indices really measuring?
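A minimal sketch of that counting, in Python, assuming a behavior we want to increase. The data and function names are illustrative, not from the paper:

```python
def pnd(baseline, treatment):
    """Percentage of non-overlapping data: the share of treatment
    points that exceed the highest baseline point."""
    ceiling = max(baseline)
    return 100 * sum(x > ceiling for x in treatment) / len(treatment)

def pem(baseline, treatment):
    """Percentage of data exceeding the median: the share of
    treatment points above the baseline median."""
    ordered = sorted(baseline)
    n = len(ordered)
    median = (ordered[n // 2] if n % 2
              else (ordered[n // 2 - 1] + ordered[n // 2]) / 2)
    return 100 * sum(x > median for x in treatment) / len(treatment)

baseline = [2, 3, 4, 3, 2]           # made-up session counts
treatment = [5, 6, 5, 7, 6, 8]
print(pnd(baseline, treatment))      # 100.0: every point escaped the box
print(pem(baseline, treatment))      # 100.0
```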
What they found
The paper says overlap indices are not broken.
They simply tell you if the baseline ‘box’ was cracked.
That is experimental control, not how large the change is.
To know size, you need a different ruler.
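A toy illustration with made-up numbers: both cases below crack the baseline box completely (PND = 100%), yet their mean level changes differ by almost a factor of five.

```python
baseline = [2, 3, 4, 3, 2]                 # made-up data
cases = {"small effect": [5, 5, 6, 5, 6],
         "large effect": [14, 16, 15, 17, 15]}

for name, tx in cases.items():
    # overlap ruler: share of treatment points above the baseline max
    pnd = 100 * sum(x > max(baseline) for x in tx) / len(tx)
    # size ruler: difference in phase means
    level = sum(tx) / len(tx) - sum(baseline) / len(baseline)
    print(f"{name}: PND = {pnd:.0f}%, mean level change = {level:.1f}")

# small effect: PND = 100%, mean level change = 2.6
# large effect: PND = 100%, mean level change = 12.6
```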
How this fits with other research
Gaily et al. (1998) and Reid et al. (1999) already said, “go ahead, average those graphs.”
Carter keeps their green light but adds a speed-limit sign: use overlap only to gauge control.
Cohn et al. (2007) ran the numbers and crowned IRD (improvement rate difference) the best overlap index.
Carter nods at their winner yet warns that it still cannot speak to magnitude.
Dodd (1984) showed the C statistic can mislead when the data line up in a straight trend.
Carter widens that warning to every overlap tool, giving the old critique broader scope.
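A small sketch of the C statistic as commonly defined (after Tryon, 1982) shows the problem: a plain straight-line drift, with no intervention anywhere, already yields a "significant" Z. The data are made up for illustration.

```python
import math

def c_statistic(series):
    """Tryon-style C: 1 minus the sum of squared successive
    differences over twice the sum of squared deviations."""
    n = len(series)
    mean = sum(series) / n
    ss_dev = sum((x - mean) ** 2 for x in series)
    ss_diff = sum((series[i + 1] - series[i]) ** 2 for i in range(n - 1))
    c = 1 - ss_diff / (2 * ss_dev)
    z = c / math.sqrt((n - 2) / ((n - 1) * (n + 1)))
    return c, z

drift = list(range(1, 11))    # a straight upward line, no intervention at all
c, z = c_statistic(drift)
print(f"C = {c:.2f}, Z = {z:.2f}")   # C = 0.95, Z = 3.33 -> looks 'significant'
```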
Why it matters
Next time you pool single-case graphs for a parent or an IEP team, report two lines.
Line one: “Control was strong; 90% of points escaped baseline.”
Line two: “Size is still unknown; look at the actual level change.”
This split keeps your summary honest and avoids the ‘overlap means small’ trap.
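If you want to script that two-line report, here is a minimal sketch; the 90% cutoff and the wording are illustrative choices, not the paper's.

```python
def two_line_summary(baseline, treatment):
    # line one: experimental control, read from overlap
    escaped = 100 * sum(x > max(baseline) for x in treatment) / len(treatment)
    # line two: magnitude, read from a separate ruler (here, mean level change)
    level = sum(treatment) / len(treatment) - sum(baseline) / len(baseline)
    control = "strong" if escaped >= 90 else "in question"   # illustrative cutoff
    return (f"Control was {control}; {escaped:.0f}% of points escaped baseline.",
            f"Size: mean level changed by {level:.1f}; judge it against the graph.")

for line in two_line_summary([2, 3, 4, 3, 2], [5, 6, 5, 7, 6, 8]):
    print(line)
# Control was strong; 100% of points escaped baseline.
# Size: mean level changed by 3.4; judge it against the graph.
```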
Add a footnote to your next graph summary: ‘Overlap index = control only; see level change for size.’
Original abstract
Overlap-based measures are increasingly applied in the synthesis of single-subject research. This article considers two criticisms of overlap-based metrics, specifically that they do not measure magnitude of effect and do not adequately correspond with visual analysis. It is argued that these criticisms are based on fundamental misconceptions regarding the nature of effect sizes and their appropriate interpretation in single-subject research. Suggestions for considerations in evaluating single-subject research studies are offered, including the need to separately consider experimental control and magnitude of effect.
Behavior Modification, 2013 · doi:10.1177/0145445513476609