Concurrence on Nonconcurrence in Multiple-Baseline Designs: A Commentary on Slocum et al. (2022)
Nonconcurrent multiple-baseline designs are methodologically sound—use them when you can’t start all baselines together.
01Research in Context
What this study did
Ledford (2022) wrote a short paper that backs up Slocum and friends. The topic: nonconcurrent multiple-baseline designs.
These designs start baselines at different times for each learner. Critics say the timing gap hurts proof. Ledford says the gaps are fine if you plan them right.
What they found
The paper finds no fatal flaw in nonconcurrent baselines. Careful staggered starts, stable data, and clear rules still show control.
In short, you can trust the design when a true concurrent start is impossible.
How this fits with other research
Wolfe et al. (2019) gives you a step-by-step visual checklist for any multiple-baseline graph. Use their score sheet on nonconcurrent data and you get the same protection Ledford argues for.
Emerson et al. (2023) scanned 1,800 single-case studies and found most baseline data are not "nice and normal." Their point: judge baseline quality by stability and trend, not by a perfect bell curve. Ledford’s stance matches—timing gaps matter less than steady patterns.
Kodak et al. (2021) tells practitioners to run quick comparison tests and let the learner’s data pick the best method. Ledford extends that spirit: pick the design that fits real-life constraints, then let solid data do the talking.
Why it matters
You no longer need to delay a study until every participant can start baseline the same week. If you keep staggered starts rule-based and show stable paths, nonconcurrent multiple-baseline studies are publishable and ethical. Next time staffing or school calendars clash, use Ledford’s green light and run the design.
Want CEUs on This Topic?
The ABA Clubhouse has 60+ free CEUs — live every Wednesday. Ethics, supervision & clinical topics.
Join Free →Start that new intervention on the only learner ready this week—just stagger the next learner later and keep baseline rules tight.
02At a glance
03Original abstract
Slocum et al. (this issue) provide well-reasoned arguments for the use of nonconcurrent multiple baseline designs in behavior analytic work, despite historical preference for concurrent designs (i.e., simultaneous baseline initiation) and contemporary guidelines in related fields suggesting that nonconcurrent designs are insufficient for evaluating functional relations (What Works Clearinghouse, 2020). I provide a commentary, highlighting major contributions of this article and suggesting areas of further consideration. In sum, I agree with authors that researchers should avoid wholesale dismissal of nonconcurrent designs. I also agree that understanding how multiple-baseline designs control for and allow for detection of threats to internal validity is critical so that authors can apply the variation of the design that allows them to draw confident conclusions about relations between independent and dependent variables.
Perspectives on Behavior Science, 2022 · doi:10.1007/s40614-022-00342-1