Examining and Enhancing the Methodological Quality of Nonconcurrent Multiple-Baseline Designs
Randomize start points and replicate tiers when you run nonconcurrent multiple-baseline designs.
01 Research in Context
What this study did
Kratochwill et al. (2022) examined nonconcurrent multiple-baseline designs — studies in which each participant's baseline starts at a different time.
The authors wrote a how-to paper explaining ways to strengthen these designs without running all baselines at the same time.
What they found
The paper argues you can still guard against history effects by adding random start points and repeating tiers.
The authors concede that concurrent designs are safer. With these tweaks, though, nonconcurrent ones can hold up to peer review.
How this fits with other research
Slocum et al. (2022) takes a slightly different position: nonconcurrent designs can match concurrent ones if you add enough within-tier replications. Kratochwill et al. layer random timing on top of that.
Petursdottir et al. (2018) came first, mapping the validity threats in single-case research. Kratochwill et al. zoom in on one of those threats and offer a fix.
Barnard-Brak et al. (2021) tackles a different problem — inflated effect sizes — but both papers push the same message: tighten your method or your numbers will fool you.
Why it matters
School teams often can’t start everyone on the same day. If you must stagger, now you have a checklist: randomize start points, replicate tiers, and document each step. Reviewers will trust your data, and you can keep using practical start dates.
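The randomization step in that checklist is easy to script. Below is a minimal sketch (not from the paper) that assigns each tier a randomly chosen baseline start day and prints the schedule for documentation; the tier names, candidate days, and the helper `randomize_start_points` are all hypothetical.

```python
import random

def randomize_start_points(tiers, candidate_days, seed=None):
    """Assign each tier a distinct, randomly chosen start day.

    Hypothetical helper: draws len(tiers) days without replacement
    from a pool of feasible start dates, then orders them so earlier
    tiers get earlier starts (preserving the staggered structure).
    """
    rng = random.Random(seed)  # fixed seed makes the draw reproducible
    days = rng.sample(candidate_days, k=len(tiers))
    # Record the assignment so reviewers can audit the schedule.
    return dict(zip(tiers, sorted(days)))

# Example: three students plus one extra replication tier.
schedule = randomize_start_points(
    tiers=["Student A", "Student B", "Student C", "Replication tier"],
    candidate_days=[5, 8, 12, 15, 19, 22, 26],
    seed=42,
)
for tier, day in schedule.items():
    print(f"{tier}: intervention starts on day {day}")
```

Fixing the seed and saving the printed schedule gives you the "document each step" part of the checklist for free.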
Try it yourself: write a random-order list of start days for your next staggered baseline and add one extra tier.
02 At a glance
03 Original abstract
In this article we respond to the recent recommendation of Slocum et al. (2022), who provided conceptual and methodological recommendations for reconsidering the credibility and validity of the nonconcurrent multiple-baseline design. We build on these recommendations and offer replication and randomization upgrades that should further improve the status of the nonconcurrent version of the design in standards and single-case design research. Although we suggest that the nonconcurrent version should be an acceptable methodological option for single-case design researchers, the traditional concurrent multiple-baseline design should generally be the design of choice.
Perspectives on Behavior Science, 2022 · doi:10.1007/s40614-022-00341-2