Assessment & Research

Examining and Enhancing the Methodological Quality of Nonconcurrent Multiple-Baseline Designs

Kratochwill et al. (2022) · Perspectives on Behavior Science
★ The Verdict

Randomize start points and replicate tiers when you run nonconcurrent multiple-baseline designs.

✓ Read this if you're a BCBA who publishes or supervises single-case studies in schools or clinics.
✗ Skip if you're a practitioner who only runs concurrent designs or isn't planning to publish.

01 Research in Context

01

What this study did

Kratochwill et al. (2022) examined nonconcurrent multiple-baseline designs. These are multiple-baseline studies in which the tiers are not measured at the same time; each participant's baseline runs in its own window of calendar time.

The authors wrote a how-to paper explaining ways to strengthen these designs without requiring every tier to run at once.

02

What they found

The paper argues you can still guard against history effects by randomizing intervention start points and replicating tiers.

They acknowledge that concurrent designs remain the safer choice. Yet with these upgrades, the nonconcurrent version becomes an acceptable option that can hold up in peer review.

03

How this fits with other research

Slocum et al. (2022) take a slightly different stance. They argue nonconcurrent designs can match concurrent ones if you build in enough within-tier replication. Kratochwill et al. add randomized timing on top.

Petursdottir et al. (2018) came earlier, mapping the validity threats that apply to single-case research. Kratochwill et al. zoom in on one of those threats, history, and offer a fix.

Barnard-Brak et al. (2021) tackle a different problem: inflated effect sizes. Both papers push the same message: tighten your method, or your numbers will fool you.

04

Why it matters

School teams often can’t start everyone on the same day. If you must stagger, now you have a checklist: randomize start points, replicate tiers, and document each step. Reviewers will trust your data, and you can keep using practical start dates.

Free CEUs

Want CEUs on This Topic?

The ABA Clubhouse has 60+ free CEUs — live every Wednesday. Ethics, supervision & clinical topics.

Join Free →
→ Action — try this Monday

Write a random-order list of start days for your next staggered baseline and add one extra tier.
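
If it helps to make that concrete, here is a minimal Python sketch of the step, assuming three core tiers plus one extra replication tier; the tier labels and candidate start days are placeholders for illustration, not values from the paper.

```python
import random

# Hypothetical tiers: three participants plus one extra replication tier.
tiers = ["Participant A", "Participant B", "Participant C", "Participant D (extra tier)"]

# Candidate intervention start days (session numbers), assumed for illustration;
# choose values that respect your minimum baseline length and session schedule.
candidate_start_days = [6, 9, 12, 15]

# Randomly assign one start day to each tier and document the assignment up front.
random.shuffle(candidate_start_days)
for tier, start_day in zip(tiers, candidate_start_days):
    print(f"{tier}: begin intervention at session {start_day}")
```

Keep the printed list with your study records so reviewers can see the start points were set by chance rather than chosen after the fact.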

02 At a glance

Intervention: not applicable
Design: methodology paper
Finding: not reported

03 Original abstract

In this article we respond to the recent recommendation of Slocum et al. (2022), who provided conceptual and methodological recommendations for reconsidering the credibility and validity of the nonconcurrent multiple-baseline design. We build on these recommendations and offer replication and randomization upgrades that should further improve the status of the nonconcurrent version of the design in standards and single-case design research. Although we suggest that the nonconcurrent version should be an acceptable methodological option for single-case design researchers, the traditional concurrent multiple-baseline design should generally be the design of choice.

Perspectives on Behavior Science, 2022 · doi:10.1007/s40614-022-00341-2