Assessment & Research

Commentary on Slocum et al. (2022): Additional Considerations for Evaluating Experimental Control

Smith et al. (2022) · Perspectives on Behavior Science
★ The Verdict

Nonconcurrent multiple baseline designs can be as valid as concurrent ones—check baseline quality and your audience first.

✓ Read this if: you're a BCBA who runs single-case studies in schools, clinics, or homes with tricky scheduling.
✗ Skip if: you only use group designs or have no say in study layout.

01 Research in Context

01 What this study did

Smith et al. (2022) wrote a commentary backing Slocum et al. (2022), who argue that nonconcurrent multiple baseline designs can be methodologically strong.

The paper urges BCBAs to judge each design on its own logic, not just on whether phases run at the same time.

Authors also warn that some journals and reviewers still prefer fully concurrent designs, so know your audience.

02 What they found

The commentary argues that stable baselines, clear level changes, and staggered starts can support internal validity even when tiers are run weeks apart.

In short, timing gaps alone do not kill experimental control.

03 How this fits with other research

Lanovaz et al. (2019) asked a similar question, showing that large, clear effects usually replicate without extra reversal phases. Both papers push the same theme: relax rigid design rules when the data path is obvious.

Bigham et al. (2013) ran simulations and found that stable A-B baselines produce almost no false positives. Smith et al. echo this by stressing that sound baseline logic matters more than concurrent timing.

Suzuki et al. (2023) add another layer: their work shows that low baseline variability boosts trend detection in N-of-1 trials. Smith et al.'s advice pairs well with this: check baseline quality first, then decide whether nonconcurrent staggering is safe.

04 Why it matters

You can run strong single-case studies across different schools or homes without forcing every site to start the same week. Judge the graph, not the calendar: if baselines are stable and effects jump out, you likely have experimental control. Before you submit, though, check the journal's past articles; some still favor concurrent designs.

→ Action — try this Monday

Open your last multiple baseline graph and rate baseline stability, trend, and variability; if all three look clean, nonconcurrent spacing is probably fine.

02 At a glance

Intervention: not applicable
Design: theoretical
Finding: not reported

03 Original abstract

In the target article, Slocum et al. (2022) suggested that nonconcurrent multiple baseline designs can provide internal validity comparable to concurrent multiple baseline designs. We provide further support for this assertion; however, we highlight additional considerations for determining the relative strength of each design. We advocate for a more nuanced approach to evaluating design strength and less reliance on strict adherence to a specific set of rules because the details of the design only matter insofar as they help researchers convince others that the results are valid and accurate. We provide further support for Slocum et al.’s argument by emphasizing the relatively low probability that within-tier comparisons would fail to identify confounds. We also extend this logic to suggest that staggering implementation of the independent variable across tiers may be an unnecessary design feature in certain cases. In addition, we provide an argument that nonconcurrent multiple baseline designs may provide verification within baseline logic contrary to arguments made by previous researchers. Despite our general support for Slocum et al.’s assertions and our advocacy for more nuanced approaches to determining the strength of experimental designs, we urge experimenters to consider the perspectives of researchers from other fields who may favor concurrent multiple-baseline designs and suggest that using concurrent multiple-baseline designs when feasible may foster dissemination of behavior analytic research.

Perspectives on Behavior Science, 2022 · doi:10.1007/s40614-022-00346-x