Assessment & Research

Honoring Uncontrolled Events: Commentary on Slocum et al.

Horner et al. (2022) · Perspectives on Behavior Science
★ The Verdict

Stick with concurrent multiple-baseline designs when you can—they still outrank nonconcurrent for clean proof.

✓ Read this if you are a BCBA writing theses, grants, or district evaluations and must defend your design choice.
✗ Skip if you are an RBT who only implements plans and never designs studies.

01 Research in Context

01

What this study did

Horner et al. (2022) wrote a short commentary on Slocum et al.'s article. Slocum's team had argued that nonconcurrent multiple-baseline designs can be trusted again. Horner's group says 'yes, but.'

They explain why concurrent designs still give stronger proof. They want BCBAs to pick the tougher design when time and resources allow.

02

What they found

The paper does not give new data. It gives a rule of thumb: use concurrent designs first. Use nonconcurrent only when you must.

They warn that real-life events—holidays, sick days, staff changes—can muddy nonconcurrent results.

03

How this fits with other research

Eckard et al. (2020) also urge BCBAs to stick with what they can see. Eckard says skip made-up clocks. Horner says skip made-up control.

Parsons et al. (2013) push for school-research teams. Horner's view backs them up: better designs plus better partnerships equals better evidence.

Reese (2001) shows debates recycle. Horner's stance is the latest loop in the long fight over what counts as solid proof in behavior analysis.

04

Why it matters

You may run a multiple-baseline study soon. If you can start all baselines on the same day, do it. You will block hidden events that could fake your effect. When logistics block you, Horner gives clear steps to keep nonconcurrent designs honest—space sessions close, watch for local news, and graph early to spot drift.

→ Action — try this Monday

Open your last graph and check if baselines started at the same time—if not, note the risk in your report.

02 At a glance

Intervention: not applicable
Design: theoretical (commentary)
Finding: not reported

03 Original abstract

In this special section of Perspectives on Behavior Science, Slocum et al. (2022) provide a summary of the logic and protocol for the construction, implementation, and analysis of single-case multiple-baseline designs. A major contribution of this article is a reassessment of the nonconcurrent multiple baseline design as a credible approach to documenting experimental control. In this commentary we provide considerations for readers as they approach the Slocum et al. article and suggest that although the resurrection of nonconcurrent multiple-baseline designs to a higher status is warranted, researchers will find more control for threats to internal validity in concurrent multiple-baseline designs, and the concurrent format should remain the preferred option.

Perspectives on Behavior Science, 2022 · doi:10.1007/s40614-022-00345-y