Assessment & Research

Experimental control in the adapted alternating treatments design: A review of procedures and outcomes

Cariveau et al. (2022) · Behavioral Interventions 2022
★ The Verdict

Nearly half of published AATD studies skip any no-treatment control, so add one and graph it to meet basic quality standards.

✓ Read this if you're a BCBA who compares two interventions within a single case.
✗ Skip if you're a clinician who only runs reversal or multiple baseline designs.

01Research in Context

01

What this study did

Cariveau et al. (2022) reviewed every adapted alternating treatments design (AATD) paper they could find across 27 journals to see how many used clear experimental-control procedures.

AATD lets you test two teaching methods at once. You switch back and forth between them across sessions. The catch is you still need a no-treatment target set to prove the teaching caused the gain.

02

What they found

Only just over half of the AATD studies arranged any kind of control procedure. Even among those, participants' responding often showed patterns suggesting threats to internal validity, such as gains in the untaught control set.

In short, the field is publishing flashy comparisons without the safety check of a baseline.

03

How this fits with other research

Cariveau et al. (2021) asked a different question about the same design. They saw that most authors also skip the step of making sure the two skill sets are equally hard. Together the two reviews paint the same picture: we rush to compare but skip the groundwork.

McMillan (1973) warned about this fifty years ago. That paper told analysts to watch for sequence effects and to add extra checks. Cariveau’s numbers show we still have not fixed the problem.

Bergmann et al. (2023) looked at procedural fidelity in JABA and found the same gap. Studies describe what they did but rarely show proof it was done right. The three audits line up: we like new methods more than we like proving they worked.

04

Why it matters

If you run an AATD without a no-treatment control, you cannot be sure the child learned because of your method. Add one untaught target set and graph it. That single line turns a flashy comparison into real science.

→ Action — try this Monday

Pick one target you will not teach yet. Track it across sessions and plot the data beside your two treatments.
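If you track your sessions in a spreadsheet or script, the check above can be automated. This is a minimal sketch with hypothetical data; the helper name `control_stayed_flat` and the 20% threshold are illustrative choices, not from the paper:

```python
# Hypothetical AATD data: percent correct per session for each target set.
sessions = {
    "treatment_a": [0, 20, 40, 60, 80, 100],
    "treatment_b": [0, 10, 30, 40, 60, 70],
    "control":     [0, 0, 10, 0, 10, 0],  # untaught set: should stay near floor
}

def control_stayed_flat(control, threshold=20):
    """Return True if the untaught control set never rises above
    `threshold` percent correct. A rise suggests a potential threat
    to internal validity (e.g., practice or carryover effects)."""
    return max(control) <= threshold

if control_stayed_flat(sessions["control"]):
    print("Control set flat: gains are more plausibly due to teaching.")
else:
    print("Control set rose: check for threats to internal validity.")
```

Plot the control series on the same graph as your two treatments; a flat line at the bottom is the visual version of this check.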

02At a glance

Intervention
not applicable
Design
systematic review
Finding
control procedures present in just over half of reviewed studies

03Original abstract

The adapted alternating treatments design (AATD) is a single‐case experimental design (SCD) that allows for the comparison of two or more instructional procedures on the acquisition of non‐reversible behaviors. Recent descriptions of quality indicators and methodological practices (e.g., equating target sets) specific to the AATD may help guide researchers and clinicians interested in using this design, although additional descriptions of best practices are warranted. One area that has not been considered previously are methods to demonstrate experimental control in the AATD; a two‐step process that involves experimental procedures and outcomes of the study. The current review analyzed studies published using the AATD in 27 journals to describe researchers' use of methods that allow for the demonstration of experimental control (e.g., no‐treatment control condition or combined experimental designs) and, when present, whether participants' responding suggested that potential threats to internal validity were present. The current review found that authors arranged for some type of control procedure in just over half of the reviewed studies. These studies also commonly adhered to recommended practices by frequently assessing responding in the control condition; nevertheless, participant performance suggesting potential threats to internal validity were common. Recommended practices and areas for future research are considered.

Behavioral Interventions, 2022 · doi:10.1002/bin.1865