A comparative analysis of experimental designs for procedural fidelity investigations
Unsignaled rapid alternations in multielement designs can hide the real impact of fidelity errors.
01 Research in Context
What this study did
The team tested two ways to study fidelity errors. One design switched conditions every 5 minutes without warning. The other gave a clear signal before each switch.
They used college students as participants. The task was simple: press a key to earn points. DRA at 100% fidelity delivered points every time; DRA at 50% fidelity delivered points only half the time.
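The two fidelity conditions boil down to a probabilistic point-delivery rule. Here is a minimal Python sketch of that rule; the function name `deliver_point` and the simulation setup are illustrative assumptions, not the study's actual software:

```python
import random

def deliver_point(fidelity: float, rng: random.Random) -> bool:
    """Return True if a key press earns a point.

    fidelity=1.0 models DRA 100% (every press reinforced);
    fidelity=0.5 models DRA 50% (about half of presses reinforced).
    """
    return rng.random() < fidelity

# Simulate 1,000 presses under each condition (seeded for reproducibility).
rng = random.Random(0)
earned_100 = sum(deliver_point(1.0, rng) for _ in range(1000))
earned_50 = sum(deliver_point(0.5, rng) for _ in range(1000))
```

Under this sketch, the 100% condition pays out on every press, while the 50% condition pays out on roughly half, which is the contrast the participants experienced.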
What they found
DRA 100% beat DRA 50% in both designs. Students pressed the target key more when reinforcement was unpredictable at 50% fidelity.
The surprise came in the unsignaled rapid alternations. The gap between DRA 100% and DRA 50% shrank. The design itself hid part of the fidelity effect.
How this fits with other research
Lancioni et al. (2008) already showed that visual inspection can mislead. Their survey found low agreement when people eyeball graphs. Abuin et al. now add that the design itself can trick the eye.
Heinicke et al. (2012) showed that high-fidelity implementation works in classrooms. Their review of 687 cases found near-perfect success when teachers implemented with high fidelity. Abuin et al. do not contradict this. They simply warn that lab designs may understate how much fidelity matters.
Dutt et al. (2019) built a scale to measure teacher fidelity. Abuin et al. remind us that even good tools can fail if the study design hides real differences.
Why it matters
When you run a multielement design to test fidelity, signal each condition clearly. Rapid, unsignaled switches can make small fidelity errors look harmless. Use slower alternations or clear signals so the data tell the true story.
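The recommendation above amounts to a scheduling rule: before each rapid condition switch, present a distinct cue. A small Python sketch of such a schedule follows; `multielement_schedule` and the cue format are hypothetical names for illustration, not part of the published procedure:

```python
import itertools

def multielement_schedule(conditions, n_sessions, signaled=True):
    """Yield (cue, condition) pairs for rapidly alternating sessions.

    With signaled=True, each session is preceded by a distinct cue
    naming the upcoming condition; unsignaled runs emit no cue.
    """
    order = itertools.cycle(conditions)
    for _ in range(n_sessions):
        cond = next(order)
        cue = f"CUE:{cond}" if signaled else None
        yield cue, cond

plan = list(multielement_schedule(["DRA 100%", "DRA 50%"], 4))
# e.g., first pair is ("CUE:DRA 100%", "DRA 100%")
```

Setting `signaled=False` reproduces the arrangement the study flags as risky: rapid, uncued alternation between fidelity levels.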
Add a 5-second signal before each condition change in your next multielement probe.
02 At a glance
03 Original abstract
Differential reinforcement of alternative behavior (DRA) reduces challenging behavior and increases alternative responding when implemented as designed. Deviations from treatment protocols (i.e., fidelity errors) reduce the efficacy of DRA. To understand the effects of fidelity errors during DRA, researchers have used multielement and reversal designs but have not directly compared effects of fidelity errors across designs. The present experiments compared effects of fidelity errors on DRA using reversal and multielement designs in a translational arrangement. Twelve undergraduates experienced a computer program in which alternations between DRA with 100% fidelity (DRA 100%) and DRA with 50% fidelity (DRA 50%) occurred according to both multielement and reversal designs. Six participants experienced signaled conditions (Experiment 1), and six participants experienced unsignaled conditions (Experiment 2). Results replicated previous reduced-fidelity research in that more target responding occurred during DRA 50% relative to DRA 100%. This was true regardless of design type and presence of signals. However, when DRA 50% and DRA 100% were rapidly alternated without signals, participants engaged in less target responding during DRA 50% and more target responding during DRA 100%. Implications of the present experiments include considerations related to design selection and presence of signals within multielement designs during evaluations with procedural fidelity manipulations.
Journal of the Experimental Analysis of Behavior, 2026 · doi:10.1002/jeab.70097