Quantifying the effects of the differential outcomes procedure in humans: A systematic review and a meta‐analysis
Pairing a unique reinforcer with each correct response speeds up learning for most clients.
01Research in Context
What this study did
McCormack et al. (2019) hunted every human experiment that used the differential outcomes procedure. They found 43 studies with both typical learners and clinical groups. Each study compared two setups: one where each correct answer got its own special reinforcer, and one where any correct answer got the same reinforcer. The team ran a meta-analysis to see which setup taught new skills faster and more accurately.
What they found
Across all 43 experiments, pairing unique reinforcers with specific correct answers produced medium-to-large gains in learning speed and accuracy. The boost was strongest for participants with developmental or learning disabilities. In plain words, telling a child “get the red card, hear a bell” and “get the blue card, get a sticker” beats saying “get any card, get a high-five” every time.
How this fits with other research
Kodak et al. (2003) also used differential reinforcement, but they focused on cutting problem behavior and raising compliance. Both papers show the same big idea: different consequences for different responses work better than one-size-fits-all.
Kuroda et al. (2020) looks like a contradiction at first. They gave animals surprise jackpot reinforcers and saw no extra benefit. McCormack’s review says unique, predictable reinforcers help humans learn. The gap is about timing: jackpots are random and rare, while differential outcomes are paired with specific responses every time. Predictability, not size, drives the effect.
Falligant et al. (2021) adds that feedback must be specific. Their staff trainees learned fastest when told exactly what they did right, matching McCormack’s point that clear stimulus-reinforcer links speed acquisition.
Why it matters
If you run discrimination programs, match each correct choice to its own reinforcer now. Use a sound for one answer and a snack for another, then switch only after the learner masters the set. This tiny tweak can cut your teaching time in half, especially for clients with autism or ID. Start with two clear reinforcers tomorrow and track accuracy for a week—you should see a jump by day three.
Want CEUs on This Topic?
The ABA Clubhouse has 60+ free CEUs — live every Wednesday. Ethics, supervision & clinical topics.
Join Free →Pick one two-choice program and assign a different reinforcer to each correct answer for one week.
02At a glance
03Original abstract
We present a systematic review and a meta-analysis comparing the differential outcomes procedure to a nondifferential outcomes procedure among clinical and nonclinical populations. Sixty distinct experiments were included in the systematic review, 43 of which were included in the meta-analysis. We calculated pooled effect sizes for accuracy (overall accuracy, test accuracy, transfer accuracy) and acquisition outcomes (latency, errors, and trials to mastery). The meta-analysis revealed significant medium-to-large effect sizes for all three accuracy measures (pooled effect size range, 0.57 to 1.30). We found relatively greater effect sizes among clinical populations (effect size = 1.04). The single-subject experimental literature included in the systematic review was consistent with the findings from the group studies, demonstrating improvements in accuracy and speed of learning for the majority of participants. Moderator and subgroup analyses suggest that discrimination difficulty may induce relatively larger differential outcomes effects. The results indicate that the differential outcomes procedure can be a valuable addition to reinforcement-based interventions.
Journal of Applied Behavior Analysis, 2019 · doi:10.1002/jaba.578