The effect of changes in criterion value on differential reinforcement of low rate schedule performance.
Set the final DRL wait time from day one. Shifting the criterion disrupts performance, and stepping it up gradually wastes sessions.
01 Research in Context
What this study did
The team used a DRL schedule. Pigeons had to wait a set time between pecks. If they pecked too soon, no food came.
They then changed the wait rule. Sometimes they raised the wait time in tiny steps. Other times they jumped straight to the final number. They watched how the birds adjusted.
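The wait rule above can be sketched in a few lines of Python. This is an illustration of the DRL contingency itself, not code from the study; the response times and names are made up.

```python
# Minimal sketch of a DRL contingency: a response is reinforced only if
# the time since the previous response (the interresponse time, IRT)
# meets or exceeds the criterion. Illustrative only, not the study's code.

def drl_outcome(irt: float, criterion: float) -> bool:
    """Return True (reinforce) if the IRT meets the DRL criterion."""
    return irt >= criterion

# Hypothetical response times (seconds) for a subject on DRL 15 s:
response_times = [0.0, 16.0, 25.0, 41.0, 58.0]
criterion = 15.0

# Score each response after the first by its IRT:
reinforced = [
    drl_outcome(cur - prev, criterion)
    for prev, cur in zip(response_times, response_times[1:])
]
print(reinforced)  # [True, False, True, True]
```

The second response comes only 9 s after the first, so it earns nothing; the others clear the 15-s wait.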
What they found
Shifting the wait rule at all caused trouble. Birds kept on the final wait time from the start stayed the most accurate. When the rule did change, tiny steps disrupted responding far less than one sudden large jump, but creeping up over many sessions is a slow road to stable performance.
How this fits with other research
Flory et al. (1974) first mapped how long pigeons wait under different DRL values. Moss et al. (2009) built on that map by asking what happens when the value shifts partway through training.
Marcucella (1974) added a signal to DRL and saw fewer extra pecks. The 2009 study shows that even without a signal, a criterion held steady from the start keeps responding tidy.
Nasr et al. (2000) and Pilgrim et al. (2000) found that fixed schedule parts protect behavior from disruption. Moss et al. (2009) echo this: a criterion held at its final value is safer than one that keeps moving.
Why it matters
When you plan longer wait times with DRL, consider starting at the final wait rather than shaping up in steps. In this study, subjects trained at the terminal value from the outset produced better-timed responses and earned reinforcers more efficiently than those whose criterion was shifted. Pigeons are not clients, so treat this as a lead to test rather than a rule. Try it next time you teach a child to wait before pressing the switch again.
Pick one high-rate behavior, set the DRL interval at the final goal today, and reinforce the first response after the full wait.
02 At a glance
03 Original abstract
The differential reinforcement of low rate (DRL) schedule is commonly used to assess impulsivity, hyperactivity, and the cognitive effects of pharmacological treatments on performance. A DRL schedule requires subjects to wait a certain minimum amount of time between successive responses to receive reinforcement. The DRL criterion value, which specifies the minimum wait time between responses, is often shifted towards increasingly longer values over the course of training. However, the process invoked by shifting DRL values is poorly understood. Experiment 1 compared performance on a DRL 30-s schedule versus a DRL 15-s schedule that was later shifted to a DRL 30-s schedule. Dependent measures assessing interresponse time (IRT) production and reward-earning efficiency showed significant detrimental effects following a DRL schedule transition in comparison with the performance on a maintained DRL 30-s schedule. Experiments 2a and 2b assessed the effects of small incremental changes vs. a sudden large shift in the DRL criterion on performance. The incremental changes produced little to no disruption in performance compared to a sudden large shift. The results indicate that the common practice of incrementing the DRL criterion over sessions may be an inefficient means of training stable DRL performance.
Journal of the Experimental Analysis of Behavior, 2009 · doi:10.1901/jeab.2009.92-181
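The abstract's "reward-earning efficiency" measure is not defined in this excerpt. One common operationalization, reinforcers earned per response emitted, can be sketched as follows; this is an assumption for illustration, not necessarily the paper's exact formula.

```python
# Reward-earning efficiency as the fraction of responses whose IRT met
# the DRL criterion. An illustrative operationalization, not the
# paper's verified measure.

def efficiency(irts, criterion):
    """Fraction of responses reinforced under a DRL criterion."""
    if not irts:
        return 0.0
    return sum(irt >= criterion for irt in irts) / len(irts)

irts = [16.0, 9.0, 16.0, 17.0, 12.0]  # hypothetical IRTs on DRL 15 s
print(round(efficiency(irts, 15.0), 2))  # 0.6
```

On this metric, a criterion shift shows up as a drop in efficiency until the subject's IRT distribution catches up with the new wait requirement.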