Response acquisition with delayed reinforcement: a comparison of two-lever procedures.
New behavior can be acquired with delays up to 32 s if the contingency is made obvious.
01 Research in Context
What this study did
Scientists worked with rats to test how long reinforcement can be delayed and still teach a new lever press.
They used two levers. Pressing one started a timer, and water arrived after a delay of 8, 16, 32, or 64 s (separate control rats received it immediately).
For some rats, pressing a second lever during the delay canceled the scheduled water. This made the delay contingency clearer.
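The resetting-delay procedure with a cancellation lever can be sketched as a toy discrete-time simulation. Everything below is an illustrative assumption, not the study's actual apparatus or software: the function name, the 1-s tick size, and the random "rat" that presses each lever with a fixed probability.

```python
# A minimal sketch of the resetting-delay contingency, assuming a random
# "rat" policy. All names and parameters here are illustrative, not the
# authors' actual apparatus code.
import random

def run_session(delay_s, cancel_lever_active, steps=600, press_p=0.05, seed=0):
    """Simulate one session in 1-s ticks.

    A press on the reinforcement lever starts (or restarts) the delay
    timer; water is delivered when the timer expires. If the cancellation
    lever is active, pressing it during a delay cancels the scheduled water.
    """
    rng = random.Random(seed)
    timer = None          # seconds remaining until water, or None
    deliveries = 0
    for _ in range(steps):
        if rng.random() < press_p:
            timer = delay_s                # resetting delay: timer restarts
        elif cancel_lever_active and timer is not None and rng.random() < press_p:
            timer = None                   # cancellation press during the delay
        if timer is not None:
            timer -= 1
            if timer <= 0:
                deliveries += 1            # water delivered
                timer = None
    return deliveries
```

The key design point the study turned on is visible in the first branch: because the delay is *resetting*, extra reinforcement-lever presses during the delay push the water further away, which is part of what makes long delays hard to learn.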
What they found
Delays up to 32 s still produced new lever pressing.
The cancellation lever made the biggest difference at delays of 16 and 32 s. At 64 s, learning was inconsistent.
How this fits with other research
Coe et al. (1997) showed the year before that rats can learn the same response under a 30 s delay even without magazine training. The 1998 study added the cancellation lever and pushed the delay to 64 s.
Farrant et al. (1998), published the same year, found that adding a 12 s delay flattens rate differences caused by rich versus lean schedules. Together the two 1998 papers map how both delay length and schedule richness shape early acquisition.
Keely et al. (2007) later showed that rats still know which lever caused the food after long unsignaled delays. Their tracking data back up the 1998 finding: the contingency stays visible to the animal even when food is late.
Why it matters
You now have a lab-tested range for delayed reinforcement: delays up to 32 s can still build new behavior. If you must use delays in practice, add a clear cancellation signal or alternate response so the contingency stands out, just as the second lever did for the rats. Try it when shaping manding or teaching a new leisure skill where immediate reward is hard to deliver.
Insert a brief 5–10 s delay before delivering a token, and let the learner cancel the token by touching a red card to see if responding sharpens.
02 At a glance
03 Original abstract
Groups of 8 experimentally naive rats were exposed during 8-hr sessions to resetting delay procedures in which responses on one lever (the reinforcement lever) produced water after a delay of 8, 16, 32, or 64 s. For rats in one condition, responses on a second (no-consequences) lever had no programmed consequences. For rats in another condition, responses on a second (cancellation) lever during a delay initiated by a response on the reinforcement lever prevented delivery of the scheduled reinforcer; responses on the cancellation lever at other times had no programmed consequences. Under both conditions and at all delays, most subjects emitted more responses on the reinforcement lever than control rats that never received water emitted on either lever. At 8-s delays, both conditions engendered substantially more responding on the reinforcement lever than on the other lever, and performance closely resembled that of immediate-reinforcement controls. At delays of 16 and 32 s, however, there was clear differential responding on the two levers under the cancellation condition but not under the other condition. When the delay was 64 s, differential responding on the two levers did not occur consistently under either condition. These findings provide strong evidence that the behavior of rats is sensitive to consequences delayed by 8, 16, and 32 s, but only equivocal evidence of such sensitivity to consequences delayed 64 s. They also indicate that acquisition depends, in part, on the measure of performance used to index it.
Journal of the Experimental Analysis of Behavior, 1998 · doi:10.1901/jeab.1998.69-17