The effects of delayed reinforcement on variability and repetition of response sequences.
A short wait before reinforcement can accidentally loosen rigid response sequences.
01 Research in Context
What this study did
Lecavalier et al. (2006) worked with pigeons in a lab.
Birds pecked four keys in a row to earn food.
One schedule paid for the same sequence every time.
Another schedule paid for different sequences each time.
The team then waited a few seconds before delivering the grain.
They watched how the pause changed the birds’ patterns.
What they found
Delays made the repetitive sequences less steady.
The birds began to mix up their order of pecks.
When the schedule already required variety, the delay left variability unchanged.
The effect on variability showed up only when the task demanded sameness.
How this fits with other research
Lord et al. (1986) showed that unpredictable delays keep responding strong.
Lecavalier et al. (2006) add that delay also nudges birds away from rigid repetition.
Sumter et al. (2020) moved the delay idea into a clinical room.
They let children play with toys during a ten-minute wait after asking nicely.
Problem behavior stayed low without any extra fading steps.
Together, the three studies make one point: delays change behavior, but the direction of change depends on what the contingency already requires.
Why it matters
If you are teaching a child to follow the same steps every time, deliver the reinforcer right away.
Even a short wait can make the child vary the steps by accident.
When you want flexible responding, a brief delay is unlikely to undermine variability, though it may slow the overall pace.
Match your timing to the goal and you will get the pattern you want.
02 At a glance
Deliver the reinforcer within one second when you need the exact same chain every time.
03 Original abstract
Four experiments examined the effects of delays to reinforcement on key peck sequences of pigeons maintained under multiple schedules of contingencies that produced variable or repetitive behavior. In Experiments 1, 2, and 4, in the repeat component only the sequence right-right-left-left earned food, and in the vary component four-response sequences different from the previous 10 earned food. Experiments 1 and 2 examined the effects of nonresetting and resetting delays to reinforcement, respectively. In Experiment 3, in the repeat component sequences had to be the same as one of the previous three, whereas in the vary component sequences had to be different from each of the previous three for food. Experiment 4 compared postreinforcer delays to prereinforcement delays. With immediate reinforcement sequences occurred at a similar rate in the two components, but were less variable in the repeat component. Delays to reinforcement decreased the rate of sequences similarly in both components, but affected variability differently. Variability increased in the repeat component, but was unaffected in the vary component. These effects occurred regardless of the manner in which the delay to reinforcement was programmed or the contingency used to generate repetitive behavior. Furthermore, the effects were unique to prereinforcement delays.
Journal of the Experimental Analysis of Behavior, 2006 · doi:10.1901/jeab.2006.58-05