Response latency as a function of reinforcement schedule.
Denser reinforcement schedules produce faster, more consistent response latencies.
01 Research in Context
What this study did
Researchers examined how the schedule of reinforcement changes how quickly animals release a key. In a single-case lab design, the subjects pressed and held a key while a light was on, then had to release it quickly when a tone sounded.
Different schedules of reinforcement for key release were tested, and the team measured the latency from tone onset to release on each trial.
What they found
More frequent reinforcement produced faster key releases. Latencies were also steadier, with less trial-to-trial scatter.
Under leaner schedules, the subjects hesitated longer and the latencies were more variable.
How this fits with other research
Glover et al. (1976) built on this idea, showing that after reinforcement, pause length and running rate are controlled by separate features of the schedule. The 1962 study documented the latency changes; the later work explained why pause and speed can move apart.
Blough (1992) later showed that the probability of reinforcement, not its magnitude or duration, drives reaction time. This sharpens the 1962 takeaway: frequency matters; size does not.
Gaucher et al. (2020) extended the same principle to children with autism: under a DRL (differential reinforcement of low rates) schedule, children who could time their responses showed higher IQ and language scores. The latency principle carries from lab animals to clinical learners.
Why it matters
When you thin a schedule, expect brief spikes in latency or post-reinforcement pause. Use dense reinforcement at first to establish quick, consistent responding, then fade slowly while monitoring both latency and post-reinforcement pause. If you run DRL for rate reduction, assess the learner's timing and language skills first; rapid adjustment is harder for children still learning to wait.
Track response latency for the first five trials after each reinforcer; if it creeps up, add extra reinforcement before the next pause grows.
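That tracking rule can be sketched as a small check. This is a minimal illustration assuming per-trial latencies are logged in seconds; the function name, the five-trial window, and the 0.10 s threshold are assumptions for the sketch, not values from the study.

```python
from statistics import median

# Hypothetical helper (not from the 1962 study): flag when latency is
# creeping up after a reinforcer, so reinforcement can be densified
# before the pause grows. Window size and threshold are illustrative.

def latency_alert(post_sr_latencies, baseline, threshold=0.10):
    """post_sr_latencies: latencies (s) for trials since the most recent
    reinforcer. baseline: median latency (s) from the dense-schedule
    phase. Returns True when the median of the first five post-reinforcer
    trials exceeds the baseline by more than `threshold` seconds."""
    if len(post_sr_latencies) < 5:
        return False  # not enough trials yet to judge a trend
    return median(post_sr_latencies[:5]) - baseline > threshold
```

For example, `latency_alert([0.50, 0.55, 0.60, 0.62, 0.65], baseline=0.40)` returns `True` (recent median 0.60 s is 0.20 s above baseline), signaling that extra reinforcement is warranted.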
02 At a glance
03 Original abstract
Four Ss were trained to press and hold down a telegraph key in the presence of a light. Subsequent release of the key during a tone was followed by water reinforcement. The schedule of reinforcement for key release was varied, and its effects on the latency (RT) of key release to the tone were studied. Both median RT and variability of RT were found to be inversely related to frequency of reinforcement as determined by the schedule.
Journal of the Experimental Analysis of Behavior, 1962 · doi:10.1901/jeab.1962.5-299