Aftereffects of reinforcement on variable-ratio schedules.
Reinforcer size and ratio size control pause and run rate separately—track both when thinning schedules.
Research in Context
What this study did
Glover et al. (1976) worked with rats on variable-ratio (VR) schedules of 10, 40, and 80. They changed two things: how big the ratio was and how concentrated a liquid reinforcer was, varying the concentration within sessions.
Rats pressed a lever for the liquid. The team timed the pause right after each payoff and the steady run rate that followed.
What they found
Bigger VR sizes and richer (more concentrated) payoffs both made the post-reinforcement pause longer, and the payoff effect was strongest at the biggest ratios. Yet the richer payoff left the steady run rate unchanged.
Pause and run rate did not move together. One can grow while the other holds steady.
How this fits with other research
Burnstein et al. (1964) saw the same split earlier with fixed ratios in rats. Their work set the stage for the 1976 VR test.
Bromley et al. (1998) later reframed the idea in economic terms. With monkeys and cocaine, they showed "unit price" predicts consumption, echoing the 1976 pause/run split.
Cao et al. (2026) now move beyond the single pause measure. Their PERCS model tracks five parts of persistence, not just pause length. The 1976 finding becomes one piece of a bigger puzzle.
Why it matters
When you thin a VR schedule, watch two numbers: the client’s pause after reward and the speed of responding that follows. If pause grows but speed drops, you may be thinning too fast or the payoff is too big. Track both, then adjust one at a time.
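The two numbers above can be pulled straight from a timestamped session log. Below is a minimal sketch, assuming responses and reinforcer deliveries are recorded as lists of times in seconds; the function name and the responses-per-minute convention are illustrative, not from the paper.

```python
def pause_and_run_rate(response_times, reinforcer_times):
    """For each reinforcer delivery, return (post-reinforcement pause in s,
    running rate in responses/min during the run that follows)."""
    results = []
    for i, r in enumerate(reinforcer_times):
        # Responses after this reinforcer and before the next one.
        end = reinforcer_times[i + 1] if i + 1 < len(reinforcer_times) else float("inf")
        run = [t for t in response_times if r < t <= end]
        if not run:
            continue
        pause = run[0] - r                # time from payoff to first response
        run_span = run[-1] - run[0]       # duration of the run itself
        rate = (len(run) / run_span * 60) if run_span > 0 else float("nan")
        results.append((pause, rate))
    return results

# Toy session: reinforcer at t=0, responding resumes at t=5 (a 5 s pause),
# then one response per second until t=10 (6 responses over 5 s = 72/min).
print(pause_and_run_rate([5, 6, 7, 8, 9, 10], [0]))  # [(5, 72.0)]
```

Excluding the pause from the rate calculation mirrors the study's "running rate"; dividing total responses by the whole inter-reinforcement interval instead would give the overall rate, which can move in the opposite direction.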
Time the client’s pause after each reward and count responses per minute—if pause grows but rate falls, try a smaller reinforcer or lower ratio.
Original abstract
On each of variable-ratio 10, 40, and 80 schedules of reinforcement, when rats' lever-pressing rates were stable, the concentration of a liquid reinforcer was varied within sessions. The duration of the postreinforcement pause was an increasing function of the reinforcer concentration, this effect being more marked the higher the schedule parameter. The running rate, calculated by excluding the postreinforcement pause, was unaffected by concentration. The duration of the postreinforcement pause increased with the schedule parameter, but the proportion of the interreinforcement interval taken up by the pause decreased. Consequently, the overall response rate was an increasing function of the schedule parameter; i.e., it was inversely related to reinforcement frequency, contrary to the law of effect. The running rate, however, decreased with the reinforcement frequency, in accord with the law of effect. When 50% of reinforcements were randomly omitted, the postomission pause was shorter than the postreinforcement pause, but the running rate of responses was not affected.
Journal of the Experimental Analysis of Behavior, 1976 · doi:10.1901/jeab.1976.25-347