Stimulus Generalization and the Response-Reinforcement Contingency
DRL and long VI schedules flatten stimulus generalization gradients, so choose your reinforcement schedule to match the sharpness of stimulus control you need.
01 Research in Context
What this study did
Hearst et al. (1964) tested how different reinforcement schedules shape stimulus generalization. They trained pigeons to peck a key under different schedules: DRL, short VI, and long VI. After training, they presented line tilts that gradually shifted away from the trained orientation and recorded how responding spread across those new tilts.
The goal was to see whether the schedule itself changes how sharply the birds discriminate between similar stimuli.
What they found
DRL and longer VI schedules flattened the generalization gradient: the birds pecked almost as often at line tilts well away from the training orientation. Responding had come under stronger control by internal, response-produced cues than by the visual stimulus itself.
Shorter VI schedules kept the peak sharp; most pecks stayed near the trained tilt.
How this fits with other research
Powell et al. (1968) later showed that simply adding more discrimination sessions steepens the gradient again. Their work builds directly on this finding: once you know the schedule matters, you can override the flattening with extra training.
Okouchi (2003) extended the same gradient idea to college students using line lengths and obtained a similar gradient, showing the effect is not limited to pigeons.
Wesp et al. (1981) pulled these threads together in a broad review, listing schedule type as one of the key levers for making stimulus control tighter or looser.
Hoffman et al. (1964), from the same year and lab, swapped the schedule for an aversive contingency and still saw gradient changes, showing the effect holds across reinforcement and punishment setups.
Why it matters
When you want tight stimulus control—say, a child to respond only to the exact word “cat” and not “bat” or “cap”—avoid DRL or very lean VI schedules during teaching. Instead, use richer VI or add extra discrimination trials like Powell et al. (1968) to steepen the gradient. When you need broader generalization, a DRL or stretched VI can help responses spread naturally across similar stimuli.
Run a quick probe: after teaching with a dense schedule, test responding to slightly different stimuli; then repeat after a DRL session and compare the spread.
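The probe comparison above can be sketched in a few lines. This is a minimal illustration, not part of the original study: the response counts and the `gradient_flatness` helper are invented for the example, and the flatness index (share of responding at non-trained stimuli) is just one simple way to quantify spread.

```python
# Hypothetical probe data: response counts at line tilts (degrees from the
# trained 0-degree orientation). All numbers are illustrative.

def gradient_flatness(responses):
    """Share of responding directed at non-trained stimuli.

    `responses` maps stimulus value -> response count; the trained
    stimulus is keyed 0. Higher values mean a flatter gradient.
    """
    total = sum(responses.values())
    return 1 - responses[0] / total

short_vi = {0: 120, 15: 40, 30: 15, 45: 5}   # sharp peak after dense VI
after_drl = {0: 60, 15: 50, 30: 45, 45: 40}  # responding spread out after DRL

print(f"short VI flatness: {gradient_flatness(short_vi):.2f}")   # 0.33
print(f"after DRL flatness: {gradient_flatness(after_drl):.2f}")  # 0.69
```

Comparing the two indices after each condition gives a single number for how much the schedule has loosened stimulus control.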
02 At a glance
03 Original abstract
Generalization gradients along a line-tilt continuum were obtained from groups of pigeons that had been trained to peck a key on different schedules of reinforcement. In Exp. I, gradients following training on a differential-reinforcement-of-low-rate (DRL) schedule proved to be much flatter than gradients following the usual 1-min variable interval (VI) training. In Exp. II, the value of the VI schedule itself was parametrically studied; Ss trained on long VI schedules (e.g., 4-min) produced much flatter gradients than Ss trained on short VI schedules (30-sec; 1-min). The results were interpreted mainly in terms of the relative control exerted by internal, proprioceptive cues on the different reinforcement schedules. Several implications of the results for other problems in the field of stimulus generalization are discussed.
Journal of the Experimental Analysis of Behavior, 1964 · doi:10.1901/jeab.1964.7-369