Effects of Histories of Differential Reinforcement of Response Rate on Variable-Interval Responding
Old DRH or DRL schedules can quietly push response rates up or down on current VI programs months later.
01 Research in Context
What this study did
Researchers worked with three pigeons. First the birds were given a history of DRH (differential reinforcement of high rates, which pays off for fast responding) and DRL (differential reinforcement of low rates, which pays off for slow responding), each signaled by a different key color.
After that training, the birds moved to a plain variable-interval (VI) schedule. The team then tested whether the old contingencies still changed response speed, even months later.
What they found
Pigeons with a DRH history kept pecking fast on VI; birds with a DRL history stayed slow. The bias persisted, though somewhat weakened, after a six-month break.
Reversing which key color went with which contingency reversed the bias, so only the most recent history mattered. History, not just the current contingencies, drove response rates.
How this fits with other research
Matson et al. (1994) saw the opposite pattern. They used progressive-ratio schedules and found history effects vanished quickly. The key difference is schedule type: PR effects wash out; VI effects linger.
Cicerone (1976) showed DRL alone can control long pauses. Adkins et al. (1997) go further, showing that a past DRL schedule still slows birds months later on VI.
LeBlanc et al. (2003) link richer schedules to stronger, disruption-resistant behavior. DRH history in the target paper mirrors this: more reinforcement history yields faster, tougher response rates.
Why it matters
If a client responds too fast or too slow on a new VI program, probe their reinforcement history. A past DRL or DRH schedule could still be pulling the strings. Run a few probe sessions, review old data, or ask previous therapists. Adjust your VI values or run brief extinction probes to weaken the legacy effects.
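One way to make the probe comparison concrete is to compute the probe session's response rate and compare it to an expected baseline for the current VI program. The sketch below is a hypothetical helper, not from the paper; the function name, the 25% tolerance band, and the baseline value are all illustrative assumptions.

```python
def classify_probe_rate(probe_responses, probe_minutes, baseline_rate, tolerance=0.25):
    """Flag a probe session as 'high', 'low', or 'typical' relative to baseline.

    probe_responses: responses emitted during the probe
    probe_minutes:   probe duration in minutes
    baseline_rate:   expected responses per minute on this VI program
    tolerance:       fractional band around baseline treated as typical
                     (0.25 is an arbitrary illustrative choice)
    """
    rate = probe_responses / probe_minutes
    if rate > baseline_rate * (1 + tolerance):
        return "high"   # consistent with a lingering DRH history
    if rate < baseline_rate * (1 - tolerance):
        return "low"    # consistent with a lingering DRL history
    return "typical"

# Example: 90 responses in a 1-minute probe against a 60-per-minute baseline
print(classify_probe_rate(90, 1, 60))  # high
```

A "high" or "low" flag is only a prompt to dig into the learner's records, not a diagnosis; the tolerance band should be set from your own baseline data.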
Before your next VI session, graph the learner's past DRH/DRL exposure, then run a one-minute probe to see if rates match history.
02 At a glance
03 Original abstract
Three pigeons were exposed first to multiple differential‐reinforcement‐of‐high‐rate and differential‐reinforcement‐of‐low‐rate schedules that were correlated with green and red keys, respectively, and then were shifted to a variable‐interval schedule arranged on a white key. In subsequent test sessions, the variable‐interval schedule continued to operate, but green and red keys replaced the white key in alternate sessions. In Part 1 of the experiment, the variable‐interval schedule correlated with the white key was introduced immediately after the multiple‐schedule condition, and the test condition began 15 days later. This sequence was repeated twice, with a reversal of the correlation of the key colors with the components of the multiple schedule at the start of each new cycle. Part 2 added a 6‐month break between the multiple‐schedule history and the white‐key variable‐interval schedule followed by test sessions. The procedure was then repeated with a reversal of the correlation between key colors and multiple‐schedule components. In the test sessions of Part 1, all pigeons consistently responded faster in the presence of the key color most recently correlated with the differential‐reinforcement‐of‐high‐rate contingency than during the color most recently correlated with the differential‐reinforcement‐of‐low‐rate contingency. Similar but smaller effects were observed in Part 2. The effects of the reversals in these two parts of the experiment showed that only the most recent contingency exerted an influence on subsequent responding. The data suggest that this effect of the most recent history continues to operate on behavior under current contingencies even after a long lapse of time.
Journal of the Experimental Analysis of Behavior, 1997 · doi:10.1901/jeab.1997.67-311