Two-key concurrent paced variable-interval paced variable-interval schedules of reinforcement.
Given two paced timing options, organisms pick the side with the shorter wait and the bigger payoff.
Research in Context
What this study did
Scientists placed pigeons in a box with two keys. Each key paid off on its own clock. The clocks ticked at different speeds.
The birds had to wait a set time between pecks. If they pecked too soon, nothing happened. The team watched which key the bird chose next.
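The contingency described above can be sketched in a few lines: a peck is reinforced only when the key's VI clock has armed a reinforcer and the time since the previous peck falls in that key's reinforced window. A minimal sketch (the function name and the time windows are ours, for illustration; they are not the paper's exact values):

```python
def is_reinforced(vi_armed, irt, irt_class):
    """A peck pays off only when the key's VI clock has armed a
    reinforcer AND the time since the previous peck (the
    inter-response time, IRT) falls inside that key's reinforced
    class of IRTs."""
    low, high = irt_class
    return vi_armed and (low <= irt <= high)

# A peck 2.5 s after the last one, on a key whose reinforced
# class is 2-3 s, pays off only if the VI clock has armed:
is_reinforced(True, 2.5, (2.0, 3.0))   # armed and well timed: reinforced
is_reinforced(True, 1.0, (2.0, 3.0))   # pecked too soon: nothing happens
```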
What they found
Birds split their pecks between the keys in proportion to the reciprocal of each key's required wait, favoring the shorter one. They also followed the richer payoff: more food or faster food pulled their choice toward that side.
In plain words, pigeons acted like tiny statisticians. They tracked both delay and reward size, then picked the better deal.
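The "better deal" rule has a simple quantitative form in the abstract: relative responding on a key approximately equalled the relative reciprocal of that key's reinforced wait time. A minimal sketch of that prediction (the function name is ours):

```python
def predicted_share(irt_this_key, irt_other_key):
    """Predicted share of pecks on one key: the reciprocal of its
    reinforced inter-response time, relative to the sum of both
    keys' reciprocals (shorter wait -> larger share)."""
    r_this, r_other = 1.0 / irt_this_key, 1.0 / irt_other_key
    return r_this / (r_this + r_other)

predicted_share(2.0, 4.0)  # a 2-s key vs a 4-s key -> 2/3 of pecks
predicted_share(3.0, 3.0)  # equal waits -> an even split
```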
How this fits with other research
Nevin (1969) first described the "pause-then-burst" pattern on simple fixed-interval schedules. Sanders et al. (1971) showed that the same timing logic guides choices when two clocks run at once.
Long (1963) found that hungry animals responded faster on single VI clocks. The new study keeps the pacing idea but asks, "What happens when the animal can leave one clock for another?"
Cameron et al. (1996) later added free food ticks. They saw pacing break down at very high reward rates. That warns us: even smart allocation has a saturation point.
Why it matters
Your client may not peck keys, but they still choose between tasks. Build two short-timing options into therapy games. Let the learner switch when one task drags. Watch which option they pick. If they move to the quicker payoff, you are seeing the same rule Sanders et al. found. Use that data to shorten lag time on less preferred tasks and keep momentum high.
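Turning that observation into session data only takes a tally of which task the learner picks. A minimal sketch, assuming choices are logged as a chronological list of task labels (the function name is ours):

```python
from collections import Counter

def allocation_summary(choices):
    """Return each task's share of total picks, in the spirit of
    the relative response frequencies measured in the study."""
    counts = Counter(choices)
    total = sum(counts.values())
    return {task: n / total for task, n in counts.items()}

allocation_summary(["A", "A", "B", "A"])  # {'A': 0.75, 'B': 0.25}
```

A rising share for one task is the practical analogue of the pigeons' shift toward the shorter, richer key.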
A practical setup: set two short VI timers for different tasks, let the learner switch between them, and record where they spend most time.
Original abstract
Nine pigeons were used in two experiments in which a response was reinforced if a variable-interval schedule had assigned a reinforcement and if the response terminated an interresponse time within a certain interval, or class, of interresponse times. One such class was scheduled on one key, and a second class was scheduled on a second key. The procedure was, therefore, a two-key concurrent paced variable-interval paced variable-interval schedule. In Exp. I, the lengths of the two reinforced interresponse times were varied. The relative frequency of responding on a key approximately equalled the relative reciprocal of the length of the interresponse time reinforced on that key. In Exp. II, the relative frequency and relative magnitude of reinforcement were varied. The relative frequency of responding on the key for which the shorter interresponse time was reinforced was a monotonically increasing, negatively accelerated function of the relative frequency of reinforcement on that key. The relative frequency of responding depended on the relative magnitude of reinforcement in approximately the same way as it depended on the relative frequency of reinforcement. The relative frequency of responding on the key for which the shorter interresponse time was reinforced depended on the lengths of the two reinforced interresponse times and on the relative frequency and relative magnitude of reinforcement in the same way as the relative frequency of the shorter interresponse time depended on these variables in previous one-key concurrent schedules of reinforcement for two interresponse times.
Journal of the Experimental Analysis of Behavior, 1971 · doi:10.1901/jeab.1971.16-39