Concurrent schedules: reinforcer magnitude effects.
Bigger payoffs create sharper, stronger choice shifts—use magnitude as a fast dial to steer behavior.
01 Research in Context
What this study did
Landon et al. (2003) gave pigeons two response keys side by side in a switching-key procedure.
The birds pecked on concurrent variable-interval (VI VI) schedules.
The team varied the magnitude of the grain reinforcer between the keys while holding the overall rate of reinforcement constant.
They watched how each bird’s choice spiked right after a reinforcer.
What they found
Larger grain deliveries made bigger “preference pulses.”
The birds shifted toward the richer key faster and stayed there longer.
Magnitude controlled choice in much the same way as reinforcer rate, though with somewhat lower sensitivity.
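At the molar level, the study described this control with the generalized matching law applied to magnitude ratios. A minimal sketch of that relation, with an illustrative sensitivity value (the function name and parameters here are ours, not the authors'):

```python
import math

def log_response_ratio(m1, m2, sensitivity=0.5, log_bias=0.0):
    """Generalized matching law for reinforcer magnitude:
    log(B1/B2) = a * log(M1/M2) + log c,
    where B is responses, M is magnitude, a is sensitivity, c is bias."""
    return sensitivity * math.log10(m1 / m2) + log_bias

# With a 4:1 magnitude ratio and sensitivity 0.5, the predicted
# response ratio is 4**0.5 = 2:1 in favor of the larger key.
ratio = 10 ** log_response_ratio(4, 1, sensitivity=0.5)
```

Undermatching (sensitivity below 1) is the typical empirical result, which is why a 4:1 payoff difference yields only a 2:1 behavioral shift in this sketch.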
How this fits with other research
Tyrer et al. (2009) later saw the same pattern on progressive-ratio schedules.
Larger rewards pushed the birds to work harder before quitting.
Levin et al. (2014) went further and showed big reinforcers also protect responding against extinction and pre-feeding.
Yet Cohen et al. (1993) seem to disagree.
They found reinforcers only gave a short-lived bump in stay time and left overall preference unchanged.
The clash fades when you look at the measures.
Landon et al. measured quick choice shifts right after each payoff.
Cohen et al. tracked stay durations across the whole session.
Short pulses can exist even when molar preference stays flat.
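That reconciliation can be seen in a toy simulation (ours, not either study's model; all numbers illustrative): give the bird a strong but brief bias toward whichever key just paid off, make payoffs equally likely on both keys, and the local pulses are dramatic while the session-wide preference stays flat.

```python
import random

def simulate(n_reinforcers=2000, pulse_len=5, pulse_p=0.9, base_p=0.5, seed=1):
    """Toy preference-pulse model. After each reinforcer, delivered with
    equal probability on either key, the bird picks the just-paid key with
    probability pulse_p for pulse_len responses, then reverts to
    indifference (base_p). Returns (molar key-1 proportion, key-1
    proportion within pulses following key-1 reinforcers)."""
    rng = random.Random(seed)
    key1 = total = pulse1 = pulse1_total = 0
    for _ in range(n_reinforcers):
        paid_on_1 = rng.random() < 0.5
        for i in range(pulse_len * 2):
            in_pulse = i < pulse_len
            p = (pulse_p if paid_on_1 else 1 - pulse_p) if in_pulse else base_p
            choice1 = rng.random() < p
            key1 += choice1
            total += 1
            if in_pulse and paid_on_1:
                pulse1 += choice1
                pulse1_total += 1
    return key1 / total, pulse1 / pulse1_total

molar, local = simulate()  # molar ≈ 0.5, local ≈ 0.9
```

Because the pulses are symmetric across keys, they cancel at the molar level, which is exactly how brief post-reinforcer shifts coexist with unchanged overall preference.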
Why it matters
You now have lab proof that reinforcer size drives moment-to-moment choice.
When a client drifts away from a task, first check the payoff amount, not just the schedule.
Boosting magnitude can pull them back faster than tinkering with the reinforcement rate.
Pair this with the resistance data from Levin et al. (2014) and you have a simple rule: start rich, fade lean, and the behavior will stick longer under challenges.
Add one extra token or a larger edible to the target option and watch for an immediate swing in the client’s selection.
02 At a glance
03 Original abstract
Five pigeons were trained on pairs of concurrent variable-interval schedules in a switching-key procedure. The arranged overall rate of reinforcement was constant in all conditions, and the reinforcer-magnitude ratios obtained from the two alternatives were varied over five levels. Each condition remained in effect for 65 sessions and the last 50 sessions of data from each condition were analyzed. At a molar level of analysis, preference was described well by a version of the generalized matching law, consistent with previous reports. More local analyses showed that recently obtained reinforcers had small measurable effects on current preference, with the most recently obtained reinforcer having a substantially larger effect. Larger reinforcers resulted in larger and longer preference pulses, and a small preference was maintained for the larger-magnitude alternative even after long inter-reinforcer intervals. These results are consistent with the notion that the variables controlling choice have both short- and long-term effects. Moreover, they suggest that control by reinforcer magnitude is exerted in a manner similar to control by reinforcer frequency. Lower sensitivities when reinforcer magnitude is varied are likely to be due to equal frequencies of different sized preference pulses, whereas higher sensitivities when reinforcer rates are varied might result from changes in the frequencies of different sized preference pulses.
Journal of the Experimental Analysis of Behavior, 2003 · doi:10.1901/jeab.2003.79-351