Every reinforcer counts: reinforcer magnitude and local preference.
A single large reinforcer briefly boosts preference and keeps clients at the task longer.
Research in Context
What this study did
Davison and Baum (2003) watched how six pigeons chose between two alternatives when reinforcers came in different sizes.
The team switched the reinforcer magnitudes every few minutes and tracked where the birds spent their responses.
They wanted to see if bigger reinforcers pulled the animals in and kept them there longer.
What they found
Larger reinforcers acted like magnets.
The birds spent more time at the alternative that had just delivered the big reinforcer, a "preference pulse" that was taller and longer after large reinforcers than after small ones.
The more reinforcers a component delivered, the clearer this pull became.
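The pulse described above can be pictured with a toy decay model. This is a minimal sketch under assumed parameters — the exponential shape, baseline, gain, and decay values are illustrative choices, not fits to the paper's data:

```python
import math

def preference_pulse(t, magnitude, baseline=0.5, gain=0.06, decay=0.3):
    """Toy model: choice proportion for the just-productive alternative
    jumps above baseline right after a reinforcer, then decays back as
    responses (t) accumulate. The jump scales with reinforcer magnitude
    (an illustrative assumption mirroring the paper's qualitative result)."""
    return baseline + gain * magnitude * math.exp(-decay * t)

# Larger reinforcers produce taller, longer-lasting pulses:
for mag in (1, 7):
    print(mag, [round(preference_pulse(t, mag), 2) for t in range(5)])
```

Running this shows the magnitude-7 pulse starting higher and staying above baseline longer than the magnitude-1 pulse, which is the qualitative pattern the study reports.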
How this fits with other research
Frost et al. (1996) saw the same pull in pigeons: bigger grain pellets made birds peck faster and open their beaks wider.
The two papers tell the same story: size matters, and it matters moment to moment.
Yet Oliver et al. (2002) looks like a puzzle piece that will not fit. They gave children 20 s or 300 s of toys for communication responses and found almost no extra staying power from the big reward.
The gap is about purpose. Davison and Baum studied free choice; Oliver's team studied already-learned requests. Once the response is locked in, extra size quits helping.
Why it matters
When you run a preference assessment, lead with the biggest, best item you can safely give. A single large piece can spike momentary choice and stretch the time a client stays engaged. After the skill is learned, you can cut the size and keep the performance—saving time and calories without losing the response.
Start your next preference assessment with the largest edible or toy portion you plan to use, and watch whether the client lingers longer at that option.
Original abstract
Six pigeons were trained on concurrent variable-interval schedules. Sessions consisted of seven components, each lasting 10 reinforcers, with the conditions of reinforcement differing between components. The component sequence was randomly selected without replacement. In Experiment 1, the concurrent-schedule reinforcer ratios in components were all equal to 1.0, but across components reinforcer-magnitude ratios varied from 1:7 through 7:1. Three different overall reinforcer rates were arranged across conditions. In Experiment 2, the reinforcer-rate ratios varied across components from 27:1 to 1:27, and the reinforcer-magnitude ratios for each alternative were changed across conditions from 1:7 to 7:1. The results of Experiment 1 replicated the results for changing reinforcer-rate ratios across components reported by Davison and Baum (2000, 2002): Sensitivity to reinforcer-magnitude ratios increased with increasing numbers of reinforcers in components. Sensitivity to magnitude ratio, however, fell short of sensitivity to reinforcer-rate ratio. The degree of carryover from component to component depended on the reinforcer rate. Larger reinforcers produced larger and longer postreinforcer preference pulses than did smaller reinforcers. Similar results were found in Experiment 2, except that sensitivity to reinforcer magnitude was considerably higher and was greater for magnitudes that differed more from one another. Visit durations following reinforcers measured either as number of responses emitted or time spent responding before a changeover were longer following larger than following smaller reinforcers, and were longer following sequences of same reinforcers than following other sequences. The results add to the growing body of research that informs model building at local levels.
Journal of the Experimental Analysis of Behavior, 2003 · doi:10.1901/jeab.2003.80-95
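The "sensitivity" terms in the abstract come from the generalized matching law. Here is a minimal sketch of that relation; the sensitivity values `a_m` and `a_r` are illustrative assumptions, chosen only to reflect the abstract's finding that magnitude sensitivity fell short of rate sensitivity, not the paper's fitted estimates:

```python
import math

def log_behavior_ratio(mag_ratio, rate_ratio, a_m=0.4, a_r=0.8, log_c=0.0):
    """Generalized matching law:
        log(B1/B2) = a_m * log(M1/M2) + a_r * log(R1/R2) + log c
    where B = behavior, M = reinforcer magnitude, R = reinforcer rate,
    a_m and a_r are sensitivities, and log c is bias (all illustrative)."""
    return a_m * math.log10(mag_ratio) + a_r * math.log10(rate_ratio) + log_c

# With equal rates, a 7:1 magnitude ratio tilts preference toward the larger side:
print(round(log_behavior_ratio(7.0, 1.0), 3))  # → 0.338

# Because a_m < a_r here, a 7:1 rate ratio moves preference more than a 7:1
# magnitude ratio — the ordering the abstract reports:
print(log_behavior_ratio(7.0, 1.0) < log_behavior_ratio(1.0, 7.0))  # → True
```

With these assumed sensitivities, doubling down on magnitude shifts choice less than the same ratio applied to rate, which matches the abstract's comparison of the two sensitivities.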