Molar versus local reinforcement probability as determinants of stimulus value.
Stimulus value tracks the overall pay rate only while the other choice is visible; hide that alternative and value flips to the local hit rate.
01 · Research in Context
What this study did
The team worked with pigeons in a small lab chamber.
Birds could peck two keys. Each key gave grain on its own schedule.
Sometimes both keys were on at once. Other times only one key lit up.
The researchers watched which key the bird chose and how often it pecked.
What they found
When both keys were on, birds picked the key that paid off more often overall.
When only one key showed up, birds acted as if that key’s own hit-rate was all that mattered.
The same key was valued differently just because the other choice was hidden.
Context alone changed how good the stimulus felt.
How this fits with other research
Cicerone (1976) saw the same flip earlier. Long 3-min components made birds track local cues; short 30-s components made them track the whole program. The new study adds a twist: simply hiding the other key does the same job.
Sherwell et al. (2014) later showed that a quick extra flash helps birds spot when the pay rate changes. Together the papers say: give clear time cues if you want local control; take them away if you want molar control.
Killeen (2023) bundles these ideas into one math frame. Rich schedules build heavy behavior that keeps going. The 1993 data feed that model by showing that even unchosen rich alternatives still pull choice.
Why it matters
Your client may “prefer” a task only when other tasks are in view. Take the alternatives off the table and the same task could look dull. Check both the big picture and the moment-to-moment pay before you tweak a program.
Run a quick concurrent probe: place two tasks side-by-side, then run each task alone and record latency to comply; compare to see if context shifts value.
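The probe above boils down to comparing latency in the two contexts. A minimal sketch of that comparison, where the task names, latency values, and the 2-second shift threshold are all hypothetical placeholders, not anything from the study:

```python
# Minimal sketch of the concurrent-probe comparison described above.
# Task names, latencies, and the threshold are hypothetical placeholders.
from statistics import mean

# Latency to comply (seconds), logged per trial in each condition.
latencies = {
    ("task_A", "concurrent"): [4.1, 3.8, 4.5],   # task A with task B in view
    ("task_A", "alone"):      [9.2, 8.7, 10.1],  # task A presented by itself
    ("task_B", "concurrent"): [6.0, 5.5, 6.3],
    ("task_B", "alone"):      [6.1, 5.9, 6.4],
}

def context_shift(task, data, threshold=2.0):
    """Return True if mean latency changes by more than `threshold`
    seconds between the alone and concurrent conditions."""
    diff = mean(data[(task, "alone")]) - mean(data[(task, "concurrent")])
    return abs(diff) > threshold

for task in ("task_A", "task_B"):
    print(task, "context shifts value:", context_shift(task, latencies))
```

With these made-up numbers, task A slows sharply once the alternative is removed while task B barely changes, which is the pattern the study would lead you to look for.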
02 · At a glance
03 · Original abstract
During one component of a multiple schedule, pigeons were trained on a discrete-trial concurrent variable-interval variable-interval schedule in which one alternative had a high scheduled rate of reinforcement and the other a low scheduled rate of reinforcement. When the choice proportion between the alternatives matched their respective relative reinforcement frequencies, the obtained probabilities of reinforcement (reinforcer per peck) were approximately equal. In alternate components of the multiple schedule, a single response alternative was presented with an intermediate scheduled rate of reinforcement. During probe trials, each alternative of the concurrent schedule was paired with the constant alternative. The stimulus correlated with the high reinforcement rate was preferred over that with the intermediate rate, whereas the stimulus correlated with the intermediate rate of reinforcement was preferred over that correlated with the low rate of reinforcement. Preference on probe tests was thus determined by the scheduled rate of reinforcement. Other subjects were presented all three alternatives individually, but with a distribution of trial frequency and reinforcement probability similar to that produced by the choice patterns of the original subjects. Here, preferences on probe tests were determined by the obtained probabilities of reinforcement. Comparison of the two sets of results indicates that the availability of a choice alternative, even when not responded to, affects the preference for that alternative. The results imply that models of choice that invoke only obtained probability of reinforcement as the controlling variable (e.g., melioration) are inadequate.
Journal of the Experimental Analysis of Behavior, 1993 · doi:10.1901/jeab.1993.59-163
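The abstract's claim that matching equalizes obtained reinforcement probabilities follows from simple arithmetic: if pecks are allocated in the same ratio as reinforcers, then reinforcers-per-peck comes out equal on both keys. The numbers below are illustrative, not the paper's:

```python
# Illustrative numbers (not from the paper): the rich key yields 60
# reinforcers/hour, the lean key 20, and the bird emits 4000 pecks/hour.
r_rich, r_lean = 60, 20          # obtained reinforcers per hour
total_pecks = 4000               # total pecks per hour

# Matching: relative response rate equals relative reinforcement rate,
# so pecks split 3:1 just like the reinforcers do.
b_rich = total_pecks * r_rich / (r_rich + r_lean)   # 3000 pecks
b_lean = total_pecks * r_lean / (r_rich + r_lean)   # 1000 pecks

p_rich = r_rich / b_rich   # obtained probability: 60 / 3000 = 0.02
p_lean = r_lean / b_lean   # obtained probability: 20 / 1000 = 0.02
print(p_rich, p_lean)      # equal despite unequal scheduled rates
```

This is why the probe results are diagnostic: equal obtained probabilities cannot explain a preference for the rich key, so something beyond reinforcers-per-peck, here the scheduled rate, must be carrying the value.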