Choice, contingency discrimination, and foraging theory.
Counting visits per schedule, not just overall responses, shows that a lean-key bias causes the undermatching we usually see.
Research in Context
What this study did
Researchers set up two keys for pigeons. Each key paid food on its own variable-interval schedule.
The schedules were stretched to extreme ratios (up to 532 to 1). The team tracked every peck and switch to see how well the birds matched their time and responses to the payoff rates.
They then looked at the data visit-by-visit, like a forager counting berries per bush, instead of just pooling all pecks.
What they found
The birds did not follow a strict matching law. They showed undermatching: their choices were flatter than the payoff ratios.
A lean-schedule bias explained the flat slope. Birds gave the lean key a few extra visits, just as a forager might check a poor bush again before moving on.
How this fits with other research
White (1979) had already shown that most choice data sit near a 0.9 slope. The present study reveals why: the lean-key bias drags the line down.
Baum (1974) split deviations into bias versus sensitivity. The 1999 data plug real numbers into that old frame: bias (k) is above 1 toward the lean key, while sensitivity (a) stays under 1.
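The bias-versus-sensitivity frame can be sketched as a straight-line fit in log coordinates: log(B1/B2) = a·log(r1/r2) + log(k). Below is a minimal sketch with made-up numbers (the reinforcement ratios and the 0.8/0.1 parameters are illustrative, not data from the study):

```python
import numpy as np

# Hypothetical log10 reinforcer ratios across condition pairs
log_r = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
# Hypothetical log10 response ratios: undermatching (slope 0.8) plus a small bias
log_b = 0.8 * log_r + 0.1

# Fit the generalized matching law: log(B1/B2) = a * log(r1/r2) + log(k)
a, log_k = np.polyfit(log_r, log_b, 1)
print(round(a, 2), round(10 ** log_k, 2))  # sensitivity a = 0.8, bias k ≈ 1.26
```

A slope a under 1 is undermatching; a bias k above 1 means extra responding to one alternative regardless of the payoff ratio.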
Madden et al. (2003) later found overmatching in a boy’s self-injury. Same math, opposite slope. The difference is setting: lab pigeons with edible reinforcers give undermatching; real-world attention can create overmatching.
Why it matters
When you run a concurrent-schedule preference assessment, expect shallow slopes and do not call it “failed matching.” Count visits, not just total responses. If the lean alternative gets one or two extra checks, correct for that bias before you rank reinforcer preferences.
Tally each approach to both options for one session; if the lean side gets extra visits, raise its payoff rate or subtract those visits before you calculate preference.
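The tally-and-correct procedure can be sketched as follows. Everything here is made up for illustration: the visit sequence is a toy session log, and `excess_lean` stands in for whatever lean-bias estimate you derive from your own baseline data:

```python
from collections import Counter

# Hypothetical within-session sequence of approaches to each alternative
visits = ["rich", "lean", "rich", "rich", "lean", "rich", "lean", "rich"]

counts = Counter(visits)      # raw visit tallies per side
excess_lean = 2               # assumed estimate of extra lean-side checks
corrected_lean = max(counts["lean"] - excess_lean, 0)

# Preference for the rich side after subtracting the lean-visit bias
preference = counts["rich"] / (counts["rich"] + corrected_lean)
print(counts["rich"], counts["lean"], round(preference, 2))  # 5 3 0.83
```

Without the correction, the raw tallies (5 vs. 3) would suggest a much weaker preference than the corrected figure shows.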
Original abstract
Four pigeons were trained on eight or nine pairs of independent concurrent variable‐interval schedules. The range of reinforcement ratios included extreme ratios (up to 532 to 1). Large samples of stable performance were gathered. Contrary to the findings of Davison and Jones (1995), the generalized matching law described choice more accurately than a contingency‐discriminability model. Taking small samples (5 to 10 sessions) and applying a more liberal stability criterion used by Davison and Jones only increased the unsystematic variance in the data and in estimates of generalized‐matching‐law sensitivity. Because changing to dependent scheduling and inserting a changeover delay had no systematic effect, the deviations from generalized matching reported by Davison and Jones probably arose from imperfectly discriminated stimuli. Analysis of visits revealed that visits to the nonpreferred alternative were brief and approximately constant. When choice between the preferred (rich) and nonpreferred (lean) alternatives, regardless of position, was analyzed according to the generalized matching law, sensitivities approximated 1.0, with bias in favor of the lean alternative. This bias, which arose from an excessive frequency of visits to the lean alternative, explains undermatching as the result of fitting one line to a choice relation that consists of two displaced lines, both with a slope of 1.0. The pattern of deviation from the generalized matching line confirmed this account. The findings suggest an alternative analysis of choice that focuses on probability of visiting the lean alternative as the dependent variable. This probability was directly proportional to ratio of reinforcement. Matching, undermatching, and overmatching may all be explained by a view of concurrent performance based on foraging theory, in which responding occurs primarily at the rich alternative and is occasionally interrupted by brief visits to the lean alternative.
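The abstract's key claim, that undermatching results from fitting one line to two displaced lines of slope 1.0, can be demonstrated numerically. This sketch uses hypothetical log reinforcer ratios and an assumed displacement of 0.3 log units (not values from the paper):

```python
import numpy as np

# Hypothetical log reinforcer ratios (left key rich vs. right key rich)
log_r = np.array([-2.0, -1.5, -1.0, 1.0, 1.5, 2.0])

# Each half of the data lies on a slope-1.0 line, shifted toward the lean side
d = 0.3  # assumed lean-bias displacement in log units
log_b = np.where(log_r > 0, log_r - d, log_r + d)

# Fitting a single line across both halves yields a slope below 1.0
slope, intercept = np.polyfit(log_r, log_b, 1)
print(round(slope, 2))  # 0.81 — undermatching, from two lines of slope 1.0
```

The flat slope is an artifact of the one-line fit: neither underlying relation deviates from strict matching.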
Journal of the Experimental Analysis of Behavior, 1999 · doi:10.1901/jeab.1999.71-355