ABA Fundamentals

Molecular contingencies: reinforcement probability.

Hale et al. (1975) · Journal of the Experimental Analysis of Behavior, 1975
★ The Verdict

Animals lock onto momentary payoff odds, so shift reinforcer chances smoothly and watch local data.

✓ Read this if you are a BCBA who runs concurrent or mixed schedules in clinics or labs.
✗ Skip if you keep one fixed reinforcement rate all day.

01 Research in Context

01

What this study did

Researchers put pigeons in a two-key box. Each key gave food at a different chance.

The chance depended on how long the bird had just waited: one key paid better after long intervals, the other after short ones. Birds had to pick the key that paid off best right now.

The team watched which key the bird hit trial after trial.
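The procedure can be sketched as a toy simulation. Everything here is illustrative: the linear payoff functions and the decision rule are assumptions, not the study's actual values, but they show what "payoff increasing or decreasing with the preceding interval" means in practice:

```python
import random

def payoff(wait_sec):
    # Hypothetical payoff odds: the left key pays better after long waits,
    # the right key after short ones (increasing/decreasing in the interval).
    p_left = wait_sec / 11.0
    return p_left, 1.0 - p_left

def maximizing_choice(wait_sec):
    # A bird tracking momentary odds picks whichever key is richer right now.
    p_left, p_right = payoff(wait_sec)
    return "left" if p_left > p_right else "right"

rng = random.Random(1)
for wait in rng.sample(range(1, 12), 11):  # the 11 equiprobable 1-11 s intervals
    choice = maximizing_choice(wait)
    p_left, p_right = payoff(wait)
    fed = rng.random() < (p_left if choice == "left" else p_right)
    print(f"{wait:2d} s -> peck {choice:5s} ({'food' if fed else 'no food'})")
```

A matching bird would instead spread its pecks in proportion to the two probabilities; the study found pigeons closer to the maximizer sketched above.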

02

What they found

Birds did not just match the overall odds. Their preference for the richer key actually exceeded that key's payoff probability, leaning toward maximizing.

In other words, pigeons tracked moment-to-moment payoff, not long-run rate.
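The two predictions differ numerically. A rough sketch with made-up probabilities (0.75 vs 0.25, not the study's values) shows the gap:

```python
def matching_prediction(p_a, p_b):
    # Probability matching: choose A in proportion to its share of payoff.
    return p_a / (p_a + p_b)

def maximizing_prediction(p_a, p_b):
    # Maximizing: always choose whichever alternative is richer.
    return 1.0 if p_a > p_b else 0.0

p_a, p_b = 0.75, 0.25  # illustrative reinforcement probabilities
print(matching_prediction(p_a, p_b))    # 0.75 -> 75% of choices on A
print(maximizing_prediction(p_a, p_b))  # 1.0  -> exclusive preference for A
```

The 1975 birds fell between the two predictions, but deviated from matching toward the maximizer.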

03

How this fits with other research

Emmelkamp et al. (1986) saw matching, not maximizing, when birds worked under a VT-VRT schedule. The clash fades once you see the 1975 study used quick, discrete trials while 1986 used steady concurrent schedules.

Iwata (1993) later showed that scheduled rates the learner never actually contacts still sway choice. That finding adds a layer: local payoff rules, yet the wider plan still whispers.

Alvarez et al. (1998) gave more proof that nearby cues, not global time gaps, steer picks. Together the papers say, watch the local moment, but keep the big picture on the radar.

04

Why it matters

Your client, like the pigeon, reacts to what just paid off. If you switch reinforcer odds within a session, expect rapid drift toward the richer moment, even if the overall plan stays the same. Track minute-by-minute data, not just session totals, to spot these micro-shifts and adjust teaching or thinning steps on the fly.

→ Action — try this Monday

Graph choice per minute, not per session, to see if learners chase short-term richer options.
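One way to prep that graph, sketched with a hypothetical session log (timestamps in seconds; options "A"/"B" are placeholders, not a real client's data):

```python
from collections import Counter

# Hypothetical session log: (seconds into session, option chosen).
log = [(5, "A"), (42, "A"), (61, "B"), (75, "B"), (118, "B"), (130, "A")]

def choices_per_minute(log):
    # Bin choices into 1-minute buckets so local preference shifts show up.
    bins = {}
    for t, opt in log:
        bins.setdefault(t // 60, Counter())[opt] += 1
    return bins

for minute, counts in sorted(choices_per_minute(log).items()):
    total = sum(counts.values())
    print(f"minute {minute}: {counts['B'] / total:.0%} on B  {dict(counts)}")
```

A session total here (3 A, 3 B, a flat 50/50) would hide the minute-1 swing to B entirely, which is exactly the micro-shift the study says to watch for.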

02 At a glance

Intervention: other
Design: single case (other)
Population: not specified
Finding: positive

03 Original abstract

Pigeons obtained food by responding in a discrete-trials two-choice probability-learning experiment involving temporal stimuli. A given response alternative, a left- or right-key peck, had 11 associated reinforcement probabilities within each session. Reinforcement probability for a choice was an increasing or a decreasing function of the time interval immediately preceding the choice. The 11 equiprobable temporal stimuli ranged from 1 to 11 sec in 1-sec classes. Preference tended to deviate from probability matching in the direction of maximizing; i.e., the percentage of choices of the preferred response alternative tended to exceed the probability of reinforcement for that alternative. This result was qualitatively consistent with probability-learning experiments using visual stimuli. The result is consistent with a molecular analysis of operant behavior and poses a difficulty for molar theories holding that local variations in reinforcement probability may safely be disregarded in the analysis of behavior maintained by operant paradigms.

Journal of the Experimental Analysis of Behavior, 1975 · doi:10.1901/jeab.1975.24-315