ABA Fundamentals

Concurrent schedules: discriminating reinforcer-ratio reversals at a fixed time after the previous reinforcer.

Cowie et al. (2013) · Journal of the Experimental Analysis of Behavior
★ The Verdict

After each reinforcer, choice slides—not snaps—into the new payoff ratio.

✓ Read this if you're a BCBA running concurrent schedules or token economies and tracking moment-to-moment shifts in responding.
✗ Skip if you're a clinician working on single-operant or DTT drills where ratio changes are rare.

01 Research in Context

01

What this study did

Researchers watched pigeons peck two keys. Each key paid off on its own variable-interval schedule. After every food delivery, the payoff ratio flipped. The team tracked which key the bird hit next and how that choice changed as time passed.

They wanted to see if the birds would switch keys right away or ease into the new ratio.

02

What they found

Choice did not jump the instant the ratio reversed. Instead, it drifted downhill like a ball on a gentle slope. The birds’ next few pecks still followed the old ratio, then slowly bent toward the new one.

A mathematical model fit the curve by assuming the birds' timing is imprecise: the arranged food ratio gets spread across nearby time bins, with blur that grows in proportion to the seconds since the last piece of food.
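The redistribution idea can be sketched in a few lines. This is my reading of the model, not the authors' code, and the reversal time, log-ratio values, and coefficient of variation below are illustrative assumptions: the arranged log food ratio is a hard step that flips at a fixed time after food, but each moment of perceived time smears across neighboring bins with a standard deviation proportional to elapsed time (a constant coefficient of variation, as in scalar timing).

```python
import numpy as np

def arranged_log_ratio(t, t_rev):
    """Stepwise arranged log food ratio: +1 before the reversal, -1 after."""
    return np.where(t < t_rev, 1.0, -1.0)

def discriminated_log_ratio(bins, t_rev, cv=0.3):
    """Redistribute the step across surrounding time bins.

    For each bin, weight the arranged ratio at nearby times by a
    Gaussian whose SD grows with elapsed time (constant CV).
    """
    out = np.zeros_like(bins, dtype=float)
    for i, t in enumerate(bins):
        sd = max(cv * t, 1e-6)                     # timing noise grows with t
        w = np.exp(-0.5 * ((bins - t) / sd) ** 2)  # Gaussian weights
        w /= w.sum()
        out[i] = np.sum(w * arranged_log_ratio(bins, t_rev))
    return out

bins = np.arange(1.0, 61.0)   # 1-60 s since the last food delivery
pred = discriminated_log_ratio(bins, t_rev=20.0)
# The hard step becomes a smooth, decelerating curve: extreme right
# after food, and noticeably less extreme than the arranged ratio
# near and beyond the 20-s reversal point.
```

Plotting `pred` against `bins` reproduces the qualitative result: choice starts extreme toward the initially richer side and bends gradually, rather than snapping, toward the post-reversal ratio.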

03

How this fits with other research

Hastings et al. (2001) ran a similar two-key pigeon setup and saw clean matching to the new ratio almost at once. The difference: they watched long-run totals, not the first pecks after food. The slow drift only shows up when you zoom in on the local moment.

Davison et al. (2010) added flashing lights that told the birds which key would pay next. With those cues, choice flipped faster. Take the cues away and the gradual curve returns—same birds, same box, just less information.

Baer (1974) framed reinforcement as a ‘situation change.’ That idea fits the new data: the food delivery marks a new ‘situation,’ but the bird needs a few seconds to feel the new payoff winds.

04

Why it matters

If you run concurrent schedules in the clinic, do not expect an immediate jump when you flip the token chart. Watch the first few responses after each reinforcer; they still echo the old rate. Give the learner a short window—extra prompts, richer schedule, or salient cues—to help the shift happen faster. The matching law still rules, but it rides a gentle slope, not a cliff.
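For clinicians who want to quantify that slope, the generalized matching law gives a quick check. The response and token counts below are hypothetical, not from the study, and attributing the whole deviation to sensitivity (bias assumed to be 1) is a simplifying assumption:

```python
import math

# Generalized matching law: log(B1/B2) = a * log(R1/R2) + log(b)
# B = responses on each side, R = reinforcers earned on each side.
B1, B2 = 40, 10   # hypothetical response counts
R1, R2 = 3, 1     # hypothetical token (reinforcer) counts

behavior_log_ratio = math.log10(B1 / B2)     # log10(4)  ~ 0.60
reinforcer_log_ratio = math.log10(R1 / R2)   # log10(3)  ~ 0.48

# Sensitivity a, with bias b assumed to be 1 (log b = 0):
a = behavior_log_ratio / reinforcer_log_ratio
print(round(a, 2))  # a > 1 means choice is more extreme than the payoffs
```

Tracking `a` across sessions shows whether a learner's allocation is undershooting (a < 1) or overshooting (a > 1) the programmed ratio after each flip.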

→ Action — try this Monday

Count the first five responses after each token; if they still favor the old side, add a brief stimulus cue to speed the switch.

02 At a glance

Intervention: other
Design: single case (other)
Sample size: 6
Population: other (pigeons)
Finding: not reported

03 Original abstract

Six pigeons worked on concurrent exponential variable-interval schedules in which the relative frequency of food deliveries for responding on the two alternatives reversed at a fixed time after each food delivery. Across conditions, the point of food-ratio reversal was varied from 10 s to 30 s, and the overall reinforcer rate was varied from 1.33 to 4 per minute. The effect of rate of food delivery and food-ratio-reversal time on choice and response rates was small. In all conditions, postfood choice was toward the locally richer key, regardless of the last-food location. Unlike the local food ratio which changed in a stepwise fashion, local choice changed according to a decelerating monotonic function, becoming substantially less extreme than the local food ratio soon after food delivery. This deviation in choice appeared to result from the birds' inaccurate discrimination of the time of food deliveries; local choice was described well by a model that assumed that log response ratios matched food ratios that were redistributed across surrounding time bins with mean time t and a constant coefficient of variation. We suggest that local choice is controlled by the likely availability of food in time, and that choice matches the discriminated log of the ratio of food rates across time since the last food delivery.

Journal of the Experimental Analysis of Behavior, 2013 · doi:10.1002/jeab.43