ABA Fundamentals

The effects of a local negative feedback function between choice and relative reinforcer rate.

Davison et al. (2010) · Journal of the Experimental Analysis of Behavior
★ The Verdict

Strong future penalties can shrink extended choice even when each response still earns a reinforcer.

✓ Read this if you're a BCBA who runs concurrent schedules or token economies in clinics or classrooms.
✗ Skip if you're an RBT who only runs simple discrete-trial programs with no concurrent options.

01 Research in Context

01 What this study did

Davison and colleagues worked with four pigeons in a lab.

The birds pecked two keys for food on concurrent VI schedules.

The twist: the more extreme a bird's choice during one interreinforcer interval, the more likely the next reinforcer would be set up on the other key.

They called this a molar negative feedback function.

The team then made the function's negative slope shallower (-1) or steeper (-3) to see what happened.
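The abstract says only that the feedback function was linear, negatively sloped (-1 or -3), and shifted by changing its intercept. Here is a minimal Python sketch of how such a rule might map last-interval choice onto where the next reinforcer is arranged; the function name, the specific intercept values, and the clipping to [0, 1] are my assumptions, not details from the paper:

```python
def reinforcer_prob_key1(choice_prop_key1, slope, intercept):
    """Hypothetical linear molar feedback function.

    choice_prop_key1: proportion of responding on key 1 during the
    last interreinforcer interval. Returns the probability that the
    next reinforcer is arranged on key 1, clipped to [0, 1].
    """
    p = intercept + slope * choice_prop_key1
    return max(0.0, min(1.0, p))

# Steeper negative slopes punish deviation from indifference harder.
# Intercepts here are chosen so both slopes give p = .5 at choice = .5.
for slope, intercept in ((-1, 1.0), (-3, 2.0)):
    for choice in (0.5, 0.6, 0.7):
        p = reinforcer_prob_key1(choice, slope, intercept)
        print(f"slope={slope}, choice={choice:.1f} -> p(key 1)={p:.2f}")
```

With slope -1, drifting from .5 to .7 choice only drops the key's payoff probability from .50 to .30; with slope -3, the same drift drives it to zero. That is the sense in which "more extreme choice" loads the next reinforcer onto the other key.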

02 What they found

When the feedback function got steeper, the birds' overall preference for the richer key shrank.

The drop showed up in choice measured across whole sessions, not in local, reinforcer-to-reinforcer choices.

In plain words, big future penalties can shrink overall preference even if each peck still pays.

03 How this fits with other research

DeRoma et al. (2004) first showed that pigeons shift choice quickly after each food delivery.

Davison et al. (2010) used the same setup but added the feedback rule.

The new study extends the 2004 work by showing that molar feedback rules can override those fast local shifts.

Fahmie et al. (2013) found that past change-over delays can later boost preference.

Together, the three papers show both history and future consequences steer choice.

Van Hemel (1973) and Green et al. (1975) tracked moment-to-moment peck rates.

Their local patterns still appeared here, but the molar feedback ruled the day.

No clash exists; the studies just zoom in at different time scales.

04 Why it matters

If you use token boards or DRO, remember that future loss can cut behavior now.

Try adding a mild penalty for staying too long on one task.

Watch the whole session, not just the last minute, to see the effect.

→ Action — try this Monday

Add a small token-removal rule if a client stays on one task too long, then track total time on each task across the whole session.
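One way to sketch that Monday action in code: a small session tracker that totals time per task and flags a token removal whenever a single continuous stay exceeds a threshold. Everything here (class name, the 120-second threshold, the task names) is illustrative, not from the study:

```python
from collections import defaultdict

class SessionTracker:
    """Hypothetical tracker for a token-removal rule.

    Totals time per task across the whole session and counts a token
    removal each time one continuous stay exceeds `max_stay_s` seconds.
    """

    def __init__(self, max_stay_s=120):
        self.max_stay_s = max_stay_s
        self.totals = defaultdict(float)  # task -> total seconds
        self.token_removals = 0

    def log_stay(self, task, seconds):
        """Record one continuous stay on `task`."""
        self.totals[task] += seconds
        if seconds > self.max_stay_s:
            self.token_removals += 1      # penalty for over-staying

    def summary(self):
        return dict(self.totals), self.token_removals

tracker = SessionTracker(max_stay_s=120)
tracker.log_stay("matching", 90)
tracker.log_stay("sorting", 150)  # exceeds 120 s -> one token removed
tracker.log_stay("matching", 60)
totals, removals = tracker.summary()
print(totals, removals)  # {'matching': 150.0, 'sorting': 150.0} 1
```

Note the tracker reports session-wide totals, not just the last stay, which matches the study's point: the effect of the penalty shows up at the extended level, so that is the level you should measure.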

02 At a glance

Intervention: not applicable
Design: single-case (other)
Sample size: 4
Population: not specified (study used pigeons)
Finding: negative
Magnitude: large

03 Original abstract

Four pigeons were trained on two-key concurrent variable-interval schedules with no changeover delay. In Phase 1, relative reinforcers on the two alternatives were varied over five conditions from .1 to .9. In Phases 2 and 3, we instituted a molar feedback function between relative choice in an interreinforcer interval and the probability of reinforcers on the two keys ending the next interreinforcer interval. The feedback function was linear, and was negatively sloped so that more extreme choice in an interreinforcer interval made it more likely that a reinforcer would be available on the other key at the end of the next interval. The slope of the feedback function was -1 in Phase 2 and -3 in Phase 3. We varied relative reinforcers in each of these phases by changing the intercept of the feedback function. Little effect of the feedback functions was discernible at the local (interreinforcer interval) level, but choice measured at an extended level across sessions was strongly and significantly decreased by increasing the negative slope of the feedback function.

Journal of the Experimental Analysis of Behavior, 2010 · doi:10.1901/jeab.2010.94-197