ABA Fundamentals

Choice in situations of time-based diminishing returns: immediate versus delayed consequences of action.

Hackenberg et al. (1992) · Journal of the Experimental Analysis of Behavior, 1992
★ The Verdict

An escape hatch that erases hard work makes animals bail out sooner—choice follows discounted cumulative value, not just the next reward.

✓ Read this if you're a BCBA designing skill-acquisition or self-management programs where task difficulty climbs.

✗ Skip if you're a clinician focused only on discrete-trial drills with fixed criteria each trial.

01 Research in Context

01

What this study did

Hackenberg et al. (1992) let pigeons pick between two keys. One key gave food after a wait that grew 20 s longer with each food delivery it produced. The other key gave food after a fixed wait that stayed constant within a session.

A twist: in some conditions, choosing the fixed key also knocked the growing wait back to its minimum. The team watched when birds jumped off the growing schedule and how the fixed delay shaped that jump.

02

What they found

Birds quit the growing-wait key sooner when the fixed key also wiped the accumulated wait clean. Choice tracked the total food each side would earn, discounted by the delay still ahead.

In plain words: the pigeons acted as if they added up future snacks, shrank that total for every second of delay, and switched when the discounted total beat the alternative.
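That model (the abstract calls it cumulated reinforcer values weighted in inverse proportion to delay) can be sketched numerically. Everything below is an illustrative assumption, not the paper's actual fit: the hyperbolic weight 1/(1 + k·delay), the parameter values, the five-reinforcer horizon, and the function names are all made up for the sketch.

```python
def discounted_value(intervals, k=0.05):
    """Discounted sum of one food per interval: each reinforcer is
    weighted by 1 / (1 + k * delay-to-it) from the choice point.
    (Assumed hyperbolic form; k is a made-up discount rate.)"""
    total, delay = 0.0, 0.0
    for interval in intervals:
        delay += interval
        total += 1.0 / (1.0 + k * delay)
    return total

def switch_point(fi, reset, step=20, horizon=5, k=0.05):
    """First progressive interval (s) at which switching to the fixed
    key beats staying, under a finite reinforcer horizon (assumed)."""
    pi = 0
    while True:
        # Stay: progressive waits keep climbing by `step` each food.
        stay = [pi + i * step for i in range(horizon)]
        if reset:
            # Fixed key also resets the progressive wait to 0 s.
            switch = [fi] + [i * step for i in range(horizon - 1)]
        else:
            # Progressive keeps climbing regardless of the choice.
            switch = [fi] + [pi + i * step for i in range(horizon - 1)]
        if discounted_value(switch, k) > discounted_value(stay, k):
            return pi
        pi += step

print(switch_point(fi=60, reset=True))
print(switch_point(fi=60, reset=False))
```

With these invented parameters the predicted switch point falls earlier under the reset rule than without it, reproducing the paper's qualitative result: a clean-slate option makes abandoning the climbing schedule pay off sooner.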

03

How this fits with other research

Calamari et al. (1987) ran a similar fixed- versus progressive-ratio test but without the reset. Back then, birds stayed longer on the growing schedule. Adding the reset in 1992 pulled the switch point earlier, sharpening the power of cumulative, delayed payoffs.

Gowen et al. (2013) later showed the same reversal can happen inside one session as imposed delays grow. Both papers say preference is fluid, not locked to the immediate next pellet.

Landon et al. (2003) found that breaking a reinforcement streak instantly bends later choice. The 1992 reset is another kind of break—wiping work history—and both studies agree that local shocks to cumulative value steer behavior.

04

Why it matters

For your clients, think of reset value. Letting a learner escape a mounting task and start fresh can make the escape more attractive than sticking it out. If you want persistence, avoid resets that wipe effort; if you want a shift, build in a clean-slate option and watch choice move earlier, just like the pigeons.

→ Action — try this Monday

Plot a client's work count across trials; insert a brief 'reset' break option and note if they switch earlier—then adjust task progression or break rules to sustain engagement.

02 At a glance

Intervention: not applicable
Design: single-case (other)
Finding: not reported

03 Original abstract

Pigeons chose between two schedules of food presentation, a fixed-interval schedule and a progressive-interval schedule that began at 0 s and increased by 20 s with each food delivery provided by that schedule. Choosing one schedule disabled the alternate schedule and stimuli until the requirements of the chosen schedule were satisfied, at which point both schedules were again made available. Fixed-interval duration remained constant within individual sessions but varied across conditions. Under reset conditions, completing the fixed-interval schedule not only produced food but also reset the progressive interval to its minimum. Blocks of sessions under the reset procedure were interspersed with sessions under a no-reset procedure, in which the progressive schedule value increased independent of fixed-interval choices. Median points of switching from the progressive to the fixed schedule varied systematically with fixed-interval value, and were consistently lower during reset than during no-reset conditions. Under the latter, each subject's choices of the progressive-interval schedule persisted beyond the point at which its requirements equaled those of the fixed-interval schedule at all but the highest fixed-interval value. Under the reset procedure, switching occurred at or prior to that equality point. These results qualitatively confirm molar analyses of schedule preference and some versions of optimality theory, but they are more adequately characterized by a model of schedule preference based on the cumulated values of multiple reinforcers, weighted in inverse proportion to the delay between the choice and each successive reinforcer.

Journal of the Experimental Analysis of Behavior, 1992 · doi:10.1901/jeab.1992.57-67