ABA Fundamentals

A comparison of delays and ratio requirements in self-control choice.

Grossbard & Mazur (1986) · Journal of the Experimental Analysis of Behavior
★ The Verdict

Given equal time to reinforcement, organisms prefer schedules that require no work; response cost itself devalues the reinforcer.

✓ Read this if you are a BCBA writing skill-acquisition programs or token economies where response effort may quietly suppress engagement.
✗ Skip if you are a clinician focused solely on antecedent or sensory interventions with no schedule component.

01 Research in Context

01

What this study did

Pigeons pecked two keys. One key gave food after a short wait. The other key gave the same food after the same wait plus a work requirement.

The team drew indifference curves. They wanted to see if delay and ratio schedules feel the same to the bird.
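The hyperbolic discounting equation the paper tests, V = A / (1 + kD), makes the indifference-point logic concrete: an indifference point is the delay at which the two discounted values are equal. A minimal sketch of that calculation (the discount rate k and the delays below are illustrative assumptions, not the paper's fitted values):

```python
# Hyperbolic value model (Mazur): V = A / (1 + k * D)
# A = reinforcer amount (s of grain access), D = delay (s), k = discount rate.
# k and the numbers below are illustrative assumptions, not the paper's fits.

def value(amount, delay, k=0.2):
    """Discounted value of a reinforcer delivered after `delay` seconds."""
    return amount / (1 + k * delay)

def indifference_delay(a_small, d_small, a_large, k=0.2):
    """Delay to the large reinforcer at which both options are valued equally:
    a_large / (1 + k * d) = a_small / (1 + k * d_small), solved for d."""
    v_small = value(a_small, d_small, k)
    return (a_large / v_small - 1) / k

# 2-s vs 6-s grain access, with the small option delayed 4 s:
print(round(indifference_delay(a_small=2, d_small=4, a_large=6), 2))  # 22.0
```

In the study, the schedule for the larger reinforcer was titrated up and down until choices split about evenly; the value at which that happens is the empirical estimate of this solved delay.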

02

What they found

The curves for delay and ratio looked alike. Both bent the same way, just as a hyperbolic value equation predicts.

When total time to food was equal, the birds reliably picked the key that asked for no pecks. Work itself carried a cost beyond the wait.
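A pure delay account values an FR of n pecks (taking t seconds per peck) exactly like an FT of n·t seconds, so the observed FT preference at equal times implies an extra per-response cost. A toy sketch of that comparison (all parameter values are assumptions for illustration):

```python
# Under a pure delay model, an FR's time cost is its only cost, so it ties
# with an equal-time FT schedule. Adding a per-response cost breaks the tie
# in the direction the pigeons chose. All parameters here are assumptions.

def hyperbolic_value(amount, delay, k=0.2):
    return amount / (1 + k * delay)

def fr_value(amount, n_responses, sec_per_response=0.5, k=0.2, effort_cost=0.0):
    """FR value = hyperbolic value of its time cost minus a per-response cost."""
    delay_equivalent = n_responses * sec_per_response
    return hyperbolic_value(amount, delay_equivalent, k) - effort_cost * n_responses

# Equal time to food: FT 10 s vs FR 20 at 0.5 s/peck.
ft = hyperbolic_value(6, 10)
print(ft == fr_value(6, 20))                   # True: pure delay model ties them
print(fr_value(6, 20, effort_cost=0.02) < ft)  # True: effort term breaks the tie
```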

03

How this fits with other research

Quilitch et al. (1973) showed that choice follows the relative inverse of delay squared. Odom et al. (1986) adds that, once delay is held constant, extra ratio work still reduces value.

Delano (2007) later kept the hyperbolic shape but swapped the schedule type: fixed versus mixed schedules shift choice even when delay and payoff stay put. The 1986 curves gave that idea its first wings.

Calamari et al. (1987) compared fixed-ratio with progressive-ratio schedules the next year. Both papers treat schedule requirements as delays to reinforcement, though the 1986 study shows that responding carries a cost beyond the wait. It also stripped the task down to the cleanest case: FR versus FT.

04

Why it matters

When you design reinforcement schedules, remember that work is a cost. If two paths deliver the same reinforcer after the same delay, the one with fewer responses will be picked. For clients who tire easily, drop extra response requirements. Replace long FR drills with brief waits or differential reinforcement of other behavior (DRO). You keep the same payoff pace but remove the hidden effort that drives escape and problem behavior.
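One way to keep the payoff pace while dropping the response requirement is to match the FT interval to the time the FR actually consumes. A hypothetical back-of-envelope helper (the response rate is an assumption you would measure for your own client):

```python
# Match an FT interval to the time an FR requirement actually takes,
# so the reinforcement rate stays the same with zero required responses.
# The response rate below is a hypothetical figure; measure it per client.

def fr_seconds_to_reinforcer(ratio, responses_per_sec):
    """Average seconds between reinforcers under an FR schedule."""
    return ratio / responses_per_sec

def matched_ft_interval(ratio, responses_per_sec):
    """FT interval (s) that preserves the FR schedule's payoff pace."""
    return fr_seconds_to_reinforcer(ratio, responses_per_sec)

# FR 15 at ~2 responses/s pays off about every 7.5 s,
# so FT 7.5 s keeps the pace without the work requirement.
print(matched_ft_interval(15, 2.0))  # 7.5
```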

→ Action — try this Monday

Audit one client's FR schedule: if the delay to reinforcement could be matched by a simple wait, try a brief FT or DRO interval and see if responding stays steady with less effort.

02 At a glance

Intervention
not applicable
Design
single case, within subject
Population
pigeons (non-human)
Finding
fixed-time schedules preferred over fixed-ratio schedules at equal time to reinforcement

03 Original abstract

In a discrete-trial procedure, pigeons could choose between 2-s and 6-s access to grain by making a single key peck. In Phase 1, the pigeons obtained both reinforcers by responding on fixed-ratio schedules. In Phase 2, they received both reinforcers after simple delays, arranged by fixed-time schedules, during which no responses were required. In Phase 3, the 2-s reinforcer was available through a fixed-time schedule and the 6-s reinforcer was available through a fixed-ratio schedule. In all conditions, the size of the delay or ratio leading to the 6-s reinforcer was systematically increased or decreased several times each session, permitting estimation of an "indifference point," the schedule size at which a subject chose each alternative equally often. By varying the size of the schedule for the 2-s reinforcer across conditions, several such indifference points were obtained from both fixed-time conditions and fixed-ratio conditions. The resulting "indifference curves" from fixed-time conditions and from fixed-ratio conditions were similar in shape, and they suggested that a hyperbolic equation describes the relation between ratio size and reinforcement value as well as the relation between reinforcer delay and its reinforcement value. The results from Phase 3 showed that subjects chose fixed-time schedules over fixed-ratio schedules that generated the same average times between a choice response and reinforcement.

Journal of the Experimental Analysis of Behavior, 1986 · doi:10.1901/jeab.1986.45-305