ABA Fundamentals

Choice and transformed interreinforcement intervals.

Moore (1984) · Journal of the Experimental Analysis of Behavior, 1984
★ The Verdict

A tiny exit rule can reverse preference even when pay rates stay the same.

✓ Read this if you are a BCBA who writes reinforcement schedules in classrooms or clinics.
✗ Skip if you work solely with social praise and run no timed programs.

01 Research in Context

01

What this study did

Pigeons pecked two keys in a lab chamber. Each key led to a different food schedule. The schedules looked the same on paper. The twist was how each schedule ended. Sometimes it stopped after the first food delivery. Other times it stopped after a set time. Moore (1984) wanted to see if this small rule change would flip the birds' favorite key.

02

What they found

The birds' favorite key reversed when the ending rule changed. Same food rates. Same delays. Only the exit rule differed. The result shows that local rules beat overall statistics. Birds did not just count food per minute. They reacted to how each bout ended.
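The reversal is easiest to see with toy numbers. The sketch below (interval values are hypothetical, chosen for illustration, not Moore's actual schedule parameters) builds two schedules whose arithmetic and harmonic mean intervals point in opposite directions, so each exit rule favors a different key:

```python
from statistics import mean, harmonic_mean

# Hypothetical interreinforcement intervals in seconds.
sched_a = [1, 19]   # arithmetic mean 10, harmonic mean 1.9
sched_b = [4, 6]    # arithmetic mean 5,  harmonic mean 4.8

# Fixed-duration exit: reinforcers earned per second of exposure
# track 1 / arithmetic mean, so schedule B pays more overall.
rate_a = 1 / mean(sched_a)   # 0.10 reinforcers/s
rate_b = 1 / mean(sched_b)   # 0.20 reinforcers/s

# First-reinforcer exit: if value averages the reciprocals of the
# delays (an immediacy-style account), the harmonic mean dominates,
# and schedule A's occasional 1-s delay makes it look better.
imm_a = mean(1 / i for i in sched_a)   # ~0.53
imm_b = mean(1 / i for i in sched_b)   # ~0.21

assert rate_b > rate_a   # fixed-duration rule favors B
assert imm_a > imm_b     # first-reinforcer rule favors A
```

Same intervals in both computations; only the way they are averaged changes, which is the paper's point about aggregate schedule properties.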

03

How this fits with other research

Alba et al. (1972) first showed that chaining cuts preference. Moore (1984) goes further: even after chaining, you can flip choice by tweaking the exit rule. Duncan et al. (1972) also found that extra links hurt preference. Moore (1984) agrees, but shows the hurt can be undone with a simple rule change.

Landon et al. (2002) later split short-term from long-term choice effects. Their data fit Moore's story: moment-to-moment rules matter as much as long-run rates. Green et al. (1987) added probabilistic reinforcers and still saw orderly preference. Again, the local structure, not just the average payoff, drove the birds.

04

Why it matters

When you write a token board or set a timer, think exit rule, not just rate. A child may prefer a 5-min art session that ends after one sticker over a 5-min session that ends at the bell, even if stickers arrive at the same pace. Test both setups. Let the learner vote with their feet.

→ Action — try this Monday

Run a two-key probe: same reinforcer rate, one ends after first reinforcer, one ends after 30 s; see which your client picks.
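Before running the probe, it can help to pin down exactly when each condition ends a bout. A minimal sketch (the function name, argument shapes, and 30-s cap are assumptions for illustration, not a procedure from Moore, 1984):

```python
def entry_end(deliveries, rule, fixed_s=30.0):
    """Return when one bout ends, in seconds from its start.

    deliveries: sorted reinforcer-delivery times within the bout.
    rule: "first" ends the bout at the first reinforcer;
          "fixed" ends it at the 30-s bell regardless of deliveries.
    Hypothetical helper for planning a two-condition probe.
    """
    if rule == "first":
        return deliveries[0] if deliveries else fixed_s
    if rule == "fixed":
        return fixed_s
    raise ValueError(f"unknown rule: {rule!r}")

# Same delivery pattern, different exit rule:
times = [7.5, 21.0]
assert entry_end(times, "first") == 7.5   # bout ends with the reinforcer
assert entry_end(times, "fixed") == 30.0  # bout ends at the bell
```

The reinforcer rate is identical in both conditions by design; only the exit rule differs, which is the variable the probe tests.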

02 At a glance

Intervention: not applicable
Design: single-case, other
Population: not specified
Finding: not reported

03 Original abstract

Pigeons chose between two aperiodic, time-based schedules of reinforcement. The arithmetic mean interreinforcement interval of the first schedule was short, but the harmonic mean was long, whereas the arithmetic mean interreinforcement interval of the second schedule was long, but the harmonic mean was short. The pigeons preferred the schedule with the shorter harmonic mean in a concurrent-chains procedure when a terminal link ended after the first scheduled reinforcer had been gained on a terminal-link entry, but reversed their preferences, such that they preferred the schedule with the shorter arithmetic mean, when the terminal links ended after a fixed duration of exposure to the schedule. Moreover, the pigeons preferred the schedule with the shorter arithmetic mean in a two-key concurrent variable-interval variable-interval procedure, as well as in a concurrent variable-time variable-time, changeover-key procedure. The data suggest that an aggregate property of a schedule may not yield valid information about the responding that schedule will maintain as a choice alternative.

Journal of the Experimental Analysis of Behavior, 1984 · doi:10.1901/jeab.1984.42-321