ABA Fundamentals

Choice and foraging.

Abarca et al. (1982) · Journal of the Experimental Analysis of Behavior, 1982
★ The Verdict

Long waits make delayed reinforcement more acceptable—time your schedule stretches right after big gaps.

✓ Read this if you're a BCBA shaping token boards, DRL, or FCT with long wait periods.
✗ Skip if you work only with immediate, continuous reinforcement.

01 Research in Context

01

What this study did

Researchers watched pigeons peck keys in a two-step game. First the birds had to wait a set time. Then they could pick a key that gave food right away or one that made them wait longer.

The team changed only the first wait. Sometimes it was short, sometimes long. They counted how often birds took the longer final delay after each first wait.

02

What they found

After long first waits, pigeons accepted the longer final delay about twice as often. After short first waits, they rarely took the extra delay.

The data lined up with the delay-reduction hypothesis: the longer the search, the larger the share of the total wait that any reward cuts off, so even the slower option gains value.
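The arithmetic behind that claim can be sketched in a few lines. This is a minimal illustration, not the paper's model: the search times (5 s vs. 60 s) are made-up examples, and the 20-s handling delay stands in for the study's VI 20-s "long" schedule. The delay-reduction term is Fantino's (T − t)/T, where T is the average time to food from search onset and t is the delay still remaining once the schedule is accepted.

```python
def delay_reduction(total_time_to_food: float, handling_delay: float) -> float:
    """Fantino's delay-reduction term (T - t) / T.

    total_time_to_food: T, expected time to food measured from search onset.
    handling_delay: t, delay still remaining after accepting the schedule.
    Larger values mean the accepted schedule signals a bigger cut in waiting.
    """
    return (total_time_to_food - handling_delay) / total_time_to_food

# Illustrative search times only (not the paper's exact parameters):
for search_s in (5.0, 60.0):
    T = search_s + 20.0  # total expected wait if the 20-s "long" option is taken
    print(f"search {search_s:>4.0f} s -> delay reduction {delay_reduction(T, 20.0):.2f}")
```

With a 5-s search the long option only removes 20% of the total wait; after a 60-s search it removes 75%, which is the direction of the pigeons' shift toward accepting the long schedule.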

03

How this fits with other research

Kydd et al. (1982) ran a near-copy study the same year. They equated total delay and saw no choice shift. The two papers seem to clash, but the difference is timing: Abarca's team let the first wait vary, while Kydd's team kept total wait flat. When total wait is locked, birds stop caring about the extra seconds.

Steege et al. (1989) later added humans. People tracked overall reinforcement rate, while pigeons still followed delay reduction. The rule held; each species just applied it differently.

Soreth et al. (2009) stretched the idea further. They showed that one rare quick payoff in a random schedule can pull choice toward that side. Again, local short waits steer the decision.

04

Why it matters

If a client has just waited a long time for your attention, they may now accept a longer task or bigger work requirement. Use that moment to place the harder skill. After short waits, keep demands light and reinforcement fast. Track the client's "search" time—how long since last reinforcement—and let that guide when you stretch the schedule.
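One way to operationalize "track the search time" is a simple timer that flags when enough time has passed since the last reinforcer to attempt a schedule stretch. This is a hypothetical sketch: the `SearchTimer` class, its method names, and the 120-s default threshold are illustrative choices, not anything from the study or an established clinical tool. Timestamps are passed in explicitly so the rule is easy to test.

```python
class SearchTimer:
    """Track 'search' time (seconds since last reinforcement) and flag
    when a schedule stretch is more likely to be tolerated."""

    def __init__(self, now: float = 0.0):
        self.last_reinforcement = now

    def reinforce(self, now: float) -> None:
        # Reset the clock each time reinforcement is delivered.
        self.last_reinforcement = now

    def search_time(self, now: float) -> float:
        return now - self.last_reinforcement

    def ok_to_stretch(self, now: float, threshold_s: float = 120.0) -> bool:
        # Long search -> delayed or larger response requirements
        # are more likely to be accepted (per the delay-reduction finding).
        return self.search_time(now) >= threshold_s
```

In use: after a 30-s wait the timer says keep demands light; after 150 s it flags that a heavier requirement may be tolerated, and delivering reinforcement resets the clock.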

→ Action — try this Monday

After a 2-minute wait for attention, ask for 5 more responses before the next token.

02 At a glance

Intervention: not applicable
Design: single-case (other)
Sample size: 6
Population: not specified
Finding: positive

03 Original abstract

In Experiment 1, six naive pigeons were trained on a foraging schedule characterized by different states beginning with a search state in which completion of a fixed-interval on a white key led to a choice state. In the choice state the subject could, by appropriate responding on a fixed ratio of three, either accept or reject the schedule of reinforcement that was offered (either a variable-interval five-second or a variable-interval 20-second). If the subject accepted the schedule, it entered a "handling state" in which the appropriate variable-interval schedule was presented. Completion of the variable-interval schedule produced food. The independent variable was the fixed-interval value in the search state, and the dependent variable was the rate of acceptance of the long variable-interval in the choice state. Experiment 2 was identical except that the search state required completion of a variable-interval, instead of a fixed-interval, schedule. The rate of acceptance of the long variable-interval schedule in both experiments was a direct function of the length of the search state, in accordance with both optimality theory and the delay-reduction hypothesis.

Journal of the Experimental Analysis of Behavior, 1982 · doi:10.1901/jeab.1982.38-117