ABA Fundamentals

IRT-stimulus contingencies in chained schedules: implications for the concept of conditioned reinforcement.

Bejarano et al. (2007) · Journal of the Experimental Analysis of Behavior
★ The Verdict

Stimulus change alone can reinforce pauses, but the classic delay-to-food rule does not predict its strength.

✓ Read this if you're a BCBA who uses tokens, lights, or brief praise as conditioned reinforcers in skill-acquisition or self-control programs.
✗ Skip if you work only with edible reinforcement and never use chained schedules.

01 Research in Context

01

What this study did

Bejarano and colleagues worked with pigeons on chained schedules.

The birds had to pause for a set time before the next light would turn on.

Only that pause let the pigeon move to the next link and reach food.

The setup tested whether the light change itself could act as a reinforcer.
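The pause-contingent transition described above can be sketched in a few lines. This is a minimal illustration only: the 2-second criterion and all names are assumptions for the example, not values from the study.

```python
# Sketch of the pause contingency: the next chain link (a light change)
# is produced only when the pause (interresponse time, IRT) meets a
# criterion. The 2-second criterion is an illustrative assumption.

def should_advance(irt_seconds: float, criterion: float = 2.0) -> bool:
    """Advance to the next link only when the pause meets the criterion."""
    return irt_seconds >= criterion

# A short session: only pauses at or above criterion produce the light change.
pauses = [0.8, 1.5, 2.3, 3.0, 1.9]
transitions = [should_advance(p) for p in pauses]
```

The key design point is that the consequence for the pause is the stimulus change itself, not food, which is what lets the study ask whether stimulus change alone reinforces.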

02

What they found

The pigeons quickly learned to stretch their pauses when the pause triggered the next light.

This shows a simple stimulus change can strengthen behavior.

Yet the pause length did not follow the usual delay-to-food rule for conditioned reinforcers.

So the old rule missed the mark here.

03

How this fits with other research

Baer (1974) first said reinforcement is just "situation transition." Bejarano's data agree that light changes matter, but they chip away at the delay rule that paper left intact.

Mandler et al. (1962) showed birds keep pecking when a brief light marks progress on a fixed-ratio 10 schedule. Their work supports conditioned reinforcement, yet Bejarano finds the same delay rule fails when the light is tied to a pause, not a peck count.

Davis et al. (1994) showed timing is key: a stimulus must come right after the response to speed learning. Bejarano keeps the tight timing but shows contiguity alone does not predict strength, adding a boundary condition to that rule.

04

Why it matters

If you use tokens, praise, or lights as reinforcers, remember the old delay rule is only part of the story. A stimulus change can strengthen behavior even when it does not shorten the time to backup reinforcement. Try pairing task shifts or brief visual cues with desired pauses or transitions, then watch whether the response pattern follows the new contingency instead of the delay clock.

Free CEUs

Want CEUs on This Topic?

The ABA Clubhouse has 60+ free CEUs — live every Wednesday. Ethics, supervision & clinical topics.

Join Free →
→ Action — try this Monday

Program a brief light or sound to follow immediately after the client holds a pause, then track whether pauses lengthen even though the backup reinforcer remains seconds away.
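One minimal way to track the outcome is to compare mean pause length before and after the cue contingency starts. The numbers below are made up for illustration:

```python
# Hypothetical data-tracking sketch: hand-entered pause lengths (seconds)
# before and after pauses begin triggering the brief cue. All values are
# illustrative, not data from the study.

baseline = [1.1, 0.9, 1.3, 1.0]   # pauses before the cue contingency
with_cue = [1.4, 1.8, 2.1, 2.5]   # pauses once each pause triggers the cue

def mean(values):
    return sum(values) / len(values)

# Did pauses lengthen under the cue contingency?
lengthened = mean(with_cue) > mean(baseline)
```

If the mean pause grows once the cue is in place, the stimulus change is plausibly doing reinforcing work, even with food still delayed.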

02 At a glance

Intervention: not applicable
Design: single-case experimental (within-subject)
Population: pigeons
Finding: stimulus change functioned as a reinforcer; IRT increases did not track delay to food

03 Original abstract

Two experiments with pigeons investigated the effects of contingencies between interresponse times (IRTs) and the transitions between the components of 2- and 4-component chained schedules (Experiments 1 and 2, respectively). The probability of component transitions varied directly with the most recent (Lag 0) IRT in some experimental conditions and with the 4th (Lag 4) IRT preceding the most recent one in others. Mean component durations were constant across conditions, so the reinforcing effect of stimulus change was dissociated from that of delay to food. IRTs were longer in the Lag-0 than in the Lag-4 conditions of both experiments, thus demonstrating that stimulus change functioned as a reinforcer. In the Lag-0 conditions of Experiment 2, the Component-1 IRTs increased more than the Component-2 IRTs, which in turn increased more than the Component-3 IRTs. This finding runs counter to the conditioned-positive-reinforcement account of chained-schedule responding, which holds that the reinforcing effect of stimulus change should vary in strength as an inverse function of the delay to the unconditioned reinforcer at the end of the chain because conditioned reinforcement is due to first- or higher-order classical conditioning. Therefore, we present other possible explanations for this effect.
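The Lag-0 versus Lag-4 manipulation in the abstract can be sketched as follows. The mapping from IRT to transition probability here is an illustrative assumption; the study does not report this exact function:

```python
# Sketch of the Lag-0 vs Lag-4 contingencies: transition probability
# depends on the most recent IRT (Lag 0) or on the IRT four responses
# back (Lag 4). The IRT-to-probability scaling is an assumption made
# for illustration only.

def transition_probability(irts, lag):
    """Probability of a component transition, based on the IRT `lag`
    responses back (0 = most recent). Longer selected IRTs give a
    higher probability (assumed scaling: a 5-s IRT gives p = 1.0)."""
    selected = irts[-(lag + 1)]
    return min(1.0, selected / 5.0)

history = [1.0, 4.0, 2.0, 1.5, 3.0]              # last five IRTs, seconds
p_lag0 = transition_probability(history, lag=0)  # uses 3.0
p_lag4 = transition_probability(history, lag=4)  # uses 1.0
```

Under Lag 0 the current pause controls its own consequence, so the stimulus change can differentially reinforce long IRTs; under Lag 4 it cannot, which is why longer IRTs in the Lag-0 conditions show that stimulus change functioned as a reinforcer.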

Journal of the Experimental Analysis of Behavior, 2007 · doi:10.1901/jeab.2007.87-03