Delayed reinforcement versus reinforcement after a fixed interval.
Equal inter-reinforcement times erase any preference between delayed and fixed-interval reinforcement.
01 Research in Context
What this study did
Neuringer (1969) let pigeons choose between two keys. One key gave food after a plain delay. The other gave food on a fixed-interval schedule.
Both keys had the same average time between food deliveries, and the birds could switch at any time. Neuringer recorded which key each bird pecked.
What they found
The pigeons had no favorite. They pecked each key about the same number of times.
As long as the total wait for food was equal, the birds did not care whether the delay came all at once or was spread across an interval.
How this fits with other research
Vaughan (1985) later showed pigeons do care about delay when probability is also in play. Birds accepted longer waits to get twice-as-likely food. The 1969 study kept probability the same, so delay alone did not tip the scale.
Gibbon (1967) and Nevin (1969) mapped the classic fixed-interval pattern: a long pause, then fast pecking. Neuringer (1969) used that same pattern and showed that its effect on prior responding can be reproduced by a plain delayed reinforcer of equal length.
Lancioni et al. (2011) stretched the gap between trials instead of within trials. Rats stayed sensitive to that gap; pigeons stopped caring unless forced to keep choosing. Together these papers suggest pigeons track the time until food is due and are less sensitive to how that time is filled.
Why it matters
For BCBAs the lesson is simple: match the inter-reinforcement time, not the schedule name. To Neuringer's pigeons, a 10-s fixed interval and a 10-s plain delay were interchangeable. When you build token boards, delayed praise, or staggered check-ins, focus on the total wait time: if the waits are equal, the learner is unlikely to prefer one setup over the other, so you can pick the easier one to deliver.
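The arithmetic behind that advice can be sketched in a few lines. This is an illustrative sketch, not anything from the study itself; the function name and the handling-time numbers are made up for the example.

```python
# Illustrative sketch (not from Neuringer's paper): the quantity the
# study found to matter is the total time between reinforcers.

def inter_reinforcement_time(wait_s: float, handling_s: float = 0.0) -> float:
    """Total seconds from one reinforcer to the next:
    the programmed wait plus any handling/response time."""
    return wait_s + handling_s

# A 10-s fixed interval vs. a 10-s plain delay, with equal handling time
fi_10 = inter_reinforcement_time(wait_s=10.0, handling_s=2.0)
delay_10 = inter_reinforcement_time(wait_s=10.0, handling_s=2.0)

# Equal totals -> no predicted preference; pick the easier arrangement.
assert fi_10 == delay_10
```

If the totals differ, the study predicts a preference for the shorter one, so equate the waits before comparing delivery formats.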
02 At a glance
Pick the simpler delay method; the learner won’t mind if the wait matches the old FI length.
03 Original abstract
When interreinforcement intervals were equated, pigeons demonstrated little or no preference between reinforcement after a delay interval and reinforcement presented on a fixed-interval schedule. The small preferences sometimes found for the fixed interval (a) were considerably smaller than when the delay and fixed intervals differed in duration, and (b) were caused by the absence of light during the delay. These results suggest that the effects of delayed reinforcement on prior responding can be reproduced by imposing a temporally equal fixed-interval schedule in place of the delay; and, therefore, that the time between a response and reinforcement controls the probability of that response, whether other responses intervene or not.
Journal of the Experimental Analysis of Behavior, 1969 · doi:10.1901/jeab.1969.12-375