ABA Fundamentals

A quantitative analysis of the responding maintained by interval schedules of reinforcement.

Catania & Reynolds (1968) · Journal of the Experimental Analysis of Behavior, 1968
★ The Verdict

Local timing of rewards, not daily totals, drives response rate.

✓ Read this if: you're a BCBA writing token, DRO, or VI programs for any client.
✗ Skip if: you only use fixed-ratio or DTT with no timing adjustments.

01 Research in Context

01

What this study did

Catania and Reynolds (1968) ran six small experiments with pigeons on interval schedules.

They wanted to know if birds peck faster because the overall rate of food is high, or because food is likely right now.

They kept the total food per hour the same but changed when food could arrive inside each interval.
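The manipulation can be sketched in a toy simulation (all numbers and function names here are hypothetical, not from the study): two interval distributions with the same mean interval yield very different local probabilities of reinforcement at a given moment.

```python
import random

def sample_intervals(dist, n=10000, mean=30.0):
    """Draw n inter-reinforcement intervals (seconds), all with the same mean."""
    if dist == "variable":   # exponential mix: food equally likely at any moment
        return [random.expovariate(1 / mean) for _ in range(n)]
    if dist == "fixed":      # every interval identical (an FI schedule)
        return [mean] * n
    raise ValueError(dist)

def local_probability(intervals, t, width=5.0):
    """P(food sets up in [t, t+width) | it has not set up by t)."""
    survived = [iv for iv in intervals if iv >= t]
    hits = [iv for iv in survived if iv < t + width]
    return len(hits) / len(survived) if survived else 0.0

random.seed(1)
for dist in ("variable", "fixed"):
    ivs = sample_intervals(dist)
    # Variable: roughly flat local probability across the interval.
    # Fixed: probability is 0 early and jumps only near the end.
    print(dist, [round(local_probability(ivs, t), 2) for t in (0, 10, 20, 26)])
```

Both schedules pay off at the same overall rate, yet the moment-to-moment odds differ, which is exactly the variable the birds tracked.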

02

What they found

Response rate followed the local chance of food, not the overall rate.

If food was most likely early in the interval, birds pecked fast early.

If food was most likely late, birds waited then pecked fast late.

03

How this fits with other research

Emmelkamp et al. (1986) later ran two VI schedules side by side and saw the same rule: local response rate stayed steady while the time spent on each key shifted with the local odds of food.

Reiss et al. (1982) showed that shorter schedule components only sped up pecking when those components also delivered richer reinforcement, again pointing to local odds.

Palya (1992) zoomed in further and found that pigeons burst at about three pecks per second within each interfood interval, revealing a micro-rhythm nested inside the local rate.

04

Why it matters

When you build a reinforcement schedule, think moment-to-moment, not just hour-to-hour.

A child may get the same total tokens each day, but placing most tokens right after the target response will push that response higher.

Check your data minute-by-minute; if responding lags, shift the next reward closer in time, not just more often across the day.
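One minimal way to run that minute-by-minute check, using hypothetical session data (the timestamps and variable names below are illustrative only):

```python
from collections import Counter

def per_minute_counts(timestamps_s, session_minutes):
    """Bin event timestamps (in seconds) into counts per minute."""
    counts = Counter(int(t // 60) for t in timestamps_s)
    return [counts.get(m, 0) for m in range(session_minutes)]

# Hypothetical 5-minute session: target responses and delivered reinforcers.
responses   = [5, 12, 18, 70, 75, 130, 135, 140, 200, 260]
reinforcers = [20, 140, 150]

resp_rate = per_minute_counts(responses, 5)
sr_rate   = per_minute_counts(reinforcers, 5)
print("responses/min:  ", resp_rate)    # [3, 2, 3, 1, 1]
print("reinforcers/min:", sr_rate)      # [1, 0, 2, 0, 0]
```

Minutes where responding sags and no reinforcer landed nearby (minutes 3 and 4 above) are the places to move the next delivery closer in time.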

→ Action — try this Monday

Plot the last ten rewards against the response that came right before them; if rewards cluster after long pauses, tighten the interval.
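A quick sketch of that check, with hypothetical timestamps (names and numbers are illustrative, not from the study): for each of the last ten rewards, compute the gap back to the most recent response.

```python
def pause_before_reward(response_times, reward_times, n=10):
    """For the last n rewards, return the gap (seconds) between each reward
    and the most recent response before it - a rough local-contiguity check."""
    pauses = []
    for r in reward_times[-n:]:
        prior = [t for t in response_times if t <= r]
        pauses.append(r - max(prior) if prior else None)
    return pauses

# Hypothetical session timestamps in seconds.
responses = [3, 9, 15, 40, 46, 90, 96, 150, 156, 210]
rewards   = [16, 48, 100, 160, 215]
print(pause_before_reward(responses, rewards))  # [1, 2, 4, 4, 5]
```

A creeping trend upward in these gaps means rewards are drifting away from the behavior; tightening the interval pulls delivery back toward the response.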

02 At a glance

Intervention: not applicable
Design: single-case (other)
Population: not specified
Finding: not reported

03 Original abstract

Interval schedules of reinforcement maintained pigeons' key-pecking in six experiments. Each schedule was specified in terms of mean interval, which determined the maximum rate of reinforcement possible, and distribution of intervals, which ranged from many-valued (variable-interval) to single-valued (fixed-interval). In Exp. 1, the relative durations of a sequence of intervals from an arithmetic progression were held constant while the mean interval was varied. Rate of responding was a monotonically increasing, negatively accelerated function of rate of reinforcement over a range from 8.4 to 300 reinforcements per hour. The rate of responding also increased as time passed within the individual intervals of a given schedule. In Exp. 2 and 3, several variable-interval schedules made up of different sequences of intervals were examined. In each schedule, the rate of responding at a particular time within an interval was shown to depend at least in part on the local rate of reinforcement at that time, derived from a measure of the probability of reinforcement at that time and the proximity of potential reinforcements at other times. The functional relationship between rate of responding and rate of reinforcement at different times within the intervals of a single schedule was similar to that obtained across different schedules in Exp. 1. Experiments 4, 5, and 6 examined fixed-interval and two-valued (mixed fixed-interval fixed-interval) schedules, and demonstrated that reinforcement at one time in an interval had substantial effects on responding maintained at other times. It was concluded that the rate of responding maintained by a given interval schedule depends not on the overall rate of reinforcement provided but rather on the summation of different local effects of reinforcement at different times within intervals.

Journal of the Experimental Analysis of Behavior, 1968 · doi:10.1901/jeab.1968.11-s327