Response-reinforcer dependency location in interval schedules of reinforcement.
Tiny timing shifts of embedded reinforcers create local response hills that session averages miss.
Research in Context
What this study did
Wanchisen et al. (1989) worked with pigeons on schedules that delivered most reinforcers independently of responding. They embedded a small block of response-dependent reinforcers at different points in each session. Then they watched where the birds pecked fastest.
What they found
Peck rates spiked right where the embedded reinforcers landed. Move the block, move the spike. Overall session averages stayed flat, so the bump was invisible unless you looked at local, moment-by-moment rates.
How this fits with other research
Davison et al. (1968) first showed that local reinforcement probability, not overall rate, drives responding. Wanchisen et al. (1989) add that the precise moment of the response-reinforcer dependency matters too.
Chandler et al. (1992) widened the lens: response rates climb then fall across the whole session. The local spike Wanchisen et al. (1989) found sits on top of that bigger wave.
Cicerone (1976) looks like a contradiction: putting the reinforcer at the start of an FI suppressed later rates. The key difference is procedure: Cicerone (1976) studied contrast between components; Wanchisen et al. (1989) studied a hidden reinforcer inside one component. Same timing tool, different context, opposite outcome.
Why it matters
Your data system may hide hot spots. Graph minute-by-minute, not just session totals. If you embed extra reinforcers—for praise, tokens, or breaks—track where they land; you might be accidentally boosting or suppressing nearby responses. Shift the timing and you can sculpt smoother or sharper curves within the same schedule.
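The "graph minute-by-minute" point can be sketched numerically. The snippet below uses entirely made-up peck timestamps (not data from the study): a steady baseline of pecking plus a short burst where a hypothetical embedded reinforcer lands. The session-wide average barely moves, but a 5-minute binned view exposes the hot spot.

```python
# Sketch: why session-level averages can hide local response peaks.
# All numbers are hypothetical illustrations, not data from Wanchisen et al. (1989).

def bin_rates(timestamps, session_len, bin_len):
    """Responses per minute within each consecutive bin of bin_len seconds."""
    n_bins = session_len // bin_len
    counts = [0] * n_bins
    for t in timestamps:
        if 0 <= t < session_len:
            counts[int(t // bin_len)] += 1
    return [c * 60.0 / bin_len for c in counts]

# Fake 30-minute session: baseline pecking at 1 peck every 2 s,
# plus an 80-peck burst near t = 900 s where the embedded reinforcer lands.
baseline = [t * 2.0 for t in range(900)]        # 900 pecks over 1800 s
burst = [880 + i * 0.25 for i in range(80)]     # local burst just before t = 900
pecks = sorted(baseline + burst)

session_len = 1800                              # seconds
overall = len(pecks) * 60.0 / session_len       # pecks/min, whole session
rates = bin_rates(pecks, session_len, bin_len=300)  # six 5-minute bins

print(f"overall: {overall:.1f} pecks/min")
print("5-min bins:", [f"{r:.1f}" for r in rates])
# The bin holding the burst stands out even though the overall mean barely moves.
```

Here the overall rate rises only slightly above the 30 pecks/min baseline, while the bin containing the burst jumps well above every other bin; that is the pattern a session-total graph erases.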
Mark your 5-minute bins during the next FI session; see if praise or tokens create hidden peaks.
Original abstract
In five experiments we studied the effects on pigeons' key pecking of the location of four or more successive response-dependent reinforcers imbedded in a schedule arranging otherwise response-independent reinforcers. In Experiment 1, high local response rates early in the session were extended farther into the session as the number of response-dependent reinforcers at the beginning of the session increased. A block of four successive response-dependent reinforcers then was scheduled at the beginning, middle, or end of the session (Experiment 2) resulting in higher local response rates at those times in the session when the response-dependent reinforcers were arranged. When placed in random locations in successive sessions (Experiment 3), uniform local rates occurred throughout the session. In Experiments 1, 2, and 3, delivery of the remaining response-independent reinforcers was precluded until the response-dependent reinforcers were collected. Experiment 4 was similar to Experiments 1 and 2, except that all response-independent reinforcers occurred irrespective of whether the response-dependent reinforcers had been collected. This yielded results similar to those obtained in the first two experiments. In Experiment 5, responding early in the session had no consequence other than allowing access to the schedule of response-independent food delivery. As in the first experiment, local rates generally were higher early in the session. The results indicate that the location of response-reinforcer dependencies precisely control behavior and that such effects often are not captured by descriptions of behavior in terms of overall response rates.
Journal of the Experimental Analysis of Behavior, 1989 · doi:10.1901/jeab.1989.51-101