Intermittent reinforcement of an interresponse time.
Reinforcement can directly sculpt the pause between responses, not just the response itself.
01 Research in Context
What this study did
The team worked with rats in a small chamber. Each lever press restarted a clock; if the next press came between 1 and 2 seconds later, the rat got food.
They kept this up for many sessions, and they also varied how reliably an eligible pause paid off, arranging several different schedules of reinforcement for that interresponse time. Throughout, they tracked how often the rats pressed and how long they waited between presses.
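The core contingency is easy to state in code. Here is a minimal sketch (a hypothetical illustration, not the authors' apparatus): food is delivered only when the pause since the previous press falls inside the reinforced window.

```python
def irt_contingency(irt_seconds, low=1.0, high=2.0):
    """Deliver food only if the pause since the previous press
    falls inside the reinforced 1-2 second window."""
    return low <= irt_seconds <= high

# A short run of pauses between lever presses, in seconds:
pauses = [0.3, 1.5, 0.8, 1.9, 2.4, 1.2]
fed = [p for p in pauses if irt_contingency(p)]
print(len(fed), "of", len(pauses), "presses produced food")  # 3 of 6
```

Note that only the pause, not the press itself, determines whether food arrives: that is what makes the interresponse time the unit being reinforced.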
What they found
The rats quickly learned to wait about 1.5 seconds between presses, and their distribution of pauses shifted to match the 1–2 second window.
When the rule stopped, the timing drifted back. The study showed that the pause between responses is itself an operant unit you can reinforce.
How this fits with other research
Schwarz et al. (1970) found that under fixed-interval schedules the sixth response after food lands at a steady pace. M et al. added a twist: they reinforced only pauses inside one slice of time, showing that a single slice can be sculpted directly.
Davison (1969) mapped a dip-peak pattern of interresponse times under fixed-ratio schedules. M et al. show that such patterns can be weakened or built up by reinforcing the exact pause you want.
Bowe et al. (1983) later used tight deadlines to speed human reaction times. This extends M et al.’s rat finding to people: temporal rules can fine-tune speed across species.
Why it matters
If you want a child to wait two seconds before asking again, or a learner to slow rapid stereotypy, reinforce the exact pause you like. Start by counting the current interresponse times, pick the window you want to see more of, and deliver praise or tokens only when the behavior lands inside that window. You can thin the schedule later; the timing will stick.
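That procedure can be sketched in a few lines (hypothetical helper names; assumes response timestamps in seconds). The `probability` parameter models thinning the schedule later, so that only some in-window pauses earn reinforcement.

```python
import random

def interresponse_times(timestamps):
    """Pauses between consecutive responses (sorted timestamps, in seconds)."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def should_reinforce(irt, low=1.0, high=2.0, probability=1.0, rng=random):
    """Reinforce an in-window pause; probability < 1.0 thins the schedule."""
    return low <= irt <= high and rng.random() < probability

# Baseline count: how many current pauses already land in the window?
times = [0.0, 0.4, 1.9, 2.2, 4.0, 5.6]
irts = interresponse_times(times)
hits = [irt for irt in irts if should_reinforce(irt)]  # continuous reinforcement
print(len(hits), "of", len(irts), "pauses earned reinforcement")  # 3 of 5
```

Starting with `probability=1.0` (reinforce every in-window pause) and lowering it gradually mirrors the move from continuous to intermittent reinforcement described above.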
Measure the client’s interresponse times, pick a 1-second band you want more of, and reinforce every response that lands inside it.
03 Original abstract
Rats were exposed to schedules in which reinforcement was contingent upon the emission of a 1.0- to 2.0-sec interresponse time. The rate of emission and the temporal distribution of this interresponse time was recorded. Several different contingencies between the emission of the interresponse time and reinforcement were examined. Both the rate of emission and the temporal distribution of the 1.0- to 2.0-sec interresponse time varied as a function of the schedule on which it was reinforced. This finding, which suggests that an interresponse time behaves as other operants, has implications for the analysis of conventional reinforcement schedules in terms of the differential reinforcement of interresponse times.
Journal of the Experimental Analysis of Behavior, 1972 · doi:10.1901/jeab.1972.17-67