Behavior under large values of the differential-reinforcement-of-low-rate schedule.
Animals stretch pause times under DRL, but the path is curved and learner-specific.
01Research in Context
What this study did
The team tested pigeons and rats on a DRL schedule. DRL means the bird or rat must wait a set time between responses. Press too soon and there is no food, and the clock restarts.
They made the wait longer and longer across sessions. They recorded how long the animals actually waited and how many extra pecks they made.
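As a rough illustration of how the contingency works, here is a minimal simulation of a subject responding on a DRL schedule. The exponential inter-response-time distribution, the function name, and all parameter values are assumptions for illustration, not details from the study.

```python
import random

def simulate_drl(drl_s, mean_irt_s, n_responses=10_000, seed=0):
    """Sketch of responding on a DRL schedule.

    drl_s: required wait (seconds) since the last response for a response
    to earn food.
    mean_irt_s: the subject's average inter-response time; IRTs are drawn
    exponentially here purely for illustration.
    Returns (reinforcers_earned, responses_made).
    """
    rng = random.Random(seed)
    reinforcers = 0
    for _ in range(n_responses):
        irt = rng.expovariate(1.0 / mean_irt_s)  # time since previous response
        if irt >= drl_s:
            reinforcers += 1   # waited long enough: food
        # too-soon responses go unreinforced, and the wait clock restarts
    return reinforcers, n_responses
```

Running it with a long average wait relative to the requirement yields mostly reinforced responses; a short average wait relative to the requirement yields almost none, mirroring why large DRL values are hard.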
What they found
As the required wait grew, the animals waited longer, but not in a straight line. The increase followed a power curve: big jumps at first, then smaller ones.
Different species hit different sweet spots. Pigeons kept extra pecks low. Rats made more useless pecks and earned fewer treats.
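The power-curve finding can be checked with a standard log-log regression: if mean wait time is a power function of the schedule value, the logged data fall on a straight line. The helper below is a generic sketch of that check, not the authors' own analysis code.

```python
import math

def fit_power(x, y):
    """Least-squares fit of y = a * x**b, via linear regression on logs.

    Taking logs turns the power function into a line:
    log(y) = log(a) + b * log(x).
    Returns the estimated (a, b).
    """
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx = sum(lx) / n
    my = sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b
```

An exponent b below 1 reproduces the shape described above: big jumps in waiting at small schedule values, then smaller ones.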
How this fits with other research
Killeen (2023) folds these curves into a single mathematical rule, arguing that richer schedules build "behavioral momentum." The 1974 data now sit inside that broader equation.
Blackman (1970) also ran birds on time-based requirements, but with fixed-ratio schedules. Both papers show animals stretching or shrinking pauses to fit the clock, suggesting timing is flexible, not fixed.
Reed (1991) saw local contrast inside multiple schedules. The DRL study adds a different angle: timing itself shifts when the rule value changes, not just response rate.
Why it matters
If you use DRL to slow a client's hand-flap or rapid questioning, start with a short wait. Lengthen it in small steps that follow a curve, not a line. Watch for species-like differences: some learners will pad the pause cleanly, others will need extra help to stop bonus responses.
Set the first DRL threshold at half the client's current inter-response time, then raise it by a modest proportion each session, plotting pause length on a log scale to check the power-curve fit.
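The stepping rule above can be sketched as a small helper. The starting point (half the baseline inter-response time) follows the tip; the growth factor and session count are illustrative assumptions, not values from the study.

```python
def drl_step_plan(baseline_irt_s, growth=1.25, sessions=8):
    """Proposed per-session DRL thresholds (seconds).

    Start at half the client's baseline inter-response time, then multiply
    by a fixed factor each session. growth=1.25 is an illustrative choice;
    tune it to the learner.
    """
    t = baseline_irt_s / 2.0
    plan = []
    for _ in range(sessions):
        plan.append(round(t, 2))
        t *= growth
    return plan
```

Multiplicative steps keep early increases small in absolute terms, which matches the advice to lengthen the wait along a curve rather than a straight line.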
02At a glance
03Original abstract
Pigeons pecked a key and rats pressed a lever for food reinforcement under large values of the differential-reinforcement-of-low-rate schedule. Each subject was tested under 10 different schedule values ranging from 1 to 45 min and was exposed to each schedule value at least twice. The mean interresponse time and mean interreinforcement time increased with the schedule value according to power functions. Response-probability functions were computed for schedule values below 20 min and showed an increase in response probability as a function of time since the last response in most cases. Mean responses per reinforcer increased as a function of schedule value for the rats, but decreased as a function of schedule value for the pigeons. The proportion of responses with interresponse times shorter than 1 sec were an increasing function of schedule value for the pigeons, but did not vary as a function of schedule value for the rats.
Journal of the Experimental Analysis of Behavior, 1974 · doi:10.1901/jeab.1974.22-121