Choice between response rates.
The absolute reinforced response rates set how strongly your client will prefer one pace over another: at low reinforced response rates the faster option wins decisively, while at high rates a relaxed pace does just as well.
Research in Context
What this study did
Hawkes and Shimp (1974) worked with pigeons in a lab.
The birds pecked a single key.
The absolute durations of reinforced interresponse times, and thus the reinforced response rates, changed across conditions while total reinforcers per hour stayed constant.
Researchers measured which response rate the birds preferred.
What they found
At low reinforced response rates the birds almost exclusively preferred the higher response rate (preference above 0.90).
At high reinforced response rates they were nearly indifferent (about 0.50).
The curve tracked the matching law: at intermediate rates, preference sat at the matching value of 0.70, the share of reinforcers allocated to the shorter interresponse-time class.
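The numbers above can be checked against the strict matching law, which predicts that the proportion of responses to an option equals the proportion of reinforcers it produces. A minimal Python sketch, using only values reported in the abstract:

```python
# Strict matching law: relative response allocation equals
# relative reinforcement allocation.
def matching_prediction(r1, r2):
    """Predicted proportion of responses to option 1."""
    return r1 / (r1 + r2)

# In the 1974 study the shorter-IRT class received a fixed 0.70
# share of reinforcers, so strict matching predicts a 0.70
# preference regardless of absolute reinforced response rates.
pred = matching_prediction(0.70, 0.30)

# Observed preference (from the abstract) instead varied with
# the absolute reinforced response rates:
observed = {"high_rates": 0.50, "intermediate_rates": 0.70, "low_rates": 0.90}

for condition, p in observed.items():
    print(f"{condition}: observed {p:.2f} vs matching {pred:.2f}")
```

Only the intermediate condition hits the matching prediction, which is exactly the finding: preference slides from indifference to exclusivity as the absolute rates fall.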
How this fits with other research
Rilling et al. (1969) had shown the same matching pattern a few years earlier.
They used time rather than response rate, yet the pigeon data still lined up.
Weiss et al. (2001) widened the reinforcer-ratio range and saw faster, bigger shifts.
Their results stretch the 1974 curve outward but keep the same slope.
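The "slope" language above maps onto the generalized matching law, which adds a sensitivity parameter to strict matching. A minimal sketch; the parameter values are illustrative defaults, not fitted to any of the cited studies:

```python
# Generalized matching law (ratio form):
#   B1/B2 = b * (R1/R2)^a
# where a is sensitivity (the "slope" on log-log axes) and b is bias.
def generalized_matching(r1, r2, a=1.0, b=1.0):
    """Predicted response ratio B1/B2 for reinforcer ratio r1/r2."""
    return b * (r1 / r2) ** a

# With a = 1 and b = 1 this reduces to strict matching:
# a 7:3 reinforcer split predicts 7:3 responding.
print(generalized_matching(0.70, 0.30))
```

Keeping the same slope across a wider reinforcer-ratio range, as Weiss et al. report, means sensitivity `a` stays constant even as the absolute shifts get larger.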
Lecavalier et al. (2006) looked at inverted-U feedback functions with rats.
They argued birds and rats pick the peak, not just match rate.
The two views sound opposite, yet both fit: matching works when feedback is linear; optimization wins when the schedule bends.
Why it matters
Your client’s behavior follows the same rule.
When reinforced response rates are low overall, even a faster, more effortful pace looks worth it.
When they are high, the client may slow down and still earn plenty.
Check the schedule before you label a response rate ‘too low’ or ‘too high’.
Plot the client’s current reinforcement rate, then test whether a small rate drop still keeps the target response strong; adjust to avoid unnecessary effort.
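That schedule-thinning check can be sketched as a small helper. The field names and the 90% maintenance threshold here are assumptions for illustration, not values from the study:

```python
# Hypothetical sketch: did the target response hold up after a
# small reduction in reinforcement rate? The 0.90 threshold is
# an assumed clinical criterion, not from Hawkes and Shimp.
def schedule_thin_check(baseline_rpm, thinned_rpm, threshold=0.90):
    """Return True if responding after thinning stays at or above
    `threshold` of the baseline response rate (responses per minute)."""
    return thinned_rpm >= threshold * baseline_rpm

# Invented example session data:
baseline = 12.0   # target responses/min under the richer schedule
thinned = 11.4    # target responses/min after a modest rate drop
print(schedule_thin_check(baseline, thinned))
```

If the check fails, the rate drop was too steep for this client; step the schedule back toward baseline rather than labeling the responding itself as the problem.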
At a glance
Original abstract
Three pigeons were required to peck a single key at a higher and a lower rate, corresponding to two classes of shorter and longer concurrently reinforced interresponse times. Food reinforcers arranged by a single variable-interval schedule were randomly allocated to the two reinforced interresponse times. The absolute durations of reinforced interresponse times were varied while the total reinforcements per hour was held constant and the relative duration, i.e., the relative reciprocal, of the shorter reinforcer class was held constant at 0.70. Preference for the higher rate of responding, as measured by the relative frequency of responses terminating interresponse times in the shorter reinforced class, depended on the absolute reinforced response rates. Preference for the higher reinforced rate increased from a level of near-indifference (0.50) at high reinforced response rates, through the matching level (0.70) at intermediate reinforced response rates, to a virtually exclusive preference (>0.90) at low reinforced response rates. These results resemble corresponding preference functions obtained with two-key concurrent-chains schedules and thereby provide another sense in which it may be said that interresponse-time distributions from interval schedules estimate preference functions for the component response rates corresponding to different classes of reinforced interresponse times.
Journal of the Experimental Analysis of Behavior, 1974 · doi:10.1901/jeab.1974.21-109