Punished and unpunished responding in multiple variable-interval schedules.
Punishment cuts responding by the same proportion in both rich and lean VI components, and the rats' error rate stays low even after the shocks end.
01 · Research in Context
What this study did
The team trained rats on a multiple VI schedule with two components, each programmed on its own lever. One component paid off often; the other paid off rarely.
Once responding was steady, every press in both components produced a brief shock. The punishment stayed in effect for many sessions, then was removed.
The researchers tracked whether the rats kept responding more in the rich component and less in the lean one.
What they found
Shock cut responding by the same proportion in both components. Rich or lean, the relative drop was identical, so relative response rates still matched relative reinforcement.
When the shocks ended, errors (presses on the nonoperative lever) stayed near zero rather than returning to their prepunishment levels, suggesting the rats learned a sharper discrimination during punishment.
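The core finding can be sketched numerically: if punishment suppresses both components by the same proportion, the relative response rates (the matching relation the paper describes) are unchanged. The rates below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Hypothetical response rates (responses per minute) for the two
# components of a multiple VI schedule. Numbers are invented.
baseline = {"rich": 60.0, "lean": 20.0}

# Equal relative suppression: both components fall by the same fraction.
suppression = 0.5
punished = {k: r * (1 - suppression) for k, r in baseline.items()}

def relative_rate(rates):
    """Each component's share of total responding."""
    total = sum(rates.values())
    return {k: r / total for k, r in rates.items()}

print(relative_rate(baseline))   # {'rich': 0.75, 'lean': 0.25}
print(relative_rate(punished))   # identical: relative rates are preserved
```

Because the suppression is multiplicative and equal across components, it cancels out of the relative-rate ratio, which is why the same zero-intercept linear function fits before, during, and after punishment.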
How this fits with other research
Kruper (1968), published the same year, reported the same result: birds on concurrent VI schedules also showed equal proportional suppression, suggesting the effect is not specific to one procedure.
Schroeder et al. (1969) extended the test to adult humans. People on VI schedules also slowed when shock was introduced, suggesting the relation holds across species.
McKearney (1970) added variable-ratio (VR) schedules and found a twist: VR responding fell faster than VI responding as shock intensity grew, and VI responding recovered sooner when shock eased. Equal suppression, then, holds only when both components are interval schedules.
Why it matters
For BCBAs, the key point is that the size of the proportional drop is set by the punisher, not by the reinforcement rate of the behavior. If you add the same punisher to two behaviors, expect the same percentage cut even if one earns more tokens. Watch for slow recovery once the punisher is withdrawn, and plan extra teaching trials to rebuild rates.
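A minimal sketch of that check, using invented session data: compute percent suppression separately for a richly reinforced and a leanly reinforced behavior, and compare. The function name and rates are hypothetical, not from the study.

```python
# Hypothetical before/during rates (responses per minute) for two
# behaviors that earn different reinforcement rates. Numbers invented.
def percent_suppression(before: float, during: float) -> float:
    """Percent reduction from the baseline rate."""
    return 100 * (before - during) / before

rich_before, rich_during = 40.0, 10.0   # behavior earning frequent tokens
lean_before, lean_during = 8.0, 2.0     # behavior earning few tokens

print(percent_suppression(rich_before, rich_during))  # 75.0
print(percent_suppression(lean_before, lean_during))  # 75.0
```

Under the study's finding, both values should come out roughly equal despite the fourfold difference in baseline rate; a large mismatch would suggest something other than punisher strength is at work.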
02 · At a glance
03 · Original abstract
The performance of rats trained on multiple variable-interval schedules was examined before, during, and after punishment. The same linear function related relative response rates to relative density of reinforcement both in the presence and absence of punishment. Equal relative suppression was seen in both the high and low reinforcement density components. The intercept value of the function was zero. Each component of the schedule was programmed on a separate lever: thus during any component, there was an opportunity for responses on the nonoperative lever (errors). The proportions of these errors declined to a near-zero value during punishment and did not regain their prepunishment values after punishment was removed, suggesting that some discrimination learning occurred during punishment. Recovery of response rate during punishment was seen only where a greater-than-zero probability of reinforcement was associated with the response.
Journal of the Experimental Analysis of Behavior, 1968 · doi:10.1901/jeab.1968.11-147