Stock optimizing: maximizing reinforcers per session on a variable-interval schedule.
When responses are scarce, animals ration them to maximize total reinforcers per session.
01 Research in Context
What this study did
In Experiment 1, monkeys earned their daily food ration by pressing a key on a VI 3-min schedule.
In some phases the session ended after 3 hr; in others it ended after a fixed number of responses, which cut food intake and body weight. (In Experiment 2 the monkeys deposited tokens instead, and the response-limited phases ended after 150 deposits.) Either way, the animals had to 'budget' their responses to get the most food.
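The two ending rules can be mocked up in a toy simulation (a sketch of my own, not the study's apparatus; the function names, press rates, and seeds are illustrative, while the schedule values follow the abstract: VI 3-min, 3-hr time limit, 150-response cap):

```python
import random

def vi_session(rate_per_s, end_rule, mean_interval=180.0,
               session_s=10800.0, response_cap=150, seed=0):
    """Toy VI schedule. A pellet 'sets up' after an exponentially
    distributed interval (mean = mean_interval seconds); the next press
    collects it. Presses occur every 1/rate_per_s seconds. end_rule is
    'time' (stop after session_s) or 'presses' (stop after response_cap
    presses). Returns pellets earned in the session."""
    rng = random.Random(seed)
    t, presses, earned = 0.0, 0, 0
    setup = rng.expovariate(1.0 / mean_interval)
    while True:
        t += 1.0 / rate_per_s
        if end_rule == "time" and t > session_s:
            return earned
        presses += 1
        if t >= setup:                       # a pellet was waiting
            earned += 1
            setup = t + rng.expovariate(1.0 / mean_interval)
        if end_rule == "presses" and presses >= response_cap:
            return earned

def mean_earned(rate, end_rule, n=40):
    """Average pellets over n simulated sessions."""
    return sum(vi_session(rate, end_rule, seed=s) for s in range(n)) / n

for rate in (1.0, 0.02):                     # fast vs. slow pressing
    print(f"{rate}/s: time-limited ~{mean_earned(rate, 'time'):.0f} pellets, "
          f"press-limited ~{mean_earned(rate, 'presses'):.0f} pellets")
```

Under the time limit, pressing faster squeezes out a few extra pellets; under the press limit, pressing slower earns many times more food. That is the trade-off the two ending rules set up.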
What they found
When the response cap made food tight in Experiment 1, the monkeys pressed faster and spent more responses per pellet, as if hunger simply strengthened responding.
When the token cap made food tight in Experiment 2, the same restriction had the opposite effect: the monkeys responded slower and spent fewer responses per pellet, stretching a limited stock of responses to make each one count.
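The optimizing side can be put in back-of-envelope terms (my approximation, not the authors' analysis): if presses come every d seconds on a VI schedule with mean interval T, each inter-pellet cycle lasts roughly T + d/2 seconds, so a press collects a pellet with probability about d / (T + d/2), which grows as pressing slows.

```python
# Rough expected pellets from a 150-press budget on a VI 180-s schedule,
# assuming regular presses every d seconds (valid while d/(T + d/2) < 1).
T, budget = 180.0, 150

def expected_pellets(d):
    return budget * d / (T + d / 2)   # presses * pellets-per-press

for d in (2, 20, 100):
    print(f"press every {d:3d} s -> ~{expected_pellets(d):.0f} pellets")
```

With a 150-press budget, pressing every 100 s yields roughly 40 times the food of pressing every 2 s, which is why slowing down defends body weight under a response cap.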
How this fits with other research
Davis et al. (1972) saw response rates drop when reinforcement delays grew past 3 s. The 1993 study adds a new rule: when the cap on responses is discriminable, animals slow down to conserve them.
Barnard et al. (1977) showed that even a 4-s gap disrupts autoshaped pecking. The 1993 study suggests the animal, in effect, does the math and chooses to pause rather than burn scarce responses.
Corrigan et al. (1998) attributed rate loss under unsignaled delays to hopper-watching. The 1993 study shows that such losses can be strategic, not just distraction.
Why it matters
Your client may 'slow down' on DRL or limited-response tasks not from boredom but from optimization. Check whether the task has a hidden response ceiling. If you want steady responding, remove the cap or make the payoff for extra responses worth their cost.
Count the client's responses today; if you use a ceiling (e.g., '10 tokens max'), raise it and see if the rate climbs.
02 At a glance
03 Original abstract
In Experiment 1, 2 monkeys earned their daily food ration by pressing a key that delivered food according to a variable-interval 3-min schedule. In Phases 1 and 4, sessions ended after 3 hr. In Phases 2 and 3, sessions ended after a fixed number of responses that reduced food intake and body weights from levels during Phases 1 and 4. Monkeys responded at higher rates and emitted more responses per food delivery when the food earned in a session was reduced. In Experiment 2, monkeys earned their daily food ration by depositing tokens into the response panel. Deposits delivered food according to a variable-interval 3-min schedule. When the token supply was unlimited (Phases 1, 3, and 5), sessions ended after 3 hr. In Phases 2 and 4, sessions ended after 150 tokens were deposited, resulting in a decrease in food intake and body weight. Both monkeys responded at lower rates and emitted fewer responses per food delivery when the food earned in a session was reduced. Experiment 1's results are consistent with a strength account, according to which the phases that reduced body weights increased food's value and therefore increased subjects' response rates. The results of Experiment 2 are consistent with an optimizing strategy, because lowering response rates when food is restricted defends body weight on variable-interval schedules. These contrasting results may be attributed to the discriminability of the contingency between response number and the end of a session being greater in Experiment 2 than in Experiment 1. In consequence, subjects lowered their response rates in order to increase the number of reinforcers per session (stock optimizing).
Journal of the Experimental Analysis of Behavior, 1993 · doi:10.1901/jeab.1993.59-389