Humans' choices in situations of time-based diminishing returns: effects of fixed-interval duration and progressive-interval step size.
Adults pick the schedule that delivers the most reinforcement per minute, not the one they talk about.
Research in Context
What this study did
Adults chose between two buttons. One paid off after a fixed wait. The other's wait grew longer after each payoff it delivered.
The researchers changed the fixed wait and the size of the growing steps. They watched when people switched away from the growing schedule.
No one got instructions about the best plan. The team wanted to see if people followed the richer schedule or their own rules.
What they found
People switched when the fixed schedule paid more often. They did not copy a rule they said out loud.
Their moves matched a simple count: pick the side that gives more reinforcers per minute.
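The density count above can be sketched numerically. Below is a minimal toy model (not from the paper; function names and parameter values are illustrative) of the reset condition: a strategy that takes n progressive-interval (PI) reinforcers and then one fixed-interval (FI) reinforcer, which resets the PI to 0 s, repeats a cycle whose overall reinforcement rate can be compared across switch points.

```python
def cycle_rate(n, fi_duration, step):
    """Reinforcers per second for a cycle of n PI reinforcers then 1 FI reinforcer.

    PI intervals are 0, step, 2*step, ..., (n-1)*step seconds; the FI choice
    takes fi_duration seconds and resets the PI back to 0 s.
    """
    pi_time = step * n * (n - 1) / 2  # sum of the first n PI intervals
    return (n + 1) / (pi_time + fi_duration)

def best_switch_point(fi_duration, step, max_n=50):
    """Return the n (number of PI reinforcers taken before switching to FI)
    that maximizes overall reinforcement density."""
    return max(range(1, max_n + 1), key=lambda n: cycle_rate(n, fi_duration, step))

# Larger step sizes pull the optimal switch point earlier in the PI sequence;
# smaller steps push it later.
print(best_switch_point(fi_duration=30, step=5))  # → 3
print(best_switch_point(fi_duration=30, step=1))  # → 7
```

The toy model reproduces the qualitative pattern the study reports: the switch point depends jointly on FI duration and PI step size, with no verbal rule needed.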
How this fits with other research
Pierce et al. (1994) showed that spoken rules can block schedule control. The new study agrees: rules did not run the show.
Tantam et al. (1993) used the same buttons and found people quit the growing schedule sooner when it reset. The 1996 paper replicates that result and adds the step-size test.
Madden et al. (2003) later tested risky delays in a foraging game. Adults again picked the option that gave more food per minute, backing the density rule found here.
Why it matters
When you set up token boards, DRO, or break timers, think density. Clients will drift toward the side that pays more often, even if they cannot say why. Watch the rate, not their words, to predict when they will switch.
Graph the pay rate of each option in your choice setup; assign the denser one to the harder task to keep clients there longer.
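For graphing pay rates, a hypothetical helper (the function name, log format, and example values are assumptions, not from the paper) that turns a session log of payoffs into reinforcers per minute for each option:

```python
from collections import defaultdict

def pay_rates(payoffs, session_minutes):
    """Estimate each option's pay rate in reinforcers per minute.

    payoffs: list of (option_name, time_in_seconds) tuples, one per payoff.
    """
    counts = defaultdict(int)
    for option, _t in payoffs:
        counts[option] += 1
    return {option: n / session_minutes for option, n in counts.items()}

# Example: a 2-minute session where "FI" paid 3 times and "PI" paid twice.
log = [("FI", 30), ("PI", 35), ("FI", 60), ("FI", 90), ("PI", 110)]
print(pay_rates(log, session_minutes=2.0))  # → {'FI': 1.5, 'PI': 1.0}
```

Plotting these rates side by side makes it easy to predict which option clients will drift toward.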
Original abstract
Four adult humans made repeated choices between two time-based schedules of points exchangeable for money: a fixed-interval schedule and a progressive-interval schedule that began at 0 s and increased in fixed increments following each point delivered by that schedule. Under reset conditions, selection of the fixed schedule not only produced a point but also reset the progressive interval to 0 s. Reset conditions alternated with no-reset conditions, in which the progressive-interval duration was independent of fixed-interval choices. Fixed-interval duration and progressive-interval step size were varied independently across conditions. Subjects were exposed to all step sizes in ascending order at a given fixed-interval value before the value was changed. Switching from the progressive-interval schedule to the fixed-interval schedule was systematically related to fixed-interval duration, particularly under no-reset conditions. Switching occurred more frequently and earlier in the progressive-schedule sequence under reset conditions than under no-reset conditions. Overall, the switching patterns conformed closely to predictions of an optimization account based upon maximization of overall reinforcement density, and did not appear to depend on schedule-controlled response patterns or on verbal descriptions of the contingencies.
Journal of the Experimental Analysis of Behavior, 1996 · doi:10.1901/jeab.1996.65-5