Some determinants of remote behavioral history effects in humans.
The effects of old reinforcement schedules can lie dormant and then flare back up when familiar cues return.
Research in Context
What this study did
Hirai et al. (2011) asked adults without disabilities to press a button for points.
First they worked under two old schedules: an FR 5 schedule that paid after every 5th press, and a DRL 11-s schedule that paid only if at least 11 s had passed since the previous press.
Next the team switched everyone to the same FI 30-s schedule for many sessions.
Finally they flipped back to the old FR and DRL to see if the first history would wake up.
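The three schedule rules can be sketched as simple reward predicates (a minimal illustration, not the study's actual software; the function names and signatures here are assumptions):

```python
# Illustrative reward rules for the three schedules described above.
# FR 5: pay after every 5th press; DRL 11 s: pay only for spaced presses;
# FI 30 s: pay the first press after 30 s since the last reward.

def fr_reward(press_count, ratio=5):
    """Fixed ratio: reinforce every `ratio`-th press."""
    return press_count % ratio == 0

def drl_reward(time_since_last_press, wait=11.0):
    """DRL: reinforce a press only if at least `wait` seconds
    have elapsed since the previous press."""
    return time_since_last_press >= wait

def fi_reward(time_since_last_reward, interval=30.0):
    """Fixed interval: reinforce the first press made after
    `interval` seconds since the last reward."""
    return time_since_last_reward >= interval
```

Note how FR rewards fast responding while DRL punishes it, which is why the two histories push FI response rates in opposite directions.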
What they found
Right after the switch to FI, people with an FR past pressed fast and people with a DRL past pressed slow.
The difference faded the longer they stayed on FI.
When the experimenters brought the old schedules back, the fast-versus-slow pattern popped up again.
Remote history was sleeping, not gone.
How this fits with other research
Okouchi (2003) saw the same FR-fast, DRL-slow result on FI, so the 2011 study is a clean replication in humans.
Ley (2001) looks like a contradiction: rats lost the history effect after 80–100 FI sessions.
But the rat study never returned to the original schedules; the 2011 team showed the history reappears only when you probe with the old cues, so both papers can be right.
Ritchey et al. (2021) extend the idea: the longer the initial training, the more strongly behavior bounces back later, whether the response is a touchscreen swipe or a button press.
Why it matters
Your client may arrive with a hidden history of dense reinforcement or long waits.
Probe with brief reversals to see if old response speeds return; if they do, program extra practice on the new pace and thin the history schedule slowly.
Track data across setting changes—history can nap and then wake up when staff or tasks switch.
Run a one-minute reversal to the prior schedule; if speed jumps or drops, add five extra extinction or wait trials before the next session.
Original abstract
Undergraduates were exposed to a series of reinforcement schedules: first, to a fixed-ratio (FR) schedule in the presence of one stimulus and to a differential-reinforcement-of-low-rate (DRL) schedule in the presence of another (multiple FR DRL training), then to a fixed-interval (FI) schedule in the presence of a third stimulus (FI baseline), next to the FI schedule under the stimuli previously correlated with the FR and DRL schedules (multiple FI FI testing), and, finally, to a single session of the multiple FR DRL schedule again (multiple FR DRL testing). Response rates during the multiple FI FI schedule were higher under the former FR stimulus than under the former DRL stimulus. This effect of remote histories was prolonged when either the number of FI-baseline sessions was small or zero, or the time interval between the multiple FR DRL training and the multiple FI FI testing was short. Response rates under these two stimuli converged with continued exposure to the multiple FI FI schedule in most cases, but quickly differentiated when the schedule returned to the multiple FR DRL.
Journal of the Experimental Analysis of Behavior, 2011 · doi:10.1901/jeab.2011.96-387