Reinforcing staying and switching while using a changeover delay.
A short pause after switching makes learners stay longer where they are, supporting the view that local payoff, not overall rates, governs choice in the moment.
01 Research in Context
What this study did
The team set up two levers for lab rats. Each lever paid off on its own variable-interval schedule.
A 3-second changeover delay sat between the levers. If a rat switched, the first press on the new lever had to wait three seconds before it could count toward food.
The researchers recorded how long each rat stayed on one lever and how many responses it strung together before moving again.
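The changeover-delay rule described above can be sketched as a simple eligibility gate. This is an illustrative sketch, not the study's actual apparatus code; the function and variable names are assumptions.

```python
# Hypothetical sketch of a changeover-delay (COD) gate, assuming the
# 3-second delay described above. Times are seconds since session start;
# all names here are illustrative, not from the study.
COD_SECONDS = 3.0

def eligible_for_reinforcement(response_time, last_switch_time, cod=COD_SECONDS):
    """A response can earn food only once the COD has elapsed since the switch."""
    return (response_time - last_switch_time) >= cod

# The subject switches levers at t = 10.0 s, then responds at 11.0 s and 13.5 s.
switch_at = 10.0
print(eligible_for_reinforcement(11.0, switch_at))   # False: still inside the delay
print(eligible_for_reinforcement(13.5, switch_at))   # True: delay has elapsed
```

Only the second response could count toward reinforcement, which is exactly why a switch briefly "costs" the subject access to food.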
What they found
With the delay in place, rats stayed longer and pressed more before switching.
The extra time and responses lined up with the local payoff ratio at each lever. The data backed a local, not global, view of choice.
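The local relation described above can be sketched as a power function of the stay/switch reinforcement-rate ratio. The functional form and parameter values below are illustrative assumptions, not fitted values from the study.

```python
# Minimal sketch of the local-model relation: run length grows with the
# ratio of stay to switch reinforcement rates. The power-function form
# and default parameters are illustrative assumptions.
def predicted_run_length(stay_rate, switch_rate, sensitivity=1.0, bias=1.0):
    """run_length = bias * (stay_rate / switch_rate) ** sensitivity"""
    return bias * (stay_rate / switch_rate) ** sensitivity

# Doubling the stay rate relative to the switch rate doubles the
# predicted run length when sensitivity is 1.
print(predicted_run_length(40, 20))  # 2.0
print(predicted_run_length(80, 20))  # 4.0
```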
How this fits with other research
Stubbs et al. (1970) first showed that short bursts right after a switch drive matching. MacDonall (2003) extends that idea by showing the same local rule also sets run length and visit time.
Pliskoff et al. (1981) varied the number of responses needed to switch and saw under- or over-matching. The 2003 study swaps the response requirement for a time delay and still finds matching shifts, showing the rule holds across different switch costs.
Matson et al. (2004) support the generalized matching law with noncontingent reinforcement. MacDonall (2003) appears to push back, yet the clash is only apparent: Matson et al. looked at overall rates, while MacDonall zoomed in on moment-by-moment control. Both can be true at once.
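For contrast with the local model, the generalized matching law's molar form can be sketched directly. The parameter values here are illustrative, not taken from any of the cited studies.

```python
import math

# Sketch of the generalized matching law in its molar (overall-rates) form:
# log(B1/B2) = sensitivity * log(r1/r2) + log_bias
# Parameter values are illustrative assumptions.
def gml_log_response_ratio(r1, r2, sensitivity=0.9, log_bias=0.0):
    """Predicted log response ratio from overall reinforcement rates r1, r2."""
    return sensitivity * math.log10(r1 / r2) + log_bias

# With equal reinforcement rates, the predicted response ratio is 1:1.
ratio = 10 ** gml_log_response_ratio(30, 30)
print(round(ratio, 3))  # 1.0
```

The molar form says nothing about when switches happen within a session, which is exactly the gap the local model addresses.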
Why it matters
If you want a client to stick with a task longer, insert a brief pause after a switch before reinforcement becomes available. The small delay makes the current option pay off more in that moment, so the learner stays put. Try it when you need longer work bursts during concurrent tasks or at free-choice stations.
Add a 3-second pause before reinforcement can start right after a client moves to a new center or task.
03 Original abstract
Performance on concurrent schedules can be decomposed to run lengths (the number of responses before switching alternatives), or visit durations (time at an alternative before switching alternatives), that are a function of the ratio of the rates of reinforcement for staying and switching. From this analysis, a model of concurrent performance was developed and examined in two experiments. The first exposed rats to variable-interval schedules for staying and for switching, which included a changeover delay for reinforcers following a switch. With the changeover delay, run lengths and visit durations were functions of the ratios of the rates of reinforcement for staying and for switching, as found by previous research not using a changeover delay. The second directly assessed the effect of a changeover delay on run lengths and visit durations. Each component of a multiple schedule consisted of equivalent stay and switch schedules but only one component included a changeover delay. Run lengths and visit durations were longer when a changeover delay was used. Because visit duration is the reciprocal of changeover rate, these results are consistent with the established finding that a changeover delay reduces the frequency of switching. Together these results support the local model of concurrent performance as an alternative to the generalized matching law as a model of concurrent performance. The local model may be preferred when accounting for more molecular aspects of concurrent performance.
Journal of the Experimental Analysis of Behavior, 2003 · doi:10.1901/jeab.2003.79-219