Dynamical concurrent schedules.
Behavior can track reinforcer ratios that change continuously within a trial, with no need for long, stable baseline ratios.
01 Research in Context
What this study did
The team ran pigeons on two side-by-side VI keys. Across each fixed 5-min trial, the two VI values changed continuously and in opposite directions: one key went from VI 15 s to VI 480 s while the other went from VI 480 s to VI 15 s.
They then tracked how closely the birds' peck ratio followed the shifting payoff ratio.
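The paper does not spell out the exact interpolation rule for the interlocked schedules, so the ramp below is an illustrative assumption (a geometric ramp, which makes the scheduled reinforcer-rate ratio change log-linearly), and `vi_pair` is a hypothetical helper name, not the authors' code:

```python
import math

def vi_pair(t, trial_len=300.0, vi_min=15.0, vi_max=480.0):
    """Illustrative reciprocally interlocked VI values at time t
    (seconds into a fixed 5-min trial). Key 1 ramps from VI 15 s up
    to VI 480 s while key 2 ramps down in mirror image. A geometric
    ramp is assumed here; the paper's exact rule is not given."""
    frac = min(max(t / trial_len, 0.0), 1.0)   # fraction of trial elapsed
    # Geometric interpolation between the two VI extremes.
    vi1 = vi_min * (vi_max / vi_min) ** frac   # 15 s -> 480 s
    vi2 = vi_max * (vi_min / vi_max) ** frac   # 480 s -> 15 s
    return vi1, vi2

# The schedules mirror each other: at the trial midpoint both keys
# pass through the same intermediate VI value (sqrt(15 * 480) s).
v1, v2 = vi_pair(150.0)
```

Under this assumed ramp the two keys always trade off symmetrically, which is what "reciprocally interlocked" requires.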
What they found
The birds kept up. Their response ratios tracked the changing reinforcer ratios with a sensitivity of about 0.62 (undermatching), a mean absolute bias of about 0.11, and r² of about 0.86. No long baseline was needed.
In plain words, the matching law still held even when the odds were moving moment to moment.
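That sensitivity figure comes from fitting the generalized matching law, log(B1/B2) = a · log(R1/R2) + log c, where a is sensitivity and log c is bias. A minimal sketch of such a fit, using ordinary least squares in log-log coordinates; the data points are made-up illustrations (generated with a = 0.62), not the study's values:

```python
import math

def fit_gml(resp_ratios, rft_ratios):
    """Least-squares fit of the generalized matching law:
    log(B1/B2) = a * log(R1/R2) + log c.
    Returns (a, log_c): sensitivity and log bias."""
    xs = [math.log10(r) for r in rft_ratios]
    ys = [math.log10(b) for b in resp_ratios]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope of the log-log regression line = sensitivity a.
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    log_c = my - a * mx                      # intercept = log bias
    return a, log_c

# Hypothetical data: response ratios that undermatch (a < 1) the
# programmed reinforcer ratios, built to mimic the paper's a of 0.62.
rft = [1 / 8, 1 / 4, 1 / 2, 1, 2, 4, 8]
resp = [r ** 0.62 for r in rft]              # pure undermatching, no bias
a, log_c = fit_gml(resp, rft)
```

A sensitivity below 1 means the response ratio moves in the same direction as the reinforcer ratio but less extremely, which is the usual finding on concurrent VI VI schedules.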
How this fits with other research
Macdonald et al. (1973) first showed matching with steady VI-VI schedules. The new study adds fast, repeated shifts to that story.
Davison et al. (1989) found no change in choice when feedback about overall reinforcer rate was added. That looks like a clash, but they tested a different manipulation: extra feedback, not moving reinforcer ratios. The two papers address different questions, so there is no real conflict.
Bell (1999) looked at tiny time gaps between responses. Together these papers show choice can flex both in the moment and across fast schedule swaps.
Why it matters
You can stop waiting for long, stable baselines when you probe choice. Shift the payoff odds within a session and watch the client adjust. Quick reversals save time and show right away whether reinforcement is driving the behavior.
Flip the VR or VI values of two choices mid-session and watch the response ratio shift within five minutes.
03 Original abstract
Previous work on the matching law has predominantly focused on the molar effects of the contingency by examining only one reinforcer ratio for extended periods. Responses are distributed as a function of reinforcer ratios under these static conditions. But the outcome under a dynamic condition in which reinforcer ratios change continuously has not been determined. The present study implemented concurrent variable-interval schedules that changed continuously across a fixed 5-min trial. The schedules were reciprocally interlocked. The variable interval for one key changed continuously from a variable-interval 15-s to a variable-interval 480-s, while the schedule for the other key changed from a variable-interval 480-s to a variable-interval 15-s. This dynamical concurrent schedule shifted behavior in the direction of matching response ratios to reinforcer ratios. Sensitivities derived from the generalized matching law were approximately 0.62, the mean absolute bias was approximately 0.11, and r2s were approximately 0.86. It was concluded that choice behavior can come to adapt to reinforcer ratios that change continuously over a relatively short time and that this change does not require extensive experience with a fixed reinforcer ratio. The results were seen as supportive of the view that all behavior constitutes choice.
Journal of the Experimental Analysis of Behavior, 2003 · doi:10.1901/jeab.2003.79-1