Run length, visit duration, and reinforcers per visit in concurrent performance.
Reinforcement probabilities alone shape how long a client sticks with an activity and how many responses they make before switching.
01 Research in Context
What this study did
Researchers set up two side-by-side keys for pigeons. Each key paid off on its own variable-interval (VI) schedule. The birds could stay on one key or hop to the other at any time.
The team recorded every peck and switch. They measured how long each bird stayed on a key (visit duration) and how many pecks it made before leaving (run length).
What they found
Visit durations and run lengths tracked the programmed payoff odds. When the left key paid twice as often, birds stayed there about twice as long and pecked about twice as much before switching.
The numbers fit the generalized matching law. Stay/switch contingencies alone produced orderly, predictable choice patterns.
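The reported fit can be illustrated with a few lines of code. Below is a minimal sketch that fits the generalized matching law, log(B1/B2) = a·log(R1/R2) + log b, by ordinary least squares. The data, function, and variable names here are illustrative assumptions, not values from the study.

```python
import math

# Generalized matching law: log(B1/B2) = a * log(R1/R2) + log(b).
# Fit sensitivity (a) and log bias (log b) by ordinary least squares
# on hypothetical behavior-ratio and reinforcer-ratio data.

def fit_matching(behavior_ratios, reinforcer_ratios):
    """Return (sensitivity a, log bias) from paired ratios."""
    xs = [math.log10(r) for r in reinforcer_ratios]
    ys = [math.log10(b) for b in behavior_ratios]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx          # slope: sensitivity to reinforcement ratio
    log_b = my - a * mx    # intercept: bias toward one alternative
    return a, log_b

# Perfect matching example: behavior ratios equal reinforcer ratios,
# so sensitivity should come out near 1 and bias near 0.
a, log_b = fit_matching([0.5, 1.0, 2.0, 4.0], [0.5, 1.0, 2.0, 4.0])
print(round(a, 3), round(log_b, 3))
```

Sensitivity near 1 with bias near 0 is the "strict matching" case; real data sets typically show sensitivities somewhat below 1.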
How this fits with other research
Mechner (1958) first showed that run lengths depend on the prevailing reinforcement contingencies. Macdonall (1998) adds the matching law to that story: the longer runs aren't random; they scale with the relative reinforcement rate.
Gabriels et al. (2001) extend the idea. They used log-survivor plots to show that reinforcement strengthens two things: starting a bout and keeping it going. Macdonall (1998) captures those same bouts in plain time and count units.
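For readers unfamiliar with the technique, a log-survivor function just plots the proportion of inter-response times longer than each cutoff on a log axis; a "broken-stick" shape separates fast within-bout responding from slow bout initiations. Here is a minimal sketch using hypothetical inter-response times; the data and names are illustrative, not from Gabriels et al.:

```python
import math

# Log-survivor function of inter-response times (IRTs):
# survivor(t) = proportion of IRTs longer than t. On a log scale,
# a steep early limb reflects within-bout responding and a shallow
# late limb reflects pauses between bouts.

def log_survivor(irts, ts):
    """Return log10 survivor proportions at each cutoff in ts.

    Each cutoff must be shorter than the longest IRT, otherwise the
    survivor proportion is zero and log10 is undefined.
    """
    n = len(irts)
    return [math.log10(sum(1 for irt in irts if irt > t) / n) for t in ts]

# Hypothetical IRTs (seconds): mostly fast pecks, two long pauses.
irts = [0.4, 0.5, 0.6, 0.5, 8.0, 0.4, 0.7, 12.0, 0.5, 0.6]
vals = log_survivor(irts, [0.45, 1.0, 10.0])
print(vals)
```

The sharp drop between the short and long cutoffs is the signature that responses cluster into bouts rather than occurring at a single steady rate.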
Hachiga et al. (2010) look like they clash. They broke up long runs on purpose with a special algorithm. Macdonall (1998) lets long runs grow naturally. Both studies agree: run length is under environmental control; the difference is whether you let it grow or step in to shrink it.
Why it matters
You can now see matching in micro-behavior, not just overall rates. If a client hops between tasks, measure seconds or responses per bout. When the richer task pays more, longer bouts there tell you matching is working. Use that data to fine-tune reinforcement before problem switching appears.
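One way to tally such bout data is sketched below. The event-log format, field names, and numbers are hypothetical, not a published protocol: each entry records when a visit to a task began and how many responses occurred during it, and the next entry's timestamp marks when that visit ended.

```python
from collections import defaultdict

# Summarize visit durations (seconds) and run lengths (responses per
# visit) from a hand-timed event log. Each entry is
# (start_time_sec, task, responses_in_visit); the final entry only
# closes the last visit. All data here are hypothetical.

def summarize_bouts(events):
    """Return {task: (mean visit duration, mean run length)}."""
    durations = defaultdict(list)
    runs = defaultdict(list)
    for (start, task, responses), (end, _, _) in zip(events, events[1:]):
        durations[task].append(end - start)
        runs[task].append(responses)
    return {t: (sum(d) / len(d), sum(runs[t]) / len(runs[t]))
            for t, d in durations.items()}

log = [(0, "A", 12), (60, "B", 4), (80, "A", 10), (130, "B", 5), (150, "A", 0)]
summary = summarize_bouts(log)
print(summary)
```

Dividing task A's mean duration by task B's gives the behavior ratio to compare against the tasks' payoff ratio; if the two ratios track each other across sessions, matching is at work.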
Time one client’s stays and switches across two tasks; plot duration against payoff rate to see matching in action.
03 Original abstract
The contingencies in each alternative of concurrent procedures consist of reinforcement for staying and reinforcement for switching. For the stay contingency, behavior directed at one alternative earns and obtains reinforcers. For the switch contingency, behavior directed at one alternative earns reinforcers but behavior directed at the other alternative obtains them. In Experiment 1, responses on the main lever, in S1, incremented stay and switch schedules and obtained a stay reinforcer when it became available. Responses on the switch lever changed S1 to S2 and obtained switch reinforcers when available. In S2, neither responses on the main lever nor on the switch lever were reinforced, but a switch response changed S2 to S1. Run lengths and visit durations were a function of the ratio of the scheduled probabilities of reinforcement (staying/switching). From run lengths and visit durations, traditional concurrent performance was synthesized, and that synthesized performance was consistent with the generalized matching law. Experiment 2 replicated and extended this analysis to concurrent variable‐interval schedules. The synthesized results challenge any theory of matching that requires a comparison among the alternatives.
Journal of the Experimental Analysis of Behavior, 1998 · doi:10.1901/jeab.1998.69-275