ABA Fundamentals

Concurrent schedules: undermatching and control by previous experimental conditions.

Davison et al. (1979) · Journal of the Experimental Analysis of Behavior
★ The Verdict

Let every concurrent-schedule condition run at least six sessions so that last week's payoff history does not masquerade as today's preference data.

✓ Read this if: you're a BCBA who uses concurrent-operant preference assessments in clinic or classroom.
✗ Skip if: you only run single-schedule reinforcement programs.

01 Research in Context

01

What this study did

Researchers ran pigeons on concurrent VI-VI schedules. They switched the payoff rates and watched what happened next.

They wanted to know how many sessions the birds kept acting like the old schedule was still in force.

02

What they found

For the first several sessions the birds 'undermatched' — they pecked the richer side less than the matching law predicted.

By session six the bias was mostly gone. The old reinforcement history had washed out.
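Undermatching is usually expressed with the generalized matching law, log(B1/B2) = a · log(R1/R2) + log b, where a sensitivity value a below 1 means the animal shifts less toward the richer side than the reinforcer ratio warrants. A minimal sketch (the sensitivity values below are illustrative, not the paper's fitted parameters):

```python
import math

def log_response_ratio(r1, r2, sensitivity=1.0, log_bias=0.0):
    """Generalized matching law: log(B1/B2) = a * log(R1/R2) + log b."""
    return sensitivity * math.log10(r1 / r2) + log_bias

# Suppose the richer key pays four times as many reinforcers.
perfect = log_response_ratio(4, 1, sensitivity=1.0)  # strict matching
early = log_response_ratio(4, 1, sensitivity=0.6)    # undermatching, early sessions

print(f"strict matching: B1/B2 = {10**perfect:.2f}")  # 4.00
print(f"undermatching:   B1/B2 = {10**early:.2f}")    # 2.30
```

With sensitivity 0.6, a 4:1 reinforcer ratio yields only about a 2.3:1 response ratio — the "pecking less toward the richer side" the study observed in sessions shortly after a condition change.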

03

How this fits with other research

Landon et al. (2002) extend this idea. They split history into two parts: a quick post-reinforcer hop and a slow ratio-tracking drift. Davison et al. found the slow part; Landon et al. showed the fast part also matters, especially when ratios are extreme.

Hopkins et al. (1977) came first. They showed undermatching is normal, not noise. Davison et al. explained one reason why — carry-over from the last condition.

Blue et al. (1971) looked at abrupt versus gradual changeover delays. They warned that sudden jumps can distort obtained rates. Davison et al. give the fix: stay in each condition long enough for those distortions to settle.

04

Why it matters

When you run concurrent schedules in the clinic — say, a child chooses between two tasks — give at least six stable sessions before you record 'true' preference. If you switch too soon, you may see undermatching that is just old history, not a failure to match. Let the past wash out first.
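The wash-out dynamic can be sketched with a toy carry-over model: each session's preference is a blend of the current reinforcement ratio and the previous session's preference. The abstract reports that prior-session effects can be summarized by the immediately preceding session's performance; the 50/50 weighting below is an illustrative assumption, not the paper's fitted relation.

```python
import math

def session_preference(current_log_ratio, prev_preference, carryover=0.5):
    # Toy hysteresis: blend the current scheduled log reinforcer ratio
    # with the previous session's preference (carryover weight is assumed).
    return (1 - carryover) * current_log_ratio + carryover * prev_preference

old_condition = math.log10(1 / 4)  # old schedules favored the other key
new_condition = math.log10(4 / 1)  # new schedules reverse the ratio

pref = old_condition
for session in range(1, 7):
    pref = session_preference(new_condition, pref)
    print(f"session {session}: log preference = {pref:+.3f}")
```

Preference starts biased toward the old condition and climbs toward the new asymptote (+0.602), landing close to it by session six — the same shape as the real data, where bias toward the last condition largely disappeared before the sixth session.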

→ Action — try this Monday

Count six full stable sessions before you trust the numbers in your concurrent-choice assessment.

02 At a glance

Intervention: not applicable
Design: single-case (other)
Sample size: 5
Population: neurotypical
Finding: not reported

03 Original abstract

Five pigeons were trained on concurrent variable-interval schedules. A series of conditions in which the ratio of reinforcement rates on two keys was progressively increased and then decreased was arranged twice. The birds were then exposed to an irregular sequence of conditions. Each condition in which reinforcement was available on both keys lasted six sessions. Performance in the first, third, and sixth sessions after a condition change was analyzed. Following a condition change, preference was biased toward the preference in the last condition, but this effect largely disappeared before the sixth session of training. The birds' preferences also appeared less sensitive to reinforcement rates in early sessions after a transition. Preference in a session was a function of both the reinforcements in that session and the reinforcements obtained in as many as four or five previous sessions. The effects of reinforcements in previous sessions could be summarized by the performance in the immediately preceding session, giving a relatively simple relation between present performance and a combination of present reinforcement and prior session performance. While such hysteresis could cause undermatching when only a small number of sessions are arranged in a condition, undermatching in a stable-state performance probably arises elsewhere.

Journal of the Experimental Analysis of Behavior, 1979 · doi:10.1901/jeab.1979.32-233