ABA Fundamentals

Concurrent fixed-interval variable-ratio schedules and the matching relation.

Rider (1981) · Journal of the Experimental Analysis of Behavior
★ The Verdict

Rats pick VR over FI even when it hurts their total haul, showing VR’s strong pull in concurrent schedules.

✓ Read this if you're a BCBA designing token boards or choice programs with mixed reinforcement schedules.
✗ Skip if you're a clinician who only uses simple fixed-ratio or interval schedules.

01 Research in Context

01

What this study did

Rider (1981) let rats choose between two levers. One lever paid on a fixed-interval schedule. The other paid on a variable-ratio schedule.

The team logged how many times the rats pressed each lever and how long they stayed on each side.

02

What they found

The rats favored the VR lever. Their response ratios fell short of strict matching and showed a clear bias toward VR.

Time ratios also under-matched the reinforcement ratios. In plain words, the animals worked harder for VR pellets even when the pay-off math said FI was just as good.
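The study described these results with Baum's (1974) generalized matching equation, log(B1/B2) = a·log(R1/R2) + log(b), where a slope a below 1 means under-matching and an intercept log(b) above 0 means bias toward alternative 1. Here is a minimal sketch of fitting that equation across conditions; the session counts are invented for illustration and are not Rider's data.

```python
import math

def fit_matching(conditions):
    """Least-squares fit of log(B1/B2) = a*log(R1/R2) + log(b).

    Each condition is (VR responses, FI responses,
    VR reinforcers, FI reinforcers). Returns the
    sensitivity a and the bias b.
    """
    xs = [math.log10(r1 / r2) for (_, _, r1, r2) in conditions]
    ys = [math.log10(b1 / b2) for (b1, b2, _, _) in conditions]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    log_b = my - a * mx
    return a, 10 ** log_b

# Hypothetical sessions (illustrative numbers only):
conditions = [
    (900, 400, 30, 20),
    (700, 500, 25, 25),
    (1100, 300, 40, 15),
]
a, b = fit_matching(conditions)
# a < 1 would indicate under-matching; b > 1 a bias toward VR,
# which is the pattern Rider reported for all five rats.
```

With these made-up counts the fit gives a just under 1 and b well above 1, i.e. the same qualitative signature (under-matching plus VR bias) the study found.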

03

How this fits with other research

Davison et al. (1984) later asked whether matching on concurrent VI-VR schedules is even real. Their pigeons and computer models suggested the appearance of matching can emerge from the schedule itself, not from any internal rule. Rider's 1981 rat data fit this warning: under-matching and bias hint that the schedule, not the animal, is driving the numbers.

Green et al. (1999) zoomed in on single visits. They saw rats take quick peeks at the lean VI side while still earning most pellets from VR. That visit pattern lines up with Rider's bias data and suggests the animals may be optimizing the whole session, not matching moment to moment.

Herrnstein et al. (1979) ran the numbers first. Their pigeons lost about 60 reinforcers an hour by matching instead of maximizing. Rider's rats show the same cost: they stick with VR even when it is not the richest deal second by second.

04

Why it matters

When you set up concurrent schedules for a client, remember that VR schedules pull behavior like a magnet. Clients may over-work the VR option and under-use the FI side even if the FI pays better overall. Watch for this bias, measure pay-offs, and be ready to re-balance the contingencies or add prompts so the client gets the most reinforcement for the least effort.

→ Action — try this Monday

Track response and reinforcer counts on each side of your concurrent program; if VR is hogging behavior, raise the VR requirement or enrich the FI side.
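As a sketch of what that tracking could look like in practice, the check below compares the response ratio to the reinforcer ratio for a hypothetical session; the function name, the counts, and the 25% tolerance are all illustrative assumptions, not part of the study.

```python
def check_allocation(vr_responses, fi_responses,
                     vr_reinforcers, fi_reinforcers):
    """Flag sessions where the response ratio outruns the
    reinforcer ratio, i.e. VR is pulling behavior beyond
    what its pay-off justifies."""
    response_ratio = vr_responses / fi_responses
    reinforcer_ratio = vr_reinforcers / fi_reinforcers
    # 25% tolerance is an arbitrary clinical threshold for this sketch.
    vr_biased = response_ratio > 1.25 * reinforcer_ratio
    return response_ratio, reinforcer_ratio, vr_biased

# Hypothetical session data (illustrative only):
resp, rft, biased = check_allocation(800, 300, 20, 18)
# Responses favor VR almost 3:1 while reinforcers are near 1:1,
# so biased is True: consider raising the VR requirement or
# enriching the FI side.
```

The same comparison works for any two alternatives in a concurrent program; the point is simply to put response counts and reinforcer counts side by side before re-balancing.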

02 At a glance

Intervention: not applicable
Design: other
Sample size: 5
Population: neurotypical
Finding: not reported

03 Original abstract

Five rats responded under concurrent fixed-interval variable-ratio schedules of food reinforcement. Fixed-interval values ranged from 50 seconds to 300 seconds and variable-ratio values ranged from 30 to 360; a five-second changeover delay was in effect throughout the experiment. The relations between reinforcement ratios obtained from the two schedules and the ratios of responses and time spent on the schedules were described by Baum's (1974) generalized matching equation. All subjects undermatched both response and time ratios to reinforcement ratios, and all subjects displayed systematic bias in favor of the variable-ratio schedules. Response ratios undermatched reinforcement ratios less than did time ratios, but response ratios produced greater bias than did time ratios for every subject and for the group as a whole. Local rates of responding were generally higher on the variable-ratio than on the fixed-interval schedules. When responding was maintained by both schedules, a period of no responding on either schedule immediately after fixed-interval reinforcement typically was followed by high-rate responding on the variable-ratio schedule. At short fixed-interval values, when a changeover to the fixed-interval schedule was made, responding usually continued until fixed-interval reinforcement was obtained; at longer values, a changeover back to the variable-ratio schedule usually occurred when fixed-interval reinforcement was not forthcoming within a few seconds, and responding then alternated between the two schedules every few seconds until fixed-interval reinforcement finally was obtained.

Journal of the Experimental Analysis of Behavior, 1981 · doi:10.1901/jeab.1981.36-317