ABA Fundamentals

Choice with frequently changing food rates and food ratios.

Baum et al. (2014) · Journal of the Experimental Analysis of Behavior
★ The Verdict

A steady payoff schedule pulls behavior like a magnet, so learners may stick with the safe key and only briefly test new options.

✓ Read this if: you're a BCBA using concurrent reinforcement schedules in skill-building or preference assessments.
✗ Skip if: you work only with fixed-ratio or single-schedule programs.

01 Research in Context

01

What this study did

Pigeons pecked two keys for food. One key always paid off on the same variable-interval (VI) schedule. The other key's payoff odds changed several times within each session.

Researchers watched how the birds split pecks from second to second and across the whole hour.
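The two-key arrangement can be sketched in code. This is my own simplified simulation, not the authors' procedure: it assumes one response per second, exponentially distributed arming intervals, and an agent that pecks the left key with a fixed probability.

```python
import random

def run_concurrent_vi(mean_left, mean_right, p_left, n_seconds=3600, seed=1):
    """Simulate two concurrent VI schedules, one peck per second.

    mean_left / mean_right: mean scheduled interval in seconds.
    p_left: probability the agent pecks the left key on a given second.
    Returns (reinforcers_left, reinforcers_right).
    """
    rng = random.Random(seed)
    # A VI schedule arms after a (roughly exponential) interval, then
    # holds the reinforcer until the matching key is pecked.
    next_left = rng.expovariate(1 / mean_left)
    next_right = rng.expovariate(1 / mean_right)
    armed_left = armed_right = False
    got_left = got_right = 0
    for t in range(n_seconds):
        if t >= next_left:
            armed_left = True
        if t >= next_right:
            armed_right = True
        if rng.random() < p_left:          # peck left
            if armed_left:
                got_left += 1
                armed_left = False
                next_left = t + rng.expovariate(1 / mean_left)
        else:                              # peck right
            if armed_right:
                got_right += 1
                armed_right = False
                next_right = t + rng.expovariate(1 / mean_right)
    return got_left, got_right

# A rich left VI (30 s) against a lean right VI (120 s), indifferent pecking:
left, right = run_concurrent_vi(mean_left=30, mean_right=120, p_left=0.5)
print(left, right)  # the richer left key collects more reinforcers
```

Because a VI schedule holds an armed reinforcer until collected, even an indifferent pecker harvests most of what each schedule sets up, which is why preference alone (not obtained food) must do the discriminating.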

02

What they found

Birds quickly leaned on the steady key when the changing key went lean.

Over the full hour the birds still matched the overall payoff ratio, but moment-to-moment they used a fix-and-sample rule: stay on the safe key, briefly test the other.
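"Matched the overall payoff ratio" refers to the generalized matching law: the behavior ratio equals the reinforcer ratio raised to a sensitivity exponent, times a bias term. A minimal sketch of the ratio-form prediction (function name and default parameters are my own illustration):

```python
def predicted_behavior_ratio(r_left, r_right, sensitivity=1.0, bias=1.0):
    """Generalized matching law: B_L / B_R = bias * (r_L / r_R) ** sensitivity.

    sensitivity = 1 and bias = 1 gives strict matching.
    """
    return bias * (r_left / r_right) ** sensitivity

# Strict matching: a 3:1 reinforcer ratio predicts a 3:1 response ratio.
print(predicted_behavior_ratio(90, 30))                   # 3.0
# Undermatching (sensitivity < 1) flattens preference toward indifference.
print(predicted_behavior_ratio(90, 30, sensitivity=0.8))
```

The study's point is that this hour-long aggregate can look like clean matching even while the moment-to-moment pattern is fix-and-sample rather than graded allocation.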

03

How this fits with other research

Rilling et al. (1969) showed time allocation follows payoff ratios. The new study keeps that big-picture match but adds a wrinkle: a fixed schedule acts like a bright streetlight—animals crowd it when the other side darkens.

Mellitz et al. (1983) showed pigeons track the best immediate odds. The 2014 paper extends that idea; the fixed key gives a stable reference point, so birds sample the risky key only long enough to check whether its odds improved.

Kirkpatrick-Steger et al. (1996) saw choice drift when different foods are used. Here, drift comes from schedule constancy, not food type, showing within-session shifts have more than one cause.

04

Why it matters

Your client may “fix” on a reliable reinforcer and only briefly sample new tasks. If you want broader responding, keep reinforcement rates similar across choices or teach a rule to break the fix-and-sample habit.

→ Action — try this Monday

Balance the reinforcement rates across choices within a session, then watch if the learner still fixates on one option.
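One way to operationalize that check is to compare the learner's response share against the obtained reinforcer share within the session. This helper and its 0.25 threshold are hypothetical illustrations for data review, not a procedure from the study:

```python
def allocation_report(responses_a, responses_b, reinforcers_a, reinforcers_b):
    """Compare response allocation to obtained reinforcer allocation.

    Returns (response_share_a, reinforcer_share_a, fixated), where
    `fixated` flags responding far more extreme than the obtained
    reinforcer ratio would predict (0.25 is an arbitrary threshold).
    """
    resp_share = responses_a / (responses_a + responses_b)
    rft_share = reinforcers_a / (reinforcers_a + reinforcers_b)
    fixated = abs(resp_share - rft_share) > 0.25
    return resp_share, rft_share, fixated

# Reinforcement is balanced (12 vs. 12) but responding is 9:1 -> fixation.
print(allocation_report(90, 10, 12, 12))  # (0.9, 0.5, True)
```

If the flag stays raised after rates are equalized, the fix-and-sample pattern, not reinforcement imbalance, is the better explanation for the lopsided responding.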

02 At a glance

Intervention: not applicable
Design: single-case (other)
Population: other
Finding: not reported

03 Original abstract

In studies of operant choice, when one schedule of a concurrent pair is varied while the other is held constant, the constancy of the constant schedule may exert discriminative control over performance. In our earlier experiments, schedules varied reciprocally across components within sessions, so that while food ratio varied food rate remained constant. In the present experiment, we held one variable-interval (VI) schedule constant while varying the concurrent VI schedule within sessions. We studied five conditions, each with a different constant left VI schedule. On the right key, seven different VI schedules were presented in seven different unsignaled components. We analyzed performances at several different time scales. At the longest time scale, across conditions, behavior ratios varied with food ratios as would be expected from the generalized matching law. At shorter time scales, effects due to holding the left VI constant became more and more apparent, the shorter the time scale. In choice relations across components, preference for the left key leveled off as the right key became leaner. Interfood choice approximated strict matching for the varied right key, whereas interfood choice hardly varied at all for the constant left key. At the shortest time scale, visit patterns differed for the left and right keys. Much evidence indicated the development of a fix-and-sample pattern. In sum, the procedural difference made a large difference to performance, except for choice at the longest time scale and the fix-and-sample pattern at the shortest time scale.

Journal of the Experimental Analysis of Behavior, 2014 · doi:10.1002/jeab.70