ABA Fundamentals

Development and maintenance of choice in a dynamic environment.

Rodewald et al. (2010) · Journal of the Experimental Analysis of Behavior
★ The Verdict

Choice patterns look stable early, yet preference pulses keep growing long after you think the bird—or client—has decided.

✓ Read this if you're a BCBA running long-term concurrent-schedule reinforcer assessments or teaching choice-making to learners with developmental disabilities.
✗ Skip if you're a clinician who only needs a quick one-day preference probe.

01 Research in Context

01

What this study did

Researchers watched four pigeons peck two keys for food across 100 sessions. Each key paid off on its own variable-interval schedule, and seven left∶right reinforcer ratios (from 1∶27 to 27∶1) rotated randomly within each session. The team tracked every peck to see how choice patterns developed over time.

They used fine-grained visit analyses, not just session totals. This let them spot small shifts that averages hide.
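To illustrate why per-peck logs matter, here is a minimal Python sketch (the function names and the toy 'L'/'R' response stream are illustrative, not from the study) contrasting a session total with a visit-level breakdown:

```python
import itertools

# Toy response stream: each character is one peck ('L' = left key, 'R' = right key).
responses = list("LLLRLLLLRRLLLR")

def session_preference(pecks):
    """Session-wide proportion of left-key responses (the aggregate measure)."""
    return pecks.count("L") / len(pecks)

def visit_lengths(pecks):
    """Visit analysis: split the stream into consecutive runs ('visits') on one key."""
    return {key: [len(list(run)) for k, run in itertools.groupby(pecks) if k == key]
            for key in ("L", "R")}

print(session_preference(responses))   # one number for the whole session
print(visit_lengths(responses))        # the fine-grained structure that number hides
```

Two streams with identical session totals can have very different visit structure, which is exactly the local dynamics the session-wide ratio averages away.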

02

What they found

Birds settled into a basic preference within the first few days. But the size of post-reinforcer 'preference pulses' kept growing right through the 100-session study.

Continuation effects—brief stretches of sticking with the just-paid side—also kept getting stronger. In short, the birds never stopped fine-tuning their choices.
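One way to quantify a preference pulse, sketched here under assumed data conventions (the study's actual analyses follow Davison and Baum's procedures), is the probability of staying on the just-reinforced key at each peck after food:

```python
def preference_pulse(pecks, reinforced_idx, window=5):
    """P(peck is on the just-reinforced key) at lags 1..window after food.

    pecks: sequence of 'L'/'R' responses in order.
    reinforced_idx: indices of pecks that produced food.
    Returns one proportion per lag (None if no data reached that lag).
    """
    stay = [0] * window
    total = [0] * window
    for i in reinforced_idx:
        key = pecks[i]                      # the key that just paid off
        for lag in range(1, window + 1):
            j = i + lag
            if j >= len(pecks):
                break
            total[lag - 1] += 1
            stay[lag - 1] += (pecks[j] == key)
    return [s / t if t else None for s, t in zip(stay, total)]

# A pulse shows up as proportions near 1.0 at short lags that decay
# back toward the baseline preference at longer lags.
pulse = preference_pulse(list("LLLLRLLLLR"), reinforced_idx=[0, 5], window=3)
```

Growing pulse magnitude across sessions would appear as these short-lag proportions creeping upward from week to week.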

03

How this fits with other research

Carr et al. (1985) warned that session-wide ratios can mask local dynamics. Rodewald et al. (2010) show the same masking happens across months: aggregates miss steady growth in pulse size.

Gomes-Ng et al. (2017) later argued visit analyses beat residual pulse corrections. The 2010 data support that view: only visit-level tracking revealed the slow upward creep.

Boutros et al. (2011) split reinforcer effects into immediate discriminative effects versus long-term strengthening. Rodewald et al. (2010) supply the longitudinal evidence: single reinforcers bias the next response, while strings of them slowly enlarge preference pulses.

04

Why it matters

When you run concurrent-operant preference assessments, do not stop after a week. Keep probes brief, but repeat them across months if you want the true steady state. Plot local visit patterns, not just overall percentages—those tiny post-reinforcer bumps you see in week one may double by week ten, guiding better reinforcer selection and schedule thinning.

→ Action — try this Monday

Graph visit-level preference pulses across at least five separate days; if the post-reinforcer bump keeps rising, keep assessing before you lock in the schedule.
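A simple way to operationalize "keeps rising" (a heuristic sketch, not a criterion from the paper) is a least-squares slope over the daily pulse magnitudes:

```python
def rising_trend(daily_pulse, tol=0.0):
    """Least-squares slope of mean post-reinforcer stay probability by day.

    daily_pulse: one mean pulse magnitude per assessment day, in order.
    Returns (still_rising, slope); a slope above tol suggests preference
    is still developing, so keep assessing before locking in the schedule.
    """
    n = len(daily_pulse)
    xbar = (n - 1) / 2
    ybar = sum(daily_pulse) / n
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(daily_pulse))
    den = sum((x - xbar) ** 2 for x in range(n))
    slope = num / den
    return slope > tol, slope

# Five days of pulse magnitudes that are still climbing:
keep_going, slope = rising_trend([0.60, 0.65, 0.70, 0.75, 0.80])
```

With noisy clinical data, eyeballing the graphed pulses alongside the slope is safer than trusting the number alone.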

02 At a glance

Intervention: not applicable
Design: single-case (other)
Sample size: 4
Population: not specified
Finding: not reported

03 Original abstract

Four pigeons were exposed to a concurrent procedure similar to that used by Davison, Baum, and colleagues (e.g., Davison & Baum, 2000, 2006) in which seven components were arranged in a mixed schedule, and each programmed a different left∶right reinforcer ratio (1∶27, 1∶9, 1∶3, 1∶1, 3∶1, 9∶1, 27∶1). Components within each session were presented randomly, lasted for 10 reinforcers each, and were separated by 10-s blackouts. These conditions were in effect for 100 sessions. When data were aggregated over Sessions 16-50, the present results were similar to those reported by Davison, Baum, and colleagues: (a) preference adjusted rapidly (i.e., sensitivity to reinforcement increased) within components; (b) preference for a given alternative increased with successive reinforcers delivered via that alternative (continuations), but was substantially attenuated following a reinforcer on the other alternative (a discontinuation); and (c) food deliveries produced preference pulses (immediate, local, increases in preference for the just-reinforced alternative). The same analyses were conducted across 10-session blocks for Sessions 1-100. In general, the basic structure of choice revealed by analyses of data from Sessions 16-50 was preserved at a smaller level of aggregation (10 sessions), and it developed rapidly (within the first 10 sessions). Some characteristics of choice, however, changed systematically across sessions. For example, effects of successive reinforcers within a component tended to increase across sessions, as did the magnitude and length of the preference pulses. Thus, models of choice under these conditions may need to take into account variations in behavior allocation that are not captured completely when data are aggregated over large numbers of sessions.

Journal of the Experimental Analysis of Behavior, 2010 · doi:10.1901/jeab.2010.94-175