Persistence and relapse of reinforced behavioral variability
Variability reinforced by lag schedules behaves like an operant — it survives brief extinction and resurges when competing reinforcement ends.
Research in Context
What this study did
Galizio et al. (2018) used a lag schedule to pay pigeons only when they produced new four-peck sequences.
After the birds learned the rule, the team stopped paying for variety and later removed pay for any alternative behavior.
They tracked whether the birds kept varying or fell back into old patterns.
What they found
The birds kept making new sequences even when variety no longer paid.
When the only other paid option ended, the varied sequences bounced back strongly (resurgence).
The authors argue this shows variability itself can act like an operant: it obeys the same extinction and resurgence rules as pressing a lever.
How this fits with other research
Doughty et al. (2015) saw the same pattern earlier: pigeons varied more when the rule said "be different" than when it just said "switch."
Dugdale et al. (2000) and Hopkinson et al. (2003) extended the idea to humans: adolescents with autism and depressed college students both produced wider response sets once variability earned tokens.
Yet Nergaard et al. (2020) push back. Their review claims the extra variety is not a distinct operant; it is just old responses dying out under extinction.
The two views clash on paper, but the data sets differ: Galizio used full extinction and resurgence tests; the review pooled studies that rarely ran those checks.
Why it matters
If variability is an operant, you can reinforce it directly. Program a lag schedule when you want a learner to try new ways to solve a task — whether that is a new play sequence, a new sentence, or a new route through a worksheet.
Watch for resurgence: when you stop reinforcing the new variation, it may dip, then spike again once other sources of reinforcement end. Plan extra practice or signals during those dips to keep the skill alive.
Set a lag-3 schedule during play or academics: reinforce only if the client's next response differs from each of the last three topographies.
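The lag contingency described above can be sketched as a small function. This is a minimal illustration, not code from the study: responses are assumed to be recorded as strings (e.g., four-peck sequences across two keys, "L" for left and "R" for right), and the helper name `make_lag_checker` is hypothetical. As in standard lag schedules, every emitted response enters the comparison window whether or not it earned reinforcement.

```python
from collections import deque

def make_lag_checker(n):
    """Return a function applying a Lag-n criterion: a response earns
    reinforcement only if it differs from each of the previous n responses.
    (Hypothetical helper for illustration, not from the study's software.)"""
    history = deque(maxlen=n)  # sliding window of the last n responses

    def check(response):
        # Novel relative to the whole window (trivially True when empty)
        earned = all(response != prev for prev in history)
        history.append(response)  # emitted responses always enter the window
        return earned

    return check

# Four-peck sequences across two keys, as in the pigeon task, under Lag 2:
check = make_lag_checker(2)
print(check("LLRR"))  # True  (window empty: trivially novel)
print(check("LRLR"))  # True  (differs from LLRR)
print(check("LLRR"))  # False (matches one of the last two)
print(check("RRLL"))  # True
print(check("LRLR"))  # True  (LRLR has dropped out of the two-item window)
```

The same function covers the clinical tip: `make_lag_checker(3)` reinforces only responses that differ from each of the last three topographies.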
Original abstract
The present study examined persistence and relapse of reinforced behavioral variability in pigeons. Pigeons emitted four-response sequences across two keys. Sequences produced food according to a lag schedule, in which a response sequence was followed by food if it differed from a certain number of previous sequences. In Experiment 1, food was delivered for sequences that satisfied a lag schedule in both components of a multiple schedule. When reinforcement was removed for one component (i.e., extinction), levels of behavioral variability decreased for only that component. In Experiment 2, food was delivered for sequences satisfying a lag schedule in one component of a multiple schedule. In the other component, food was delivered at the same rate, but without the lag variability requirement (i.e., yoked). Following extinction, levels of behavioral variability returned to baseline for both components after response-independent food delivery (i.e., reinstatement). In Experiment 3, one group of pigeons responded on a lag variability schedule, and the other group responded on a lag repetition schedule. For both groups, levels of behavioral variability increased when alternative reinforcement was suspended (i.e., resurgence). In each experiment, we observed some evidence for extinction-induced response variability and for variability as an operant dimension of behavior.
Journal of the Experimental Analysis of Behavior, 2018 · doi:10.1002/jeab.309