A Markov model description of changeover probabilities on concurrent variable-interval schedules.
Pigeon choice on concurrent VIs follows a simple rate-matching rule with no memory of past choices—use this as your baseline when testing more complex human choice models.
01 Research in Context
What this study did
The team watched pigeons peck two keys. Each key paid off on its own variable-interval (VI) schedule. The birds could hop back and forth any time.
Every peck was recorded. The researchers asked: does where the bird just pecked change where it pecks next? They fit the stream of choices to a Markov chain model.
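Fitting a first-order Markov chain to a choice stream amounts to estimating transition probabilities: given the key just pecked, how likely is each key next? A minimal sketch (the sequence and key labels 'L'/'R' are hypothetical, not the study's data):

```python
from collections import Counter

def transition_probs(choices):
    """Estimate first-order Markov transition probabilities from a
    sequence of key choices, e.g. ['L', 'L', 'R', ...]."""
    pairs = Counter(zip(choices, choices[1:]))  # count adjacent pairs
    probs = {}
    for prev in set(choices):
        total = sum(n for (a, _), n in pairs.items() if a == prev)
        for (a, b), n in pairs.items():
            if a == prev:
                probs[(a, b)] = n / total  # P(next = b | current = a)
    return probs

# Toy peck record: mostly stays on 'L', occasionally switches.
seq = ['L', 'L', 'L', 'R', 'L', 'L', 'R', 'R', 'L', 'L']
print(transition_probs(seq))
```

If the chain is a good description, these estimated probabilities should not change when you condition on anything further back than the current key.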
What they found
The model showed no memory. Only the current reinforcement rate on each key predicted the next peck. Past choices did not matter.
Switching probabilities closely tracked the VI pay rates. The birds acted as if each choice started fresh.
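The memoryless rule has a neat consequence: if the probability of switching away from a key is proportional to the other key's relative reinforcement rate, the chain's steady state puts responses on each key in proportion to its reinforcement rate, which is matching. A simulation sketch (the scaling constant `c` and the specific rates are illustrative assumptions, not the paper's parameters):

```python
import random

def simulate_choices(r1, r2, c=0.2, n=100_000, seed=1):
    """Two-state Markov chain: the probability of switching away from a
    key is proportional to the OTHER key's relative reinforcement rate."""
    rel1 = r1 / (r1 + r2)
    p_switch_from_1 = c * (1 - rel1)  # key 1 -> key 2
    p_switch_from_2 = c * rel1        # key 2 -> key 1
    rng = random.Random(seed)
    key, count1 = 1, 0
    for _ in range(n):
        count1 += (key == 1)
        p = p_switch_from_1 if key == 1 else p_switch_from_2
        if rng.random() < p:
            key = 2 if key == 1 else 1
    return count1 / n  # relative response frequency on key 1

# With a 3:1 reinforcement ratio, responding settles near 3/4 on key 1,
# regardless of the overall switching tempo c.
print(simulate_choices(r1=3.0, r2=1.0))  # close to 0.75
```

The stationary distribution of this two-state chain works out to exactly the relative reinforcement rate, which is why a memory-free switching rule and matching are two sides of the same result.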
How this fits with other research
Pilgrim et al. (2000) kept the same VI setup but made one side end after a fixed number of reinforcers and the other after a random number. Birds still followed the Markov rule for moment-to-moment hops, yet they later showed stronger preference and more resistance to disruption for the fixed side. The 1979 memoryless pattern is the baseline; schedule structure adds a second layer.
Iwata (1993) asked whether pigeons track scheduled rates (molar) or only the pays they actually get (local). Using concurrent VIs, the study found context matters: scheduled rates drive choice when both keys are present, echoing the 1979 finding that relative rate controls switching.
Killeen (2023) folds the 1979 result into a larger math frame. The review says richer VI schedules build behavioral momentum, making responses stick longer when conditions worsen. The Markov switch data are now one piece of a unifying equation.
Why it matters
When you test choice with clients, start by checking if their switching matches current reinforcer rates. If it does, you have a clean, memory-free baseline. Then you can add complications like richer schedules, signaled changes, or alternative topographies and see what extra variables shift the pattern. The 1979 paper gives you the zero-line to beat.
Graph your client's minute-by-minute switching between two tasks; check if the proportion matches the relative reinforcement rate delivered.
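That check can be done directly from session event logs. A minimal sketch, assuming two alternatives labeled 1 and 2 and hypothetical lists of response and reinforcer events:

```python
def matching_check(responses, reinforcers):
    """Compare relative response rate to relative reinforcement rate
    for two alternatives labeled 1 and 2."""
    resp1 = sum(1 for r in responses if r == 1) / len(responses)
    rft1 = sum(1 for r in reinforcers if r == 1) / len(reinforcers)
    return resp1, rft1

# Hypothetical session: task 1 earned 3 of 4 reinforcers and drew
# 75 of 100 responses, so the two proportions coincide.
responses = [1] * 75 + [2] * 25
reinforcers = [1, 1, 1, 2]
resp1, rft1 = matching_check(responses, reinforcers)
print(resp1, rft1)  # 0.75 0.75
```

If the two proportions diverge, that is your signal that something beyond current reinforcer rates, such as schedule structure or response effort, is shaping the client's switching.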
02 At a glance
03 Original abstract
The primary data were peck-by-peck sequential records of four pigeons responding on several different concurrent variable-interval schedules. According to the hypothesis that the subject chooses the alternative with the highest probability of reinforcement at the moment, response-by-response performance in concurrent schedules should show sequential dependencies. However, such dependencies were not found, and it was possible to describe molecular-level performance with simple Markov chain models. The Markov model description implies that the momentary changeover probabilities were proportional to the overall relative reinforcement frequencies, and that changeover probabilities did not change as a function of previous responding. A second finding was that although a changeover-delay procedure was omitted, relative response frequencies closely approximated relative reinforcement frequencies.
Journal of the Experimental Analysis of Behavior, 1979 · doi:10.1901/jeab.1979.31-41