Effects of two procedures for varying information transmission on observing responses.
A cue is only as good as its news: the more reliably it predicts what comes next, the harder the learner works to see it.
01Research in Context
What this study did
The team tested two ways to change how much a cue tells the learner.
They used pigeons in a lab box. In Experiment I, pecking during a trial lit a green key (food coming) or a red key (no food); without a peck, no signal appeared. Food trials happened 25%, 50%, or 75% of the time. In Experiment II, pecking produced the same lights, but green's honesty varied: food followed green 100%, 90%, 70%, or 50% of the time. The team counted how often the birds pecked to see the lights.
What they found
In Experiment I, birds observed most when food trials were rare (25%) and least when they were common (75%).
In Experiment II, the more reliably green signaled food, the faster the birds pecked to produce it. When green paid off only half the time, no better than a coin flip, responding was lowest.
The cue’s news value, not just the food after it, drove the response.
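As a back-of-the-envelope check (not a calculation from the paper), the cue's "news value" can be put in bits. A minimal Python sketch, assuming green and red each appear on half the trials and the overall chance of food is 0.5, as the abstract's Experiment II implies:

```python
from math import log2

def h(p):
    """Binary entropy in bits; H(0) = H(1) = 0."""
    return 0.0 if p in (0.0, 1.0) else -(p * log2(p) + (1 - p) * log2(1 - p))

def info_transmitted(c):
    """Bits the cue carries about the outcome, where c = p(food | green).
    Assumes p(green) = p(red) = 0.5 and overall p(food) = 0.5."""
    # Before the cue: H(outcome) = 1 bit. After: H(outcome | cue) = h(c).
    return 1.0 - h(c)

for c in (1.0, 0.9, 0.7, 0.5):
    print(f"p(food | green) = {c:.1f} -> {info_transmitted(c):.3f} bits")
```

A perfectly honest green carries 1 bit; at 90% it still carries about 0.53 bits; at 50% it carries exactly zero, which is why responding bottoms out at the coin-flip condition.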
How this fits with other research
Duker et al. (1991) later asked the same question in a memory game. They showed that a long wait between cue and choice hurt accuracy more than a long wait between choice and food. Both papers say the same thing: keep the cue close and clear.
Born et al. (1974) looked at timing, not watching. They found that birds slowed down at the start of a slow-feed section only when the other section paid better. Again, the molar payoff rule, not tiny response tweaks, guided behavior.
Bacon-Prue et al. (1980) showed pigeons forget two-step chains after just two seconds. Together with Van Hemel (1973), the lesson is simple: time eats stimulus control.
Why it matters
For your client, make the SD reliable news. If the red card means “tokens now” 9 out of 10 times, the learner will look, listen, and stay. If it only pays half the time, it tells the learner nothing, and attention drifts. Check your reinforcement plan first, then blame motivation.
Audit one program: count how often the SD is followed by reinforcement this week. Aim for 4 out of 5 trials.
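That audit is just a tally. A minimal sketch, where the helper name `sd_reliability` and the sample week are hypothetical, not from the study:

```python
def sd_reliability(trials):
    """Fraction of SD presentations followed by reinforcement.
    `trials` is a list of booleans: True = the SD was reinforced."""
    return sum(trials) / len(trials) if trials else 0.0

# One hypothetical week of trial records for a single program.
week = [True, True, False, True, True, False, True, True, True, True]
rate = sd_reliability(week)
print(f"{rate:.0%} of SDs paid off")  # target: at least 80% (4 of 5)
```

If the fraction falls toward 50%, the SD is carrying little information, and the study predicts observing will drop accordingly.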
02At a glance
03Original abstract
Two experiments were conducted with pigeons to examine the effects of procedures that varied information transmission on observing responses. The basic procedure for Experiment I was one in which a trial terminated in either non-contingent reinforcement or timeout. Pecking during a trial produced either green (positive) or red (negative) keylights. If no pecking occurred no differential stimuli appeared. The probability of positive trials was either 0.25, 0.50, or 0.75. Observing response rates and relative frequencies of occurrence were highest when the probability of positive trials was 0.25 and lowest at 0.75. In Experiment II, a modified chain procedure was used in which responding produced either red or green lights. Reinforcement or timeout followed light onset by 15 sec. The correlation between the stimuli and the event at the end of the trial (reinforcement or timeout) was varied. Reinforcement followed green 100%, 90%, 70%, or 50% of the time that green occurred. Since the overall probability of reinforcement remained at 0.50, reinforcement followed red in either 0%, 10%, 30%, or 50% of the time that it occurred. The rate of responses that produced these stimuli varied as a function of the correlation. The greater the probability of reinforcement after green, the higher the response rate.
Journal of the Experimental Analysis of Behavior, 1973 · doi:10.1901/jeab.1973.20-73