Preference for signalled reinforcement.
A brief cue delivered just before a reinforcer is itself preferred by the learner and helps keep the learner engaged.
Research in Context
What this study did
Shimp et al. (1974) worked with pigeons in a small chamber.
Grain was available on identical variable-interval schedules under two conditions: in one, a tone came on just before each grain delivery; in the other, grain arrived with no warning.
The session started on the unsignalled schedule, and a single peck on a separate changeover key switched the bird to the signalled schedule for one minute.
The bird could produce the signalled schedule again as often as it liked.
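For readers who like to see the contingencies laid out, here is a minimal toy simulation of the changeover procedure. All names, parameters, and the simulated switching policy are illustrative assumptions, not taken from the paper:

```python
import random

def run_session(seconds=3600, co_latency=5, signalled_block=60, seed=1):
    """Toy model of the changeover procedure. The session starts on the
    unsignalled schedule; after a short, variable delay (standing in for
    the bird's decision to peck the changeover key), the signalled
    schedule runs for a fixed block, then the cycle repeats. Returns the
    fraction of the session spent under the signalled schedule."""
    rng = random.Random(seed)
    t = 0
    time_signalled = 0
    while t < seconds:
        # Time spent on the unsignalled key before a changeover peck.
        t += rng.randint(1, 2 * co_latency)
        # One changeover peck buys `signalled_block` seconds of the
        # signalled schedule (1 min in the original procedure).
        block = min(signalled_block, seconds - t)
        if block > 0:
            time_signalled += block
            t += block
    return time_signalled / seconds

print(f"{run_session():.0%} of the session under the signalled schedule")
```

With a quick changeover (standing in for the observed preference), the simulated bird spends around 90% of the session under the signalled schedule; raising `co_latency` toward the block length pushes the fraction toward 50%.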
What they found
Both pigeons pecked the changeover key often enough to spend more than ninety percent of the session under the signalled schedule.
When the researchers removed the tone, changeover pecking fell; when they restored it, the pecking returned.
The birds were not just choosing grain; they were choosing the signal that grain was coming.
How this fits with other research
Liberman et al. (1973) reported the same preference one year earlier, but with shock: rats chose a lever that turned on a tone before each shock over a lever that gave no warning, even when the signalled lever produced eight times as many shocks.
Together, the two papers suggest that learners prefer signalled over unsignalled events, whether those events are good or bad.
Smith et al. (2010) moved the idea into the clinic. They used non-contingent reinforcement to reduce problem behavior, and when each delivery was preceded by a brief beep, the behavior dropped twice as fast.
What began as a pigeon-lab result now offers a practical way to speed up treatment.
Why it matters
You already deliver reinforcers. Add a two-second cue (a click, a word, a flash) right before each one. The cue itself becomes a mini-reinforcer, and the learner stays with you longer. It costs nothing and may cut problem behavior faster, as Smith et al. (2010) showed. Try it in your next DRL or NCR session.
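If your sessions have any software support, the cue-then-deliver sequence is simple to encode. A minimal sketch follows; the function names and the two-second default are assumptions for illustration, not from any cited paper:

```python
import time

def deliver_with_cue(present_cue, deliver_reinforcer, lead_s=2.0):
    """Play a brief cue, wait `lead_s` seconds, then deliver the
    reinforcer, so the cue reliably precedes every delivery."""
    present_cue()          # e.g. play a click or flash a light
    time.sleep(lead_s)     # the two-second lead suggested above
    deliver_reinforcer()   # e.g. dispense the edible or token

# Example: record the order of events (lead_s=0 keeps the demo instant).
events = []
deliver_with_cue(lambda: events.append("cue"),
                 lambda: events.append("reinforcer"),
                 lead_s=0)
print(events)  # ['cue', 'reinforcer']
```

The point of wrapping the sequence in one function is that the cue can never be skipped or delivered after the reinforcer by accident.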
Put a two-second auditory or visual cue in front of every edible or token you hand out, and watch stay-on-task time rise.
Original abstract
Key pecking was reinforced on a two-component multiple schedule. A variable-interval schedule controlled reinforcement in both components. During one component, access to reinforcement was preceded by a tone; in the other component, a standard unsignalled schedule was in effect. After performance stabilized, subjects were given a choice between the signalled and unsignalled schedules. They were placed in the chamber with the unsignalled schedule in effect on the right key. A single response on the left, or changeover, key produced the signalled schedule for 1 min. Both pigeons in Experiment I pecked the changeover key at a rate sufficient to remain under the signalled schedule for over 90% of the session. Removing and reintroducing the tone demonstrated that the changeover-key responses were due to the occurrence of the tone. In Experiment II, when pecking the changeover key produced the unsignalled schedule, pecking the changeover key declined. The results may be explained either in terms of Hendry's information hypothesis or as escape from an intermittent positive reinforcement schedule.
Journal of the Experimental Analysis of Behavior, 1974 · doi:10.1901/jeab.1974.22-143