Effects of reinforcement scheduling on simultaneous discrimination performance.
Reinforcement rate controls how much the animal responds, not how often it responds correctly.
01 Research in Context
What this study did
Pigeons pecked two keys to earn grain. The keys lit up with colors. The birds had to pick the correct color to get food. The team changed how often grain followed a correct peck. They kept score of how many pecks each bird made and how many were right.
What they found
When grain came faster, the birds pecked more. When grain came slower, they pecked less. Yet the percent of correct pecks stayed the same. The schedule controlled effort, not accuracy. The birds still knew which color was correct.
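The dissociation the study reports can be sketched as a toy simulation (all numbers here are hypothetical, not the paper's data): reinforcement rate scales the probability of pecking at all, while the probability of pecking the correct key, given a peck, stays fixed near the paper's roughly 85% asymptote.

```python
import random

random.seed(0)

def run_session(trials, p_respond, p_correct=0.85):
    """Simulate discrete trials. The bird may skip a trial entirely
    (probability 1 - p_respond); if it pecks, it picks the correct key
    with a fixed probability p_correct, independent of the schedule."""
    pecks = correct = 0
    for _ in range(trials):
        if random.random() < p_respond:
            pecks += 1
            if random.random() < p_correct:
                correct += 1
    return pecks, correct

# Rich vs. lean schedules move response probability, not accuracy.
for label, p_respond in [("rich schedule", 0.95), ("lean schedule", 0.40)]:
    pecks, correct = run_session(10_000, p_respond)
    print(f"{label}: {pecks} pecks, {correct / pecks:.1%} correct")
```

Running this prints many more pecks under the rich schedule, while percent correct hovers near 85% in both conditions, mirroring the finding that the schedule controlled effort, not accuracy.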
How this fits with other research
Todorov et al. (1984) ran a similar pigeon test but varied the amount of grain instead of its timing. Again, reinforcement frequency beat magnitude in driving pecks, and the 1967 null-accuracy result held.
Thomas (1974) moved the setup to college students. People did not follow the matching law as closely as pigeons do: they allocated 15–20% fewer responses to the richer side than strict matching predicted. Same schedule rules, different species, different outcome.
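The strict matching law says relative response rate equals relative reinforcement rate: B1 / (B1 + B2) = R1 / (R1 + R2). A quick calculation with illustrative numbers (not Thomas's actual data, and reading "15–20% fewer" as percentage points) shows strict matching versus the undermatching the study reports:

```python
# Strict matching law: share of responses on an alternative equals
# its share of reinforcers: B1 / (B1 + B2) = R1 / (R1 + R2).
def matching_prediction(r_rich, r_lean):
    """Predicted share of responses on the richer alternative."""
    return r_rich / (r_rich + r_lean)

# Hypothetical schedule: the rich key pays 3x the lean key.
predicted = matching_prediction(3.0, 1.0)   # 0.75 under strict matching

# Undermatching: humans picked the rich side 15-20 percentage
# points less often than strict matching predicts.
observed_low  = predicted - 0.20  # 0.55
observed_high = predicted - 0.15  # 0.60
print(f"strict matching: {predicted:.2f}")
print(f"observed range:  {observed_low:.2f}-{observed_high:.2f}")
```

The gap between the 0.75 prediction and the 0.55–0.60 observed range is what "undermatching" means: choice is less extreme than the reinforcement ratios warrant.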
Farrant et al. (1998) worked with rats acquiring a new lever press. Food delivered every 15 s produced more lever pressing than constantly available food. The 1967 idea extends to acquisition: lean but steady beats rich but crowded.
Why it matters
You can change how hard a client works without hurting accuracy. Thin the schedule to boost persistence. Keep the discriminative stimulus (SD) the same to protect stimulus control. Check for species or phase differences before you port animal data to your case.
Try a fixed-interval 30-s schedule for mastered tasks to keep fluency up while saving reinforcers.
02 At a glance
03 Original abstract
Pigeons were trained on a discrete-trials, simultaneous discrimination procedure, with confusable stimuli such that asymptotic performance was about 85% correct. Trials were terminated if no response occurred within 2 sec of stimulus onset, so that probability of responding was free to vary. The schedule of reinforcement for correct responses was varied, with the following results: (1) there was no relation between frequency of reinforcement and accuracy of responding. (2) In extinction, the probability of responding fell to low levels, but accuracy remained roughly constant. (3) When reinforcement was available after a fixed number of trials or after a fixed number of correct responses, the probability of responding increased with successive trials after reinforcement, but accuracy was generally constant. (4) When every fifth correct response was reinforced, accuracy decreased immediately after reinforcement if the birds were required to respond on every trial.
Journal of the Experimental Analysis of Behavior, 1967 · doi:10.1901/jeab.1967.10-251