Effects of ratio reinforcement schedules on discrimination performance by Japanese monkeys.
Switch from continuous to variable-ratio reinforcement after acquisition to lock in accurate performance.
Research in Context
What this study did
Fujita (1985) worked with Japanese monkeys on conditional position-discrimination problems, with colors as the cues.
The monkeys had to pick the correct position, signaled by a color, to earn food.
The study compared three reinforcement schedules: food after every correct response (CRF), after every fifth correct response (FR-5), and after a varying number of correct responses averaging five (VR-5).
What they found
Monkeys learned the task fastest when every correct response earned food.
After learning, monkeys made fewer mistakes when they had to work for food on FR or VR schedules.
VR kept accuracy most stable across long sessions.
How this fits with other research
Van Houten et al. (1980) saw the same VR edge in deaf students doing math.
VR tokens cut classroom disruption and lifted attention more than FR tokens.
de Carvalho et al. (2018) later repeated the pattern with rats.
VR-10 kept two rats pulling a lever together more steadily than FR-10.
Together these studies suggest VR matches or beats FR for maintaining learned performance, across species and settings.
Why it matters
Start new skills with continuous reinforcement to build the behavior quickly.
Once the learner hits mastery, switch to VR for maintenance.
VR keeps responding strong without the long pauses FR can create.
Try VR-3 to VR-5 in maintenance phases for cleaner, steadier work from clients.
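The progression above amounts to a simple decision rule. A sketch, where the 90%-over-20-trials mastery criterion and the VR-3 target are illustrative assumptions, not values from the study:

```python
def choose_schedule(trial_results: list[bool],
                    mastery: float = 0.9, window: int = 20) -> str:
    """Pick a schedule from recent trial outcomes (True = correct).

    Stay on continuous reinforcement (CRF) while the skill is being
    acquired; once accuracy over the last `window` trials reaches the
    mastery criterion, thin to VR-3 for maintenance.
    """
    if len(trial_results) < window:
        return "CRF"  # not enough data to judge mastery yet
    accuracy = sum(trial_results[-window:]) / window
    return "VR-3" if accuracy >= mastery else "CRF"
```

In practice you would keep tracking errors after the switch and return to a richer schedule if accuracy drops.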
After a learner masters a discrimination task on CRF, move to VR-3 and track whether errors stay low.
Original abstract
In Experiment 1, Japanese monkeys were trained on three conditional position-discrimination problems with colors as the conditional cues. Within each session, each problem was presented for two blocks of ten reinforcements; correct responses were reinforced under continuous-reinforcement, fixed-ratio 5, and variable-ratio 5 schedules, each assigned to one of the three problems. The assignment of schedules to problems was rotated a total of three times (15 sessions per assignment) after 30 sessions of acquisition training. Accuracy of discrimination increased to a moderate level with fewer trials under CRF than under ratio schedules. In contrast, the two ratio schedules, fixed and variable, were more effective in maintaining accurate discrimination than was CRF. With further training, as asymptotes were reached, accuracy was less affected by the schedule differences. These results demonstrated an interaction between the effects of reinforcement schedules and the level of acquisition. In Experiment 2, ratio sizes were gradually increased to 30. Discrimination accuracy was maintained until the ratio reached 20; ratio 30 strained the performance. Under FR conditions, accuracy increased as correct choice responses cumulated after reinforcement.
Journal of the Experimental Analysis of Behavior, 1985 · doi:10.1901/jeab.1985.43-225