This cluster shows how time governs behavior when rewards come on a clock rather than for every response. You’ll learn why post-reinforcement pauses grow longer as intervals get longer, how past reinforcement rates change today’s pace, and why extra free treats can flatten the scallop. These findings help a BCBA pick the right schedule so clients stay motivated without burning out.
Time-based reinforcement schedules deliver rewards on a clock, not for a specific response. Fixed-interval (FI) schedules give a reward for the first response after a set amount of time has elapsed. This produces a distinctive pattern: slow responding right after a reward, speeding up as the end of the interval approaches. This pattern, called the scallop, appears in organisms as different as pigeons, rats, and the United States Congress.
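The FI contingency itself is simple enough to sketch. Below is a minimal, hypothetical Python model (the class name and the numbers are illustrative, not from any study): a response earns a reward only once the programmed interval has elapsed since the last reward.

```python
class FixedInterval:
    """Minimal fixed-interval (FI) schedule: the first response after
    `interval` seconds have elapsed since the last reward is reinforced."""

    def __init__(self, interval):
        self.interval = interval
        self.last_reward = 0.0  # time of the most recent reward

    def respond(self, t):
        # Only the first response after the interval elapses pays off.
        if t - self.last_reward >= self.interval:
            self.last_reward = t
            return True   # reinforced
        return False      # no programmed consequence

fi = FixedInterval(interval=60)
# Responses at 10 s and 45 s go unreinforced; the response at 61 s earns
# the reward and restarts the clock, so the one at 70 s goes unreinforced.
print([fi.respond(t) for t in (10, 45, 61, 70)])  # [False, False, True, False]
```

Note that the schedule never rewards the mere passage of time; a response is still required, which is what separates FI from the fixed-time (free-reward) schedules discussed below.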
Several studies in this cluster look at what disrupts or flattens the scallop. Free rewards given on a fixed-time schedule alongside a response-dependent schedule can suppress the target behavior: the learner gets rewarded regardless of what they do, which reduces responding. Variable-time schedules soften a related risk by spreading deliveries unpredictably, making it less likely that a reward lands just after a problem behavior and accidentally strengthens it.
Research also shows that past reinforcement history shapes current responding even after the schedule changes. Learners who previously experienced fast reinforcement rates tend to respond faster early in a new, slower schedule. This carryover effect fades but can last several sessions. It means a sudden change in schedule difficulty may temporarily produce better or worse performance than the new schedule alone would predict.
Practical findings include that learners often prefer variable schedules when those schedules offer a higher chance of quick reinforcement, even if the average wait is longer. Some learners also prefer to work straight through and take their break at the end, rather than taking small breaks throughout. These preference differences matter for designing token economies, break schedules, and work-session structures.
Common questions from BCBAs and RBTs
What is the fixed-interval scallop, and why does it matter?
The scallop is a pattern in which responding is slow right after a reward and speeds up as time passes toward the next opportunity. It occurs on fixed-interval schedules and has been found in animals, children, and even legislative bodies. It matters because it tells you when a learner is likely to slow down, and that is often right after reinforcement, not right before.
Why might free rewards reduce my client's target behavior?
When rewards are delivered on a fixed-time schedule regardless of behavior, the learner may stop working because the reward arrives anyway. Research shows that adding free noncontingent reinforcement (NCR) alongside a response-dependent schedule can accidentally reduce the target behavior. Use variable-time schedules to minimize this problem, and monitor responding closely whenever you introduce NCR.
What is a variable-time schedule, and when should I use it?
A variable-time schedule delivers rewards at unpredictable intervals that average a set duration. Research shows it protects against problem behavior better than fixed-time schedules when staff sometimes miss deliveries. Use variable-time NCR when you need problem-behavior reduction to hold up even if your staff cannot deliver rewards precisely on schedule.
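As a rough illustration of "unpredictable intervals that average a set duration," here is a hypothetical Python sketch that samples VT intervals from an exponential distribution, one common, memoryless way to generate them. The function name and the 30-second mean are assumptions for the example, not a prescribed clinical value.

```python
import random

def vt_intervals(mean_s, n, seed=0):
    """Sample n delivery intervals for a variable-time (VT) schedule.
    Exponential spacing keeps each individual delivery unpredictable,
    while the intervals average roughly `mean_s` seconds long-run."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / mean_s) for _ in range(n)]

intervals = vt_intervals(mean_s=30, n=2000)
print(round(sum(intervals) / len(intervals), 1))  # long-run mean is close to 30
```

In practice the generated intervals would be printed as a delivery timetable for staff; the unpredictability is the point, since no single moment reliably follows a problem behavior.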
How long should I wait before judging a new schedule?
Give it several sessions. Research shows that reinforcement history carries over after a schedule change: a learner coming from a rich history may perform better than expected at first, and one coming from a lean history may do worse. Collect data across sessions before concluding that the new schedule is too hard or too easy.
Should learners take breaks during work or save them for the end?
It depends on the learner. Research shows that some learners prefer to complete all their work before a break, while others prefer small breaks throughout. Both patterns are real, and preferences vary by person. Assess which arrangement the learner performs better under and chooses when given the option, and design accordingly.