Practical implications of the matching law.
Run replacement behavior on a VI schedule with immediate reinforcers and it will outcompete problem behavior.
01 Research in Context
What this study did
Myerson and Hale (1984) wrote a theory paper. They asked which reinforcement schedule wins when behaviors compete in a choice situation.
They used the matching law to compare VI, variable-ratio (VR), and fixed schedules. No kids or rats were tested; the argument is mathematical.
What they found
The math says VI schedules win. A behavior reinforced on VI will outcompete a behavior maintained on any other schedule.
So if problem behavior runs on VR and the replacement behavior runs on VI, the replacement should take over.
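The intuition can be sketched numerically. Below is a minimal strict-matching simulation of a concurrent VI 30-s / VR 60 arrangement; all parameter values are hypothetical illustrations, not numbers from the paper. The VI alternative's obtained reinforcement rate barely drops as responding shifts toward it, while the VR alternative's rate falls in proportion to responding, so allocation snowballs toward the VI-maintained behavior.

```python
# Minimal strict-matching simulation of concurrent VI 30-s vs. VR 60.
# All numbers are hypothetical illustrations, not values from the paper.

TOTAL = 60.0     # assumed overall response output, responses per minute
VI_RATE = 2.0    # VI 30-s programs about 2 reinforcers per minute
VR_RATIO = 60.0  # VR 60 delivers one reinforcer per 60 responses on average

def obtained_vi(b):
    # Hyperbolic feedback: obtained rate saturates at the programmed VI rate.
    return VI_RATE * b / (b + VI_RATE)

def obtained_vr(b):
    # Ratio feedback: obtained rate is strictly proportional to responding.
    return b / VR_RATIO

p = 0.5  # start with responding split evenly between the two alternatives
for _ in range(200):
    r_vi = obtained_vi(TOTAL * p)
    r_vr = obtained_vr(TOTAL * (1 - p))
    p = r_vi / (r_vi + r_vr)  # strict matching: allocation tracks obtained rates

print(f"VI share of responding at equilibrium: {p:.3f}")  # approaches 1.000
```

Swap the assignments (problem behavior on the VI, replacement on the lean VR) and the same dynamics protect the problem behavior instead, which is exactly the paper's warning.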
How this fits with other research
Kronfli et al. (2021) later showed parents can do this at home. After behavioral skills training (BST), mothers delivered praise on VI schedules and child problem behavior dropped, real-world support for the 1984 recommendation.
Attwood et al. (1988) add a twist. When they bunched all the VI reinforcers at the end of the session, response rates crashed. Immediacy, not the VI schedule alone, drives the advantage; the 1988 data refine the 1984 claim.
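The immediacy point has a standard quantitative face: Mazur's hyperbolic discounting model, V = A / (1 + kD), where D is the delay to the reinforcer and k is an individual discounting parameter. The k value below is a hypothetical illustration, not an estimate from either study.

```python
# Hyperbolic discounting (Mazur): a reinforcer's value falls with delay.
# k = 0.2 per second is a hypothetical discounting parameter.
def discounted_value(amount, delay_s, k=0.2):
    return amount / (1 + k * delay_s)

print(discounted_value(1.0, 5))    # 0.5 -> a 5-s delay already halves the value
print(discounted_value(1.0, 300))  # ~0.016 -> an end-of-session reinforcer is nearly worthless
```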
Glenn (1988) widens the picture to VR schedules. Under concurrent VR, animals still follow the matching law, but extreme ratios create exclusive preference—useful when you must run VR for some reason.
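The exclusive-preference result on concurrent VR VR falls straight out of strict matching: each alternative's obtained rate is proportional to its own responding, so any rate advantage compounds on itself. A quick check with hypothetical ratio values:

```python
# Strict-matching dynamics on concurrent VR VR (hypothetical ratio values).
def vr_vr_equilibrium(ratio_a, ratio_b, p=0.55, steps=500):
    # p is the proportion of responding on alternative A.
    for _ in range(steps):
        r_a = p / ratio_a        # VR obtained rate = responding / ratio
        r_b = (1 - p) / ratio_b
        p = r_a / (r_a + r_b)    # matching of obtained reinforcement rates
    return p

print(vr_vr_equilibrium(20, 25))  # unequal ratios: exclusive preference for the richer VR
print(vr_vr_equilibrium(20, 20))  # equal ratios: allocation stays where it started
```

Only exactly equal ratios leave allocation untouched, which is why unequal concurrent VR schedules drift to exclusive preference.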
Why it matters
You now have a clear rule: put the desired behavior on VI and deliver its reinforcers promptly. In head-to-head choice, VI beats VR, FR, and FI. Watch immediacy; don't save all the tokens for Friday. If you must use VR somewhere, keep the concurrent ratios near 1:1 to avoid exclusive preference for one alternative.
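The abstract's robustness claim (the VI advantage survives bias and undermatching) can be made concrete with the generalized matching law, B1/B2 = b(r1/r2)^a, where a < 1 is undermatching and b ≠ 1 is bias. The rate values below are hypothetical:

```python
# Generalized matching law: B1/B2 = bias * (r1/r2) ** sensitivity.
def allocation(r1, r2, sensitivity=1.0, bias=1.0):
    # Returns the proportion of behavior allocated to alternative 1.
    odds = bias * (r1 / r2) ** sensitivity
    return odds / (1 + odds)

# A 4:1 reinforcement-rate advantage for the replacement behavior:
print(allocation(4, 1))                             # 0.8   (strict matching)
print(allocation(4, 1, sensitivity=0.5))            # ~0.667 (pronounced undermatching)
print(allocation(4, 1, sensitivity=0.5, bias=0.5))  # 0.5   (undermatching plus opposing bias)
```

Even in this hypothetical worst case, with heavy undermatching and a bias working against it, the richer alternative still holds half the behavior.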
Flip your differential reinforcement to VI 30-s and deliver the first reinforcer within 5 s.
02 At a glance
03 Original abstract
Many problem situations in applied settings are best conceptualized as choice situations. In addition, applied behavior analysts create choice situations when they reinforce a competing response to decrease inappropriate behavior. When such situations are analyzed using the matching law, variable interval (VI) schedules of reinforcement prove to be a superior intervention strategy regardless of the nature of the schedule maintaining other, less appropriate behavior. This conclusion is robust in that VI schedule superiority is observed in situations in which choice behavior is highly biased or shows pronounced undermatching as well as those in which the matching law holds precisely. Our analysis demonstrates the potential practical value of mathematical descriptions of behavior.
Journal of Applied Behavior Analysis, 1984 · doi:10.1901/jaba.1984.17-367