Assessing preference for reinforcers using demand curves, work-rate functions, and expansion paths.
Reinforcer value moves. Use demand curves or quick progressive-ratio probes to catch the shift early.
01 Research in Context
What this study did
The paper builds a math map for reinforcer preference. It shows how demand curves, work-rate graphs, and expansion paths fit together.
The map is pure theory. No kids, no rats, no trials. Just equations and logic that predict when a reinforcer will gain or lose value.
What they found
Preference is not a fixed trait. The same reinforcer can rise or fall in rank when the schedule or the context changes.
Demand curves and expansion paths let you forecast these flips before they happen.
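The 1995 paper derives its demand functions from quadrant diagrams, but the forecasting idea can be sketched with the later, widely used exponential demand equation of Hursh and Silberberg (2008). Everything below is a hypothetical illustration: the two reinforcers, their parameters, and the fixed-ratio price scan are invented to show how a preference reversal can be predicted, not taken from the paper.

```python
import math

def exponential_demand(q0, alpha, price, k=2.0):
    """Predicted consumption under the exponential demand equation:
        log10(Q) = log10(Q0) + k * (exp(-alpha * Q0 * price) - 1)
    q0    = consumption at zero price (demand intensity)
    alpha = rate of decline in consumption (elasticity)
    price = unit price, e.g. responses per reinforcer (FR value)
    """
    log_q = math.log10(q0) + k * (math.exp(-alpha * q0 * price) - 1)
    return 10 ** log_q

# Hypothetical parameters: A is preferred when cheap (higher q0),
# but its demand is more elastic (higher alpha) than B's.
A = {"q0": 100.0, "alpha": 0.0002}
B = {"q0": 50.0, "alpha": 0.00004}

# Scan fixed-ratio prices and find where predicted preference flips.
reversal = next(
    p for p in range(1, 201)
    if exponential_demand(A["q0"], A["alpha"], p)
    < exponential_demand(B["q0"], B["alpha"], p)
)
print("preference reverses near FR", reversal)  # FR 10 with these numbers
```

With these made-up numbers, A wins at FR 1 but loses to B by FR 10: exactly the kind of flip the model says you can see coming before it shows up in session.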
How this fits with other research
Kraijer (2000) gives the idea legs. Single-case data trace smooth demand curves that match the shapes the 1995 model predicts.
Poling (2010) moves the tools to the clinic. It swaps the paper’s demand curves for easy progressive-ratio breaks you can run with a client.
Tung et al. (2017) seems to clash. Free-operant sampling cut problem behavior, yet the 1995 model never mentions behavior suppression. The gap is method: Tung looked at assessment side-effects, not economic value.
Why it matters
Stop treating your reinforcer list as permanent. Run a quick progressive-ratio next session and plot the break points. If the curve bends down hard, that “favorite” item is losing punch. Swap it before motivation drops and keep your teaching ratio high.
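The "plot the break points" step above can be sketched in a few lines. This is a minimal illustration, not a clinical protocol: the reinforcer names, session data, and the rule that the break point is simply the largest completed ratio are all assumptions for the example.

```python
def break_point(ratios_completed):
    """Largest response requirement the learner completed before quitting.
    Returns 0 if no ratio was completed."""
    return max(ratios_completed) if ratios_completed else 0

# Hypothetical data: each list holds the FR steps completed
# before responding stopped during a quick progressive-ratio probe.
session_data = {
    "tablet": [2, 4, 8, 16, 32],
    "bubbles": [2, 4, 8],
    "stickers": [2, 4],
}

# Rank reinforcers by break point, highest (most potent) first.
ranking = sorted(
    session_data, key=lambda r: break_point(session_data[r]), reverse=True
)
print(ranking)  # ['tablet', 'bubbles', 'stickers']
```

Re-run the probe periodically and compare rankings: a reinforcer sliding down the list is the early warning the article describes.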
Pick one learner, run a five-step progressive-ratio on each top reinforcer, and stop when response rate halves. Your new potency rank is ready.
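The "stop when response rate halves" rule can be sketched directly. The step sizes and rates below are hypothetical, invented only to show the stopping criterion in code.

```python
# Hypothetical responses per minute observed at each PR step.
steps = [1, 2, 4, 8, 16]          # FR requirements in the probe
rates = [30.0, 28.0, 22.0, 13.0, 6.0]

# Halt at the first step where rate drops below half the initial rate.
stop_at = next(
    fr for fr, rate in zip(steps, rates) if rate < rates[0] / 2
)
print("stop probe at FR", stop_at)  # FR 8 with these numbers
```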
02 At a glance
03 Original abstract
A behavioral economic model that explains the choice and allocation of work rate is used to predict performance patterns in three contexts: with single schedules, with concurrent schedules when total reinforcement is low, and with concurrent schedules when reinforcement increases. Performance in the three contexts is predicted to change in orderly ways depending on how the subject evaluates the reinforcers earned. Quadrant diagrams are used to generate reinforcer demand functions, work-rate supply functions, and reinforcement-rate expansion paths. Preference between reinforcers is viewed as being a variable, with preference reversing in some situations.
Journal of the Experimental Analysis of Behavior, 1995 · doi:10.1901/jeab.1995.64-313