ABA Fundamentals

TOWARDS AN EMPIRICAL CALCULUS OF REINFORCEMENT VALUE.

VERHAVE (1963) · Journal of the Experimental Analysis of Behavior
★ The Verdict

You can turn "likes more" into a measurable amount of extra work by raising the cost of switching until choice locks.

✓ Read this if you're a BCBA who wants a fast, number-based preference test for any client.
✗ Skip if you're a clinician who only uses standardized MSWO assessments and dislikes extra setup.

01 Research in Context

01

What this study did

Verhave (1963) sketched a lab method to put a number on how much an animal likes one schedule more than another.

The bird could switch between two keys that paid off on different fixed-ratio schedules.

Each time the bird switched to the better schedule, the response requirement on the switching key was raised; the requirement at which switching leveled off became the price tag of preference.
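The titration logic above can be sketched in a few lines. This is a minimal simulation, not code from the paper: the subject is modeled as having a fixed budget of extra responses it will pay to reach the shorter schedule, and the cost steps up after each switch and back down after each refusal. All names and numbers are illustrative.

```python
# Minimal sketch of the titration procedure (illustrative, not from the paper).
# A simulated subject pays the switching cost whenever it is within budget.

def titrate_preference(max_pay, step=2, start_cost=2, trials=50):
    """Raise the switching cost after each switch and lower it after each
    refusal, until the cost hovers near the subject's 'price' of preference."""
    cost = start_cost
    for _ in range(trials):
        if cost <= max_pay:      # subject still pays to reach the short schedule
            cost += step         # so the ante on the switching key goes up
        else:                    # subject stays put instead of switching
            cost -= step         # so the ante comes back down
    return cost                  # equilibrium ~ the value of the preference

# A subject willing to pay up to 20 extra responses settles near 20.
print(titrate_preference(max_pay=20))
```

The equilibrium oscillates within one step of the subject's true "price", which is exactly why the paper treats that settling point as a dependent variable.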

02

What they found

The paper gives no data; it only shows the math and the apparatus.

Still, it shows how to turn "likes more" into a measurable amount of extra work, a first for behavior analysis.

03

How this fits with other research

Harper et al. (2021) took the same titration idea into a preschool room. They scaled the price of attention by asking kids to press a paddle more and more. Conversation stayed strong even when the ratio grew, showing the method works with children.

Bugallo et al. (2018) built a new VI schedule that keeps the chance of payoff flat for up to 2T seconds. This tweak answers Verhave's call for cleaner tools to measure value, updating the 1963 blueprint 55 years later.

Fox et al. (2001) showed sea lions learn equivalence classes faster when each class brings its own type of fish. The team used reinforcer type as the value dial, echoing Verhave's trick of letting the reinforcer carry the measurement.

04

Why it matters

You now have a quick way to rank reinforcers without guessing. Run a brief switching task, raise the switch cost until choice locks in, and you have a number you can compare across days or clients. Try it next time you wonder whether bubbles or iPad time packs more punch.

→ Action: try this Monday

Pick two reinforcers, let the client switch between them, and add one second of switch delay after each switch until one item wins three times in a row; record that delay as the value score.
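The scoring rule above can be written as a short helper. This is a hypothetical sketch for session logging, assuming you record each choice as "A" or "B" and add one second of delay after every switch; the function name and data format are mine, not the paper's.

```python
# Hypothetical scorer for the Monday procedure (illustrative adaptation).
def value_score(choices, step=1.0):
    """Return the switch delay (in seconds) in effect when one item
    first won three choices in a row, or None if no item locked in."""
    delay, streak, last = 0.0, 0, None
    for pick in choices:
        if pick == last:
            streak += 1          # same item chosen again
        else:
            if last is not None:
                delay += step    # add one second of cost after each switch
            streak, last = 1, pick
        if streak == 3:
            return delay         # the delay at which choice locked
    return None                  # no winner yet; keep titrating

# Two switches happened before item A won three in a row -> score 2.0 s.
print(value_score(["A", "B", "A", "A", "A"]))
```

A higher score means the client tolerated a longer delay before settling, which is the "number you can compare across days or clients" described above.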

02 At a glance

Intervention: not applicable
Design: single case, other
Population: not specified
Finding: not reported

03 Original abstract

Only one of two keys reinforces the subject with food. This key can assume one of two colors, each associated with a different fixed-ratio schedule for obtaining reinforcement. The function of the second key is to permit the animal to switch from the long schedule to the short schedule. If the difference between the ratio schedules is large enough, a preference for the shorter schedule is demonstrable. A quantitative index of preference is obtained as follows: each time the animal switches to the shorter schedule, the number of pecks required to produce the next switch is increased. As the "ante" on the switching key increases, the effective difference between the two ratio schedules decreases. After each food reinforcement, when the bird is exposed to the choice-situation, it takes longer before the bird switches again. This is used to "titrate" the bird's preference. If it does not switch within x sec, the progressively increasing ratio schedule of the switching key is decreased. A specific value, in terms of a rather specific number of responses the bird settles at on the choice key, is obtained. This equilibrium is employed as a dependent variable. Several variables of which it is a function are explored.

Journal of the Experimental Analysis of Behavior, 1963 · doi:10.1901/jeab.1963.6-525