ABA Fundamentals

A molar theory of reinforcement schedules.

Rachlin (1978) · Journal of the Experimental Analysis of Behavior
★ The Verdict

Molar equations predict choice on paper, but local contingencies and delay discounting can override the math.

✓ Read this if you're a BCBA who runs concurrent or chained reinforcement schedules in clinic or lab.
✗ Skip if you're a practitioner who only uses simple fixed-ratio or fixed-interval programs.

01 Research in Context

01

What this study did

Rachlin (1978) derived equations that predict how an animal splits its time between two levers, each paying off on a different schedule. The animal is assumed to pick the mix that yields the most total reward.

The paper is pure theory; no rats or pigeons were run. The formulas link response duration, reinforcer value, and a feedback equation tying reinforcement to time allocation, under the assumption that subjects allocate time to maximize value.
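The molar idea can be sketched in a few lines: pick the time split between two concurrent alternatives that maximizes total reinforcers per minute. The saturating feedback function below is an illustrative assumption for a variable-interval (VI) payoff, not Rachlin's exact equation.

```python
# Minimal sketch of molar maximization on two concurrent VI schedules.
# The feedback function is an assumed diminishing-returns curve.

def obtained_rate(t_frac, programmed_rate, c=0.5):
    """Reinforcers/min earned when a fraction t_frac of session time
    goes to an alternative with the given programmed rate."""
    return programmed_rate * t_frac / (t_frac + c)

def best_split(rate_a, rate_b, steps=1000):
    """Grid-search the time allocation to A that maximizes total payoff."""
    best_p, best_v = 0.0, float("-inf")
    for i in range(steps + 1):
        p = i / steps
        v = obtained_rate(p, rate_a) + obtained_rate(1.0 - p, rate_b)
        if v > best_v:
            best_p, best_v = p, v
    return best_p

# VI 20 s (3 reinforcers/min) vs VI 60 s (1/min): the model sends most
# of the time to the richer lever, near the matching proportion of 0.75.
split = best_split(3.0, 1.0)
```

Under these assumed parameters the predicted split lands close to the matching-law prediction, which is the sense in which maximization "fits most concurrent-schedule data."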

02

What they found

The equations show that behavior will settle where the payoff per minute is highest. If you know the schedule parameters, you can forecast the response split.

The model fits most concurrent-schedule data from the 1970s.

03

How this fits with other research

Aman et al. (1993) contradict the story. Rats learned long lever runs that cut their overall payoff. Local contingencies, not molar rates, drove the behavior. The clash warns you to watch moment-to-moment cues, not just averages.

Pisacreta (1982) extends the idea to chained schedules, writing new math for choice at each link. Attwood et al. (1988) stretch the model in another direction by showing that delayed reinforcers lose value: their pigeons slowed down when all food came at the end of the session. The 1978 equations still work if you insert a time-discount factor.
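The time-discount repair can be sketched with a hyperbolic form, V = A / (1 + kD). The k value and the delays below are illustrative assumptions, not parameters reported by Attwood et al. (1988).

```python
# Hedged sketch: hyperbolic discounting of a delayed reinforcer.
# k and the delays are made-up illustrative values.

def discounted_value(amount, delay_s, k=0.05):
    """Value of a reinforcer of size `amount` delayed by delay_s seconds."""
    return amount / (1.0 + k * delay_s)

# A reinforcer delivered now keeps its full value; one held to the end
# of a 10-minute session retains only a few percent of it.
now = discounted_value(1.0, 0)
later = discounted_value(1.0, 600)
```

Plugging a factor like this into the molar value term is one way the 1978 equations can absorb the delay results.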

Vaughan (1987) piles on more doubt. Pigeons sometimes pecked the key that paid less, even though they had just shown a preference for the richer one. Value and response strength can split apart, so maximization is incomplete.

04

Why it matters

When you set up concurrent schedules, start with the 1978 molar forecast, then check for local quirks. Drop in brief extinction probes or magnitude shifts to see if the client still follows the richer schedule. If not, tighten the contingency or add a prompt. The equations give a first guess, but your data always win.

→ Action — try this Monday

Plot response rate on each alternative for 5 min, then insert a 30-s extinction probe on the richer side and see whether the client switches.
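The probe logic can be sketched with made-up numbers. The counts below are illustrative per-minute responses, not client data; "rich" is the denser schedule and "post" is recorded after the 30-s extinction probe on it.

```python
# Illustrative probe check: does responding switch to the leaner
# alternative after a brief extinction probe on the richer one?

pre  = {"rich": [12, 14, 13, 15, 14], "lean": [4, 3, 5, 4, 4]}
post = {"rich": [6, 5],               "lean": [9, 10]}

def mean(xs):
    return sum(xs) / len(xs)

# Before the probe the client follows the molar forecast...
follows_molar = mean(pre["rich"]) > mean(pre["lean"])
# ...and a switch afterward signals local control: tighten the
# contingency or add a prompt.
switched = mean(post["lean"]) > mean(post["rich"])
```

In this toy data set both flags come out True: the molar forecast held until the probe, then local contingencies took over.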

02 At a glance

Intervention: not applicable
Design: theoretical
Finding: not reported

03 Original abstract

Behavior of subjects exposed to concurrent and individual interval and ratio schedules of reinforcement may be described in terms of a set of expressions relating the value of responses to their durations, a feedback equation relating reinforcement to response duration, and the assumption that subjects allocate their time among various responses so as to maximize value.

Journal of the Experimental Analysis of Behavior, 1978 · doi:10.1901/jeab.1978.30-345