ABA Fundamentals

Discriminated response and incentive processes in operant conditioning: a two-factor model of stimulus control.

Weiss (1978) · Journal of the Experimental Analysis of Behavior
★ The Verdict

Reinforcement works by teaching clear if-then rules, not by mysteriously beefing up responses.

✓ Read this if you're a BCBA who writes skill-acquisition or behavior-reduction plans in any setting.
✗ Skip if you're an RBT who only runs written protocols without designing them.

01 Research in Context

01

What this study did

Weiss (1978) wrote a theory paper. It asked: what if reinforcement is not magic glue that sticks responses to rewards?

Instead, the paper says animals learn a simple rule: when I see this cue, that outcome will follow. The cue predicts the outcome.

The author called this a two-factor model. One factor is the learned prediction. The other is how that prediction guides action.

02

What they found

The paper collected no new data. It re-read old results through a new lens.

Old results made more sense if you drop the idea that rewards "strengthen" behavior. They fit better if you say rewards teach what follows what.

In plain words: the child does not hit the button harder because candy made the hitting stronger. The child hits because past hits predicted candy.

03

How this fits with other research

Herrnstein et al. (1979) took the idea and gave it numbers. They wrote a rate equation that adds "reinforcer power" to the old rate rule. This turns the 1978 story into a math tool you can test.
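As an illustration of the kind of rate rule meant here (the exact equation in the cited 1979 paper may differ), Herrnstein's classic hyperbola relates response rate to reinforcement rate:

B = kR / (R + Re)

where B is response rate, R is the rate of reinforcement for the target response, k is the maximum possible response rate, and Re stands for all other ("extraneous") reinforcement in the situation. Adding a term like Re is one way a "reinforcer power" idea enters a testable rate equation: the same reward matters less when it competes with richer background reinforcement.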

Davison et al. (1984) later showed that apparent matching on concurrent VI VR schedules can be mimicked by the way the schedule feeds back on behavior. This sounds like a fight, but it is not. The 1978 paper says matching should rest on clear predictions. The 1984 paper shows messy schedules hide those predictions. Both agree: clean cues matter.

Alba et al. (1972) ran birds on concurrent VI schedules. When change-over delays grew long, matching fell apart. This early clue fits the 1978 view: long delays blur the cue-outcome link.
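The matching results above are usually summarized with the generalized matching law (Baum, 1974); the cited papers may use different parameterizations, but this minimal sketch shows how "blurred" cue-outcome relations surface in the data as undermatching (sensitivity below 1):

```python
import math

def log_behavior_ratio(r1, r2, a=1.0, log_b=0.0):
    """Generalized matching law: log(B1/B2) = a * log(R1/R2) + log b.

    r1, r2: obtained reinforcement rates on the two alternatives.
    a: sensitivity. a = 1 is strict matching; a < 1 (undermatching)
       is one way degraded cue-outcome relations show up.
    log_b: bias toward one alternative (0 = unbiased).
    """
    return a * math.log10(r1 / r2) + log_b

# Strict matching on a 40-vs-10 reinforcement ratio:
strict = log_behavior_ratio(40, 10, a=1.0)   # ≈ 0.60 (log10 of 4)

# Undermatching, as when long change-over delays blur the cues:
under = log_behavior_ratio(40, 10, a=0.8)    # ≈ 0.48, flatter preference
```

Plotting the behavior ratio against the reinforcement ratio on log-log axes gives a line with slope a, which is how sensitivity is typically estimated from real data.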

04

Why it matters

Stop telling clients that rewards "make" behavior stronger. Tell them the reward teaches a rule: when you do X, Y happens. Then make that rule crystal clear. Use tight delays, clear cues, and steady schedules so the learner knows at a glance what comes next.

→ Action — try this Monday

Pick one target behavior. Add a 0.5-second delay between the response and the reinforcer. Watch whether responding gets less crisp. Then tighten the delay and see the rule snap back into focus.
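To see why even a half-second matters, here is a toy model (my assumption for illustration, not from the paper): treat the strength of the learned response-reinforcer prediction as decaying exponentially with the delay, using a hypothetical per-second decay rate:

```python
import math

def predictive_value(delay_s, base_value=1.0, decay=1.5):
    """Toy model of a learned 'do X -> get Y' prediction.

    delay_s: seconds between response and reinforcer.
    decay: hypothetical per-second discount rate (an assumed
           parameter, chosen only to illustrate the trend).
    """
    return base_value * math.exp(-decay * delay_s)

immediate = predictive_value(0.0)   # 1.0: the rule is maximally clear
half_sec = predictive_value(0.5)    # ≈ 0.47: already noticeably weaker
two_sec = predictive_value(2.0)     # ≈ 0.05: the rule is nearly invisible
```

The exact numbers are arbitrary; the point is the shape of the curve. Small delays cost clarity fast, which is why tightening response-reinforcer timing is the first lever to pull.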

02 At a glance

Intervention: not applicable
Design: theoretical
Finding: not reported

03 Original abstract

Behavior analysis has often simultaneously depended upon and denied an implicit, hypothetical process of reinforcement as response strengthening. I discuss what I see as problematic about the use of such an implicit, possibly inaccurate, and likely unfalsifiable theory and describe issues to consider with respect to an alternative view without response strengthening. In my take on such an approach, important events (i.e., "reinforcers") provide a means to measure learning about predictive relations in the environment by modulating (i.e., inducing) performance dependent upon what is predicted and the relevant motivational mode or behavioral system active at that time (i.e., organismic state). Important events might be phylogenetically important, or they might acquire importance by being useful as signals for guiding an organism to where, when, or how currently relevant events might be obtained (or avoided). Given the role of learning predictive relations in such an approach, it is suggested that a potentially useful first step is to work toward formal descriptions of the structure of the predictive relations embodied in common facets of operant behavior (e.g., response-reinforcer contingencies, conditioned reinforcement, and stimulus control). Ultimately, the success of such an approach will depend upon how well it integrates formal characterizations of predictive relations (and how they are learned without response strengthening) and the relevant concomitant changes in organismic state across time. I also consider how thinking about the relevant processes in such a way might improve both our basic science and our technology of behavior.

Journal of the Experimental Analysis of Behavior, 1978 · doi:10.1901/jeab.1978.30-361