Human sensitivity to reinforcement: A comment on Kollins, Newland, and Critchfield's (1997) quantitative literature review.
Tighter lab controls and weak reinforcers can make humans look insensitive to contingencies—loosen the controls and sensitivity shows.
Research in Context
What this study did
Feldman et al. (1999) wrote a short critique. They looked at a big review that said humans react less to rewards than animals do.
The authors said, "Wait—maybe the lab setup, not the species, causes the gap." They listed ways tight controls can hide human learning.
What they found
The critique reports no new data. Instead, the authors argue that extra rules, small rewards, and dull tasks make people look insensitive.
When studies remove social hints and use tiny pay, human scores drop. The paper says this is a design flaw, not a brain flaw.
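The "sensitivity" at issue is typically the slope parameter *a* of the generalized matching law, which the underlying review used to compare species (the equation itself is not spelled out in this summary):

```latex
% Generalized matching law (Baum, 1974):
% B_1, B_2 = responses on two alternatives; R_1, R_2 = reinforcers earned.
\log\frac{B_1}{B_2} = a\,\log\frac{R_1}{R_2} + \log b
% a = 1 means perfect matching; a < 1 means undermatching, i.e. the
% "reduced sensitivity" the review attributes to humans; b captures bias.
```

A flatter slope in human studies is what reads as "insensitivity" in the cross-species comparison, which is exactly the number the critique says lab design can depress.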
How this fits with other research
Buskist et al.'s (1988) lab-lore paper backs the claim. It gives checklists for cleaner human operant work, matching the call for tighter yet fair methods.
Sarber et al. (1983) showed that college students do seek out stimuli linked to rewards. Their positive data support the idea that humans can show strong conditioned reinforcement when the setup is clear.
Dougherty et al. (1994) showed that social instructions swing choices even when the odds are unfavorable. This fits the critique: words, not just pay-offs, steer humans, which is another reason cross-species comparisons can mislead.
Why it matters
Before you label a client "unmotivated," check your program, not the learner. Strip needless rules, pick meaningful rewards, and add salient cues. Small design tweaks can uncover the learner’s true sensitivity to reinforcement, saving you from needless program changes.
Add a highly preferred item and a clear cue next to the target response, then watch if rates rise before you increase demand.
Original abstract
In a quantitative review of human operant experiments, Kollins, Newland, and Critchfield (1997) found that humans are less sensitive to reinforcement contingencies than nonhumans are. Human performances were not as consistent with the matching law, and they were more variable from subject to subject. Some of the variables correlated with reduced human sensitivity were surprising. These included collection of the data under more controlled conditions (laboratory rather than naturalistic settings), and inclusion of discriminative stimuli correlated with alternative sources of reinforcement. We discuss these unexpected findings in the light of criticisms that have been leveled against meta-analytic literature reviews (e.g., the wisdom of grouping studies with widely diverse methods), and we suggest ways of improving future analyses of the behavior-analytic literature.
The Behavior Analyst, 1999 · doi:10.1007/BF03391976