ABA Fundamentals

Control of myoelectrical responses through reinforcement.

Laurenti-Lions et al. (1985) · Journal of the Experimental Analysis of Behavior
★ The Verdict

Tiny thumb-twitches can be learned with clear, immediate reinforcement, but shifting from binary to graded, sliding-scale payoffs cuts conditioning from 78% to 31%.

✓ Read this if you are a BCBA teaching micro-movements or building new communicative responses in clinic or rehab settings.
✗ Skip if you work only with large, already-established behaviors like toilet use or mealtime routines.

01 Research in Context

01

What this study did

Laurenti-Lions et al. (1985) asked if a tiny thumb-twitch could be learned. The twitch is so small you can't feel it. They taped a sensor to one adult's thumb and, replicating Hefferline, Keenan, and Harford (1959), made each twitch terminate or postpone an aversive noise.

Later they changed the payoff. Instead of an all-or-nothing, binary stimulus change for every twitch, reinforcement became analog: graded stimulus changes that scaled with responding. They also ran a control experiment in which the same stimulus changes were presented, but independently of the person's responses.
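The logic of the response-independent control can be sketched as a toy simulation. The learning rule, probabilities, and trial counts below are illustrative assumptions, not the paper's model or data; the point is only that a contingent consequence strengthens the response while the same events delivered independently of behavior do not.

```python
import random

def twitch_rate(contingent, trials=500, seed=1):
    """Toy model of the design (illustrative only, not the paper's data).

    Each trial the learner may emit a twitch. Under contingent
    reinforcement, a reinforced twitch raises the response probability
    and an unreinforced trial lowers it slightly. Under the
    response-independent (control) condition, stimulus changes occur
    but carry no relation to responding, so the probability stays
    at baseline.
    """
    rng = random.Random(seed)
    p = 0.2                          # assumed baseline response probability
    count = 0
    for _ in range(trials):
        responded = rng.random() < p
        count += responded
        if contingent and responded:
            p = min(0.9, p + 0.05)   # strengthen the reinforced response
        elif contingent:
            p = max(0.05, p - 0.005) # mild weakening on non-response trials
        # control: stimulus changes are delivered regardless of behavior,
        # so p is left unchanged
    return count / trials
```

Running both conditions with the same seed shows a much higher twitch rate under contingent delivery, mirroring the study's contrast between Experiments 1 and 2.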

02

What they found

The thumb-twitch rate shot up when every twitch paid, and the increase was absolute: it climbed beyond the baseline range, not just in a consistent direction. When the binary payoff turned into graded, analog stimulus changes, conditioning fell from 78% to 31%.

In the response-independent control, the rate varied over a large range with no systematic relation to the stimulus changes. The rise in the contingent condition exceeded that range of variation, so the gain was not random drift or fatigue. Reinforcement, not chance, drove the twitch.

03

How this fits with other research

Badia et al. (1972) got the same fast gain with pigeons. Birds heard tones from the left or right. Differential reinforcement gave food only for pecking the key on the correct side, and accuracy climbed sharply within two sessions. Both studies show that clear, immediate consequences teach even tiny or new responses.

Horner (1971) seems to disagree. He gave rats food every 30 seconds no matter what they did, and their lever pressing fell: non-contingent reinforcement hurt responding. Laurenti-Lions et al. used contingent reinforcement, so their twitch rate rose. The two papers fit once you see that the contingency is what matters.

Hamilton et al. (1978) used brief lights as secondary reinforcers. Those lights kept rats pressing during long waits. Laurenti-Lions et al. used contingent stimulus changes the same way. Both labs show that even small signals can carry reinforcing power once paired with a consequence that already matters.

04

Why it matters

You now know that micro-responses can be shaped. This opens the door to teaching subtle movements like eyebrow raises for communication or small muscle activations during rehab. Keep the payoff clear and immediate at first. Fade to thinner schedules only after the response is strong. If the rate drops, check whether your reinforcement still feels contingent and binary to the learner.

→ Action — try this Monday

Start each new micro-response with a 1:1 payoff and a salient cue; switch to leaner schedules only after three stable sessions.
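The Monday action above can be made concrete with a small helper that flags when a learner is ready for a leaner schedule. The stability criterion used here (each of the last three session rates within 10% of their mean) is an illustrative assumption, not a rule from the paper:

```python
def ready_to_thin(session_rates, stability_window=3, tolerance=0.10):
    """Return True when the last `stability_window` session rates are stable.

    Stability (an illustrative criterion, not from the study) means every
    rate in the window falls within `tolerance` (as a proportion) of the
    window's mean. Until that holds, keep the 1:1 payoff in place.
    """
    if len(session_rates) < stability_window:
        return False
    window = session_rates[-stability_window:]
    mean = sum(window) / len(window)
    return all(abs(rate - mean) <= tolerance * mean for rate in window)
```

For example, `ready_to_thin([12, 30, 41, 42, 40])` returns True because the last three sessions sit tightly around their mean, while `ready_to_thin([12, 30, 41])` returns False because the rate is still climbing.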

02 At a glance

Intervention
other
Design
single case
Population
neurotypical
Finding
positive
Magnitude
medium

03 Original abstract

A classic experiment by Hefferline, Keenan, and Harford (1959) showed that small thumb-twitches, imperceptible to the subject, can be controlled by the consequences of terminating and/or postponing aversive noise. These findings were further investigated in three experiments reported here. Experiment 1 replicated the original study. Experiment 2 was a control study in which stimulus changes were presented as in Experiment 1, but independently of the responses. Under these conditions the response rate varied over a large range with no systematic relation to experimental events. The increments in response rate reported by Hefferline et al. were within the present range of variation, suggesting that conditioning in the earlier study may have reflected a consistency in the direction of change rather than an increase in rate beyond the baseline range. In the present experiment, however, the rate increase was absolute. In Experiment 3, analog rather than binary changes in stimulus conditions were used as reinforcement. Under these conditions, the rates of subjects whose responses were conditioned fell from 78% (in the previous experiment) to 31%.

Journal of the Experimental Analysis of Behavior, 1985 · doi:10.1901/jeab.1985.44-185