ABA Fundamentals

Integrated delays to shock as negative reinforcement.

Lewis et al. (1976) · Journal of the Experimental Analysis of Behavior
★ The Verdict

Delaying an aversive event can reinforce behavior even when the total number of aversives stays the same.

✓ Read this if you are a BCBA who uses escape or avoidance procedures in clinics or classrooms.

✗ Skip if you work only with positive-reinforcement programs and never use breaks or delays.

01 Research in Context

01

What this study did

The team placed rats in a box with a lever.

Each lever press delayed the next electric shock.

Total shocks stayed the same; only timing changed.

They asked: will rats still learn to press if shocks are not reduced?
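The schedule logic can be sketched as a toy simulation (all parameter values and function names here are illustrative assumptions, not the authors' actual procedure code): shocks arrive on a fixed-time baseline, and a lever press keeps the total shock count identical while pushing later shocks toward the end of a fixed-length alternate period.

```python
# Toy sketch of the Lewis et al. (1976) schedule logic (Experiment II style).
# Hypothetical parameters: 30-s fixed-time shocks, 120-s alternate period.

def shock_times(duration_s=240, interval_s=30):
    """Baseline: a shock every `interval_s` seconds, no responding."""
    return list(range(interval_s, duration_s + 1, interval_s))

def shock_times_after_response(press_at_s, duration_s=240, interval_s=30,
                               alternate_s=120):
    """After a press at `press_at_s`, the first scheduled shock in the
    alternate period stays where it was; the remaining shocks due in that
    period are delayed to its final seconds. Total count is unchanged."""
    baseline = shock_times(duration_s, interval_s)
    window_end = press_at_s + alternate_s
    in_window = [t for t in baseline if press_at_s < t <= window_end]
    outside = [t for t in baseline if t <= press_at_s or t > window_end]
    if not in_window:
        return baseline
    first = in_window[0]            # first shock is not delayed
    n_rest = len(in_window) - 1
    # pack the remaining shocks into the last seconds of the window
    delayed = [window_end - (n_rest - 1 - i) for i in range(n_rest)]
    return sorted(outside + [first] + delayed)

base = shock_times()
resp = shock_times_after_response(press_at_s=10)
# Same total number of shocks; only their timing differs.
assert len(base) == len(resp)
# Integrated (summed) delay to shock is longer after a press:
assert sum(resp) > sum(base)
```

The point the simulation makes is the study's core manipulation: responding buys no reduction in shock frequency, only a longer integrated delay to each shock, and that alone sustained lever pressing.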

02

What they found

The rats quickly learned to press.

Delay alone kept the behavior strong.

This showed that postponing shock can reinforce behavior even when the shock count stays flat.

03

How this fits with other research

Sidman (1962) argued that shock-frequency reduction is required for avoidance.

Lewis et al. (1976) overturn that idea: delay alone can maintain responding.

Gardner et al. (1976) ran a near-copy study the same year and got the same result, giving direct replication.

Wheatley et al. (1978) later showed pigeons match their responses to relative shock-free time, extending the principle to choice.

04

Why it matters

If you use escape or avoidance in treatment, focus on the delay, not just cutting total demands.

For example, let a brief break push the next hard task back a minute; the break itself can reinforce even if task count stays the same.

→ Action: try this Monday

Next time a learner asks for a break, give a short, timed delay before the next demand and track whether break requests drop; you are testing delay as a reinforcer.

02 At a glance

Intervention: other
Design: single-case (other)
Finding: positive

03 Original abstract

Rats were shocked at the rate of two per minute until they pressed a lever. In Experiment I, shocks were delivered at variable-time intervals averaging 30 sec; in Experiment II, shocks were delivered at fixed-time intervals of 30 sec. A response produced an alternate condition for a fixed-time period. The shock frequency following a response, calculated over the whole alternate condition, was two per minute. The pattern of shocks in the alternate condition was controlled so that the first shock occurred at the same time as it would have occurred had the response not been emitted; the remaining shocks were delayed until near the end of the alternate condition. Bar pressing was acquired in both experiments. This finding is not explained by two-factor theories of avoidance and is inconsistent with the notion that overall shock-frequency reduction is necessary for negative reinforcement. The data imply that responding is determined by the integrated delays to each shock following a response versus the integrated delays to shock in the absence of a response.

Journal of the Experimental Analysis of Behavior, 1976 · doi:10.1901/jeab.1976.26-379