Can a decay process explain the timing of conditioned responses?
Decay models cannot produce the natural scalar spread in response times; the variability must come from memory read-out instead.
Research in Context
What this study did
The author examined mathematical models that use decay to explain when animals respond, asking whether these models can account for the scatter seen in real response times.
The work used equations and simulations rather than live subjects. The goal was to test whether a fading memory trace could produce the typical spread in latencies.
What they found
Pure decay models failed. They could not reproduce the scalar property: the fact that the spread of latencies grows in proportion to the target time.
The math showed that decay adds the wrong kind of noise. It makes timing either too precise or too variable, never the constant proportion of spread to mean that we see in data.
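A minimal simulation sketch of that argument (illustrative only, not the paper's actual model; the decay rate k, the noise levels, and the 0.25 Weber fraction are assumed values):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000  # trials per condition

def decay_additive(target, k=0.5, sd=0.01):
    """Decaying trace m(t) = exp(-k*t); the animal responds when the trace
    reaches a stored threshold. Additive noise on the threshold read-out."""
    theta = np.exp(-k * target) + rng.normal(0.0, sd, N)
    theta = np.clip(theta, 1e-6, None)   # keep the log defined
    return -np.log(theta) / k            # trace-crossing time

def decay_proportional(target, k=0.5, frac=0.10):
    """Same trace, but read-out noise proportional to the threshold."""
    theta = np.exp(-k * target) * np.exp(rng.normal(0.0, frac, N))
    return -np.log(theta) / k

def scalar_readout(target, weber=0.25):
    """Scalar memory read-out: noise proportional to the target itself."""
    return target * (1.0 + rng.normal(0.0, weber, N))

models = (decay_additive, decay_proportional, scalar_readout)
print("target | decay+additive | decay+proportional | scalar read-out")
for T in (2.0, 4.0, 8.0):
    cvs = []
    for f in models:
        t = f(T)
        cvs.append(t.std() / t.mean())   # coefficient of variation
    print(f"{T:5.1f}s | CV={cvs[0]:.3f}      | CV={cvs[1]:.3f}          | CV={cvs[2]:.3f}")
```

Run this and the decay clocks give a coefficient of variation (CV) that shrinks across targets under proportional noise and blows up under additive noise, while the scalar read-out keeps the CV pinned near 0.25 at every target, the pattern real latency data show.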
How this fits with other research
Rapport et al. (1996) showed pigeons remember several recent intervals, not just the last one. Their memory-weighting idea fits the new view that read-out, not fading traces, sets variability.
Capaldi (1992) advocated nonlinear dynamics for behavior. Parmenter (1999) agreed that nonlinear tools help but warned that simple decay is the wrong kind of nonlinearity.
Delano (2007) found hyperbolic decay in reinforcer value. This looks like a clash, yet that study tested choice, not latency spread. Because the measures differ, both papers can be right.
Why it matters
When you graph latency data, expect the width of the distribution to scale with the mean. Do not blame a fading cue or a decaying memory trace; instead, look at how the animal samples its memory of the target time. Build teaching programs that provide clear, stable reference times, and treat scalar spread as a healthy sign rather than an error.
A quick check: plot the mean and SD of your learner's wait times; if the SD is about one quarter of the mean, you are seeing normal scalar timing, and there is no need to switch cues.
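A minimal way to run that check (the latency values below are made-up example data):

```python
import numpy as np

# Hypothetical wait times (seconds) for one learner at a single target time.
latencies = np.array([2.4, 4.3, 3.0, 4.8, 2.7, 3.9, 2.2, 4.5])

mean = latencies.mean()
sd = latencies.std(ddof=1)   # sample standard deviation
cv = sd / mean               # coefficient of variation

print(f"mean = {mean:.2f} s, SD = {sd:.2f} s, CV = {cv:.2f}")
# A CV near 0.25 is ordinary scalar timing; a CV that changes sharply
# across target times is the thing worth investigating.
```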
Original abstract
To explain time‐scale invariant distributions of response latencies, it appears to be necessary to postulate scalar noise in the remembered intervals, against which the subjective measure of the currently elapsing interval is compared. At least in some cases, the observed variability cannot be due to variability in the subjective intervals written to memory; it must come from noise (variability) in the reading of a memory. The Staddon and Higa proposal offers no explanation for the observed variability, and it is unclear what noise assumption would yield the observed variability, given their assumption that intervals are timed by a nonlinear decay process. The decay process cannot plausibly be represented by the logarithmic function, because it begins and ends at infinity. The assumption of any form of nonlinear timing is inconsistent with the most important result of the time‐left experiment, which is that the changeover time increases linearly with the comparison‐standard difference.
Journal of the Experimental Analysis of Behavior, 1999 · doi:10.1901/jeab.1999.71-264
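One way to see the abstract's objection to a logarithmic trace (the notation here is assumed for illustration, not taken from the paper): a trace of the form

$$m(t) = -k \,\ln\!\left(\frac{t}{\tau}\right)$$

satisfies $m(t) \to +\infty$ as $t \to 0^{+}$ and $m(t) \to -\infty$ as $t \to \infty$. Such a trace has no finite starting value and no resting level, which is what the abstract means by a function that 'begins and ends at infinity.'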