The generalization‐across‐dimensions model applied to conditional temporal discrimination
Reinforcer location can secretly control temporal choices, so keep food placement constant when teaching duration-based discriminations.
01 · Research in Context
What this study did
Davison et al. (2024) tested pigeons in a conditional-discrimination task in which the duration of a dark (blackout) interval signaled which choice key would pay off. The birds had to pick the comparison key that matched whether the blackout was short or long to earn food.
The team tweaked the generalization-across-dimensions model so it could handle this time-based choice. They ran two lab experiments and checked how well the new equations predicted which key the birds would peck.
What they found
The updated model described the data from each experiment well. It tracked how often the birds chose the short or long key across different food-payoff ratios.
But when they used the parameter values from Experiment 1 to forecast choice in Experiment 2, the predictions fell apart. The reason: control by the location of the food reinforcer faded as blackout durations grew in Experiment 1 but stayed constant in Experiment 2, so the same parameters no longer applied.
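The paper's actual equations aren't reproduced here, but the logic of the failure can be illustrated with a deliberately simplified toy model (all function names, parameter values, and the logistic form below are hypothetical, not the published generalization-across-dimensions equations): if choice depends jointly on the time cue and a hidden location-bias term, parameters estimated in one setup will mispredict another setup whose location bias differs.

```python
import math

def p_long(duration, bias_log_ratio, sensitivity, t_mid, location_bias=0.0):
    """Toy sketch of combined stimulus and reinforcer control (hypothetical,
    NOT the published model): probability of choosing the 'long' key rises
    as a logistic function of log duration, shifted by a reinforcer-ratio
    bias and by an extra location-bias term."""
    x = sensitivity * (math.log(duration) - math.log(t_mid))
    return 1.0 / (1.0 + math.exp(-(x + bias_log_ratio + location_bias)))

# Parameters "fitted" in setting 1 (location_bias assumed zero there):
fitted = dict(bias_log_ratio=0.0, sensitivity=2.0, t_mid=4.0)

# Same time cue, but a nonzero hidden location bias in setting 2
# shifts every prediction, so the setting-1 parameters mispredict.
pred_setting1 = p_long(8.0, **fitted)                      # ~0.80
pred_setting2 = p_long(8.0, **fitted, location_bias=1.0)   # noticeably higher
print(pred_setting1, pred_setting2)
```

The point of the sketch is only that an unmodeled additive bias term moves the whole choice function, which is one way a model can fit each dataset well yet fail to transfer between them.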
How this fits with other research
The result lines up with Campbell (2003), who showed that extreme payoff ratios make pigeons ignore the sample and lock onto position. Both papers warn that reinforcer location can quietly warp matching performance.
It also echoes Collier et al. (1986): when stimulus location drifts, some discriminations break down while others survive. Davison’s team now adds ‘timing’ to the list of features that can collapse if reinforcer placement shifts.
Finally, the failure to predict across setups looks like the context effects seen in Ferrari et al. (1991). Just as pigeons treat the same odor cue differently in different chambers, they treat the same time cue differently when food moves.
Why it matters
If you teach conditional discriminations—whether colors, shapes, or durations—lock down reinforcer location early and keep it fixed across sessions. A tiny move of the treat bag or the token dispenser can undo the stimulus control you just built. Before you brag that a kid has ‘mastered’ a temporal discrimination, run a quick probe with the reward in a new spot; if accuracy drops, you’ve found hidden control by place, not time.
Tape a small marker on the table where you hand the edible; do not vary that spot until the skill is mastered.
02 · At a glance
03 · Original abstract
Can simple conditional-discrimination choice be accounted for by recent quantitative models of combined stimulus and reinforcer control? In Experiment 1, two sets of five blackout durations, one using shorter intervals and one using longer intervals, conditionally signaled which subsequent choice response might provide food. In seven conditions, the distribution of blackout durations across the sets was varied. An updated version of the generalization-across-dimensions model nicely described the way that choice changed across durations. In Experiment 2, just two blackout durations acted as the conditional stimuli and the durations were varied over 10 conditions. The parameters of the model obtained in Experiment 1 failed adequately to predict choice in Experiment 2, but the model again fitted the data nicely. The failure to predict the Experiment 2 data from the Experiment 1 parameters occurred because in Experiment 1 differential control by reinforcer locations progressively decreased with blackout durations, whereas in Experiment 2 this control remained constant. These experiments extend the ability of the model to describe data from procedures based on concurrent schedules in which reinforcer ratios reverse at fixed times to those from conditional-discrimination procedures. Further research is needed to understand why control by reinforcer location differed between the two experiments.
Journal of the Experimental Analysis of Behavior, 2024 · doi:10.1002/jeab.914