Preference pulses without reinforcers.
Clean your preference pulse with a residual correction, or you may credit reinforcement for a pattern that visit structure alone can produce.
Research in Context
What this study did
Sawyer et al. (2014) built a computer model that simulates choice data.
The model shows preference pulses even when no reinforcer is delivered.
They offer a simple mathematical fix that strips the artifactual part out of the pulse.
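The flavor of the simulation can be sketched in a few lines of Python. This is a toy version, not the authors' code: visit lengths here are drawn from a truncated normal distribution (the paper varied the means and standard deviations of such distributions), and the concurrent VI schedules are approximated by a fixed per-response reinforcer probability. Crucially, the "reinforcers" have no effect on the response sequence, so any pulse the analysis finds is artifactual:

```python
import random

random.seed(1)

def simulate_responses(mean_a, mean_b, sd, n_visits=5000):
    """Build a response sequence from visit-length distributions alone:
    alternate visits to A and B, each visit's length drawn at random."""
    seq = []
    for _ in range(n_visits):
        for side, mean in (("A", mean_a), ("B", mean_b)):
            length = max(1, round(random.gauss(mean, sd)))
            seq.extend([side] * length)
    return seq

def preference_pulse(seq, p_reinf=0.05, window=20):
    """Proportion of A responses at each lag after a 'reinforcer' for an
    A response. Reinforcers are assigned at random and change nothing,
    so the pulse below comes purely from visit structure."""
    totals = [0] * window
    a_counts = [0] * window
    for i in range(len(seq) - window):
        if seq[i] == "A" and random.random() < p_reinf:
            for lag in range(window):
                totals[lag] += 1
                if seq[i + 1 + lag] == "A":
                    a_counts[lag] += 1
    return [a / t for a, t in zip(a_counts, totals)]

pulse = preference_pulse(simulate_responses(8.0, 8.0, 2.0))
# Choice is strongly biased toward A right after a reinforcer and decays
# with lag, even though the reinforcers had no effect on behavior at all.
```

Plotting `pulse` against lag reproduces the familiar pulse shape; changing the visit-length means and standard deviations changes the shape, which is the paper's point.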
What they found
The pulse you see on your graph may be just a visit-length trick.
Subtract the artifact before you claim the reinforcer did the work.
How this fits with other research
Malone (1999) and Cameron et al. (1996) say local reinforcers drive stay-or-switch choices.
Sawyer et al. answer: "Not always—our model makes pulses without any."
Hachiga et al. (2015) kept the correction and still found a tiny real pulse.
They added a win-stay rule, showing some local control remains after you clean the data.
Why it matters
When you run a concurrent schedule, check your pulse before you call it reinforcement.
Run the residual correction in Excel first.
If the pulse survives the math, you can trust it and teach it.
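Excel works fine, but the correction itself is just element-wise subtraction, so any tool will do. A minimal Python sketch; the observed and artifact values below are hypothetical numbers for illustration, not data from the paper:

```python
# Hypothetical post-reinforcer choice proportions, purely for illustration.
observed = [0.95, 0.90, 0.85, 0.80, 0.75]  # pulse measured from your data
artifact = [0.90, 0.88, 0.84, 0.80, 0.76]  # pulse predicted from visit structure alone

# Residual preference pulse: whatever is left after removing the artifact.
residual = [o - a for o, a in zip(observed, artifact)]
print([round(r, 2) for r in residual])  # prints [0.05, 0.02, 0.01, 0.0, -0.01]
```

A positive early residual, as in this made-up example, would hint at a modest real local effect; residuals near zero mean the pulse was all visit structure.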
After you graph post-reinforcer choice, subtract the artifact the paper shows before you write "local reinforcement effect" in your note.
Original abstract
Preference pulses are thought to represent strong, short-term effects of reinforcers on preference in concurrent schedules. However, the general shape of preference pulses is substantially determined by the distributions of responses-per-visit (visit lengths) for the two choice alternatives. In several series of simulations, we varied the means and standard deviations of distributions describing visits to two concurrently available response alternatives, arranged "reinforcers" according to concurrent variable-interval schedules, and found a range of different preference pulses. Because characteristics of these distributions describe global aspects of behavior, and the simulations assumed no local effects of reinforcement, these preference pulses derive from the visit structure alone. This strongly questions whether preference pulses should continue to be interpreted as representing local effects of reinforcement. We suggest an alternative approach whereby local effects are assessed by subtracting the artifactual part, which derives from visit structure, from the observed preference pulses. This yields "residual" preference pulses. We illustrate this method in application to published data from mixed dependent concurrent schedules, revealing evidence that the delivery of reinforcers had modest lengthening effects on the duration of the current visit, a conclusion that is quantitatively consistent with early research on short-term effects of reinforcement.
Journal of the Experimental Analysis of Behavior, 2014 · doi:10.1002/jeab.84