Individualized sampling parameters for behavioral observations: enhancing the predictive validity of competing stimulus assessments.
Let each client set their own observation length in competing-stimulus assessments—fixed timers miss the best items.
Research in Context
What this study did
DeLeon et al. (2005) tested a new way to run competing-stimulus assessments.
For two children with developmental disabilities, they determined an individualized sample length for each child, chosen so that brief samples statistically matched each item's effects over longer periods.
Then they compared predictions from these individualized sample lengths against predictions from the fixed five-minute samples typical of earlier studies.
What they found
The custom-length samples picked the items that later cut problem behavior best.
The fixed five-minute samples picked weaker items for both kids.
How this fits with other research
Carey et al. (2014) saw the same risk in skill graphs.
They showed that grabbing only the first few trials hides true mastery.
Together the papers warn: rigid time windows waste data.
Webb et al. (1999) found a matching win in drug work.
Cramming four single-dose sessions into one long session kept stimulus control and saved days.
All three studies say the same thing: tailor the sampling window and you keep accuracy while you save time.
Why it matters
Next time you run a competing-stimulus assessment, watch instead of clock-watch.
Let each client's behavior, not a preset timer, determine how long each item is sampled.
Those individualized sample lengths will steer you to the items that really displace problem behavior.
Run the next competing-stimulus session without a preset stopwatch; end the trial when the client stays calm for one full minute.
Original abstract
Recent studies have used pretreatment analyses, termed competing stimulus assessments, to identify items that most effectively displace the aberrant behavior of individuals with developmental disabilities. In most studies, there appeared to have been no systematic basis for selecting the sampling period (ranging from 30 s to 10 min) in which items were assessed. Unfortunately, estimates based on brief samples of behavior do not always predict the extent to which items will displace aberrant behavior over longer periods. This study first examined a method for determining an accurate individualized sample length for competing stimulus assessments, based on statistical measures of correspondence with extended effects, using a small number of items. The effects of a larger number of items were then assessed using the determined sample length. Finally, the method was validated by comparing its predictions, in terms of the reduction of problem behavior over more extended periods, to predictions based on sample durations typically used in previous investigations. For two participants, predictions based on individualized determination of sample lengths were more accurate than predictions based on typical sample lengths. These results are discussed in terms of the exchange between expediency and accuracy during competing stimulus assessments.
Research in developmental disabilities, 2005 · doi:10.1016/j.ridd.2004.09.004
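The abstract's core idea, choosing the sample length whose short-sample results best correspond with extended effects, can be sketched in a few lines. This is a hypothetical illustration with invented numbers, not the paper's actual data or analysis; the candidate window lengths, per-item reduction scores, and the use of a Pearson correlation as the "statistical measure of correspondence" are all assumptions.

```python
# Hypothetical sketch: selecting an individualized sample length by
# correlating short-sample results with extended-session results.
# All numbers below are invented for illustration.

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of floats."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Per-item proportional reduction in problem behavior, measured at several
# candidate sample lengths (minutes) and over one extended session.
short_samples = {            # window length -> reduction per item (invented)
    1:  [0.10, 0.60, 0.20, 0.70],
    5:  [0.30, 0.55, 0.25, 0.65],
    10: [0.45, 0.50, 0.40, 0.80],
}
extended = [0.40, 0.52, 0.38, 0.85]  # reduction over the full session

# Pick the window whose short-sample results best track extended effects.
best = max(short_samples, key=lambda w: pearson_r(short_samples[w], extended))
print(best)  # → 10
```

In practice, the accuracy gained from a better-matched window would be weighed against the time cost of running longer samples, the "exchange between expediency and accuracy" the abstract describes.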