Interval sampling methods and measurement error: a computer simulation.
Use Wirth et al.'s error tables to pick the interval method that adds the least noise to your data.
01 Research in Context
What this study did
Wirth et al. (2014) built a computer model that stands in for a child being observed.
The model produces short bursts of behavior separated by long quiet spells.
They ran thousands of simulated sessions to see how close momentary time sampling, partial-interval recording, and whole-interval recording come to the true amount of behavior.
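The basic idea is easy to reproduce. Below is a minimal sketch (not the authors' code) of one simulated session: a behavior stream at one-second resolution, scored with each of the three interval rules and compared with the true value. The session length, event count, event duration, and interval size are made-up illustrative values.

```python
import random

def simulate_session(length_s=600, n_events=20, event_dur_s=5, seed=1):
    """Return a per-second stream: 1 = behavior occurring, 0 = not."""
    rng = random.Random(seed)
    stream = [0] * length_s
    for _ in range(n_events):
        start = rng.randrange(length_s - event_dur_s)
        for t in range(start, start + event_dur_s):
            stream[t] = 1
    return stream

def score(stream, interval_s, method):
    """Fraction of intervals scored as 'behavior occurred' under one rule."""
    intervals = [stream[i:i + interval_s]
                 for i in range(0, len(stream), interval_s)]
    if method == "mts":      # momentary time sampling: last instant only
        hits = sum(iv[-1] for iv in intervals)
    elif method == "pir":    # partial interval: any occurrence counts
        hits = sum(any(iv) for iv in intervals)
    else:                    # "wir", whole interval: must fill the interval
        hits = sum(all(iv) for iv in intervals)
    return hits / len(intervals)

stream = simulate_session()
true_pct = sum(stream) / len(stream)
for m in ("mts", "pir", "wir"):
    est = score(stream, interval_s=10, method=m)
    print(f"{m}: estimate {est:.2f} vs true {true_pct:.2f}, "
          f"error {est - true_pct:+.2f}")
```

Running this shows the characteristic pattern: partial-interval estimates can never fall below the true percentage, whole-interval estimates can never rise above it, and momentary time sampling lands in between.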
What they found
Each method missed the mark in its own way: partial-interval recording tends to overestimate, whole-interval recording tends to underestimate, and momentary time sampling sits closest to the true value.
The team turned the misses into easy look-up tables.
Pick your session length, behavior rate, and interval size, and the table tells you which method adds the least error.
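The table-building idea can be mimicked in a few lines: repeat the simulation many times, average the absolute error for each method, and keep the minimum. A self-contained sketch, again with invented parameter values (the published tables come from the authors' own, much larger simulations):

```python
import random

def mean_abs_error(method, interval_s, n_runs=200, length_s=600,
                   n_events=20, event_dur_s=5):
    """Monte-Carlo mean absolute error of one interval rule."""
    errs = []
    for run in range(n_runs):
        rng = random.Random(run)
        stream = [0] * length_s
        for _ in range(n_events):
            s = rng.randrange(length_s - event_dur_s)
            for t in range(s, s + event_dur_s):
                stream[t] = 1
        true_pct = sum(stream) / length_s
        ivs = [stream[i:i + interval_s]
               for i in range(0, length_s, interval_s)]
        pick = {"mts": lambda iv: iv[-1], "pir": any, "wir": all}[method]
        est = sum(bool(pick(iv)) for iv in ivs) / len(ivs)
        errs.append(abs(est - true_pct))
    return sum(errs) / n_runs

# For a fixed configuration, choose the rule with the smallest average error.
best = min(("mts", "pir", "wir"),
           key=lambda m: mean_abs_error(m, interval_s=10))
print("Lowest-error method for this configuration:", best)
```

Change the session length, event rate, or interval size and the winner can change, which is exactly why the look-up tables index error by those parameters.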
How this fits with other research
Virues-Ortega et al. (2022) later watched real college students code behavior on paper or with two apps. Paper slightly beat the tech, but all three landed in the same high-accuracy range. Their live test backs up Wirth et al.'s warning: the tool matters less than picking the right interval rule.
Chou et al. (2010) showed that feedback and small payoffs can sway what an observer writes down. Wirth et al.'s tables help you cut the measurement noise first, so any leftover bias is easier to spot.
Alsop (2004) used the same Monte-Carlo trick to show that zero-error data sets fool standard stats. Both papers shout the same message: check your method before you trust the number.
Why it matters
Next time you plan 10-second partial-interval checks because “that’s what we always do,” pause. Check Wirth et al.'s tables first. You may find that momentary time sampling gives you cleaner data for the same effort. Less error means clearer graphs, faster decisions, and fewer unneeded program changes.
Print the error table, circle your usual session length and behavior rate, and switch to the method with the smallest listed error.
03 Original abstract
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
Journal of Applied Behavior Analysis, 2014