Signal-detection analyses of conditional discrimination and delayed matching-to-sample performance.
Zero-error datasets can trick your stats—use bias-free estimators, not textbook quick fixes.
01 Research in Context
What this study did
Alsop (2004) ran Monte Carlo simulations. He tested what happens when you apply standard signal-detection fixes to data with zero errors.
The study looked at conditional-discrimination and delayed-matching-to-sample tasks. These are common lab models for memory and concept learning.
What they found
The usual fixes for perfect scores build systematic bias into the numbers. They can make a learner look highly sensitive when the data actually say nothing either way.
In short, zero-error datasets can push you toward false conclusions.
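Alsop's own simulations aren't reproduced here, but a minimal sketch of the idea looks like this. It uses the Davison–Tustin log d statistic and one textbook quick fix (replace any zero cell with 0.5); the function names, trial counts, and accuracy values are illustrative assumptions, not parameters from the paper:

```python
import math
import random

# Davison & Tustin's log d: discriminability from the four cell counts
# of a conditional discrimination (correct/error on each sample).
def log_d(w1, e1, w2, e2):
    return 0.5 * math.log10((w1 * w2) / (e1 * e2))

def simulate(p, n_trials=32, n_reps=20_000, seed=1):
    """Mean log d across simulated sessions when any zero cell is
    replaced with 0.5 (one of the textbook quick fixes)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_reps):
        # count errors on each sample stimulus, given true accuracy p
        e1 = sum(rng.random() > p for _ in range(n_trials))
        e2 = sum(rng.random() > p for _ in range(n_trials))
        w1, w2 = n_trials - e1, n_trials - e2
        # the quick fix: swap any zero count for 0.5
        w1, e1, w2, e2 = (x if x else 0.5 for x in (w1, e1, w2, e2))
        total += log_d(w1, e1, w2, e2)
    return total / n_reps

true_log_d = math.log10(0.95 / 0.05)  # ≈ 1.28 when true accuracy is .95
print(true_log_d, simulate(0.95))     # compare the true value with the mean estimate
```

Running this at high accuracies, where zero-error sessions are common, lets you see how far the corrected estimates drift from the value the sessions were actually generated with.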
How this fits with other research
Chou et al. (2010) extend the warning. They showed that live human observers also drift when feedback or payoffs change. Alsop flagged bias in the math; Chou et al. showed bias in the observer.
Wirth et al. (2014) used the same Monte Carlo approach to build error tables for interval sampling. Both papers shout the same message: routine data-handling rules can distort results.
Kangas et al. (2011) seem to clash. They praise 0% commission errors in discrete-trial training (DTT), yet Alsop says 0% errors break the stats. The gap is the level of analysis. Kangas et al. tracked learner errors; Alsop tracked what happens to the math once the learner is perfect. You need both views.
Why it matters
If you run conditional-discrimination probes or DMTS memory tasks, never trust a perfect score at face value. Report raw hits and false alarms. Add a tiny constant or use a bias-free estimator. Share the method so other BCBAs can judge the numbers. Clean data today keeps your conclusions safe tomorrow.
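One less biased option mentioned above is a log-linear-style correction: add 0.5 to every cell, not just the zeros, before computing log d (the analogue of Hautus's 1995 rule for d′). The sketch below is illustrative; the function name and trial counts are assumptions, not code from the study:

```python
import math

def log_d_loglinear(w1, e1, w2, e2):
    """Discriminability (log d) with a log-linear-style correction:
    add 0.5 to every cell before taking ratios, so zero-error
    sessions stay computable without patching only the zeros."""
    w1, e1, w2, e2 = (x + 0.5 for x in (w1, e1, w2, e2))
    return 0.5 * math.log10((w1 * w2) / (e1 * e2))

# A perfect 32-trial session no longer produces a division by zero:
print(log_d_loglinear(32, 0, 32, 0))  # a finite ceiling, ≈ 1.81
```

Because every cell gets the same constant, perfect sessions land at a finite ceiling that grows with session length, which makes estimates from sessions of different sizes easier to compare.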
Switch from the old 0.5 correction to a log-linear estimator when you hit perfect scores in your next matching-to-sample probe.
02 At a glance
03 Original abstract
Quantitative analyses of stimulus control and reinforcer control in conditional discriminations and delayed matching-to-sample procedures often encounter a problem; it is not clear how to analyze data when subjects have not made errors. The present article examines two common methods for overcoming this problem. Monte Carlo simulations of performance demonstrated that both methods introduced systematic deviations into the results, and that there were genuine risks of drawing misleading conclusions concerning behavioral models of signal detection and animal short-term memory.
Journal of the Experimental Analysis of Behavior, 2004 · doi:10.1901/jeab.2004.82-57