Assessment & Research

Quantifying false positives in simulated events using partial interval recording and momentary time sampling with dual‐criteria methods

Falligant et al. (2020) · Behavioral Interventions
★ The Verdict

DC and CDC keep false positives low with interval data, but we still need to test their power to catch real effects.

✓ Read this if you're a BCBA who uses partial interval recording or momentary time sampling to judge interventions.

✗ Skip if you already use full-session continuous recording and randomization tests.

01 Research in Context

01 What this study did

Falligant et al. (2020) generated simulated data sets to test two quick decision rules: the dual-criteria (DC) and conservative dual-criteria (CDC) methods. They paired the rules with two low-cost ways to record behavior: partial interval recording and momentary time sampling.

The team wanted to know how often the rules cry "effect" when nothing really changed (false positives). They did not test how often the rules miss a real effect (false negatives, i.e., power).
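The logic above can be sketched in code. Below is a minimal, hypothetical illustration (not the authors' code) of the DC/CDC rule as it is commonly described: fit a mean line and a trend line to baseline, project both into treatment, and flag an effect only if the count of treatment points above both lines beats a binomial criterion. The data generator here is a simple stand-in; the paper simulated event data sampled through MTS and PIR, which this sketch does not reproduce, so the rates will not match the paper's.

```python
import random
from math import comb
from statistics import mean, stdev

def dc_flags_effect(a, b, conservative=False):
    """Dual-criteria (DC) rule: flag an effect if 'enough' treatment
    (B) points fall above both the baseline (A) mean line and the
    baseline trend line. The CDC variant raises both lines by
    0.25 SD of the baseline data."""
    shift = 0.25 * stdev(a) if conservative else 0.0
    mean_line = mean(a) + shift
    # Least-squares trend line fit to baseline, projected into B.
    x_bar, y_bar = (len(a) - 1) / 2, mean(a)
    slope = (sum((i - x_bar) * (y - y_bar) for i, y in enumerate(a))
             / sum((i - x_bar) ** 2 for i in range(len(a))))
    intercept = y_bar - slope * x_bar
    above_both = sum(
        1 for j, y in enumerate(b, start=len(a))
        if y > mean_line and y > slope * j + intercept + shift
    )
    # Smallest count significant under a one-sided binomial test (p = .5).
    n = len(b)
    tail = lambda k: sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    k_crit = next((k for k in range(n + 1) if tail(k) < 0.05), n + 1)
    return above_both >= k_crit

# Crude false-positive check on no-change data: every flagged "effect"
# is a false alarm, because nothing actually changed between phases.
random.seed(1)
trials = 2000
hits = sum(
    dc_flags_effect(s[:5], s[5:])
    for s in ([random.randint(0, 10) for _ in range(10)] for _ in range(trials))
)
print(f"false-positive rate ~ {hits / trials:.3f}")
```

The false-alarm count divided by trials is the quantity the study estimated, just computed here over toy data instead of simulated interval recordings.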

02 What they found

The DC and CDC methods kept false alarms low with both sampling styles. The study looked only at error in one direction: calling an intervention effective when it was not.

03 How this fits with other research

Lanovaz et al. (2017) applied the same DC rules to real data sets. They found you need at least three baseline (A) points and five treatment (B) points to keep false positives tame. Falligant's simulated data backs up that rule of thumb.

Manolov (2019) also ran simulations, but on randomization tests for alternating treatments. Both papers say "run the numbers," yet they test different tools: one checks quick visual rules, the other checks p-values.

Howard et al. (2019) compared visual analysts' calls to statistics like IRD and found the experts were stricter than the numbers. Falligant fills a gap by showing how often the quick rules falsely flag an effect when the true state is "no change."

04 Why it matters

You can trust DC and CDC to guard against false positives when you use interval or time-sampling data. Still, we do not know if these rules will spot a real change. Collect your minimum A and B points, then treat DC/CDC as a first gate, not the final word.

→ Action — try this Monday

Count your baseline and treatment points; if you have fewer than 3 in A or 5 in B, keep collecting before you trust a DC/CDC call.
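That count check can be written as a tiny gate. The helper below is hypothetical (my name and signature, not from the paper); it only encodes the 3-A/5-B rule of thumb from Lanovaz et al. (2017):

```python
def enough_points(a_count, b_count, min_a=3, min_b=5):
    """Minimum phase lengths before trusting a DC/CDC call:
    at least 3 baseline (A) and 5 treatment (B) points,
    per Lanovaz et al. (2017)."""
    return a_count >= min_a and b_count >= min_b

print(enough_points(3, 5))  # True: minimums met, a DC/CDC call is fair game
print(enough_points(2, 8))  # False: baseline too short, keep collecting
```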

02 At a glance

Intervention
not applicable
Design
other
Finding
not reported

03 Original abstract

The dual‐criteria (DC) and conservative dual‐criteria (CDC) methods allow clinicians and researchers to quantify the occurrence of false‐positive outcomes within single‐case experimental designs. The purpose of the current study was to use these DC and CDC methods to measure the incidence of false positives with simulated data collected via discontinuous interval methods (i.e., momentary time sampling, partial‐interval recording) as a function of data series length. We generated event data to create 316,800 unique simulated data series for analysis. In Experiment 1, we evaluated how changes in relevant parameters (i.e., interval sizes, event durations, IRT‐to‐event‐run ratios) produced false positives with momentary time sampling procedures. We also assessed the degree that the CDC method produced fewer false positives than the DC method with simulated interval data. In Experiment 2, we used similar procedures to quantify the occurrence of false positives with partial‐interval recording data. We successfully replicated outcomes from previous research in the current study, though such results only highlight the generality of the procedures relating to false positive (and not false negative) outcomes. That is, these results indicate MTS and PIR may adequately control for false positives, but our conclusions are limited by a lack of data on power.

Behavioral Interventions, 2020 · doi:10.1002/bin.1707