Using dual‐criteria methods to supplement visual inspection: Replication and extension
Dual-criteria aids are safe only when Phase B is long and the baseline is steady; otherwise you risk cheering for a mirage.
Research in Context
What this study did
Falligant’s team re-ran the math on dual-criteria aids. These are the simple rules you draw on a graph: a line at the baseline mean and a line along the baseline trend, both extended into Phase B, with the rule “if enough Phase B points land above both lines, call it an effect.”
They fed 1,000 real clinic graphs, drawn from routine care rather than cherry-picked publications, into a computer and varied two things: how long each phase was and what the baseline looked like.
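The decision rule described above can be sketched in code. This is a minimal illustration, not the authors’ implementation: the function names, the “expected increase” direction, and the exact binomial-cutoff computation are assumptions made for the example.

```python
from math import comb
from statistics import mean, stdev

def binomial_criterion(n, alpha=0.05):
    """Smallest k such that P(X >= k) < alpha when X ~ Binomial(n, 0.5)."""
    for k in range(n + 2):
        tail = sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n
        if tail < alpha:
            return k
    return n + 1

def dual_criteria(phase_a, phase_b, conservative=False, alpha=0.05):
    """Dual-criteria check for an expected *increase* in Phase B.

    Draws two criterion lines from the baseline (mean line and
    least-squares trend line), projects them into Phase B, and counts
    the Phase B points that sit above both. Assumes >= 2 baseline points.
    """
    n_a, n_b = len(phase_a), len(phase_b)
    # Line 1: baseline mean, extended flat across Phase B.
    mean_level = mean(phase_a)
    # Line 2: baseline least-squares trend, projected into Phase B.
    xs = range(n_a)
    x_bar = mean(xs)
    slope = (sum((x - x_bar) * (y - mean_level) for x, y in zip(xs, phase_a))
             / sum((x - x_bar) ** 2 for x in xs))
    intercept = mean_level - slope * x_bar
    # CDC variant: raise both lines by 0.25 SD of the baseline.
    shift = 0.25 * stdev(phase_a) if conservative else 0.0
    above = sum(1 for i, y in enumerate(phase_b)
                if y > mean_level + shift
                and y > slope * (n_a + i) + intercept + shift)
    needed = binomial_criterion(n_b, alpha)
    return above >= needed, above, needed
```

With a flat baseline around 2–3 and Phase B points at 6–7, all eight Phase B points clear both lines and the rule flags an effect; with Phase B identical to baseline, it does not.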
What they found
Short Phase B plus a bouncing baseline cried wolf: false-positive rates shot up.
Longer Phase B (ten or more points) pulled the error rate back down, and stable baselines helped even more.
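You can see the same pattern in a toy Monte Carlo. The sketch below is not the study’s actual analysis (Falligant used real clinical baselines, not simulated ones): it applies a dual-criteria check to series with no real change, built as a drifting “bouncy” random walk, and estimates how often the rule cries wolf. All names and noise parameters are illustrative assumptions.

```python
import random
from math import comb
from statistics import mean

def dc_flags_effect(phase_a, phase_b, alpha=0.05):
    """Dual-criteria check for an increase (mean line + trend line).
    Assumes at least two baseline points."""
    n_a, n_b = len(phase_a), len(phase_b)
    x_bar = mean(range(n_a))
    y_bar = mean(phase_a)
    sxx = sum((x - x_bar) ** 2 for x in range(n_a))
    slope = sum((x - x_bar) * (y - y_bar)
                for x, y in enumerate(phase_a)) / sxx
    intercept = y_bar - slope * x_bar
    above = sum(1 for i, y in enumerate(phase_b)
                if y > y_bar and y > slope * (n_a + i) + intercept)
    # Smallest count of "above" points that is unlikely (< alpha) if each
    # Phase B point were a fair coin flip.
    needed = next(k for k in range(n_b + 2)
                  if sum(comb(n_b, j) for j in range(k, n_b + 1)) / 2 ** n_b < alpha)
    return above >= needed

def false_positive_rate(n_a, n_b, trials=2000, seed=0):
    """Apply the DC rule to series with NO real change: a noisy random
    walk split into a fake 'baseline' and a fake 'treatment' phase."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        level, series = 5.0, []
        for _ in range(n_a + n_b):
            level += rng.gauss(0, 0.5)      # drifting level = bouncy baseline
            series.append(level + rng.gauss(0, 1))
        if dc_flags_effect(series[:n_a], series[n_a:]):
            hits += 1
    return hits / trials
```

Comparing, say, `false_positive_rate(5, 5)` against `false_positive_rate(5, 12)` in this toy setup lets you watch the error rate move as Phase B lengthens, which is the qualitative effect the paper reports.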
How this fits with other research
Carey et al. (2014) already showed that sampling only the first few trials tricks the eye into seeing mastery that is not there. Falligant extends that warning: shortcuts in the decision rule can also fool you.
Pichardo et al. (2026) asked, “Can parents collect data instead of us?” They found yes: caregiver numbers matched trained staff in the large majority of contrasts. Both papers are replications that test cheap, practical aids. One checks who collects the data; the other checks how we judge it.
Gilroy et al. (2017) gave us free software for delay-discounting curves. Falligant gives us free rules of thumb for visual curves. Same shelf, different tools.
Why it matters
Before you stamp “intervention works,” count the Phase B points. If you have fewer than ten and the baseline is rocky, park the dual-criteria lines and keep collecting. This one guardrail can save you from a false victory party with parents, teachers, and funders.
Open your last single-case graph; if Phase B has fewer than ten points, extend the phase before drawing any aid lines.
Original abstract
The dual-criteria and conservative dual-criteria methods effectively supplement visual analysis with both simulated and published datasets. However, extant research evaluating the probability of observing false positive outcomes with published data may be affected by case selection bias and publication bias. Thus, the probability of obtaining false positive outcomes using these methods with data collected in the course of clinical care is unknown. We extracted baseline data from clinical datasets using a consecutive controlled case-series design and calculated the proportion of false positive outcomes for baseline phases of various lengths. Results replicated previous findings from Lanovaz, Huxley, and Dufour (2017), as the proportion of false positive outcomes generally decreased as the number of points in Phase B (but not Phase A) increased using both methods. Extending these findings, results also revealed differences in the rate of false positive outcomes across different types of baselines.
Journal of Applied Behavior Analysis, 2020 · doi:10.1002/jaba.665