A Monte Carlo evaluation of masked visual analysis in response‐guided versus fixed‐criteria multiple‐baseline designs
Wait for stable baseline data and use masked judges to keep false positives under 5% while still catching strong treatment effects.
01 Research in Context
What this study did
Ferron and colleagues generated 10,000 simulated multiple-baseline graphs on a computer. They tested two rules for deciding how long each baseline phase should run: extend it until the data path is flat (response-guided) or always run a preset number of sessions (fixed criteria).
Raters who did not know which graphs contained real treatment effects then judged each graph. The goal was to see which rule kept false alarms under 5% while still catching true effects at least 80% of the time.
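A minimal Python sketch may make the Monte Carlo logic concrete. Everything here is a simplifying assumption, not the authors' actual simulation: the data model, the is_stable() flatness rule, and judge() (a crude numeric stand-in for a masked human rater) are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2017)

def is_stable(baseline, window=3, tol=0.5):
    """Crude flatness check: the last `window` points span a range of
    at most `tol`. Stands in for the analyst's 'is it flat yet?' call."""
    return np.ptp(baseline[-window:]) <= tol

def simulate_tier(effect, response_guided, min_baseline=5,
                  max_baseline=15, n_treatment=10, noise=1.0):
    """One tier: a baseline phase, then a treatment phase shifted by `effect`."""
    baseline = list(rng.normal(0.0, noise, min_baseline))
    if response_guided:
        # Response-guided rule: extend the baseline until it looks flat
        # (capped at max_baseline). The fixed-criteria rule skips this loop.
        while len(baseline) < max_baseline and not is_stable(np.array(baseline)):
            baseline.append(rng.normal(0.0, noise))
    treatment = rng.normal(effect, noise, n_treatment)
    return np.array(baseline), treatment

def judge(baseline, treatment, threshold=1.0):
    """Numeric stand-in for a masked rater: call an effect when the
    phase means differ by more than `threshold`."""
    return (treatment.mean() - baseline.mean()) > threshold

def detection_rate(effect, response_guided, n_graphs=10_000, n_tiers=4):
    """Share of simulated four-tier graphs judged to show an effect.
    Requiring an 'effect' verdict in every tier mimics the MB logic of
    replicating the change across tiers."""
    hits = 0
    for _ in range(n_graphs):
        hits += all(judge(*simulate_tier(effect, response_guided))
                    for _ in range(n_tiers))
    return hits / n_graphs

# effect=0 estimates the false-positive rate; effect=2 estimates power.
print("False positives:", detection_rate(0.0, response_guided=True))
print("Power (big jump):", detection_rate(2.0, response_guided=True))
```

Running the null case (effect=0) under both rules shows whether false alarms stay under .05; running a large effect shows whether detection clears .80, which is the comparison the study made.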
What they found
Masked visual analysis held false positives under 5% in every setup. It met the 80% detection target only when three things lined up: response-guided phase extension, large and immediate treatment effects, and at least five baseline points per tier.
Fixed-length baselines often missed real effects unless the jump was huge. Waiting for flat baseline data before moving on gave the best balance of safety (few false alarms) and power (catching real effects).
How this fits with other research
Fahmie et al. (2013) also tinkered with single-case rules, but they tested which control condition to use in a functional analysis. Both papers show that tiny design choices—what you wait for, what you compare—sway conclusions.
Titlestad et al. (2019) asked if VB-MAPP domain scores are reliable enough for goal setting. Their answer was “look at total scores, not single domains.” Ferron gives the same caution: single visual cues can fool you; use a stricter rule set.
Eussen et al. (2016) validated a new motor scale. Ferron’s work is the flip side: before you trust any scale’s data, make sure the visual rules you use to judge change are themselves valid.
Why it matters
Next time you run a multiple-baseline design, wait until each tier's baseline is flat before starting treatment, and collect at least five data points per tier. This simple habit keeps false positives low and still lets you spot strong interventions quickly.
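If you want the flat-baseline habit to be more than eyeballing, a hypothetical numeric rule might look like the sketch below. The thresholds are illustrative assumptions, not values from the paper.

```python
import numpy as np

def baseline_is_stable(points, min_points=5, max_rel_range=0.25):
    """Hypothetical flatness rule: require at least `min_points`
    observations, and require the range of the most recent
    `min_points` to be no more than `max_rel_range` of their mean."""
    points = np.asarray(points, dtype=float)
    if len(points) < min_points:
        return False
    recent = points[-min_points:]
    if recent.mean() == 0:
        return np.ptp(recent) == 0  # an all-zero baseline counts as flat
    return np.ptp(recent) / abs(recent.mean()) <= max_rel_range

# Example: six sessions of (made-up) responses per session.
print(baseline_is_stable([12, 9, 11, 10, 10, 11]))  # True: range 2 vs mean ~10
print(baseline_is_stable([12, 9, 11, 10, 4, 11]))   # False: still too variable
```

In the study's logic, the real safeguard is not the rule itself but masking: the person judging stability and treatment effects should not know which tiers received the intervention.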
Try it: add a flat-baseline rule to your next multiple-baseline study, and ask a co-worker who doesn’t know the hypothesis to eyeball the graphs.
02 Original abstract
We developed masked visual analysis (MVA) as a structured complement to traditional visual analysis. The purpose of the present investigation was to compare the effects of computer-simulated MVA of a four-case multiple-baseline (MB) design in which the phase lengths are determined by an ongoing visual analysis (i.e., response-guided) versus those in which the phase lengths are established a priori (i.e., fixed criteria). We observed an acceptably low probability (less than .05) of false detection of treatment effects. The probability of correctly detecting a true effect frequently exceeded .80 and was higher when: (a) the masked visual analyst extended phases based on an ongoing visual analysis, (b) the effects were larger, (c) the effects were more immediate and abrupt, and (d) the effects of random and extraneous error factors were simpler. Our findings indicate that MVA is a valuable combined methodological and data-analysis tool for single-case intervention researchers.
Journal of Applied Behavior Analysis, 2017 · doi:10.1002/jaba.410