Using a Visual Structured Criterion for the Analysis of Alternating-Treatment Designs.
Use the free VSC checklist on every ATD graph that has five or more data points per condition to cut false positives.
Research in Context
What this study did
The authors built a checklist, the visual structured criterion (VSC), for reading alternating-treatment graphs. They generated simulated graphs to see whether the checklist caught real effects and ignored fake ones.
The simulated graphs varied in number of points, number of conditions, effect size, and autocorrelation, and the false-alarm results were also checked against real, nonsimulated data.
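The basic simulation logic can be sketched in a few lines. The Python below is a simplified illustration, not the authors' code: it generates autocorrelated (AR(1)) data for a two-condition alternating-treatment design with a chosen effect size, applies a simple label-shuffling permutation test, and estimates the false-alarm rate (effect size of zero) or power (nonzero effect size) over many replications. The function names and the AR(1) noise model are assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_atd(points_per_condition, effect_size, autocorrelation):
    """Simulate one two-condition (A/B) alternating-treatment data set.

    Sessions alternate A, B, A, B, ...; noise follows an AR(1) process,
    and the effect size is added to every B session.
    """
    n = 2 * points_per_condition
    noise = np.empty(n)
    noise[0] = rng.normal()
    for t in range(1, n):
        noise[t] = autocorrelation * noise[t - 1] + rng.normal()
    labels = np.tile([0, 1], points_per_condition)   # 0 = A, 1 = B
    values = noise + effect_size * labels
    return labels, values

def permutation_p_value(labels, values, n_perm=1000):
    """Permutation test on the absolute difference of condition means.

    Note: the paper studied systematically alternating designs; freely
    shuffling labels is a simplification used here for illustration.
    """
    observed = abs(values[labels == 1].mean() - values[labels == 0].mean())
    hits = 0
    for _ in range(n_perm):
        shuffled = rng.permutation(labels)
        diff = abs(values[shuffled == 1].mean() - values[shuffled == 0].mean())
        if diff >= observed:
            hits += 1
    return hits / n_perm

def rejection_rate(effect_size, points_per_condition=5, autocorrelation=0.2,
                   alpha=0.05, n_sims=200):
    """False-alarm rate when effect_size == 0, power otherwise."""
    rejections = sum(
        permutation_p_value(*simulate_atd(points_per_condition, effect_size,
                                          autocorrelation)) <= alpha
        for _ in range(n_sims)
    )
    return rejections / n_sims

print("Estimated false-alarm rate:", rejection_rate(effect_size=0.0))
print("Estimated power (effect = 2 SD):", rejection_rate(effect_size=2.0))
```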
What they found
The checklist kept false alarms low while still spotting true effects. It worked best when each condition had five or more data points.
How this fits with other research
Manolov (2019) looked at the same kind of graphs in the same year and argued that ALIV plus a randomization test beats the checklist. The two papers seem to clash, but they tested different tools.
Weaver et al. (2019) also studied randomization tests for fast-switching designs. They found p-values line up with expert eyes when alpha is set at .05. Their work backs using stats alongside the checklist, not instead of it.
Lanovaz et al. (2017) built a similar safety net for AB graphs. They showed that at least three A points and five B points keep false positives down. The new checklist extends that idea to ATDs.
Why it matters
Next time you open an ATD graph, run the visual structured criterion first. It takes a minute and guards against seeing an effect that is not there. If you randomized the order of conditions, add a randomization test for extra support.
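If your conditions were assigned at random, the randomization test itself is only a few lines of code. The Python below is a minimal sketch with made-up session data (the values and labels are hypothetical, not from the study): it compares the observed difference between condition means against the differences obtained by shuffling the condition labels.

```python
import numpy as np

# Hypothetical session data from one ATD graph (0 = condition A, 1 = condition B),
# five points per condition, alternating A, B, A, B, ...
labels = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])
values = np.array([4, 9, 5, 10, 3, 8, 6, 11, 4, 9], dtype=float)

rng = np.random.default_rng(0)
observed = abs(values[labels == 1].mean() - values[labels == 0].mean())

# Shuffle the labels many times and count how often a difference at least as
# large as the observed one appears by chance.
n_perm = 5000
extreme = 0
for _ in range(n_perm):
    shuffled = rng.permutation(labels)
    diff = abs(values[shuffled == 1].mean() - values[shuffled == 0].mean())
    if diff >= observed:
        extreme += 1
p_value = extreme / n_perm

print(f"Observed mean difference: {observed:.2f}, p = {p_value:.4f}")
# At alpha = .05, a p-value at or below .05 corroborates the VSC judgment.
```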
Print the VSC checklist and score this week’s ATD graph before you write the summary.
Original abstract
Although visual inspection remains common in the analysis of single-case designs, the lack of agreement between raters is an issue that may seriously compromise its validity. Thus, the purpose of our study was to develop and examine the properties of a simple structured criterion to supplement the visual analysis of alternating-treatment designs. To this end, we generated simulated data sets with varying number of points, number of conditions, effect sizes, and autocorrelations, and then measured Type I error rates and power produced by the visual structured criterion (VSC) and permutation analyses. We also validated the results for Type I error rates using nonsimulated data. Overall, our results indicate that using the VSC as a supplement for the analysis of systematically alternating-treatment designs with at least five points per condition generally provides adequate control over Type I error rates and sufficient power to detect most behavior changes.
Behavior Modification, 2019 · doi:10.1177/0145445517739278