Systematic Protocols for the Visual Analysis of Single-Case Research Data
A simple five-step checklist turns visual inspection into a number you can write in the session note.
01 Research in Context
What this study did
Wolfe et al. (2019) wrote a how-to paper for BCBAs. They give step-by-step rules for looking at A-B-A-B and multiple-baseline graphs. You follow the steps and get a number that tells you whether the intervention worked.
The authors tested the rules on a few graphs in a small pilot. They did not collect new behavior data.
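To make the "steps to a number" idea concrete, here is a minimal Python sketch of the three features such a checklist asks about: level, trend, and overlap. The functions and the phase data below are invented for illustration; they are not Wolfe et al.'s actual scoring rules.

```python
# A minimal sketch of the three features a visual-analysis checklist
# asks about: level, trend, and overlap. This is NOT Wolfe et al.'s
# actual protocol; the functions and phase data are illustrative.

def level_change(baseline, treatment):
    """Difference between the phase means."""
    return sum(treatment) / len(treatment) - sum(baseline) / len(baseline)

def slope(values):
    """Least-squares slope of the values against session number."""
    n = len(values)
    x_mean = (n - 1) / 2
    y_mean = sum(values) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(values))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

def pnd(baseline, treatment):
    """Percentage of treatment points above the highest baseline point."""
    ceiling = max(baseline)
    return 100 * sum(y > ceiling for y in treatment) / len(treatment)

baseline = [2, 3, 2, 4, 3]    # hypothetical A-phase sessions
treatment = [6, 7, 9, 8, 10]  # hypothetical B-phase sessions

print(level_change(baseline, treatment))  # 5.2: mean level jumped
print(slope(treatment))                   # 0.9: rising trend in B
print(pnd(baseline, treatment))           # 100.0: no overlap at all
```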
What they found
The protocols give a clear score based on level, trend, and overlap. That score supports a yes-or-no decision: either the data show an effect or they do not.
The pilot showed the rules can be used quickly, but the paper does not report hit rates or reliability numbers.
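For a sense of how the answers could be synthesized, here is a toy sketch that tallies yes/no responses into a single score and applies a cutoff. The items and the threshold are made up for illustration; the real protocols' items and scoring are described in the paper.

```python
# A toy sketch of rolling yes/no checklist answers into one score and a
# yes-or-no decision. The items and the cutoff are invented for
# illustration; they are not the actual Wolfe et al. (2019) items.

ITEMS = {
    "level shifted in the expected direction": True,
    "trend changed when the phase changed": True,
    "little overlap between adjacent phases": True,
    "effect replicated at the second A-B contrast": False,
    "change was immediate rather than delayed": True,
}

score = sum(ITEMS.values())  # one point per "yes"
CUTOFF = 4                   # made-up decision threshold

print(f"Score: {score}/{len(ITEMS)}")
print("Effect demonstrated" if score >= CUTOFF else "No clear effect")
```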
How this fits with other research
Manolov et al. (2017) came first and showed free R tools that draw graphs and run stats. Wolfe et al. (2019) add a paper checklist for people who prefer to eyeball first.
Ruiz et al. (2025) now supersede both: their RDARBS R package draws the graph and scores it in under a minute. You can still use Wolfe’s rules if you want to double-check by hand.
Kril et al. (2022) tested a short decision algorithm with students and found it boosted accuracy. Their classroom data extend Wolfe’s idea from paper to real training.
Why it matters
If you run single-case sessions and hate statistical software, print the Wolfe checklist. Work through the five questions while you look at today’s graph. You will leave supervision with a number you can defend in your notes or to parents.
Tape the Wolfe checklist next to your desk and score today’s graph before writing your summary.
02 Original abstract
Researchers in applied behavior analysis and related fields such as special education and school psychology use single-case designs to evaluate causal relations between variables and to evaluate the effectiveness of interventions. Visual analysis is the primary method by which single-case research data are analyzed; however, research suggests that visual analysis may be unreliable. In the absence of specific guidelines to operationalize the process of visual analysis, it is likely to be influenced by idiosyncratic factors and individual variability. To address this gap, we developed systematic, responsive protocols for the visual analysis of A-B-A-B and multiple-baseline designs. The protocols guide the analyst through the process of visual analysis and synthesize responses into a numeric score. In this paper, we describe the content of the protocols, illustrate their application to 2 graphs, and describe a small-scale evaluation study. We also describe considerations and future directions for the development and evaluation of the protocols.
Behavior Analysis in Practice, 2019 · doi:10.1007/s40617-019-00336-7