Assessment & Research

Visual aids and structured criteria for improving visual inspection and interpretation of single-case designs.

Fisher et al. (2003) · Journal of Applied Behavior Analysis
★ The Verdict

Use the conservative dual-criteria (CDC) method instead of split-middle: it keeps false positives low, and a 15-minute slide deck trains staff to 95% accuracy.

✓ Read this if: you're a BCBA who reviews single-case graphs in clinic or school settings.
✗ Skip if: you already run statistical effect-size software on every graph.

01 · Research in Context

01

What this study did

The team tested several ways to read single-case A-B graphs. They compared the older split-middle line and two statistical tests against two newer visual aids: the dual-criteria (DC) and conservative dual-criteria (CDC) methods.

Staff first judged graphs with no help. Then they got a 15-minute slide deck and tried again. The goal was to see which method kept errors low and agreement high.

02

What they found

The conservative dual-criteria (CDC) method won. It cut false alarms in the simulation study, and training pushed staff accuracy from baseline means of 55–71% to 94–95%.

Split-middle lines looked easy but let too many false positives slip through. A short training deck was enough to lock in the better skill.

03

How this fits with other research

A Stephenson et al. (2015) meta-analysis found only 76% agreement when teams eyeball graphs. Fisher et al. (2003) offers one fix: add CDC rules and a brief training to move past that 76% mark.

Bergmann et al. (2023) compared ways to score procedural fidelity and also found that simple global ratings miss errors. Both papers push the same lesson: loose rules feel faster but hide mistakes.

Jessel et al. (2020) showed that functional analysis (FA) sessions can be shortened to 3 minutes without losing experimental control. Likewise, this study trimmed inspection time by giving clear cut-offs instead of long debates. Efficiency works when the rules are tight.

04

Why it matters

You inspect graphs every week. Swap split-middle for CDC and your team will agree faster and err less. One short slide deck is all the training you need—no stats software, no extra hours. Better decisions, cleaner data, quicker changes to intervention.

→ Action — try this Monday

Open your last five graphs, redraw the CDC lines (mean plus trend), and check if your original call still holds.
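The CDC check behind that action step can be sketched in a few lines of code. This is a minimal illustration of the logic, not Fisher et al.'s exact implementation: it assumes a least-squares baseline trend line, the 0.25-SD "conservative" shift, and a one-tailed binomial decision rule with p = .5 per point.

```python
import math

def cdc_test(baseline, treatment, expected="increase", alpha=0.05):
    """Conservative dual-criteria (CDC) check -- a sketch of the method
    described by Fisher et al. (2003), not their exact implementation.

    Two criterion lines are fitted to the baseline phase (the mean line
    and a least-squares trend line), shifted 0.25 baseline SDs in the
    direction of the expected change, and projected into treatment.
    Treatment points beyond BOTH lines are counted and compared with a
    one-tailed binomial criterion (p = .5 per point under the null).
    """
    n_b, n_t = len(baseline), len(treatment)
    mean_b = sum(baseline) / n_b
    sd_b = math.sqrt(sum((y - mean_b) ** 2 for y in baseline) / (n_b - 1))

    # Least-squares trend over baseline sessions 0 .. n_b - 1.
    x_mean = (n_b - 1) / 2
    slope = (sum((x - x_mean) * (y - mean_b) for x, y in enumerate(baseline))
             / sum((x - x_mean) ** 2 for x in range(n_b)))
    intercept = mean_b - slope * x_mean

    sign = 1 if expected == "increase" else -1
    shift = sign * 0.25 * sd_b  # the "conservative" adjustment

    hits = 0
    for i, y in enumerate(treatment):
        trend_line = intercept + slope * (n_b + i) + shift
        mean_line = mean_b + shift
        if sign * (y - trend_line) > 0 and sign * (y - mean_line) > 0:
            hits += 1

    # Smallest count of treatment points whose one-tailed binomial
    # probability (p = .5) falls at or below alpha.
    needed = next(k for k in range(n_t + 1)
                  if sum(math.comb(n_t, j) for j in range(k, n_t + 1))
                  / 2 ** n_t <= alpha)
    return hits, needed, hits >= needed
```

For a flat baseline of five points and eight clearly elevated treatment points, the function counts all eight beyond both lines and flags a treatment effect; with treatment data that overlaps baseline, it does not.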

02 · At a glance

Intervention: not applicable
Design: other
Sample size: 92
Population: not specified
Finding: positive
Magnitude: large

03 · Original abstract

Because behavior analysis is a data-driven process, a critical skill for behavior analysts is accurate visual inspection and interpretation of single-case data. Study 1 was a basic study in which we increased the accuracy of visual inspection methods for A-B designs through two refinements of the split-middle (SM) method called the dual-criteria (DC) and conservative dual-criteria (CDC) methods. The accuracy of these visual inspection methods was compared with one another and with two statistical methods (Allison & Gorman, 1993; Gottman, 1981) using a computer-simulated Monte Carlo study. Results indicated that the DC and CDC methods controlled Type I error rates much better than the SM method and had considerably higher power (to detect real treatment effects) than the two statistical methods. In Study 2, brief verbal and written instructions with modeling were used to train 5 staff members to use the DC method, and in Study 3, these training methods were incorporated into a slide presentation and were used to rapidly (i.e., 15 min) train a large group of individuals (N = 87). Interpretation accuracy increased from a baseline mean of 55% to a treatment mean of 94% in Study 2 and from a baseline mean of 71% to a treatment mean of 95% in Study 3. Thus, Study 1 answered basic questions about the accuracy of several methods of interpreting A-B designs; Study 2 showed how that information could be used to increase the accuracy of human visual inspectors; and Study 3 showed how the training procedures from Study 2 could be modified into a format that would facilitate rapid training of large groups of individuals to interpret single-case designs.

Journal of Applied Behavior Analysis, 2003 · doi:10.1901/jaba.2003.36-387