Assessment & Research

Agreement between visual inspection and objective analysis methods: A replication and extension

Taylor et al. (2022) · Journal of Applied Behavior Analysis 2022
★ The Verdict

Computer checks match visual inspection on four of five AB graphs, giving you a quick safety net.

✓ Read this if BCBAs who review single-case graphs for treatment decisions or publication.
✗ Skip if Practitioners who only run group designs or never look at graphs.

01Research in Context

01

What this study did

Taylor et al. (2022) ran 198 AB graphs through two computer tools and through expert eyes.

The tools were the conservative dual-criteria method and a machine-learning classifier.

They wanted to know how often the computer answer matched the human one.

02

What they found

The algorithms agreed with visual inspection on about 84–85% of the graphs.

That is roughly four out of every five charts you check.

Both tools gave similar hit rates, so either one works as a backup.

03

How this fits with other research

Lancioni et al. (2008) warned that visual inspection is shaky—43 raters rarely picked the same answer.

Taylor’s 2022 data now give you an easy fix: let a computer double-check the call.

Ford et al. (2020) showed extra story lines do not help raters agree; Taylor shows code can help even when eyes still disagree.

Bell (1999) asked for stats instead of eye rules; this paper answers with ready-made algorithms you can run today.

04

Why it matters

You can keep your clinical judgment and still guard against a bad call. Run the conservative dual-criteria or the free classifier on your next AB graph. If the computer flags an effect your eyes missed, probe the data again before you write the report. It takes one minute and gives you an 84% safety net.

Free CEUs

Want CEUs on This Topic?

The ABA Clubhouse has 60+ free CEUs — live every Wednesday. Ethics, supervision & clinical topics.

Join Free →
→ Action — try this Monday

Paste your next AB graph into the free conservative dual-criteria calculator and compare its call to yours.

02At a glance

Intervention
not applicable
Design
other
Population
not specified
Finding
positive
Magnitude
medium

03Original abstract

Behavior analysts typically rely on visual inspection of single‐case experimental designs to make treatment decisions. However, visual inspection is subjective, which has led to the development of supplemental objective methods such as the conservative dual‐criteria method. To replicate and extend a study conducted by Wolfe et al. (2018) on the topic, we examined agreement between the visual inspection of five raters, the conservative dual‐criteria method, and a machine‐learning algorithm (i.e., the support vector classifier) on 198 AB graphs extracted from clinical data. The results indicated that average agreement between the 3 methods was generally consistent. Mean interrater agreement was 84%, whereas raters agreed with the conservative dual‐criteria method and the support vector classifier on 84% and 85% of graphs, respectively. Our results indicate that both objective methods produce results consistent with visual inspection, which may support their future use.

Journal of Applied Behavior Analysis, 2022 · doi:10.1002/jaba.921