Assessment & Research

Consistent visual analyses of intrasubject data.

Kahng et al. (2010) · Journal of Applied Behavior Analysis, 2010
★ The Verdict

Expert BCBAs agree strongly on simple AB graphs, but busy or steep charts need extra checks.

✓ Read this if you're a BCBA who reviews single-case graphs in team meetings or supervision.

✗ Skip if you rely only on standardized tests and ignore graphed data.

01 Research in Context

01

What this study did

Kahng et al. (2010) asked expert BCBAs to look at single-case graphs.

Each expert independently rated the same set of AB-design charts, scoring experimental control on a 100-point scale and giving a yes/no call.

The team then checked how often the experts picked the same answer.

02

What they found

Agreement was high.

Well-trained analysts usually said the same graph showed an effect or no effect.

Your eyes can be trusted if you have solid training.

03

How this fits with other research

Wolfe et al. (2016) seem to disagree.

They used multiple-baseline graphs and saw only weak agreement.

The clash is really about graph type: simple AB charts give clear pictures, while many-tier baselines add noise.

Wolfe et al. (2023) add more detail.

They found that steep trends and big swings in the data make raters split.

So the high agreement in Kahng et al. (2010) likely came from calmer, simpler graphs.

Manolov et al. (2022) give you numbers to back your eyes.

Their Brinley plots quantify trend and level, offering a second line of evidence when visuals feel shaky.
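A Brinley-style summary can be sketched in a few lines. This is an illustration of the general idea only, not the exact metric from Manolov et al. (2022): each case contributes one point pairing its baseline mean with its intervention mean, and the signed distance of that point from the 45-degree "no change" line gives a simple per-case magnitude of change. The function name and return values are hypothetical.

```python
from math import sqrt
from statistics import fmean

def brinley_points(cases):
    """Illustrative Brinley-plot summary (not Manolov et al.'s exact metric).

    Each case is a (baseline, treatment) pair of score lists and contributes
    one point: (baseline mean, intervention mean). Points on the identity
    line y = x mean "no change"; the signed perpendicular distance
    (y - x) / sqrt(2) from that line summarizes the change for that case.
    """
    points, distances = [], []
    for baseline, treatment in cases:
        x, y = fmean(baseline), fmean(treatment)
        points.append((x, y))
        distances.append((y - x) / sqrt(2))
    # Mean signed distance: a crude overall effect summary across cases.
    return points, fmean(distances)
```

In practice you would scatter `points` against the identity line; here the numeric distance stands in as the "second line of evidence" the paragraph describes.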

04

Why it matters

You can feel safe using visual inspection for plain AB data if you are trained.

When the graph has many tiers, steep trends, or lots of bounce, pair your eyes with a metric like the conservative dual-criterion method or Brinley distance.

This mix keeps decisions consistent across team members and meetings.

→ Action: try this Monday

Before your next team meeting, run the conservative dual-criterion on any graph that looks messy and compare results with your visual call.
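The dual-criterion logic is mechanical enough to sketch in code. This is a rough illustration of the conservative dual-criterion idea as it is commonly described, not the canonical published procedure: fit a mean line and a trend line to the baseline, raise both by 0.25 baseline standard deviations, extend them into treatment, and count treatment points beyond both lines against a binomial cutoff. The function name and the p < .05 criterion here are assumptions for the sketch.

```python
from math import comb
from statistics import fmean, stdev

def cdc_test(baseline, treatment, expect_increase=True):
    """Rough conservative dual-criterion sketch (illustrative, not canonical).

    Fits a baseline mean line and an OLS trend line, shifts both by
    0.25 baseline SDs in the expected direction, projects them into the
    treatment phase, and counts treatment points beyond BOTH lines.
    The count is compared to a binomial cutoff (chance = .5, p < .05).
    """
    n_b = len(baseline)
    mean = fmean(baseline)
    shift = 0.25 * stdev(baseline) * (1 if expect_increase else -1)

    # Ordinary least-squares slope/intercept for the baseline trend line.
    x_mean = (n_b - 1) / 2
    slope = sum((x - x_mean) * (y - mean) for x, y in enumerate(baseline)) / \
            sum((x - x_mean) ** 2 for x in range(n_b))
    intercept = mean - slope * x_mean

    beyond = 0
    for i, y in enumerate(treatment, start=n_b):
        mean_crit = mean + shift
        trend_crit = intercept + slope * i + shift
        if expect_increase and y > mean_crit and y > trend_crit:
            beyond += 1
        elif not expect_increase and y < mean_crit and y < trend_crit:
            beyond += 1

    # Smallest count k with P(X >= k) < .05 under Binomial(n, .5).
    n_t = len(treatment)
    cutoff = next(k for k in range(n_t + 1)
                  if sum(comb(n_t, j) for j in range(k, n_t + 1)) / 2 ** n_t < .05)
    return beyond, cutoff, beyond >= cutoff
```

Running it on a flat baseline and a clearly elevated treatment phase returns the count of points beyond both criterion lines, the binomial cutoff, and whether the sketch would call an effect; compare that call with your visual one.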

02 At a glance

Intervention
not applicable
Design
other
Finding
positive

03 Original abstract

Visual inspection of single-case data is the primary method of interpretation of the effects of an independent variable on a dependent variable in applied behavior analysis. The purpose of the current study was to replicate and extend the results of DeProspero and Cohen (1979) by reexamining the consistency of visual analysis across raters. We recruited members of the board of editors and associate editors for the Journal of Applied Behavior Analysis to judge graphs on a 100-point scale of experimental control and by providing a dichotomous response (i.e., "yes" or "no" for experimental control). Results showed high interrater agreement across the three types of graphs, suggesting that visual inspection can lead to consistent interpretation of single-case data among well-trained raters.

Journal of Applied Behavior Analysis, 2010 · doi:10.1901/jaba.2010.43-35