An exploration of the interrater agreement of visual analysis with and without context
Extra background info does not improve how reliably experts read single-case graphs—clear visuals do.
01 Research in Context
What this study did
Ford et al. (2020) asked 60 behavior analysts, themselves authors of recently published single-case studies, to judge graphs from published SCED datasets that met design-quality criteria.
Half the group saw only the graph. The other half got the same graph plus a short note about the client, setting, and target behavior.
Everyone decided whether a functional relation was present. The authors then measured interrater agreement within each group and compared the two groups.
What they found
Extra context did not change agreement. Interrater agreement was high in both groups for six of the seven datasets, no matter which packet the raters received.
In short, the graph alone was enough for experts to reach a reliable decision.
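Agreement results like these reduce to simple proportions. A minimal sketch of one common metric, the share of raters whose call matches the modal (majority) call on a graph; this is illustrative only, not the authors' analysis code, and the function name `percent_agreement` is mine:

```python
from collections import Counter

def percent_agreement(decisions):
    """Proportion of raters whose yes/no call matches the modal call.

    `decisions` is one entry per rater for a single graph,
    e.g. ["yes", "yes", "no", ...].
    """
    counts = Counter(decisions)
    modal_count = counts.most_common(1)[0][1]  # size of the largest camp
    return modal_count / len(decisions)

# Example: 9 of 10 raters say "yes" -> agreement of 0.9
print(percent_agreement(["yes"] * 9 + ["no"]))
```

On a perfect split the modal camp is half the panel, so the metric bottoms out at 0.5 for a binary decision.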
How this fits with other research
Lancioni et al. (2008) once warned that visual inspection of FA graphs is shaky. Ford’s team shows the act itself can be reliable when the graph is clear; the earlier low numbers likely came from hard-to-read data, not from missing context.
Taylor et al. (2022) later found that computer rules matched visual inspection on about four out of five AB graphs. Together the two studies give you a simple recipe: trust your eyes for clear graphs, and keep an objective algorithm handy for the messy ones.
Stolz (1977) showed that old JABA papers rarely checked reliability more than once per condition. Ford’s work keeps that issue alive—agreement is only high when the display is clean, so keep collecting reliability every phase.
Why it matters
You can stop hunting for extra client details before judging a graph. Focus on making the display clear instead: phase-change lines, consistent axis scales, and a clean data path. If the data path is messy, pair your visual call with a conservative dual-criteria or machine-learning check. This saves time and still protects decision quality.
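The conservative dual-criteria check mentioned above can be sketched in a few dozen lines. This follows the formulation commonly attributed to Fisher, Kelley, and Lomas (2003): extend the baseline mean line and baseline trend line into the treatment phase, shift both by 0.25 baseline standard deviations in the expected direction, count treatment points beyond both lines, and compare that count to a binomial criterion. The function name `cdc_check` and this pure-Python implementation are mine, not code from any of the cited studies:

```python
import math
import statistics

def cdc_check(baseline, treatment, increase_expected=True, alpha=0.05):
    """Conservative dual-criteria (CDC) check for an AB comparison.

    Returns True when enough treatment points fall beyond BOTH the
    shifted baseline mean line and the shifted baseline trend line.
    """
    n_base = len(baseline)
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)

    # Ordinary-least-squares trend fitted to the baseline, indexed 0..n-1.
    xs = list(range(n_base))
    x_mean = statistics.mean(xs)
    slope = sum((x - x_mean) * (y - mean) for x, y in zip(xs, baseline)) / \
            sum((x - x_mean) ** 2 for x in xs)
    intercept = mean - slope * x_mean

    # "Conservative" = shift both lines 0.25 SD toward the expected change.
    shift = 0.25 * sd if increase_expected else -0.25 * sd

    # Count treatment points beyond both shifted, extended lines.
    beyond = 0
    for i, y in enumerate(treatment, start=n_base):
        mean_line = mean + shift
        trend_line = intercept + slope * i + shift
        if increase_expected and y > mean_line and y > trend_line:
            beyond += 1
        elif not increase_expected and y < mean_line and y < trend_line:
            beyond += 1

    # Smallest count a fair coin would reach with probability < alpha.
    def sig_count(n):
        for k in range(n + 1):
            tail = sum(math.comb(n, j) for j in range(k, n + 1)) / 2 ** n
            if tail < alpha:
                return k
        return n + 1

    return beyond >= sig_count(len(treatment))
```

Used on a clearly separated dataset such as `cdc_check([2, 3, 2, 3, 2], [8, 9, 8, 9, 8, 9, 8, 9])`, the check passes; feed it a treatment phase that mirrors baseline and it fails, which is exactly the "objective second opinion" role the article assigns it for messy data paths.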
Present your next SCED graph without the case story and see if your team still agrees. Then teach them to use the conservative dual-criteria method when data paths look messy.
02 At a glance
03 Original abstract
Visual analysis is integral to the analysis of single-case experimental design (SCED) data. Previous studies have shown that many factors may influence the interrater agreement (IRA) of visual analysis. One factor that has received little direct attention is the impact of contextual information. In the current study, authors of recently published SCED studies were asked to make judgments regarding functional relations based on published datasets that met criteria for design quality. Respondents were randomly assigned to view graphs with or without contextual information and the degree of interrater agreement was compared. Results revealed that contextual information had no impact on IRA for decisions of a functional relation. IRA was high across both groups for 6 of the 7 datasets examined. Implications and recommendations based on these results are discussed.
Journal of Applied Behavior Analysis, 2020 · doi:10.1002/jaba.560