Assessment & Research

Reliability and validity of using structured visual‐inspection criteria to interpret latency‐based functional analysis outcomes

Sunde et al. (2022) · Journal of Applied Behavior Analysis
★ The Verdict

A one-page checklist lets trained raters read latency FA graphs with high agreement (98% of functions, 95% of cases).

✓ Read this if you are a BCBA who runs or supervises latency-based functional analyses in clinic or school settings.
✗ Skip if you only use standard 10-min session FAs without latency data.

01 Research in Context

01

What this study did

The team built a short checklist for reading latency-based functional analysis graphs.

They applied the checklist to 43 previously published latency-based FA datasets, then checked whether independent raters picked the same function.

Raters agreed with one another on 98% of functions and 95% of cases, and their calls matched the original authors' interpretations on 94% of functions and 88% of cases.
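The reliability numbers above are simple percent agreement: the share of items where two raters make the same call. As a minimal sketch (hypothetical rater data and function names, not the authors' code or dataset), the calculation looks like this:

```python
def percent_agreement(rater_a, rater_b):
    """Share of items (as a percentage) where two raters made the same call."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must score the same number of items")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100 * matches / len(rater_a)

# Hypothetical function calls for four FA datasets (illustrative only)
rater_1 = ["escape", "attention", "tangible", "escape"]
rater_2 = ["escape", "attention", "tangible", "automatic"]
print(percent_agreement(rater_1, rater_2))  # → 75.0
```

Percent agreement is the simplest reliability index; chance-corrected measures such as Cohen's kappa are sometimes preferred when some functions are much more common than others.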

02

What they found

The checklist works. Two people looking at the same graph now name the same function 98% of the time.

That is a big improvement over unstructured visual inspection, where even experts often disagreed.

03

How this fits with other research

Al-Jawahiri et al. (2019) pooled 28 FCT studies that used latency FAs. The new checklist could re-score every one of those old graphs in minutes.

Wolfe et al. (2023) also built a visual tool, but for judging replication instead of function. Both papers show that clear pictures beat gut feelings.

McIntyre et al. (2002) taught parents to run FBAs. Now those same parents could use the short checklist to read the graphs they collect.

04

Why it matters

You no longer need ten years of experience to read a latency FA. Print the one-page checklist, hand it to a new RBT, and you are far more likely to call the same function. Faster agreement means quicker treatment and fewer sessions spent arguing over the data.

→ Action — try this Monday

Tape the checklist to your clipboard and use it during your next latency FA review.

02 At a glance

Intervention
not applicable
Design
methodology paper
Sample size
43 datasets
Population
not specified
Finding
strongly positive
Magnitude
large

03 Original abstract

Prior research has evaluated the reliability and validity of structured visual inspection (SVI) criteria for interpreting functional analysis (FA) outcomes (Hagopian et al., 1997; Roane et al., 2013). We adapted these criteria to meet the unique needs of interpreting latency-based FA outcomes and examined the reliability and validity of applying SVI criteria to 43 previously published latency-based FA datasets. Overall, raters agreed on SVI-determined FA outcomes (98% of functions and 95% of cases) and these outcomes corresponded well to the interpretations provided by the authors of these 43 datasets (94% of functions and 88% of cases), indicating a high degree of reliability and concurrent validity. Our findings suggest that the use of SVI criteria may (a) serve as an objective aid in the identification of behavioral function(s), (b) produce high levels of agreement among expert raters, and (c) serve as a useful resource when teaching students how to interpret latency-based FA outcomes.

Journal of Applied Behavior Analysis, 2022 · doi:10.1002/jaba.926