Assessment & Research

Predictive validity and efficiency of ongoing visual‐inspection criteria for interpreting functional analyses

Saini et al. (2018) · Journal of Applied Behavior Analysis
★ The Verdict

A simple live-inspection checklist lets you end FAs early and still trust the result.

✓ Read this if you're a BCBA who runs full functional analyses in clinic or school settings.

✗ Skip if you're already using latency-based or trial-based FA variants.

01 Research in Context

01

What this study did

The team asked: can we stop a functional analysis early and still get the right answer?

They built a checklist for looking at graphs while the FA is still running.

Then they checked if those live calls matched the final, full FA results.

02

What they found

The checklist calls were almost always right.

Using it let them end the FA about 13 sessions earlier on average (19 sessions instead of 32, a 41% efficiency gain) without losing accuracy.

03

How this fits with other research

Sunde et al. (2022) applied the same structured criteria to latency-based FAs and reported 98% agreement, suggesting the tool works across FA formats.

Curtis et al. (2020) shortened FAs with quick trial-based IISCAs. Saini et al. shortened them by stopping early; both routes reach the same goal: less time, same answer.

Manolov et al. (2015) tested computer aids for reading graphs. Saini’s paper shows a simple paper checklist can be just as trustworthy and far easier to use.

04

Why it matters

You can cut FA sessions by roughly 41% without guessing. Next time you run an FA, apply the structured visual-inspection criteria after each session. When the checklist says the pattern is clear, stop. You'll have the function identified and hours freed up for treatment.
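To make the session-by-session stop rule concrete, here is a minimal sketch of one structured-criteria check. This is an illustrative simplification, not the exact Saini et al. checklist: it assumes a rule in the spirit of Hagopian-style criterion lines, where an upper criterion line sits one standard deviation above the control (play) condition mean, and a test condition counts as differentiated when most of its points fall above that line. The data values and the `threshold` parameter are hypothetical.

```python
from statistics import mean, stdev

def differentiated(test_points, control_points, threshold=0.5):
    """Simplified stand-in for structured visual-inspection criteria.

    Draws an upper criterion line one SD above the control-condition
    mean and asks whether more than `threshold` of the test-condition
    points fall above it. Requires at least two control points.
    """
    upper = mean(control_points) + stdev(control_points)
    above = sum(1 for p in test_points if p > upper)
    return above / len(test_points) > threshold

# After each session, re-run the check on the data collected so far;
# stop the FA once the pattern is clearly differentiated (or clearly flat).
attention = [4.0, 5.5, 6.0, 5.0]   # hypothetical responses/min, test sessions
play      = [0.5, 1.0, 0.0, 0.5]   # hypothetical control (play) sessions
print(differentiated(attention, play))  # -> True
```

In practice the published criteria are richer (lower criterion lines, trend rules, minimum session counts), but the logic is the same: a fixed, objective rule applied after every session, so the decision to stop never rests on a gut read of the graph.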

→ Action — try this Monday

Print the Saini checklist and review your current FA graphs session-by-session; stop when criteria are met.

02 At a glance

Intervention: not applicable
Design: other
Finding: strongly positive
Magnitude: large

03 Original abstract

Prior research has evaluated the reliability and validity of structured criteria for visually inspecting functional-analysis (FA) results on a post-hoc basis, after completion of the FA (i.e., post-hoc visual inspection [PHVI]; e.g., Hagopian et al., 1997). However, most behavior analysts inspect FAs using ongoing visual inspection (OVI) as the FA is implemented, and the validity of applying structured criteria during OVI remains unknown. In this investigation, we evaluated the predictive validity and efficiency of applying structured criteria on an ongoing basis by comparing the interim interpretations produced through OVI with (a) the final interpretations produced by PHVI, (b) the authors’ post-hoc interpretations (PHAI) reported in the research studies, and (c) the consensus interpretations of these two post-hoc analyses. Ongoing visual inspection predicted the results of PHVI and the consensus interpretations with a very high degree of accuracy, and PHAI with a reasonably high degree of accuracy. Furthermore, the PHVI and PHAI results involved 32 FA sessions, on average, whereas the OVI required only 19 FA sessions to accurately identify the function(s) of destructive behavior (i.e., a 41% increase in efficiency). We discuss these findings relative to other methods designed to increase the accuracy and efficiency of FAs.

Journal of Applied Behavior Analysis, 2018 · doi:10.1002/jaba.450