Assessment & Research

Further Analysis of Advanced Quantitative Methods and Supplemental Interpretative Aids with Single-Case Experimental Designs

Falligant et al. (2022) · Perspectives on Behavior Science
★ The Verdict

Draw conservative dual-criteria lines and compute fail-safe k to make your single-case visual analysis more reliable.

✓ Read this if you're a BCBA who reviews or publishes single-case graphs in clinics or schools.
✗ Skip if you're an RBT who only collects data and does not interpret graphs.

01 Research in Context

01

What this study did

Falligant et al. (2022) compared two visual-analysis aids for single-case graphs.

They looked at conservative dual-criteria and fail-safe k.

The paper shows how to draw and read each aid step-by-step.

02

What they found

Both tools help you see if the behavior really changed.

Conservative dual-criteria keeps you from saying "it worked" too soon.

Fail-safe k tells you how many additional contradicting data points it would take to overturn the conclusion.
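One way to operationalize that idea under the dual-criteria framing is to ask how many additional non-conforming points the treatment phase could absorb before the conforming count stops beating chance. The sketch below is an illustration of that logic, not the paper's implementation; the function names and this exact operationalization are assumptions.

```python
import math


def binomial_criterion(n, alpha=0.05):
    """Smallest count m such that P(X >= m) < alpha when X ~ Binomial(n, 0.5)."""
    for m in range(n + 1):
        if sum(math.comb(n, k) for k in range(m, n + 1)) / 2 ** n < alpha:
            return m
    return n + 1


def fail_safe_k(conforming, n_treatment, alpha=0.05):
    """How many extra NON-conforming points the treatment phase could absorb
    before the conforming count no longer meets the binomial criterion.

    conforming: treatment points already beyond both dual-criteria lines.
    Returns 0 if the criterion is not currently met (nothing to overturn).
    """
    if conforming < binomial_criterion(n_treatment, alpha):
        return 0
    k = 0
    while conforming >= binomial_criterion(n_treatment + k + 1, alpha):
        k += 1
    return k
```

Read the result as a robustness index: a larger k means the "it worked" conclusion would survive more contradicting observations.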

03

How this fits with other research

Landman et al. (2024) extend this work by adding a new index called TLC.

Use TLC when you have very few data points and want a third check alongside NAP and Tau-U.

Bell (1999) seems to disagree, saying "just use your eyes and replication, skip the math."

The two views fit together: let the aids guide your eye, then replicate to be sure.

Frank-Crawford et al. (2026) used the same compare-two-methods style but with CSA instead of visual aids.

04

Why it matters

Next time you graph a client’s data, draw the conservative dual-criteria lines before you decide.

If the last three points do not beat the upper line, hold off on celebrating.

Add fail-safe k when the team wants to know "how sure are we?"

These two quick steps make your visual analysis clearer and more believable to others.

→ Action — try this Monday

Print the client’s graph, add conservative dual-criteria lines, and check if the last three points beat the upper boundary.

02 At a glance

Intervention: not applicable
Design: methodology paper
Finding: not reported

03 Original abstract

Reliable and accurate visual analysis of graphically depicted behavioral data acquired using single-case experimental designs (SCEDs) is integral to behavior-analytic research and practice. Researchers have developed a range of techniques to increase reliable and objective visual inspection of SCED data including visual interpretive guides, statistical techniques, and nonstatistical quantitative methods to objectify the visual-analytic interpretation of data to guide clinicians, and ensure a replicable data interpretation process in research. These structured data analytic practices are now more frequently used by behavior analysts and the subject of considerable research within the field of quantitative methods and behavior analysis. First, there are contemporaneous analytic methods that have preliminary support with simulated datasets, but have not been thoroughly examined with nonsimulated clinical datasets. There are a number of relatively new techniques that have preliminary support (e.g., fail-safe k), but require additional research. Other analytic methods (e.g., dual-criteria and conservative dual criteria) have more extensive support, but have infrequently been compared against other analytic methods. Across three studies, we examine how these methods corresponded to clinical outcomes (and one another) for the purpose of replicating and extending extant literature in this area. Implications and recommendations for practitioners and researchers are discussed.

Perspectives on Behavior Science, 2022 · doi:10.1007/s40614-021-00313-y