Assessment & Research

Assessing Nonoverlap in Single-Case Data: Strengths, Challenges, and Recommendations

Manolov et al. (2025) · Journal of Behavioral Education
★ The Verdict

Use the authors’ free web app to generate clearer NAP graphs and extra overlap metrics before you next interpret single-case data.

✓ Read this if you are a BCBA who graphs single-case data for treatment decisions or reports.
✗ Skip if you only use group designs and never touch SCED graphs.

01 Research in Context

01

What this study did

Manolov et al. (2025) took a hard look at the Nonoverlap of All Pairs (NAP) index. NAP scores how completely a treatment phase separates from baseline: it is the proportion of all baseline–treatment data-point pairs in which the treatment point is higher, with ties counted as half.
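To make the pairwise idea concrete, here is a minimal sketch of the standard NAP calculation (the function name and example data are illustrative, not taken from the paper or its web app):

```python
def nap(baseline, treatment):
    """Nonoverlap of All Pairs: proportion of (baseline, treatment)
    data-point pairs in which the treatment point exceeds the
    baseline point, counting ties as half an overlap."""
    pairs = [(a, b) for a in baseline for b in treatment]
    wins = sum(1.0 for a, b in pairs if b > a)
    ties = sum(1.0 for a, b in pairs if b == a)
    return (wins + 0.5 * ties) / len(pairs)

# 3 baseline points x 3 treatment points = 9 pairs:
# 7 wins + 1 tie -> (7 + 0.5) / 9 ≈ 0.83
print(nap([2, 3, 4], [3, 5, 6]))
```

A NAP of 0.5 means chance-level separation; 1.0 means every treatment point sits above every baseline point.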

The team built a free web tool. It draws cleaner graphs and adds extra overlap numbers so you can judge effects faster.

02

What they found

The paper reviews four main challenges with plain NAP: it is hard to represent graphically, it hits a ceiling once the phases fully separate, it ignores trend, and its associated p-values are of limited use.
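The ceiling effect is easy to demonstrate: once every treatment point exceeds every baseline point, NAP is pinned at 1.0 no matter how large the separation. A small self-contained sketch (illustrative data, not from the paper):

```python
def nap(baseline, treatment):
    """NAP: proportion of (baseline, treatment) pairs with the
    treatment point above baseline; ties count as 0.5."""
    n = len(baseline) * len(treatment)
    score = sum(1.0 if b > a else 0.5 if b == a else 0.0
                for a in baseline for b in treatment)
    return score / n

# Ceiling effect: NAP cannot distinguish a barely separated
# effect from a dramatically separated one.
print(nap([1, 2, 3], [4, 5, 6]))     # 1.0 — barely separated
print(nap([1, 2, 3], [40, 50, 60]))  # 1.0 — far separated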

The authors give step-by-step fixes and show how the new graphs make the flaws visible at a glance.

03

How this fits with other research

Meyer (1999) told us we do not need p-values if the visual story is clear. Manolov et al. (2025) agree and hand us sharper pictures to tell that story.

White et al. (2021) found messy methods across preference-displacement studies. The new NAP tool answers part of that mess by giving one standard way to score nonoverlap.

Eriksson et al. (2010) warned that weak methods sneak into screening studies. The 2025 paper repeats the warning for SCEDs and adds a free fix.

04

Why it matters

Next time you run a single-case study, paste your data into the free NAP app before you call the intervention a success. The clearer plot may show overlap you missed and save you from over-selling weak effects to parents or funders.

→ Action — try this Monday

Test the free NAP tool with your last client’s baseline and treatment data; compare the new graph to your original one and note any hidden overlap.

02 At a glance

Intervention
not applicable
Design
methodology paper
Finding
not reported

03 Original abstract

Overlap is one of the data aspects that are expected to be assessed when visually inspecting single-case experimental designs (SCED) data. A frequently used quantification of overlap is the Nonoverlap of All Pairs (NAP). The current article reviews the main strengths and challenges when using this index, as compared to other nonoverlap indices such as Tau and the Percentage of data points exceeding the median. Four challenges are reviewed: the difficulty in representing NAP graphically, the presence of a ceiling effect, the disregard of trend, and the limitations in using p-values associated with NAP. Given the importance of complementing quantitative analysis and visual inspection of graphed data, straightforward quantifications and new graphical elements for the time-series plot are proposed as options for addressing the first three challenges. The suggestions for graphical representations (representing within-phase monotonic trend and across-phases overlaps) and additional numerical summaries (quantifying the degree of separation in case of complete nonoverlap or the proportion of data points in the overlap zone) are illustrated with two multiple-baseline data sets. To make it easier to obtain the plots and quantifications, the recommendations are implemented in a freely available user-friendly website. Educational researchers can use this article to inform their use and application of NAP to meaningfully interpret this quantification in the context of SCEDs.

Journal of Behavioral Education, 2025 · doi:10.1007/s10864-024-09552-w