Assessment & Research

On the Need to Address Serial Dependency When Dealing with Data from Single-Case Experimental Designs

Tanious et al. (2026) · Journal of Behavioral Education
★ The Verdict

Check and report serial dependency in every SCED or risk reading your graphs wrong.

✓ Read this if you're a BCBA who designs, graphs, or reviews single-case experiments in any setting.
✗ Skip if you're a practitioner who only uses group designs or never touches data analysis.

01 Research in Context

01

What this study did

Tanious and colleagues wrote a position paper, not an experiment. They looked at how single-case graphs are used today. They asked: are we ignoring a basic math problem called serial dependency?

Serial dependency means each data point is linked to the one before it. The authors say most SCED papers skip this check. They urge the field to test and report it every time.
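The idea can be sketched in a few lines of Python. The `lag1_autocorrelation` helper below is illustrative, not from the paper; it measures how strongly each data point predicts the one after it, which is exactly the linkage the authors want reported.

```python
def lag1_autocorrelation(scores):
    """Lag-1 autocorrelation: how strongly each point predicts the next one."""
    n = len(scores)
    mean = sum(scores) / n
    covariance = sum((scores[t] - mean) * (scores[t + 1] - mean)
                     for t in range(n - 1))
    variance = sum((s - mean) ** 2 for s in scores)
    return covariance / variance

# A steadily rising series: each session's score carries over into the next,
# so the lag-1 autocorrelation comes out strongly positive.
sessions = [2, 3, 4, 5, 6, 7, 8, 9]
print(lag1_autocorrelation(sessions))
```

Values near zero suggest the sessions are independent; values well above zero are the carry-over the paper warns about.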

02

What they found

Ignoring serial dependency can trick you. A graph may look like the treatment worked when it is just carry-over from yesterday’s score. The team shows how this inflates error and hides real trends.

They give step-by-step rules to spot the problem and fix it before you graph or run stats.

03

How this fits with other research

Goldman et al. (2024) found that practitioners administer the PDC-HS in sloppy, inconsistent ways. Tanious et al. make the same point about SCED stats: sloppy equals wrong. Both papers push for one clear recipe everyone follows.

Old reviews like Cook (2002) and Feldman et al. (1999) pooled SCEDs from years when no one checked serial dependency. Tanious et al. say we should reopen those data sets and rerun the numbers. The new paper does not contradict the old findings; it just says our confidence in them should drop until we correct for autocorrelation.

Spackman et al. (2023) split "insistence on sameness" into three parts to get cleaner measures. Tanious et al. want us to split autocorrelation from trend for the same reason: finer grain, truer picture.

04

Why it matters

If you run or review SCEDs, start adding a serial-dependency test to your routine. It takes five minutes in free software. Catching it early can save you from chasing fake effects or missing real ones. Cleaner data mean sharper decisions for your clients and stronger manuscripts for publication.

→ Action — try this Monday

Run the Durbin-Watson test on the data behind your last SCED graph and note the result in the figure caption.
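If you want to see what that test actually computes, here is a minimal sketch using the standard Durbin-Watson formula. The `durbin_watson` helper, the A-B phase data, and the choice to take residuals from each phase mean are all illustrative assumptions, not the paper's procedure.

```python
def durbin_watson(residuals):
    """Durbin-Watson statistic: near 2 means little lag-1 dependency;
    toward 0 suggests positive dependency, toward 4 negative."""
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

# Made-up A-B data: residual = each session score minus its own phase mean.
baseline = [12, 11, 13, 12, 14]
treatment = [8, 7, 6, 6, 5]
residuals = ([s - sum(baseline) / len(baseline) for s in baseline]
             + [s - sum(treatment) / len(treatment) for s in treatment])
print(round(durbin_watson(residuals), 2))
```

A result well below 2 is the cue to worry about carry-over before trusting your visual analysis; stats packages report the same statistic ready-made.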

02 At a glance

Intervention
not applicable
Design
theoretical
Finding
not reported

03 Original abstract

The use of single-case experimental designs (SCEDs) for assessing intervention effectiveness in educational sciences and related fields has risen sharply in recent years. With it, the number of available visual and statistical analysis methods has spiked as well. The application and interpretation of results obtained via these methods might, however, be affected in different degrees by the presence of serial dependency in SCED data. The effect that serial dependency has on the application of statistical methods to SCED data might vary depending on the exact method used. In addition, the amount of serial dependency in SCED data might depend on design and measurement choices. However, to date, no concise answer has been found about the extent of serial dependency and its moderators in SCED data. In the present commentary, we give an overview of the state of the art on serial dependency research in SCEDs, argue that serial dependency should be routinely assessed when reporting the results of SCEDs, and sketch an agenda for serial dependency research. Addressing serial dependency can be facilitated by the inclusion of serial dependency as a data feature in standard reporting tools, which will ultimately lead to improved open science practices in the SCED community.

Journal of Behavioral Education, 2026 · doi:10.1007/s10864-026-09624-z