Assessment & Research

A Description of Missing Data in Single-Case Experimental Designs Studies and an Evaluation of Single Imputation Methods.

Aydin (2024) · Behavior Modification
★ The Verdict

Fill missing SCED data points with simple single-impute methods instead of deleting sessions to protect your effect-size story.

✓ Read this if you're a BCBA who graphs single-case data in clinics or schools.
✗ Skip if you only run group designs.

01 Research in Context

01

What this study did

Aydin (2024) looked at what happens when data points are missing in single-case graphs.

The paper tested simple fill-in tricks instead of tossing the whole session.

No new kids or clients were studied—just old graphs with holes.

02

What they found

Dropping sessions with gaps can shrink your effect size.

Quick single-impute fixes keep the visual story closer to the truth.

03

How this fits with other research

Aydin (2024) extends the author's earlier survey, which found missing data in 30% of 465 SCED papers.

That survey showed the problem is common; the new paper shows how to fix it.

Stolz (1977) once warned that half of JABA studies lacked solid reliability—another case of hidden data flaws.

Together the message is clear: we keep skipping basic quality steps until someone audits.

04

Why it matters

Next time a session sheet is blank, do not delete the point.

Plug the hole with a basic single-impute method and keep the line intact.

Your visual analysis stays honest and your effect size stays real.
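One common single-imputation option is filling a gap with the mean of the observed values in the same phase. A minimal Python sketch of that idea (the function name and the data are illustrative, not taken from the paper):

```python
def mean_impute(phase):
    """Replace missing values (None) with the mean of the
    observed values in the same phase."""
    observed = [v for v in phase if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in phase]

# Hypothetical baseline phase with one missed session
baseline = [4, 6, None, 5]
print(mean_impute(baseline))  # [4, 6, 5.0, 5]
```

Because the fill uses only same-phase values, it won't drag baseline data toward intervention levels (or vice versa).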

→ Action — try this Monday

If a data point is missing, fill it with the last observed value (last-observation-carried-forward, or LOCF) before you graph.
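Last-value-carried-forward is a one-pass fill you can do by hand or in a few lines. A minimal Python sketch, assuming missed sessions are recorded as None (names and data are illustrative, not from the paper):

```python
def locf_impute(sessions):
    """Last-observation-carried-forward: replace each missing
    value (None) with the most recent observed value.
    A gap before the first observation stays unfilled."""
    filled, last = [], None
    for value in sessions:
        if value is None:
            value = last  # carry the previous observation forward
        filled.append(value)
        last = value
    return filled

# Hypothetical session data with one missed session
data = [4, 5, None, 6, 7]
print(locf_impute(data))  # [4, 5, 5, 6, 7]
```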

02 At a glance

Intervention: not applicable
Design: methodology paper
Finding: not reported

03 Original abstract

Missing data is inevitable in single-case experimental designs (SCEDs) studies due to repeated measures over a period of time. Despite this fact, SCEDs implementers such as researchers, teachers, clinicians, and school psychologists usually ignore missing data in their studies. Performing analyses without considering missing data in an intervention study using SCEDs or a meta-analysis study including SCEDs studies in a topic can lead to biased results and affect the validity of individual or overall results. In addition, missingness can undermine the generalizability of SCEDs studies. Considering these drawbacks, this study aims to give descriptive and advisory information to SCEDs practitioners and researchers about missing data in single-case data. To accomplish this task, the study presents information about missing data mechanisms, item level and unit level missing data, planned missing data designs, drawbacks of ignoring missing data in SCEDs, and missing data handling methods. Since single imputation methods among missing data handling methods do not require complicated statistical knowledge, are easy to use, and hence are more likely to be used by practitioners and researchers, the present study evaluates single imputation methods in terms of intervention effect sizes and missing data rates by using a real and hypothetical data sample. This study encourages SCEDs implementers, and also meta-analysts to use some of the single imputation methods to increase the generalizability and validity of the study results in case they encounter missing data in their studies.

Behavior Modification, 2024 · doi:10.1177/01454455241226879