Assessment & Research

Characteristics of Missing Data in Single-Case Experimental Designs: An Investigation of Published Data.

Aydin (2024) · Behavior Modification, 2024
★ The Verdict

Plan for missing data before you graph: about one in three SCED papers has gaps, almost none say how they handled them, and that quietly warps the picture.

✓ Read this if: you are a BCBA who writes or reads single-case studies.
✗ Skip if: you only run permanent-product probes with no time-series data.

01 Research in Context

01

What this study did

Aydin reviewed 465 single-case papers published in six journals over the most recent five years. He counted how many data points were blank.

He also checked if authors said how they handled the blanks.

02

What they found

One in every three papers had empty cells.

In most studies with missing data, more than 10% of the points were gone.

Only 5 % of teams told the reader how they filled the gaps.

03

How this fits with other research

Lemons et al. (2015) looked at 30 years of JEAB and saw statistical analyses rise year after year. Aydin shows the next problem: even when authors now run the numbers, they skip the step that protects those numbers from holes.

Branch (1999) told us to stop worshipping p-values. Aydin agrees and adds: if you must use stats, handle the missing data first, or the p-value is junk.

Wang et al. (2021) showed that Sidman's ideas still dominate JEAB citations. The new audit says we still copy his graphs but forget his rule that every data point counts, because we let blanks stay blank.

04

Why it matters

Missing data points can flip a phase mean and hide a real effect. Before you graph, use a free imputation tool to fill the holes, then state what you did in the paper so the next BCBA can replicate it.

→ Action — try this Monday

Open yesterday’s Excel sheet, flag every blank cell, and fill each one with that phase’s mean (mean imputation) before you draw the line.
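The mean-of-phase fill described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions (sessions kept as one list per phase, with None marking a missed session); the paper itself does not prescribe any particular tool or code.

```python
def impute_phase_mean(phase):
    """Replace each missing (None) point with the mean of that phase's
    observed points. Returns a new list; raises if the phase has no data."""
    observed = [x for x in phase if x is not None]
    if not observed:
        raise ValueError("Cannot impute: no observed data in this phase")
    mean = sum(observed) / len(observed)
    return [mean if x is None else x for x in phase]

# Hypothetical session data: one missed baseline session, two missed
# intervention sessions.
baseline = [2, 3, None, 2]
intervention = [6, None, 8, 7, None]

filled_baseline = impute_phase_mean(baseline)          # gap becomes 7/3 ≈ 2.33
filled_intervention = impute_phase_mean(intervention)  # gaps become 7.0
```

Note the key design point: each gap is filled only from its own phase, so a missed baseline session never borrows from intervention data and inflate or deflate the phase contrast.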

02 At a glance

Intervention
not applicable
Design
systematic review
Sample size
465
Finding
~30% of studies had missing data; only 5% reported how they handled it

03 Original abstract

Single-case experimental designs (SCEDs) have grown in popularity in the fields such as education, psychology, medicine, and rehabilitation. Although SCEDs are valid experimental designs for determining evidence-based practices, they encounter some challenges in analyses of data. One of these challenges, missing data, is likely to be occurred frequently in SCEDs research due to repeated measurements over time. Since missing data is a critical factor that can weaken the validity and generalizability of a study, it is important to determine the characteristics of missing data in SCEDs, which are especially conducted with a small number of participants. In this regard, this study aimed to describe missing data features in SCEDs studies in detail. To accomplish this goal, 465 published SCEDs studies within the recent 5 years in six journals were included in the investigation. The overall results showed that the prevalence of missing data among SCEDs articles in at least one phase, as at least one data point, was approximately 30%. In addition, the results indicated that the missing data rates were above 10% within most studies where missing data occurred. Although missing data is so common in SCEDs research, only a handful of studies (5%) have handled missing data; however, their methods are traditional. In analyzing SCEDs data, several methods are proposed considering missing data ratios in the literature. Therefore, missing data rates determined in this study results can shed light on the analyses of SCEDs data with proper methods by improving the validity and generalizability of study results.

Behavior Modification, 2024 · doi:10.1177/01454455231212265