Assessment & Research

Methodological standards in single-case experimental design: Raising the bar.

Ganz et al. (2018) · Research in Developmental Disabilities, 2018
★ The Verdict

Use the 2018 checklist to plug the holes reviewers look for in single-case studies.

✓ Read this if you're a BCBA who writes or reviews single-case research.
✗ Skip if you're a practitioner who only reads finished studies and never runs them.

01 Research in Context

01

What this study did

Ganz et al. (2018) wrote a checklist for single-case studies.

They list must-have steps and nice-to-have extras.

The goal is to make small-n research easier to trust.

02

What they found

The paper does not give new data.

It gives a recipe: more phases, blind raters, and effect sizes.

Follow the recipe and your study is harder to poke holes in.
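What "add effect sizes" can look like in practice: one widely used single-case metric is Nonoverlap of All Pairs (NAP), which compares every baseline point to every intervention point. The sketch below is only an illustration; the metric choice and the session data are ours, not prescribed by the article.

```python
from itertools import product

def nap(baseline, intervention):
    """Nonoverlap of All Pairs (NAP) effect size.

    Compares every baseline point with every intervention point and returns
    the proportion of pairs showing improvement (ties count as half).
    Assumes higher values mean improvement; flip the comparison otherwise.
    """
    pairs = list(product(baseline, intervention))
    improved = sum(1.0 for b, t in pairs if t > b)
    ties = sum(0.5 for b, t in pairs if t == b)
    return (improved + ties) / len(pairs)

# Hypothetical session data (not from the article)
baseline_a1 = [2, 3, 2, 4]
treatment_b1 = [6, 7, 8, 7, 9]
print(f"NAP = {nap(baseline_a1, treatment_b1):.2f}")  # 1.00 means no overlap at all
```

NAP is only one option; Tau-U and standardized mean-difference measures are also common, so report whichever your design and journal support.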

03

How this fits with other research

Bell (1999) argued for sticking with visual analysis and skipping statistics. Ganz et al. (2018) keep visual analysis but add numbers on top.

Falligant et al. (2022) later tested two new visual-analysis aids, showing one way to meet the 2018 call for clearer decision rules.
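The article does not say which aids Falligant et al. tested, so the sketch below only illustrates the general idea behind structured visual aids: derive reference lines from baseline, project them across the intervention phase, and count the intervention points that beat both (a dual-criteria-style check). The data are made up.

```python
import numpy as np

def dual_criteria_count(baseline, intervention):
    """Dual-criteria-style visual aid (illustrative sketch only).

    Fits the baseline mean line and an ordinary least-squares trend line,
    projects both across the intervention phase, and counts intervention
    points that fall above BOTH lines (assuming higher values = improvement).
    """
    baseline = np.asarray(baseline, dtype=float)
    intervention = np.asarray(intervention, dtype=float)

    mean_line = baseline.mean()

    # OLS trend fitted to baseline sessions, projected forward in time
    x_base = np.arange(len(baseline))
    slope, intercept = np.polyfit(x_base, baseline, deg=1)
    x_trt = np.arange(len(baseline), len(baseline) + len(intervention))
    trend_line = slope * x_trt + intercept

    above_both = np.sum((intervention > mean_line) & (intervention > trend_line))
    return int(above_both), len(intervention)

# Hypothetical data (not from the article)
hits, total = dual_criteria_count([2, 3, 2, 4], [6, 7, 8, 7, 9])
print(f"{hits}/{total} intervention points exceed both baseline lines")
```

In published dual-criteria procedures that count is then compared against a binomial criterion before an effect is claimed; this sketch stops at the raw count.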

Landman et al. (2024) built the TLC index. This new number answers the 2018 plea for better effect sizes.

04

Why it matters

Next time you run an ABAB study, open the 2018 checklist first. Add one extra baseline probe or ask a co-worker to score graphs blind. These small moves lift your work from "interesting" to "evidence." Journals, parents, and funders all smile when methods are tight.
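If you do hand graphs to a co-worker for blind scoring, the main trick is stripping anything that reveals phase, participant, or hypothesis from the files first. A hypothetical helper for that step (the paths and workflow are ours, not the article's):

```python
import csv
import random
import shutil
from pathlib import Path

def blind_copy(graph_dir: str, out_dir: str, key_file: str = "rater_key.csv") -> None:
    """Copy graph images under randomized, label-free names for blind scoring.

    The mapping back to the original (identifying) filenames is written to a
    key file that only the first author keeps until scoring is finished.
    """
    src = Path(graph_dir)
    dst = Path(out_dir)
    dst.mkdir(parents=True, exist_ok=True)

    graphs = sorted(src.glob("*.png"))
    codes = random.sample(range(1000, 9999), len(graphs))

    with open(key_file, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["blind_code", "original_file"])
        for graph, code in zip(graphs, codes):
            blind_name = f"graph_{code}.png"
            shutil.copy(graph, dst / blind_name)
            writer.writerow([blind_name, graph.name])

# Example (hypothetical paths):
# blind_copy("study_graphs/", "blinded_graphs/")
```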

→ Action — try this Monday

Pick one item from the essential list, like double-phasing, and add it to your current study before your next session.

02 At a glance

Intervention: not applicable
Design: methodology paper
Finding: not reported

03 Original abstract

Single-case experimental designs (SCEDs), or small-n experimental research, are frequently implemented to assess approaches to improving outcomes for people with disabilities, particularly those with low-incidence disabilities, such as some developmental disabilities. SCED has become increasingly accepted as a research design. As this literature base is needed to determine what interventions are evidence-based practices, the acceptance of SCED has resulted in increased critiques with regard to methodological quality. Recent trends include recommendations from a number of expert scholars and institutions. The purpose of this article is to summarize the recent history of methodological quality considerations, synthesize the recommendations found in the SCED literature, and provide recommendations to researchers designing SCEDs with regard to essential and aspirational standards for methodological quality. Conclusions include imploring SCED to increase the quality of their experiments, with particular consideration regarding the applied nature of SCED research to be published in Research in Developmental Disabilities and beyond.

Research in Developmental Disabilities, 2018 · doi:10.1016/j.ridd.2018.03.003