Methodological standards in single-case experimental design: Raising the bar.
Use the 2018 checklist to plug the holes reviewers look for in single-case studies.
01 Research in Context
What this study did
McCauley et al. (2018) reviewed quality standards for single-case experimental designs (SCEDs) and distilled them into a checklist.
They sort the items into essential standards and aspirational extras.
The goal is to make small-n research easier to trust.
What they found
The paper reports no new data.
Instead it gives a recipe: replicate the effect across more phases, have blind raters score outcomes, and report effect sizes alongside visual analysis.
Follow the recipe and your study is harder to poke holes in.
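The summary calls for effect sizes but does not name one. As one common illustration (not the authors' prescribed method), here is a minimal sketch of the Nonoverlap of All Pairs (NAP) statistic, a widely used SCED effect size; the phase data are made up:

```python
def nap(baseline, treatment):
    """Nonoverlap of All Pairs: the proportion of (baseline, treatment)
    point pairs in which the treatment point improves on the baseline
    point (assuming higher values mean improvement); ties count as half."""
    pairs = [(a, b) for a in baseline for b in treatment]
    score = sum(1.0 if b > a else 0.5 if b == a else 0.0 for a, b in pairs)
    return score / len(pairs)

# Hypothetical data from the first A and B phases of an ABAB study.
print(nap([2, 3, 2, 4], [5, 6, 4, 7]))  # → 0.96875
```

Values near 1.0 indicate near-complete separation between phases; 0.5 indicates chance-level overlap.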
How this fits with other research
Bell (1999) said, "Stick to visual analysis, skip stats." McCauley et al. (2018) keep visual analysis but add numbers.
Falligant et al. (2022) later tested two new aids for visual analysis. They show how to meet the 2018 call for clearer decision rules.
Landman et al. (2024) built the TLC index. This new number answers the 2018 plea for better effect sizes.
Why it matters
Next time you run an ABAB study, open the 2018 checklist first. Add one extra baseline probe or ask a co-worker to score graphs blind. These small moves lift your work from "interesting" to "evidence." Journals, parents, and funders all smile when methods are tight.
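One quick way to act on the blind-scoring tip is to have a second rater score the same sessions and compute point-by-point interobserver agreement. A minimal sketch with hypothetical interval-recording data (the formula is the standard agreements-over-intervals calculation, not something specified in the 2018 paper):

```python
def percent_agreement(rater1, rater2):
    """Point-by-point interobserver agreement:
    number of intervals scored identically / total intervals * 100."""
    assert len(rater1) == len(rater2), "raters must score the same intervals"
    agreements = sum(a == b for a, b in zip(rater1, rater2))
    return 100.0 * agreements / len(rater1)

# Hypothetical interval data: 1 = behavior observed, 0 = not observed.
print(percent_agreement([1, 0, 1, 1, 0], [1, 0, 1, 0, 0]))  # → 80.0
```

Agreement of 80% or higher is a conventional benchmark; the key quality step is that the second rater scores blind to phase and hypothesis.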
Pick one item from the essential list, like double-phasing, and add it to your current study before the next session.
02 At a glance
03 Original abstract
Single-case experimental designs (SCEDs), or small-n experimental research, are frequently implemented to assess approaches to improving outcomes for people with disabilities, particularly those with low-incidence disabilities, such as some developmental disabilities. SCED has become increasingly accepted as a research design. As this literature base is needed to determine what interventions are evidence-based practices, the acceptance of SCED has resulted in increased critiques with regard to methodological quality. Recent trends include recommendations from a number of expert scholars and institutions. The purpose of this article is to summarize the recent history of methodological quality considerations, synthesize the recommendations found in the SCED literature, and provide recommendations to researchers designing SCEDs with regard to essential and aspirational standards for methodological quality. Conclusions include imploring SCED researchers to increase the quality of their experiments, with particular consideration regarding the applied nature of SCED research to be published in Research in Developmental Disabilities and beyond.
Research in Developmental Disabilities, 2018 · doi:10.1016/j.ridd.2018.03.003