Single-case synthesis tools I: Comparing tools to evaluate SCD quality and rigor.
The quality score you give a single-case study depends on which checklist you use, so always name the tool when you label an intervention evidence-based.
Research in Context
What this study did
Davis et al. (2018) lined up three popular checklists used to judge single-case studies: the Council for Exceptional Children indicators, the What Works Clearinghouse standards, and the Single-Case Analysis and Design Framework.
They applied each tool to the same pile of past sensory-intervention articles.
The goal: see if the tools give the same thumbs-up or thumbs-down on quality.
What they found
The checklists did not agree. One tool called a study "strong" while another called the same study "weak."
Because of the mismatch, a paper could slide in or drop out of an evidence-based practice list depending on the tool you pick.
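To see how that can happen, here is a toy Python sketch. The thresholds below are invented for illustration and do not reproduce the actual CEC, WWC, or SCARF items; they only show how two rule sets can label the same data differently.

```python
# Toy illustration: two invented quality rules applied to the same
# single-case study. Thresholds are made up for this sketch and do
# NOT reproduce the actual CEC, WWC, or SCARF criteria.

study = {
    "points_per_phase": [4, 4, 5, 4],   # data points in each phase
    "demonstrations_of_effect": 3,       # replicated effects shown
}

def tool_a(s):
    # Hypothetical Tool A: 3+ points per phase is enough.
    ok = (min(s["points_per_phase"]) >= 3
          and s["demonstrations_of_effect"] >= 3)
    return "strong" if ok else "weak"

def tool_b(s):
    # Hypothetical Tool B: demands 5+ points per phase.
    ok = (min(s["points_per_phase"]) >= 5
          and s["demonstrations_of_effect"] >= 3)
    return "strong" if ok else "weak"

print(tool_a(study), tool_b(study))  # -> strong weak
```

Same study, same data, opposite verdicts; the only thing that changed was the rule set.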
How this fits with other research
Gilroy et al. (2025) extend this work. Their free SCARF-UI website bakes visual checks into the review, so you can see why a study passes or fails instead of trusting a hidden checklist score.
Colombo et al. (2025) shift the spotlight forward in time. Their pick-a-design table helps you choose the right single-case plan before data start, while Davis et al. (2018) helps you judge plans already done.
Prasher et al. (2007) ring the same warning bell in functional-assessment land. They showed that picking an experimental FA versus a quick interview changes how well treatment works, just as picking a quality tool changes whether you call an intervention evidence-based.
Why it matters
If you lead journal clubs, write EBP guidelines, or sit on thesis committees, know that your chosen checklist can flip the verdict. Run any key article through two tools and compare the scores. If they clash, dig into the items that differ and report both ratings. This small step keeps your evidence reviews honest and transparent.
Try it yourself: pick one recent single-case article from your caseload, score it with two free checklists, and note any item that flips from pass to fail.
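If you record each tool's item-level ratings in a simple table, a few lines of scripting can surface the flips automatically. A minimal sketch, with invented item names and ratings standing in for whatever the two checklists actually ask:

```python
# Minimal sketch: diff two hand-scored checklists for one article.
# Item names and pass/fail values are hypothetical placeholders.

tool_a = {
    "baseline_stability": "pass",
    "interobserver_agreement": "pass",
    "effect_replications": "pass",
    "points_per_phase": "pass",
}
tool_b = {
    "baseline_stability": "pass",
    "interobserver_agreement": "pass",
    "effect_replications": "pass",
    "points_per_phase": "fail",
}

# Items shared by both tools where the rating flips.
flips = {item: (rating, tool_b[item])
         for item, rating in tool_a.items()
         if item in tool_b and tool_b[item] != rating}

for item, (a, b) in flips.items():
    print(f"{item}: tool A = {a}, tool B = {b}")
```

Report both overall ratings alongside any flipped items, so readers see the disagreement instead of a single score.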
Original abstract
Tools for evaluating the quality and rigor of single case research designs (SCD) are often used when conducting SCD syntheses. Preferred components include evaluations of design features related to the internal validity of SCD to obtain quality and/or rigor ratings. Three tools for evaluating the quality and rigor of SCD (Council for Exceptional Children, What Works Clearinghouse, and Single-Case Analysis and Design Framework) were compared to determine if conclusions regarding the effectiveness of antecedent sensory-based interventions for young children changed based on choice of quality evaluation tool. Evaluation of SCD quality differed across tools, suggesting selection of quality evaluation tools impacts evaluation findings. Suggestions for selecting an appropriate quality and rigor assessment tool are provided and across-tool conclusions are drawn regarding the quality and rigor of studies. Finally, authors provide guidance for using quality evaluations in conjunction with outcome analyses when conducting syntheses of interventions evaluated in the context of SCD.
Research in Developmental Disabilities, 2018 · doi:10.1016/j.ridd.2018.02.003