Assessment & Research

Appraisal of comparative single-case experimental designs for instructional interventions with non-reversible target behaviors: Introducing the CSCEDARS ("Cedars").

Schlosser et al. (2018) · Research in Developmental Disabilities
★ The Verdict

Keep the one-page CSCEDARS checklist in your desk drawer to quickly rate comparative single-case teaching studies.

✓ Read this if you're a BCBA who picks curricula or supervises teachers in schools and clinics.
✗ Skip if you're a practitioner who only runs within-subject reversals and never compares two different teaching packages.

01 Research in Context

01

What this study did

The authors built a 12-item checklist called CSCEDARS. It helps you judge single-case studies that compare two or more teaching programs.

Non-reversible skills are things you learn once, like tying shoes. You cannot unteach them to test a second program. The checklist scores study quality in this exact situation.

02

What they found

CSCEDARS gives each study a 0-24 quality score. A higher score means stronger internal validity: you can better trust the study's conclusion about which teaching method worked better, not that the effect itself was bigger.

The paper shows how to fill the form and gives blank copies. No new client data are reported; it is a tool paper.
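As a back-of-the-envelope illustration (not the instrument's official rubric), here's how a 0-24 total could arise, assuming each of the 12 items is rated 0-2; the item structure below is a hypothetical sketch, not the paper's actual wording:

```python
# Hypothetical illustration of CSCEDARS-style scoring: 12 items,
# each rated 0 (not met), 1 (partially met), or 2 (fully met).
# The 0-2-per-item assumption is inferred from the 12-item / 0-24 range,
# not taken from the instrument itself.

def total_score(item_ratings):
    """Sum 12 item ratings (each 0, 1, or 2) into a 0-24 quality score."""
    if len(item_ratings) != 12:
        raise ValueError("CSCEDARS has 12 items")
    if any(r not in (0, 1, 2) for r in item_ratings):
        raise ValueError("each item is rated 0, 1, or 2")
    return sum(item_ratings)

# Example: a study that fully meets 9 items and partially meets 3
ratings = [2] * 9 + [1] * 3
print(total_score(ratings))  # 21 out of a possible 24
```

The point of a summed score like this is comparability: two reviewers can rate the same study independently and check how closely their totals agree, which is exactly the kind of initial reliability information the paper reports.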

03

How this fits with other research

Hagopian (2020) came next and added the consecutive controlled case series (CCCS). CCCS is not a rating tool; it is a new way to run the actual study across several clients in a row. Use CSCEDARS to rate comparative studies built on designs like these.

Hamama et al. (2021) ask us to add "what clients care about" to any plan. Pair their person-centered logic model with CSCEDARS: first ask clients what counts as success, then use CSCEDARS to rate studies that measure those outcomes.

Bathelt et al. (2019) review AAC teaching studies. Many compare two devices head-to-head. Run those papers through CSCEDARS to see which ones you can trust.

04

Why it matters

Next time you pick an evidence-based curriculum, you can do better than "it looked good in the journal." Pull the article, score it with CSCEDARS in five minutes, and keep only the studies that earn high marks. Your supervisors and funders will see exactly why you chose one reading program over another.

→ Action — try this Monday

Print the CSCEDARS form, pick a comparative reading study you bookmarked last month, and score it before lunch.

02 At a glance

Intervention: not applicable
Design: methodology paper
Population: not specified
Finding: not reported

03 Original abstract

Evidence-based practice as a process requires the appraisal of research as a critical step. In the field of developmental disabilities, single-case experimental designs (SCEDs) figure prominently as a means for evaluating the effectiveness of non-reversible instructional interventions. Comparative SCEDs contrast two or more instructional interventions to document their relative effectiveness and efficiency. As such, these designs have great potential to inform evidence-based decision-making. To harness this potential, however, interventionists and authors of systematic reviews need tools to appraise the evidence generated by these designs. Our literature review revealed that existing tools do not adequately address the specific methodological considerations of comparative SCEDs that aim to compare instructional interventions of non-reversible target behaviors. The purpose of this paper is to introduce the Comparative Single-Case Experimental Design Rating System (CSCEDARS, "cedars") as a tool for appraising the internal validity of comparative SCEDs of two or more non-reversible instructional interventions. Pertinent literature will be reviewed to establish the need for this tool and to underpin the rationales for individual rating items. Initial reliability information will be provided as well. Finally, directions for instrument validation will be proposed.

Research in Developmental Disabilities, 2018 · doi:10.1016/j.ridd.2018.04.028