Using Single-Case Experiments to Support Evidence-Based Decisions: How Much Is Enough?
Count the wins across single-case studies; if the success rate clears your cutoff, adopt the intervention.
01 Research in Context
What this study did
The authors built a simple rule for busy BCBAs: count how many single-case studies show the intervention works, then compute the success rate. If that rate clears your own risk cutoff, adopt the intervention. No p-values, no fancy stats.
The paper is a roadmap, not a new experiment.
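The counting rule above fits in a few lines. This is a minimal sketch, not the authors' code; the function name and the example cutoff are illustrative assumptions.

```python
def adopt_intervention(successes: int, total: int, cutoff: float) -> bool:
    """Return True when the observed success rate meets your cutoff.

    successes: number of single-case experiments showing the intervention worked
    total: number of single-case experiments reviewed
    cutoff: your personal risk threshold (e.g., 0.75), not a value from the paper
    """
    if total == 0:
        raise ValueError("need at least one single-case experiment")
    return successes / total >= cutoff

# Example: 18 positive SCEDs out of 20 reviewed, with an assumed 0.75 cutoff.
print(adopt_intervention(18, 20, 0.75))  # True (18/20 = 0.90 >= 0.75)
```

The cutoff is the practitioner's call; a higher-stakes case warrants a stricter threshold.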
What they found
They show how to turn a pile of small-n graphs into a yes-or-no adoption choice.
The rule keeps your caseload moving while still honoring the data.
How this fits with other research
Fiene et al. (2015) showed behavior contracts can work with children. You can feed their 18 positive SCEDs into the success-rate rule to decide whether contracts are worth your time.
Bell (1999) and Branch (1999) fought about p-values. The 2016 paper sidesteps that fight by using plain percentages instead of significance tests.
Aydin (2024) warns that many SCED articles have missing data. Handle those gaps (for example, by imputation) before running the success-rate check, or you may under-count wins.
Why it matters
You no longer need a stats degree to adopt an evidence-based practice. Stack the single-case studies, count the wins, and set your own cutoff. Start Monday by applying the rule to any intervention folder you already have: if eight of ten client graphs show clear improvement, that is an 80% success rate, and the package moves to the top of your treatment list.
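That Monday workflow can be sketched as a quick tally script. The outcome labels, graph list, and 70% comfort line below are assumptions for illustration, not values from the article.

```python
# Per-client graph outcomes for one intervention; labels are hypothetical.
graphs = ["improved", "improved", "no change", "improved", "improved",
          "improved", "no change", "improved", "improved", "improved"]

wins = sum(g == "improved" for g in graphs)  # count clear improvements
rate = wins / len(graphs)                    # observed success rate
comfort_line = 0.70                          # your own risk threshold (assumed)

print(f"{wins}/{len(graphs)} = {rate:.0%}; adopt: {rate >= comfort_line}")
# prints "8/10 = 80%; adopt: True"
```

Swap in your own graph tallies and threshold; the point of the rule is that the arithmetic stays this simple.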
Pull the last ten SCED graphs for your target intervention and tally how many show clear improvement; if the percentage beats your comfort line, start using it.
02 At a glance
03 Original abstract
For practitioners, the use of single-case experimental designs (SCEDs) in the research literature raises an important question: How many single-case experiments are enough to have sufficient confidence that an intervention will be effective with an individual from a given population? Although standards have been proposed to address this question, current guidelines do not appear to be strongly grounded in theory or empirical research. The purpose of our article is to address this issue by presenting guidelines to facilitate evidence-based decisions by adopting a simple statistical approach to quantify the support for interventions that have been validated using SCEDs. Specifically, we propose the use of success rates as a supplement to support evidence-based decisions. The proposed methodology allows practitioners to aggregate the results from single-case experiments to estimate the probability that a given intervention will produce a successful outcome. We also discuss considerations and limitations associated with this approach.
Behavior Modification, 2016 · doi:10.1177/0145445515613584