Assessment & Research

Using Single-Case Experiments to Support Evidence-Based Decisions: How Much Is Enough?

Lanovaz et al. (2016) · Behavior Modification
★ The Verdict

Count the wins in single-case studies—if enough graphs show success, adopt the intervention.

✓ Read this if you're a BCBA who picks interventions from small-n journals.
✗ Skip if you're a clinician who relies only on large group trials.

01 Research in Context

01

What this study did

The authors built a simple rule for busy BCBAs. Count how many single-case studies show the intervention works.

If the win rate feels high enough for your risk level, adopt it. No p-values, no fancy stats.

The paper is a roadmap, not a new experiment.
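
In code, the rule is nothing more than a tally and a cutoff. Here is a minimal Python sketch of that logic; the function name, the 0.8 cutoff, and the example outcomes are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the success-rate rule described above: tally the
# single-case experiments that showed a clear effect, divide by the
# total, and compare against your own comfort cutoff. The function
# name, cutoff, and example outcomes are illustrative, not from the paper.

def adopt_intervention(outcomes, cutoff=0.8):
    """outcomes: list of booleans, True if a SCED graph showed clear improvement.
    cutoff: the success rate you personally require before adopting."""
    if not outcomes:
        raise ValueError("Need at least one single-case experiment to judge.")
    success_rate = sum(outcomes) / len(outcomes)
    return success_rate, success_rate >= cutoff

# Example: 8 of 10 published graphs showed clear improvement.
rate, adopt = adopt_intervention([True] * 8 + [False] * 2, cutoff=0.8)
print(f"Success rate: {rate:.0%} -> adopt: {adopt}")  # Success rate: 80% -> adopt: True
```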

02

What they found

They show how to turn a pile of small-n graphs into a yes-or-no adoption choice.

The rule keeps your caseload moving while still honoring the data.

03

How this fits with other research

Fiene et al. (2015) already found that behavior contracts work with kids. You can feed their 18 positive SCEDs into the new success-rate rule to decide whether contracts are worth your time.

Bell (1999) and Branch (1999) fought over p-values. The 2016 paper steps around that fight by using plain percentages instead of significance tests.

Aydin (2024) warns that many SCED articles have missing data. Run the success-rate check only after you account for the gaps, or you may undercount wins; the sketch below shows one cheap way to bound the damage.
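
One cheap sanity check, before any imputation, is to bound the success rate: score every missing graph first as a loss, then as a win. A minimal Python sketch with hypothetical counts:

```python
# Sketch: bound the success rate when some SCED outcomes are missing.
# Counting every missing graph as a failure gives the floor; counting
# each as a success gives the ceiling. The counts here are hypothetical.

def success_rate_bounds(wins, losses, missing):
    total = wins + losses + missing
    floor = wins / total                # all missing graphs treated as losses
    ceiling = (wins + missing) / total  # all missing graphs treated as wins
    return floor, ceiling

floor, ceiling = success_rate_bounds(wins=12, losses=3, missing=5)
print(f"Success rate is somewhere between {floor:.0%} and {ceiling:.0%}")
# If even the floor clears your cutoff, the missing graphs can't flip the decision.
```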

04

Why it matters

You no longer need a stats degree to adopt an evidence-based practice. Stack the single-case studies, count the wins, and set your own cutoff. Start Monday by trying the rule on any intervention folder you already have. If eight of ten client graphs show clear improvement, move that package to the top of your treatment list.

→ Action — try this Monday

Pull the last ten SCED graphs for your target intervention and tally how many show clear improvement—if the percent beats your comfort line, start using it.
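
Eight of ten is an 80% success rate, but ten graphs is a small sample and the estimate is noisy. The sketch below wraps the tally in a Wilson score interval; that interval is an add-on for context, not part of the paper's proposal.

```python
# Sketch: a Wilson score interval around a small-sample success rate.
# This uncertainty band is an add-on, not part of the 2016 paper.
import math

def wilson_interval(successes, n, z=1.96):  # z=1.96 ~ 95% confidence
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_interval(8, 10)
print(f"8/10 wins: 95% interval roughly {lo:.0%} to {hi:.0%}")
# Prints roughly 49% to 94%: ten graphs still leave real uncertainty.
```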

02 At a glance

Intervention: not applicable
Design: methodology paper
Finding: not reported

03 Original abstract

For practitioners, the use of single-case experimental designs (SCEDs) in the research literature raises an important question: How many single-case experiments are enough to have sufficient confidence that an intervention will be effective with an individual from a given population? Although standards have been proposed to address this question, current guidelines do not appear to be strongly grounded in theory or empirical research. The purpose of our article is to address this issue by presenting guidelines to facilitate evidence-based decisions by adopting a simple statistical approach to quantify the support for interventions that have been validated using SCEDs. Specifically, we propose the use of success rates as a supplement to support evidence-based decisions. The proposed methodology allows practitioners to aggregate the results from single-case experiments to estimate the probability that a given intervention will produce a successful outcome. We also discuss considerations and limitations associated with this approach.

Behavior Modification, 2016 · doi:10.1177/0145445515613584