Assessment & Research

Systematic Review of Descriptions and Justifications Provided for Single-Case Quantification Techniques.

Fingerhut et al. (2023) · Behavior Modification
★ The Verdict

Single-case authors rarely explain why they chose their stats—give your reader that reason in your next report.

✓ Read this if: you're a BCBA who writes or reviews single-case studies for journals, theses, or funding.
✗ Skip if: you're a clinician who only reads summaries and never drafts methods sections.

01 Research in Context

01

What this study did

Fingerhut and colleagues read 218 single-case papers. They looked at how authors explained their number-crunching steps. They asked, "Did writers say why they picked that formula?"

02

What they found

Most papers used simple overlap rules. Few said why. Descriptions were short and fuzzy. The math was common; the reason was missing.

03

How this fits with other research

Aydin (2024) shows another gap: one-third of SCED sets lose data points, but only five in 100 tell how they filled the holes. Both reviews flag the same flaw—authors skip the method story.

Stolz (1977) warned early that reliability notes were thin. Fingerhut et al. (2023) echo the same concern, just for stats instead of raters. The tune is nearly fifty years old, still unfinished.

Bell (1999) urged randomization tests to replace eyeball checks. Today most papers still eyeball, and Fingerhut shows they also forget to name the test. The 1999 wish list stays open.

04

Why it matters

Clear methods let future teams copy, trust, and build on your work. Next time you write, add one line: "We used X index because it tracks Y and ignores Z." That line fills the gap Fingerhut et al. spotted.

→ Action — try this Monday

Pick one past graph, open the methods, and add two sentences that justify the overlap or stats you used.

02 At a glance

Intervention: not applicable
Design: systematic review
Finding: non-overlap indices were the most common techniques; descriptions and justifications were often vague or missing

03 Original abstract

There are currently a multitude of quantification techniques that have been developed for use with single-case designs. As a result, choosing an appropriate quantification technique can be overwhelming and it can be difficult for researchers to properly describe and justify their use of quantification techniques. However, providing clear descriptions and justifications is important for enhancing the credibility of single-case research, and allowing others to evaluate the appropriateness of the quantification technique used. The aim of this systematic literature review is to provide an overview of the quantification techniques that are used to analyze single-case designs, with a focus on the descriptions and justifications that are provided. A total of 290 quantifications occurred across 218 articles, and the descriptions and justifications that were provided for the quantification techniques that were used are systematically examined. Results show that certain quantification techniques, such as the non-overlap indices, are more commonly used. Descriptions and justifications provided for using the quantification techniques are sometimes vague or subjective. Single-case researchers are encouraged to complement visual analysis with the use of quantification techniques for which they can provide objective and appropriate descriptions and justifications, and are encouraged to use tools to guide their choice of quantification techniques.

Behavior Modification, 2023 · doi:10.1177/01454455231178469