Assessment & Research

A Priori Justification for Effect Measures in Single-Case Experimental Designs

Manolov et al. (2022) · Perspectives on Behavior Science
★ The Verdict

Pick your single-case effect measure before you see the data—use the authors’ new flowchart to justify the choice and avoid cherry-picking later.

✓ Read this if: you're a BCBA who publishes or supervises single-case research.
✗ Skip if: you're a clinician who only uses group designs.

01 Research in Context

01

What this study did

Manolov et al. (2022) built a flowchart for single-case researchers.

The chart walks you through picking an effect measure before you collect data.

No participants, no therapy—just a decision tool to stop cherry-picking later.

02

What they found

The flowchart asks a series of yes/no questions about your design and expected data, ending at a single justified effect measure.

Follow it, and you can defend your choice of measure in your method section.
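The selection logic is easy to picture as a small decision function. This is a hypothetical sketch only: the question wording and the question-to-measure mapping below are illustrative assumptions, not the actual decision tree from Manolov et al. (2022).

```python
# Hypothetical flowchart-style selector. The questions and the
# measures they map to are illustrative assumptions, NOT the
# actual Manolov et al. (2022) decision tree.

def choose_effect_measure(baseline_trend: bool,
                          overlap_expected: bool,
                          standardized_needed: bool) -> str:
    """Walk a toy yes/no tree to one pre-registered effect measure."""
    if baseline_trend:
        # A baseline trend calls for a measure that models slope and level.
        return "piecewise regression estimate"
    if overlap_expected:
        # Expected overlap between phases favors a nonoverlap index.
        return "Tau-U"
    if standardized_needed:
        return "standardized mean difference"
    return "raw mean difference between phases"

# Answer the questions before baseline, write the result in the protocol.
print(choose_effect_measure(baseline_trend=False,
                            overlap_expected=True,
                            standardized_needed=False))
```

The point of the pattern, not the specific branches: every answer path ends at exactly one measure, so the choice is fixed before any data exist.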

03

How this fits with other research

Ninci (2023) extends the same idea to daily practice.

She adds visual-analysis checks that pair with the flowchart’s pre-selected measure.

Young (2019) came earlier, offering a Monte Carlo Shiny app for computing p-values.

You can plug the flowchart’s chosen measure into that app to get a p-value that fits behavior-analytic logic.
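The underlying idea is a Monte Carlo permutation test: shuffle the phase labels many times and see how often the shuffled effect matches or beats the observed one. The sketch below is a generic illustration of that logic using a raw mean difference; it is not the Young (2019) app itself.

```python
import random

def monte_carlo_p(baseline, treatment, n_iter=5000, seed=1):
    """Two-sided Monte Carlo permutation p-value for the phase mean
    difference. A generic sketch of the Monte Carlo idea, NOT the
    Young (2019) Shiny app."""
    rng = random.Random(seed)
    observed = (sum(treatment) / len(treatment)
                - sum(baseline) / len(baseline))
    pooled = list(baseline) + list(treatment)
    n_a = len(baseline)
    hits = 0
    for _ in range(n_iter):
        # Reassign observations to phases at random.
        rng.shuffle(pooled)
        diff = (sum(pooled[n_a:]) / len(treatment)
                - sum(pooled[:n_a]) / n_a)
        if abs(diff) >= abs(observed):
            hits += 1
    # Add-one correction keeps the estimate strictly above zero.
    return (hits + 1) / (n_iter + 1)

# A clear level change across phases yields a small p-value.
print(monte_carlo_p([2, 3, 2, 3], [8, 9, 8, 9]))
```

Swap the mean difference for whatever measure the flowchart selected, and the same shuffle-and-compare loop applies.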

Together the three papers form a mini-toolkit: pick, justify, then test.

04

Why it matters

Next time you run an A-B-A-B study, open the flowchart first.

Write the chosen measure in your protocol before the first baseline session.

You will write stronger method sections, and reviewers will have one fewer reason to request revisions.

Free CEUs

Want CEUs on This Topic?

The ABA Clubhouse has 60+ free CEUs — live every Wednesday. Ethics, supervision & clinical topics.

Join Free →
→ Action — try this Monday

Print the flowchart and lock in your next study’s effect measure before you start baseline.

02 At a glance

Intervention: not applicable
Design: methodology paper
Finding: not reported

03 Original abstract

Due to the complex nature of single-case experimental design data, numerous effect measures are available to quantify and evaluate the effectiveness of an intervention. An inappropriate choice of the effect measure can result in a misrepresentation of the intervention effectiveness and this can have far-reaching implications for theory, practice, and policymaking. As guidelines for reporting appropriate justification for selecting an effect measure are missing, the first aim is to identify the relevant dimensions for effect measure selection and justification prior to data gathering. The second aim is to use these dimensions to construct a user-friendly flowchart or decision tree guiding applied researchers in this process. The use of the flowchart is illustrated in the context of a preregistered protocol. This is the first study that attempts to propose reporting guidelines to justify the effect measure choice, before collecting the data, to avoid selective reporting of the largest quantifications of an effect. A proper justification, less prone to confirmation bias, and transparent and explicit reporting can enhance the credibility of the single-case design study findings.

Perspectives on Behavior Science, 2022 · doi:10.1007/s40614-021-00282-2