Assessment & Research

Applying the Taxonomy of Validity Threats from Mainstream Research Design to Single-Case Experiments in Applied Behavior Analysis

Petursdottir et al. (2018) · Behavior Analysis in Practice
★ The Verdict

Use the translated validity-threat checklist when planning single-case studies to preempt design flaws and explain your controls to non-ABA reviewers.

✓ Read this if: you're a BCBA who writes or reviews single-case research
✗ Skip if: you're a practitioner who only consumes finished studies

01 Research in Context

01 What this study did

Petursdottir et al. (2018) took the taxonomy of validity threats developed for group-design research and rewrote each threat so it makes sense in single-case ABA studies.

The paper gives a ready-made checklist you can paste into your study plan. It also shows how to explain your design choices to journal reviewers who were trained in group methods.

02 What they found

The authors mapped 21 mainstream threats onto single-case logic. Each threat keeps its original name but gets new examples from ABA work.

The checklist helps you spot weak spots before data collection starts.

03 How this fits with other research

Slocum et al. (2022) and Kratochwill et al. (2022) both extend the 2018 map. They zoom in on one threat—history in nonconcurrent multiple-baseline designs—and give fixes like random start points and extra tiers.

Lepper et al. (2023) use the same checklist idea, but for ABA graduate programs. They show how to pick program features that raise BCBA pass rates.

Vassos et al. (2023) test a simplified quality tool and find that shortcuts hurt reliability. Their warning pairs well with Petursdottir et al.'s call for full validity checks instead of quick fixes.

04 Why it matters

Next time you plan a single-case study, open the validity checklist first. Circle every threat that could bite your design. Add a line in your method section that says how you handled each one. Reviewers from outside ABA will instantly see why your study is solid.

→ Action — try this Monday

Add the validity-threat table as a template in your next research proposal folder.

02 At a glance

Intervention: not applicable
Design: methodology paper
Finding: not reported

03 Original abstract

Mainstream research design in the social and behavioral sciences has often been conceptualized using a taxonomy of threats to experimental validity first articulated by Campbell and his colleagues (Campbell & Stanley, 1966; Cook & Campbell, 1979). The most recent update of this framework was published by Shadish, Cook, and Campbell (2002), in which the authors describe different types of validity and numerous threats to each primarily in terms of group-design experiments. In the present article, we apply Shadish et al.’s analysis of threats to internal, external, statistical conclusion, and construct validity to single-case experimental research as it is typically conducted in applied behavior analysis. In doing so, we hope to provide researchers and educators in the field with a translation of the validity-threats taxonomy into terms and considerations relevant to the design and interpretation of applied behavior-analytic research for the purposes of more careful research design and the ability to communicate our designs to individuals outside of behavior analysis, using their own vocabulary.

Behavior Analysis in Practice, 2018 · doi:10.1007/s40617-018-00294-6