How to Be RAD: Repeated Acquisition Design Features that Enhance Internal and External Validity
Grab the 15-item RAD checklist to build tough, balanced task sets and show clear learning curves.
01 Research in Context
What this study did
Kirby et al. (2021) wrote a how-to paper on the Repeated Acquisition Design (RAD). This design tracks how fast a learner masters new tasks each session.
The team listed 15 quality checks. They cover picking tasks, mixing them up, and showing the data clearly.
What they found
The paper gives a free checklist. It tells you to use counterbalanced tasks so no one set is easier.
It also says to graph the learning curve every session. That lets you see whether the skill is really improving.
How this fits with other research
Frazier et al. (2018) built a similar tool called CSCEDARS for comparing two teaching methods. Both papers give checklists, but RAD is for daily learning curves while CSCEDARS is for picking the better program.
Reichow et al. (2018) made a risk-of-bias rubric for any single-case study. Use their tool after you run the study, then use Kirby’s 15 points while you plan it.
Hatfield et al. (2019) wrote a broad SCRD primer. Kirby et al. zoom in on one design and add finer details like counterbalancing stimuli.
Why it matters
If you test daily learning, the 15-item RAD list keeps your study tight. Print the checklist, build three equivalent task sets, and rotate them across sessions. Reviewers love seeing that level of care.
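The rotation step above can be sketched in code. This is a minimal, hypothetical example (the function name and structure are not from the paper): it cycles three equivalent task sets across sessions so no set is stuck in the same position.

```python
from collections import deque

def rotate_task_sets(task_sets, n_sessions):
    """Assign a probe order per session by cycling the task sets,
    so no set is systematically first, middle, or last."""
    order = deque(task_sets)
    schedule = []
    for _ in range(n_sessions):
        schedule.append(list(order))  # this session's probe order
        order.rotate(-1)              # shift the order for next session
    return schedule

# Example: three equivalent sets rotated across six sessions
sets = ["Set A", "Set B", "Set C"]
for day, probe_order in enumerate(rotate_task_sets(sets, 6), start=1):
    print(f"Session {day}: {probe_order}")
```

With three sets, the rotation repeats every three sessions, so across any multiple of three sessions each set appears in each position equally often.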
Make three versions of your teaching task that are equally hard, then rotate them each session.
02 At a glance
03 Original abstract
The Repeated Acquisition Design (RAD) is a type of single-case research design (SCRD) that involves repeated and rapid measurement of irreversible discrete skills or behaviors through pre- and postintervention probes across different sets of stimuli. Researchers interested in the study of learning in animals and humans have used the RAD because of its sensitivity to detect immediate changes in rate or accuracy. Despite its strengths, critics of the RAD have cautioned against its use due to reasonable threats to internal validity like pretest effects, history, and maturation. Furthermore, many methodologists and researchers have neglected the RAD in their SCRD standards (e.g., What Works Clearinghouse [WWC], 2020; Horner et al., 2005). Unless given guidance to address threats to internal validity, researchers may avoid the design altogether or continue to use a weak version of the RAD. Therefore, we propose a set of 15 quality RAD indicators, comprising foundational elements that should be present in all RAD studies and additional features that enhance causal inference and external validity. We review contemporary RAD use and describe how the additional features strengthen the rigor of RAD studies. We end the article with suggested guidelines for interpreting effects and the strength of the evidence generated by RAD studies. We invite researchers to use these initial guidelines as a jumping-off point for a more RAD future.
Perspectives on Behavior Science, 2021 · doi:10.1007/s40614-021-00301-2