A systematic review of social-validity assessments in the Journal of Applied Behavior Analysis: 2010–2020
Social-validity reporting in JABA stayed flat at about one in five studies for thirty years.
Research in Context
What this study did
Leif et al. (2024) reviewed every article published in the Journal of Applied Behavior Analysis from 2010 to 2020, searching for any mention of social validity. They coded who responded, what questions were asked, and when the data were collected.
The team found 160 studies that included a social-validity measure. For each one, they recorded the tools used and the timing of each assessment.
What they found
Only about 18% of JABA studies reported social-validity data. When authors did check, they mostly used Likert-type rating scales after the study ended. They asked whether people found the procedures and outcomes acceptable; far fewer asked whether the goals themselves mattered.
Few studies asked clients before treatment started. Even fewer asked for input during treatment.
How this fits with other research
Kennedy (1992) saw the same gap thirty years earlier. That review found only 20% of JABA papers reported social validity. The numbers barely moved, so the field is still skipping the same step.
Cohen et al. (2018) looked at single-case special-ed journals from 2005-2016. They also found low reporting. Together, the three reviews show the problem is wide, not just inside JABA.
Huntington et al. (2024) asked a deeper question: who gets left out? Their review shows most social-validity checks ignore people with disabilities. Leif’s numbers set the stage; Huntington shows why it hurts.
Why it matters
If you write or read JABA articles, treat social validity like any dependent variable. Measure before, during, and after. Ask clients, not just caregivers. Use simple questions: “Is this goal worth it?” and “Did life get better?” Add the answers to your paper so the next BCBA can see what real people thought.
Add a three-question pre-treatment social-validity check to your next intake packet.
Original abstract
We conducted a systematic review of studies published in the Journal of Applied Behavior Analysis between 2010 and 2020 to identify reports of social validity. A total of 160 studies (17.60%) published during this time included a measure of social validity. For each study, we extracted data on (a) the dimensions of social validity, (b) the methods used for collecting social-validity data, (c) the respondents, and (d) when social-validity data were collected. Most social-validity assessments measured the acceptability of intervention procedures and outcomes, with fewer evaluating goals. The most common method for collecting social validity data was Likert-type rating scales, followed by non-Likert-type questionnaires. In most studies, the direct recipients of the intervention provided feedback on social validity. Social-validity assessment data were often collected at the conclusion of the study. We provide examples of social-validity measurement methods, discuss their strengths and limitations, and provide recommendations for improving the future collection and reporting of social-validity data.
Journal of Applied Behavior Analysis, 2024 · doi:10.1002/jaba.1092