Assessment & Research

Assessing and enhancing generalization and social validity of social-skills interventions with children and adolescents.

Fox et al. (1993) · Behavior Modification, 1993
★ The Verdict

Teach the lunchroom, not the clinic—program generalization and measure real-life use, not just happy surveys.

✓ Read this if you're a BCBA writing social-skills goals for kids who lose the skill at recess.
✗ Skip it if you only teach in highly controlled 1:1 settings.

01 Research in Context

01

What this study did

The authors reviewed the social-skills training literature and kept finding the same hole: kids used the skill in the clinic, then lost it at recess.

The paper tells us to stop hoping generalization will just happen. Instead, build it into the teaching plan and check it with cold, hard data.

02

What they found

In most studies, social validity amounted to a smiley-face questionnaire. The authors wanted numbers instead—how often the kid actually gets invited to birthday parties.

Without clear rules, each study measured success differently, so no one could tell what really worked.

03

How this fits with other research

Winett et al. (1991) said the same thing two years earlier: social validity must show real-world impact, not happy parent comments. The 1993 paper adds the "how-to" checklist.

Hong et al. (2018) ran the numbers the target paper asked for. Their meta-analysis shows caregiver coaching helps, but generalization still lags behind maintenance—exactly the gap the target warned about.

Hutchins et al. (2020) found the same split in schools: maintenance effects were almost twice as strong as generalization effects. Three decades later, the problem the target spotted is still alive.

Mann et al. (2020) offers an encouraging example: they taught college students conversation skills, then tracked generalization to untrained professors and collected social-validity data the way the target paper prescribed—proof the blueprint works when you follow it.

04

Why it matters

Next time you write a social-skills goal, add a generalization plan and a social-validity probe. Measure invites, not smiles. If the skill doesn’t survive the lunchroom, it isn’t mastered.

→ Action — try this Monday

Add one peer lunch table probe and one parent questionnaire that asks "Did your child get invited to a playdate this week?" to your current social-skills program.
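If you track those probes as simple setting-by-setting data, the generalization gap becomes obvious at a glance. Here is a minimal sketch of that bookkeeping; the setting names, sample data, and the 80% mastery criterion are illustrative assumptions, not from the paper:

```python
# Hypothetical sketch: summarize social-skills probe data by setting
# to check whether performance has generalized beyond the clinic.
from collections import defaultdict

MASTERY = 0.80  # illustrative mastery criterion, not from the paper

def probe_summary(probes):
    """probes: list of (setting, correct) tuples, correct is True/False.
    Returns {setting: proportion of correct probes}."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for setting, correct in probes:
        totals[setting] += 1
        hits[setting] += int(correct)
    return {s: hits[s] / totals[s] for s in totals}

# Made-up example data: strong in the clinic, weak at the lunch table.
probes = [
    ("clinic", True), ("clinic", True), ("clinic", True), ("clinic", False),
    ("lunchroom", True), ("lunchroom", False),
    ("lunchroom", False), ("lunchroom", False),
]
summary = probe_summary(probes)
generalized = all(p >= MASTERY for p in summary.values())
# With this data: clinic = 0.75, lunchroom = 0.25, so the skill
# has not generalized — exactly the pattern the paper warns about.
```

The point of the sketch is the comparison itself: mastery in one setting means nothing until the same count shows up in the settings you never trained.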

02 At a glance

Intervention: not applicable
Design: narrative review
Population: mixed clinical
Finding: not reported

03 Original abstract

Generalization and social validity are necessary aspects of any applied behavior analytic endeavor. They are especially critical to social-skills training research and practice. Investigators have demonstrated the effectiveness of various learning theory-based interventions in teaching social skills to and increasing peer interactions of children with and without disabilities. However, development of a technology for reliably transferring these changes across different situations or ensuring their persistence over time has proven to be more problematic. From both a conceptual and empirical standpoint, this article reviews progress in and barriers to assessing and enhancing generality of social behavior change and its relationship to social validity. If progress is to be made, then it will be necessary to (a) distinguish between generalization and generality in developing and evaluating social-skills interventions; (b) expand the concept of social validity to give more emphasis to objective measurement of social skills, interventions, and outcomes; and (c) pursue a systematic analysis of generality- and durability-programming tactics.

Behavior Modification, 1993 · doi:10.1177/01454455930173006