Reconsideration of the use of peer sociometrics for evaluating social-skills training: Implications of an idiographic assessment of temporal stability.
Peer sociometrics swing too much week-to-week to serve as a lone yardstick for single-case social-skills programs.
Research in Context
What this study did
The team tracked peer sociometric scores for typical elementary students. They looked at how each child’s "liked most" and "liked least" votes changed over weeks.
The goal was to see if these peer ratings stay steady enough to trust in single-case social-skills research.
What they found
Individual scores bounced around a lot. A child could be popular one week and rejected the next.
Because the numbers shift so much, the authors argue that sociometrics are too unstable to serve as outcome measures in single-case social-skills programs.
How this fits with other research
Cohen et al. (1993) already warned that social-skills tools need stronger real-world validity. The new data give numbers behind that warning.
Humphries et al. (2009) later echoed the same worry in autism studies. They also said group social-skills research needs richer measures, not just peer votes.
Christopher et al. (1991) used recess peer counts to show big gains from a peer-helper program. Their positive results look convincing, but the 1996 paper says those very counts may have drifted if tracked week to week.
Why it matters
If you run social-skills sessions and plan to judge success with classroom sociometrics, think twice. Add direct observation, teacher checklists, or brief social-probe sessions. Collect each measure more than once before and after teaching. This guards against natural ups and downs in peer votes and gives you firmer proof that your teaching, not random fluctuation, caused any change.
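The advice above about collecting each measure more than once can be made concrete with a minimal sketch (the data, threshold logic, and function name are hypothetical illustrations, not from the study): only credit the intervention when the post-teaching scores fall entirely outside the range of natural pre-teaching fluctuation.

```python
# Minimal sketch with hypothetical data (not from the 1996 study):
# collect each outcome measure several times before and after teaching,
# then only credit the intervention if every post-teaching score exceeds
# the highest baseline score -- a crude non-overlap check.

pre = [3, 5, 4]    # e.g., three baseline "liked most" nomination counts
post = [6, 7, 6]   # three post-teaching counts for the same child

def credible_change(pre, post):
    """True only if the pre and post score ranges do not overlap."""
    return min(post) > max(pre)

print(credible_change(pre, post))  # → True: no overlap with baseline
```

If even one post score falls inside the baseline range, the function returns False, which is the point: a single inflated peer vote should not count as evidence of change.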
Pair any sociometric vote with a 5-minute direct observation of peer conversation and log both at least three times.
Original abstract
Social-skills training studies using sociometric procedures as dependent measures have often yielded mixed results as to the improvement of the subjects. Failure to document improvement in peer acceptance subsequent to behavior change has led some to question the validity of social-skills interventions, whereas others have questioned the psychometric properties of the measures themselves. This study examined the temporal stability of the two major types of peer measures used in social-skills intervention studies: peer-nomination measures of liking and peer-rating measures of liking. Subjects were 87 children in three fourth-grade and two fifth-grade classrooms. Temporal stability was assessed across time intervals of 2, 6, and 8 weeks. Temporal stability was examined as it traditionally has been at the group level (using Pearson product-moment correlations), and at the level at which data are normally examined for change in social-skills interventions, at the level of the individual child (using phi and Cramer's V coefficients). Assessed at the group level, the three types of peer measures were generally moderately to highly stable. Stability coefficients for individual children's scores on the peer measures, however, indicated instability at the level of the individual child. These problems regarding stability at the individual, idiographic level may be especially relevant when sociometric procedures are used as dependent measures in individual subject design studies. Conceptual and practical implications of the findings for the assessment of social-skills interventions are discussed.
Behavior Modification, 1996 · doi:10.1177/01454455960203003
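The abstract's key distinction, stability at the group level (Pearson product-moment correlation) versus stability at the level of the individual child (phi coefficient on a 2x2 classification table), can be illustrated with a small sketch. The scores below are hypothetical, not the study's data, and the zero cutoff for "accepted vs. not accepted" is an illustrative assumption:

```python
# Sketch (hypothetical data, not the authors' analysis): the same two
# waves of standardized peer-nomination scores can look stable at the
# group level (Pearson r) while individual children still change
# classification (phi coefficient on a 2x2 stable/changed table).
import math

# Hypothetical standardized "liked most" scores at time 1 and time 2.
t1 = [1.2, -0.8, 0.1, 0.9, -1.1, -0.2, 1.5, -0.4]
t2 = [0.9, -1.0, -0.3, 0.2, -0.9, 0.4, 1.3, -1.2]

def pearson_r(x, y):
    """Group-level stability: Pearson product-moment correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def phi(x, y, cut=0.0):
    """Individual-level stability: phi coefficient on a 2x2 table of
    whether each child stays above/below a classification cutoff."""
    a = sum(1 for p, q in zip(x, y) if p >= cut and q >= cut)  # stays high
    b = sum(1 for p, q in zip(x, y) if p >= cut and q < cut)   # drops
    c = sum(1 for p, q in zip(x, y) if p < cut and q >= cut)   # rises
    d = sum(1 for p, q in zip(x, y) if p < cut and q < cut)    # stays low
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else float("nan")

print(f"group-level r  = {pearson_r(t1, t2):.2f}")
print(f"individual phi = {phi(t1, t2):.2f}")
```

In this toy example two of the eight children cross the cutoff between waves, so the phi coefficient is markedly lower than the Pearson r, mirroring the paper's finding that group-level correlations can mask individual-level instability.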