A reliability study of the Spanish version of the social behaviour schedule (SBS) in a population of adults with learning disabilities.
The Spanish SBS is stable over time and across raters, but different staff see different behaviour, so cross-check before you act.
01 Research in Context
What this study did
Bacon et al. (1998) checked how well the Spanish Social Behaviour Schedule (SBS) works.
They tested it with 64 adults with learning disabilities in community vocational programs in Spain.
Reliability was checked three ways: repeat ratings over time (test-retest), two raters scoring the same people (inter-rater), and two different informants reporting on the same people (inter-informant).
What they found
Test-retest agreement (80%) and inter-rater agreement (85%) were good, with adequate kappa values for most items.
Different informants agreed on only 43% of ratings, with inadequate kappa values on 42% of items.
One staff member's report is not enough; you need a second check.
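For readers who want to see how figures like these are produced, here is a minimal sketch (in Python, with invented scores, not data from the paper) of percent agreement and Cohen's kappa for one ordinal item rated on two occasions.

```python
# Minimal sketch, not from the paper: percent agreement and Cohen's kappa for
# one ordinal rating-scale item scored on two occasions. All scores are invented.
from collections import Counter

def percent_agreement(ratings_a, ratings_b):
    """Share of subjects on whom the two sets of ratings are identical."""
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(ratings_a)
    p_o = percent_agreement(ratings_a, ratings_b)
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n)
              for c in set(counts_a) | set(counts_b))
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

# Hypothetical scores for one SBS-style item: 10 subjects, two occasions.
time_1 = [0, 1, 2, 0, 1, 3, 0, 2, 1, 0]
time_2 = [0, 1, 2, 1, 1, 3, 0, 2, 2, 0]
print(f"agreement: {percent_agreement(time_1, time_2):.0%}")  # 80%
print(f"kappa:     {cohens_kappa(time_1, time_2):.2f}")       # ~0.72
```

Kappa corrects the raw agreement for matches expected by chance, which is why an 80% agreement can still give a kappa well below 0.80.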
How this fits with other research
Ono (1996) got strong inter-rater numbers with the Japanese ABC-Community in similar adults.
Konstantareas et al. (1999) also found solid test-retest reliability on the SB-IV in a similar population.
The poor inter-informant match in Bacon et al. looks like a clash with those results, but it is not.
Ono and Konstantareas worked in quiet labs with trained testers.
Bacon et al. worked in busy worksites where staff see different slices of the day.
The setting, not the tool, drives the gap.
Why it matters
If you use the Spanish SBS in day programs, always pull a second rater or check past notes.
One staff view can miss half the picture.
Pick one client on your caseload, have two staff fill the Spanish SBS independently, then meet to compare items that differ by more than one point.
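A small script can make that comparison meeting concrete. The sketch below assumes each staff member's completed form has been typed in as item-to-score pairs; the item names and scores are hypothetical, not the published SBS item set.

```python
# Sketch of the two-rater exercise. Item names and scores are hypothetical;
# real items and scoring ranges should come from the published schedule.
def items_to_discuss(form_a, form_b, threshold=1):
    """Return items on which the two staff ratings differ by more than `threshold` points."""
    return {item: (form_a[item], form_b[item])
            for item in form_a
            if abs(form_a[item] - form_b[item]) > threshold}

staff_a = {"conversation": 1, "self_care": 3, "concentration": 2, "overactivity": 0}
staff_b = {"conversation": 1, "self_care": 1, "concentration": 2, "overactivity": 2}

print(items_to_discuss(staff_a, staff_b))
# {'self_care': (3, 1), 'overactivity': (0, 2)} -> bring these to the review meeting
```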
02 At a glance
Sample: 64 adults with learning disabilities in a vocational setting in Spain.
Test-retest: 80% agreement; adequate kappa for most items; ICC 0.64.
Inter-rater: 85% agreement; moderate to nearly perfect kappa on 52% of items; ICC 0.76.
Inter-informant: 43% agreement; inadequate kappa on 42% of items; ICC 0.94.
03 Original abstract
The reliability of the Spanish version of the Social Behaviour Schedule (SBS) was tested in a vocational setting on a sample of 64 subjects with learning disabilities. Test-retest assessment showed a good percentage of agreement (80%) and adequate kappa values for most SBS items. The overall percentage of agreement of inter-rater reliability was 85% and kappa values were moderate to nearly perfect for 52% of items. Inter-informant analyses produced poorer results, with an average agreement of 43% and inadequate kappa values on 42% of items. The intraclass correlation coefficient (ICC) was 0.64 for test-retest, 0.76 for inter-rater assessment and 0.94 for inter-informant assessment. The Spearman correlation coefficient was adequate on the test-retest and inter-rater analyses, but not on inter-informant analysis. This low inter-informant agreement could be attributed to environmental factors which alter the reliability of reports from different informants in community settings with high levels of normalization. In such environments, an interview with a key informant may not suffice, and both a careful review of the clinical record and a direct interview with subjects may enhance the reliability of the information attained.
Journal of Intellectual Disability Research (JIDR), 1998 · doi:10.1046/j.1365-2788.1998.00070.x