Psychometric comparison of the Motivation Assessment Scale (MAS) and the Questions About Behavioral Function (QABF).
Neither the MAS nor the QABF is reliable enough at the item level to determine behavioral function without corroborating data.
Research in Context
What this study did
The team administered both the MAS and the QABF for 70 adults with intellectual disability (ID). Support workers who knew the clients filled out each scale for the same problem behavior.
They then ran statistics to see whether the two tools agreed on the behavior's function, and checked reliability at both the sub-scale and individual-item level.
What they found
Sub-scale scores looked fine, but single items did not match: kappa values were low, meaning two raters often gave different answers to the same item.
And the two scales agreed on whether the same behavior was maintained by escape, attention, sensory, or tangible consequences only about half the time.
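To see why a low kappa signals rater drift, here is a minimal sketch of how Cohen's kappa discounts chance agreement. The ratings below are hypothetical, not the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Proportion of items where the two raters give the same label
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical function labels from two staff raters for ten clients
a = ["escape", "attention", "escape", "sensory", "tangible",
     "escape", "attention", "sensory", "escape", "attention"]
b = ["escape", "sensory", "escape", "attention", "tangible",
     "attention", "attention", "sensory", "tangible", "attention"]
print(round(cohens_kappa(a, b), 2))  # → 0.46
```

Here the raters agree on 6 of 10 items (60%), but because both favor the same few labels, chance alone predicts 26% agreement, so kappa drops to about 0.46 — the kind of modest item-level agreement the study flags.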
How this fits with other research
Reid et al. (1999) showed the QABF was useful in the same ID population. The new data do not overturn that work; they just warn that item-level answers can still drift.
Lancioni et al. (2008) found that even experts disagree when they eyeball functional analysis (FA) graphs. Chiviacowsky et al. (2013) echo that message: single-source, single-item data are shaky no matter which tool you use.
Gerber et al. (2011) spotted weak sub-scales in the MBI burnout survey. The same flaw pops up here: overall scale okay, individual items shaky.
Why it matters
Do not trust one quick scale to tell you why a behavior happens. Use the MAS or QABF as a first pass, then back it up with interviews, ABC data, or a brief functional analysis. When results clash, collect more information instead of picking the answer you like.
Pick one function from the MAS or QABF, then probe it with a 5-minute contingency test before you write the BIP.
Original abstract
BACKGROUND: The Motivation Assessment Scale (MAS) and the Questions About Behavioral Function (QABF) are frequently used to assess the learned function of challenging behaviour in people with intellectual disability (ID). The aim was to explore and compare the psychometric properties of the MAS and the QABF.
METHOD: Seventy adults with ID and challenging behaviour and their disability support workers participated in the study. Support workers completed the MAS and QABF regarding a challenging behaviour that they identified as causing most concern.
RESULTS: Both measures demonstrated good internal consistency. Based on the intra-class correlation coefficient, inter-rater reliability of the MAS and QABF was acceptable for sub-scale scores, but not for individual items. Convergent validity, as reflected by correlations between functionally analogous scales, was satisfactory, but there was low agreement between the MAS and QABF on the function of challenging behaviour. Factor analysis of the QABF revealed factors that clearly corresponded to the five factors reported by the developers, four of which were well determined. Similar analyses of the MAS yielded a four-factor solution, however, only one factor was well determined.
CONCLUSION: The psychometric properties of the MAS and QABF were similar, and item-by-item reliability was problematic. The results suggest that both measures may prove unreliable for assessing the function of challenging behaviour among adults with ID. In developing interventions to address challenging behaviour, other techniques (e.g. observations) should be used to supplement information from these measures.
Journal of Intellectual Disability Research (JIDR), 2013 · doi:10.1111/jir.12022