Assessment & Research

A Preliminary Evaluation of Interrater Reliability and Concurrent Validity of Open-Ended Indirect Assessment

Saini et al. (2020) · Behavior Analysis in Practice
★ The Verdict

Open-ended caregiver interviews help teams agree on a first guess, but those guesses match functional analysis results only about half the time — so you still need a brief FA to confirm the function.

✓ Read this if you're a BCBA who runs caregiver interviews before functional analyses in clinic or home programs.
✗ Skip if you're a practitioner who already relies solely on trial-based or full FAs and skips interviews.

01 Research in Context

01

What this study did

Saini and colleagues evaluated an open-ended caregiver interview. Instead of fixed yes-no questions, they let parents describe the behavior in their own words.

Two interviewers independently conducted the open-ended interview with the same caregiver and generated hypotheses about the behavior's function. The authors then ran functional analyses to see whether those hypotheses held up.

02

What they found

The open format helped raters agree. Interrater agreement reached 75%, higher than previous research on closed-ended checklists.

Still, only half of the interview-based hypotheses matched the functional analysis results; the other half pointed to the wrong function.

03

How this fits with other research

Contreras et al. (2023) pooled many studies and saw the same 50-50 hit rate. Their big review shows the problem is not the tool; it is the method. Structured descriptive tools did a little better, yet the ceiling stays near chance.

Nicholson et al. (2006) tested the QABF scale and also found so-so inter-rater numbers. Saini moves past their fixed items and shows that letting parents talk improves rater harmony, but it does not lift validity.

DeRoma et al. (2004) warned that teachers, students, and observers rarely pick the same function. Saini echoes that warning with caregiver talk instead of school forms.

04

Why it matters

Use the open interview when you need a team to agree on a starting idea. The chat builds shared language and cuts rater drift. Just do not stop there. Follow every open interview with a brief functional analysis or direct observation to test the guess. Treat the 50% match rate as a reminder to verify, not a flaw in the story.

→ Action: try this Monday

Keep the open interview, then run a three-condition functional analysis to double-check the top guess.

02 At a glance

Intervention
functional behavior assessment
Design
single case other
Population
mixed clinical
Finding
mixed

03 Original abstract

Indirect assessments are a commonly used component of functional behavior assessment by behavior analysts in practice who work with individuals with severe behavior disorders. Although used frequently, closed-ended indirect assessments have repeatedly been shown to have low to moderate interrater reliability and poor concurrent validity with functional analysis. Recently, the use of open-ended interviews has become more commonly adopted in applied clinical practice, despite no studies evaluating the psychometric properties of such assessments. In the present study, we evaluated the interrater reliability and concurrent validity of an open-ended functional assessment interview. We compared the results of two open-ended indirect assessments conducted with a common caregiver and subsequently conducted functional analyses in an attempt to validate hypotheses generated from the interviews. Interrater agreement for the open-ended interviews was higher than previous research on closed-ended interviews (75%); however, concurrent validity with functional analysis was relatively poor (50%). We discuss these findings in the context of assessment and treatment for severe behavior disorders, as well as best practice methods during functional behavior assessment.

Behavior Analysis in Practice, 2020 · doi:10.1007/s40617-019-00364-3