Toward an Automation of Functional Analysis Interpretation: A Proof of Concept.
An R script that follows the Roane checklist makes the same call as experts on 4 out of 5 FA graphs.
Research in Context
What this study did
Morantz et al. (2022) built an R script that reads FA graphs. The script uses the same checklist experts use to decide if a condition controls problem behavior.
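To make that concrete, here is a minimal R sketch of the criterion-line idea behind the Hagopian and Roane criteria. It is not the authors' script; the function name, the one-standard-deviation line, and the 50 percent cutoff are placeholder assumptions, not the published rules.

```r
# Minimal sketch of criterion-line logic, NOT the published script.
# Idea: draw an upper criterion line above the control (play) condition
# and ask what share of test-condition points rise above it.
classify_condition <- function(test, control, prop_above = 0.5) {
  upper_line <- mean(control) + sd(control)  # placeholder: mean + 1 SD
  share_above <- mean(test > upper_line)     # share of test points above line
  if (share_above >= prop_above) "differentiated" else "undifferentiated"
}

# Invented example rates (responses per minute)
play      <- c(0.2, 0.4, 0.1, 0.3, 0.2)  # control condition
attention <- c(1.8, 2.1, 1.5, 2.4, 1.9)  # test condition
classify_condition(attention, play)
#> [1] "differentiated"
```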
They ran the script on a batch of FA graphs. Three seasoned BCBAs scored the same graphs by hand. The team compared the two sets of calls.
What they found
The computer matched the experts 81 percent of the time. That is about the hit rate you see when two experts check each other's work.
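Percent agreement here is nothing exotic: it is just the share of graphs where the two sets of calls match. A toy R illustration with invented data:

```r
# Item-by-item percent agreement between two raters (data invented)
script_calls <- c("attention", "escape", "undifferentiated", "tangible", "escape")
expert_calls <- c("attention", "escape", "automatic",        "tangible", "escape")
mean(script_calls == expert_calls) * 100  # share of matching calls, as a percent
#> [1] 80
```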
The script never gets tired. It scored every graph in seconds.
How this fits with other research
Cox et al. (2021) already showed that giving novices the Roane checklist boosts accuracy. Morantz et al. simply let the computer follow that same checklist.
Guerrero et al. (2022) warn that the checklist works less well for mealtime FA. The 81 percent agreement in Morantz et al.'s study is for general FA; mealtime graphs may still need your eyes.
Machado et al. (2021) trained observers to score behavior videos at five-times speed. Morantz et al. skip training people and let code do the scoring instead.
Why it matters
You can drop an FA graph into the script and get a first pass before you bill time for review. If the call is unclear, you still have the visual checklist and your own eyes. Use the tool to triage caseloads, not to replace clinical judgment.
Run your last five FA graphs through the free script and note any graphs where you disagree.
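A sketch of what that batch pass could look like, assuming one CSV per graph with condition and rate columns. The folder name, column names, and classify_fa() stub are all illustrative stand-ins, not the published script's real interface:

```r
# Hypothetical triage loop; everything here is an assumed stand-in for
# whatever interface the published script actually exposes.
classify_fa <- function(d) {
  ctrl <- d$rate[d$condition == "play"]
  test_rates <- d$rate[d$condition != "play"]
  test_conds <- d$condition[d$condition != "play"]
  # Simplified rule: flag if any test condition's mean rate clears the
  # control mean plus one control SD (not the full Roane criteria)
  if (any(tapply(test_rates, test_conds, mean) > mean(ctrl) + sd(ctrl))) {
    "likely differentiated"
  } else {
    "needs human review"
  }
}

files <- list.files("fa_data", pattern = "\\.csv$", full.names = TRUE)
calls <- vapply(files, function(f) classify_fa(read.csv(f)), character(1))
calls[calls == "needs human review"]  # graphs to inspect by eye first
```

Anything the script cannot call cleanly goes back to visual inspection, which is the triage-not-replace workflow described above.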
Original abstract
The advent of functional analysis (FA) methodology paved the way for improved function-based behavioral interventions and ultimately client outcomes. Behavior analysts primarily rely on visual inspection to interpret FA results. However, the literature suggests interpretations may vary across raters resulting in poor interobserver agreement (IOA). To increase interpretation objectivity and address IOA issues, Hagopian et al. created visual-inspection criteria. They reported improved IOA, alongside criteria limitations. Following this, Roane et al. modified these criteria. The current project describes the first steps toward developing a decision support system to assist in FA interpretation. Specifically, we created a computer script, written in R, designed to evaluate FA data and produce an outcome (assign function) based on the Roane et al. criteria. Average agreement between experienced human raters and the computer script outcomes was 81%. We discuss criteria limitations (e.g., vague rules), study implications, and the significance of further research on this topic.
Behavior Modification, 2022 · doi:10.1177/0145445520969188