Assessment & Research

Machine Learning for Supplementing Behavioral Assessment

Bailey et al. (2021) · Perspectives on Behavior Science
★ The Verdict

Let a neural net chew on your QABF scores to pick the right FA conditions faster.

✓ Read this if you're a BCBA who runs QABFs before full FAs in clinics or schools.
✗ Skip if you rely on interview-only FBAs and never run a QABF.

01 Research in Context

01

What this study did

The team trained machine-learning models on published QABF results paired with follow-up functional analyses. The goal: better predictions of why problem behavior happens.

They compared five algorithms, including a neural network, a decision tree, and a support-vector machine. In a second experiment, they also boosted the data set with 1,000 artificial but realistic cases.
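As a rough illustration of the setup (not the authors' code), QABF subscale totals can be encoded as a feature vector and compared against a simple cutoff baseline. The subscale names are the standard QABF categories, but the cutoff of 10 is a hypothetical value, not the published QABF criterion:

```python
# Illustrative only: encode QABF subscale totals as a feature vector and
# apply a simple cutoff rule as a stand-in baseline.
SUBSCALES = ["attention", "escape", "nonsocial", "physical", "tangible"]

def qabf_features(scores: dict) -> list:
    """Turn a {subscale: total} dict into an ordered feature vector."""
    return [scores[s] for s in SUBSCALES]

def cutoff_baseline(features: list, cutoff: int = 10) -> list:
    """Binary multilabel prediction: 1 if a subscale total meets the cutoff."""
    return [1 if x >= cutoff else 0 for x in features]

case = {"attention": 12, "escape": 7, "nonsocial": 3,
        "physical": 2, "tangible": 11}
print(cutoff_baseline(qabf_features(case)))  # → [1, 0, 0, 0, 1]
```

A trained model replaces the cutoff rule with weights learned from past FA outcomes; the input/output shape stays the same.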

02

What they found

The neural net beat the usual QABF cut-offs. With the augmented data, it got the right function about 9 times out of 10.

The other algorithms also outperformed the cut-offs, but by smaller margins, and false negatives remained an issue. Data augmentation helped the neural net the most.
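The headline metric, multilabel accuracy, is the share of individual function labels (present or absent) predicted correctly, as the abstract describes. A minimal sketch with made-up label vectors:

```python
# Multilabel accuracy: the percentage of predictions in which the
# presence or absence of EACH function is indicated correctly.
# The label vectors below are illustrative, not study data.
def multilabel_accuracy(predicted: list, actual: list) -> float:
    """Fraction of individual function labels predicted correctly."""
    total = correct = 0
    for p_vec, a_vec in zip(predicted, actual):
        for p, a in zip(p_vec, a_vec):
            total += 1
            correct += (p == a)
    return correct / total

preds = [[1, 0, 0, 0, 1], [0, 1, 0, 0, 0]]
truth = [[1, 0, 0, 0, 0], [0, 1, 0, 0, 0]]
print(multilabel_accuracy(preds, truth))  # → 0.9
```

Note that a model can score high here while still missing true functions; that is why the false-negative rate mattered in Experiment 1.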

03

How this fits with other research

Lanovaz et al. (2020) already showed ML can outscore human eyes on single-case graphs. Bailey et al. now push the same edge onto QABF surveys.

Gossou et al. (2022) reached a large share of correct calls with open-ended caregiver interviews. The ML model here reaches a similarly large share from QABF scores alone, so it may replace that first interview step.

Préfontaine et al. (2024) used ML to forecast how much kids with autism will progress. Bailey flips the timeline: use ML before you even start the FA.

04

Why it matters

You still need an FA to be sure, but the ML guess can shrink the number of test conditions. Fewer conditions mean faster assessment and less problem behavior in the clinic. Export your QABF scores to a free neural-net script, run it overnight, and let the print-out guide tomorrow's FA conditions.

→ Action — try this Monday

Feed last week's QABF totals into the free R neural-net package and test the top predicted function first in your FA.
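The ordering step works in any language; here is a short Python sketch. The probabilities are hypothetical, not outputs from any published model:

```python
# A sketch of the suggested workflow: given hypothetical model
# probabilities for each function, run FA conditions in descending
# order of predicted likelihood so the most probable function is
# tested first.
def fa_condition_order(probs: dict) -> list:
    """Sort candidate FA conditions by predicted probability, highest first."""
    return sorted(probs, key=probs.get, reverse=True)

predicted = {"attention": 0.62, "escape": 0.21,
             "tangible": 0.11, "automatic": 0.06}
print(fa_condition_order(predicted))
# → ['attention', 'escape', 'tangible', 'automatic']
```

Front-loading the most likely condition is what shrinks assessment time: if the top prediction is confirmed early, the remaining conditions may not need as many sessions.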

02 At a glance

Intervention
functional behavior assessment
Design
other
Population
not specified
Finding
positive

03 Original abstract

The Questions About Behavioral Function (QABF) has a high degree of convergent validity, but there is still a lack of agreement between the results of the assessment and the results of experimental function analysis. Machine learning (ML) may improve the validity of assessments by using data to build a mathematical model for more accurate predictions. We used published QABF and subsequent functional analyses to train ML models to identify the function of behavior. With ML models, predictions can be made from indirect assessment results based on learning from results of past experimental functional analyses. In Experiment 1, we compared the results of five algorithms to the QABF criteria using a leave-one-out cross-validation approach. All five outperformed the QABF assessment on multilabel accuracy (i.e., percentage of predictions with the presence or absence of each function indicated correctly), but false negatives remained an issue. In Experiment 2, we augmented the data with 1,000 artificial samples to train and test an artificial neural network. The artificial network outperformed other models on all measures of accuracy. The results indicated that ML could be used to inform conditions that should be present in a functional analysis. Therefore, this study represents a proof-of-concept for the application of machine learning to functional assessment.

Perspectives on Behavior Science, 2021 · doi:10.1007/s40614-020-00273-9