Machine Learning to Analyze Single-Case Data: A Proof of Concept
Machine-learning software can judge your single-case graphs more accurately than the dual-criteria method you learned in grad school.
01 Research in Context
What this study did
Lanovaz and his team built a computer program that reads single-case graphs.
They fed the program 1,000 simulated graphs that mimicked real ABA data.
Then they asked 30 BCBAs to judge the same graphs by eye.
The goal was to see if the computer could spot real effects as well as the humans.
What they found
The computer's calls matched the expert visual analysts' most of the time.
It made fewer false alarms than the old dual-criteria method.
It also caught more true effects when they were really there.
In short, the machine was more accurate and more reliable than the usual rules of thumb.
How this fits with other research
Bailey et al. (2021) found a similar result in a different domain.
They used machine learning on QABF forms and beat the standard checklist.
Préfontaine et al. (2024) took the idea even further.
They used ML to predict how much autistic preschoolers would improve after early ABA.
All three studies show the same pattern: computers can outdo old paper rules.
Madden et al. (2003) used matching-law math on self-injury data without any computer.
Their hand-fit analyses still detected real trends, so the new ML tool does not erase past work.
It just makes the job faster and less prone to human error.
Why it matters
You can start testing ML tools in your own practice today.
Upload your next single-case graph to a free ML analyzer before you write the report.
If the computer and your eye agree, you gain confidence.
If they differ, you have a clear reason to double-check the data.
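Before you compare your eye against an ML verdict, it helps to see what the dual-criteria (DC) benchmark in this paper actually computes. The sketch below is our own minimal implementation of the usual DC setup (two baseline lines, a binomial significance criterion, an expected increase in behavior); the function and variable names are ours, not the paper's, and it assumes at least two baseline points.

```python
from math import comb

def dual_criteria(baseline, treatment, alpha=0.05):
    """Dual-criteria test for an expected INCREASE in the target behavior.

    Returns (effect_flagged, points_above_both_lines, points_needed).
    """
    n_b, n_t = len(baseline), len(treatment)
    # Criterion line 1: the baseline mean, extended flat into treatment.
    mean_line = sum(baseline) / n_b
    # Criterion line 2: an ordinary-least-squares trend fit on the
    # baseline, extrapolated into the treatment phase.
    x_mean = (n_b - 1) / 2
    sxx = sum((x - x_mean) ** 2 for x in range(n_b))
    sxy = sum((x - x_mean) * (y - mean_line) for x, y in enumerate(baseline))
    slope = sxy / sxx
    intercept = mean_line - slope * x_mean
    # Count treatment points that fall above BOTH lines.
    above = sum(
        1 for i, y in enumerate(treatment)
        if y > mean_line and y > slope * (n_b + i) + intercept
    )
    # Smallest count significant under Binomial(n_t, .5) at level `alpha`:
    # if each point landed above by chance (p = .5), how many is too many?
    def upper_tail(k):  # P(X >= k)
        return sum(comb(n_t, i) for i in range(k, n_t + 1)) / 2 ** n_t
    needed = next(k for k in range(n_t + 1) if upper_tail(k) <= alpha)
    return above >= needed, above, needed

# A clearly increasing AB series is flagged; a flat one is not.
print(dual_criteria([2, 2, 3, 2, 3], [6, 7, 6, 8, 7]))  # → (True, 5, 5)
print(dual_criteria([2, 2, 3, 2, 3], [2, 3, 2, 2, 3]))  # → (False, 0, 5)
```

Note the strictness with short phases: with only five treatment points, all five must sit above both lines, which is part of why the DC method misses real but modest effects.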
Run your last three AB graphs through an open-source ML analyzer and compare the verdict to your visual call.
02 At a glance
03 Original abstract
Visual analysis is the most commonly used method for interpreting data from single-case designs, but levels of interrater agreement remain a concern. Although structured aids to visual analysis such as the dual-criteria (DC) method may increase interrater agreement, the accuracy of the analyses may still benefit from improvements. Thus, the purpose of our study was to (a) examine correspondence between visual analysis and models derived from different machine learning algorithms, and (b) compare the accuracy, Type I error rate and power of each of our models with those produced by the DC method. We trained our models on a previously published dataset and then conducted analyses on both nonsimulated and simulated graphs. All our models derived from machine learning algorithms matched the interpretation of the visual analysts more frequently than the DC method. Furthermore, the machine learning algorithms outperformed the DC method on accuracy, Type I error rate, and power. Our results support the somewhat unorthodox proposition that behavior analysts may use machine learning algorithms to supplement their visual analysis of single-case data, but more research is needed to examine the potential benefits and drawbacks of such an approach.
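The abstract's benchmarks, Type I error rate and power, can be reproduced in miniature with a quick Monte Carlo: simulate AB graphs with and without a true level shift, apply a dual-criteria-style rule, and count false alarms and hits. This is our own illustrative sketch, not the authors' simulation code; the noise model, phase lengths, and effect size are all assumed values.

```python
import random
from math import comb

def dc_flags_increase(baseline, treatment, alpha=0.05):
    """Compact dual-criteria rule: flag an effect when enough treatment
    points sit above both the baseline mean line and the baseline trend line."""
    n_b, n_t = len(baseline), len(treatment)
    mean = sum(baseline) / n_b
    x_mean = (n_b - 1) / 2
    sxx = sum((x - x_mean) ** 2 for x in range(n_b))
    slope = sum((x - x_mean) * (y - mean) for x, y in enumerate(baseline)) / sxx
    above = sum(
        1 for i, y in enumerate(treatment)
        if y > mean and y > mean + slope * (n_b + i - x_mean)
    )
    needed = next(
        k for k in range(n_t + 1)
        if sum(comb(n_t, j) for j in range(k, n_t + 1)) / 2 ** n_t <= alpha
    )
    return above >= needed

rng = random.Random(0)
REPS, N_B, N_T, SHIFT = 2000, 10, 10, 2.0  # assumed simulation settings

def flag_rate(shift):
    """Fraction of simulated AB graphs the DC rule flags as showing an effect."""
    hits = 0
    for _ in range(REPS):
        base = [rng.gauss(0, 1) for _ in range(N_B)]
        treat = [rng.gauss(shift, 1) for _ in range(N_T)]
        hits += dc_flags_increase(base, treat)
    return hits / REPS

type1 = flag_rate(0.0)    # no true effect: flag rate = Type I error
power = flag_rate(SHIFT)  # true 2-SD level shift: flag rate = power
print(f"Type I error ~ {type1:.3f}, power ~ {power:.3f}")
```

Swapping an ML classifier in for `dc_flags_increase` and rerunning the same loop is essentially the comparison the paper formalizes across its simulated graphs.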
Perspectives on Behavior Science, 2020 · doi:10.1007/s40614-020-00244-0