Assessment & Research

Use of Oculomotor Behavior to Classify Children with Autism and Typical Development: A Novel Implementation of the Machine Learning Approach.

Zhao et al. (2023) · Journal of Autism and Developmental Disorders
★ The Verdict

Seven-second eye-motion clips can spot autism with 87% accuracy in controlled settings.

✓ Read this if: you're a BCBA who does intake screenings or works with early-identification teams.
✗ Skip if: you're a clinician doing only home-based therapy with no assessment role.

01 Research in Context

01

What this study did

Zhao et al. (2023) recorded children's gaze during a face-to-face conversation, then cut each recording into 7-second segments.

A computer then learned to tell which segments came from children with autism and which from typically developing children, using only these short gaze clips.

No toys, no test items; just how the eyes moved during an ordinary conversation.

02

What they found

The program classified individual 7-second segments at about 74% accuracy; flagging a child as ASD whenever more than 46% of their segments looked ASD-like pushed whole-child accuracy to 87%.

That is high enough to act like a quick red-flag screener, not a full diagnosis.
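The two-stage logic above (score each 7-second segment, then vote across segments with a 46% cutoff) can be sketched in a few lines. This is an illustrative reconstruction of the thresholding step only, not the authors' code; the function name and the binary segment labels are assumptions, and the real study used a trained machine-learning model to produce those labels.

```python
def classify_child(segment_labels, threshold=0.46):
    """Vote across per-segment predictions.

    segment_labels: list of 0/1 labels for each 7-s gaze segment,
                    where 1 means the segment was scored as ASD-like
                    by the segment-level model (not shown here).
    Returns "ASD" if the ASD-like fraction exceeds the threshold.
    """
    asd_fraction = sum(segment_labels) / len(segment_labels)
    return "ASD" if asd_fraction > threshold else "TD"

# 5 of 10 segments flagged: 50% > 46%, so the child is flagged for screening
print(classify_child([1, 1, 1, 1, 1, 0, 0, 0, 0, 0]))  # prints ASD
```

One design point worth noting: because the vote pools many short segments, a few noisy clips do not flip the decision, which is part of why the participant-level accuracy (87%) exceeds the segment-level accuracy (74%).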

03

How this fits with other research

Crippa et al. (2015) did something similar years ago, but used recordings of arm-reaching movements and hit 97% accuracy.

da Silva et al. (2025) also used eye tracking, yet in noisy nursery settings reached only an AUC of 0.65, much lower.

The gap makes sense: quiet labs give clean data; real-life rooms add distraction.

Vernetti et al. (2024) moved closer to real life by letting toddlers watch a live face; they still saw clear autism signals, showing the idea holds outside strict lab set-ups.

04

Why it matters

You can’t run a full eye-tracking lab in every clinic, but you can borrow the spirit.

Keep your gaze checks short, seat the child in a calm spot, and watch for unusual scanning patterns.

If the eyes look atypical, flag for a fuller screen—no extra gear needed.

→ Action — try this Monday

Film a 10-second free-view clip during intake and note any rapid side-to-side scanning or long peripheral stares for later review.

02 At a glance

Intervention: not applicable
Design: other
Sample size: 39
Population: autism spectrum disorder, neurotypical
Finding: positive
Magnitude: medium

03 Original abstract

This study segmented the time series of gaze behavior from nineteen children with autism spectrum disorder (ASD) and 20 children with typical development in a face-to-face conversation. A machine learning approach showed that behavior segments produced by these two groups of participants could be classified with the highest accuracy of 74.15%. These results were further used to classify children using a threshold classifier. A maximum classification accuracy of 87.18% was achieved, under the condition that a participant was considered as 'ASD' if over 46% of the child's 7-s behavior segments were classified as ASD-like behaviors. The idea of combining the behavior segmentation technique and the threshold classifier could maximally preserve participants' data, and promote the automatic screening of ASD.

Journal of Autism and Developmental Disorders, 2023 · doi:10.1007/s10803-021-05255-7