Assessment & Research

Machine learning classification of autism spectrum disorder based on reciprocity in naturalistic social interactions.

Koehler et al. (2024) · Translational Psychiatry
★ The Verdict

A simple webcam can read facial back-and-forth and spot autistic adults with 79.5% balanced accuracy, giving clinics a quick, low-cost screen until sharper models arrive.

✓ Read this if you're a BCBA doing adult intakes in outpatient or day-program settings.
✗ Skip if you only serve toddlers or lack video consent protocols.

01 Research in Context

01

What this study did

Koehler et al. (2024) fed short clips of adults chatting (28 autistic and 60 non-autistic adults, paired in dyads) into open-source computer-vision software. The code tracked small facial movements and how closely each person's expressions adapted to the other's.

The goal was to see whether this back-and-forth, called reciprocity, could identify who was autistic without a lengthy assessment.
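The general idea can be sketched in a few lines. This is a minimal illustration, not the authors' actual pipeline: the function names (`make_signal`, `reciprocity_feature`), the synthetic data, and the single peak-lagged-correlation feature are all stand-ins, though the abstract confirms the study did feed synchrony-based features into a support vector machine.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def make_signal(rng, n=200):
    """Stand-in for one person's facial-expression intensity trace
    (smoothed noise; the real study used computer-vision annotations)."""
    return np.convolve(rng.standard_normal(n + 9), np.ones(10) / 10, mode="valid")

def reciprocity_feature(a, b, max_lag=10):
    """Peak lagged correlation between two traces: how strongly, at any
    short delay, one partner's expressions track the other's."""
    a = (np.asarray(a, float) - np.mean(a)) / (np.std(a) + 1e-9)
    b = (np.asarray(b, float) - np.mean(b)) / (np.std(b) + 1e-9)
    corrs = []
    for lag in range(-max_lag, max_lag + 1):
        a_seg = a[max(0, -lag): len(a) - max(0, lag)]
        b_seg = b[max(0, lag): len(b) - max(0, -lag)]
        corrs.append(float(np.corrcoef(a_seg, b_seg)[0, 1]))
    return max(corrs)

# Synthetic dyads: "coupled" partners imitate each other after a short
# delay; "uncoupled" partners move independently.
X, y = [], []
for label in (0, 1):
    for _ in range(30):
        a = make_signal(rng)
        if label:
            b = np.roll(a, 3) + 0.1 * rng.standard_normal(len(a))
        else:
            b = make_signal(rng)
        X.append([reciprocity_feature(a, b)])
        y.append(label)

# A support vector machine then separates dyads on this feature.
clf = SVC(kernel="rbf")
scores = cross_val_score(clf, np.array(X), np.array(y), cv=5)
print(round(scores.mean(), 2))
```

Even this toy version recovers the coupled dyads almost perfectly, because imitation with a short delay produces a high peak lagged correlation that independent movement does not.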

02

What they found

The best model got it right about four times out of five, labeling autistic adults with 79.5% balanced accuracy just from facial give-and-take during a short conversation.

No extra sensors, no special room—just a webcam and free software.
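"Balanced accuracy" matters here because the sample was lopsided (28 autistic, 60 non-autistic): it averages the hit rate within each group, so a model can't look good just by betting on the bigger group. A minimal sketch of the metric (the helper name is mine, not the paper's):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity (recall on the autistic group, label 1) and
    specificity (recall on the non-autistic group, label 0)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    sens = np.mean(y_pred[y_true == 1] == 1)
    spec = np.mean(y_pred[y_true == 0] == 0)
    return (sens + spec) / 2

# Why plain accuracy misleads: predicting "non-autistic" for everyone in a
# 28-vs-60 sample scores 68% plain accuracy but only 50% balanced accuracy.
y_true = [1] * 28 + [0] * 60
y_pred = [0] * 88
print(balanced_accuracy(y_true, y_pred))  # 0.5
```

So the study's 79.5% is a genuine improvement over chance, not an artifact of the group sizes.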

03

How this fits with other research

Jabbar et al. (2026) later pushed the bar higher. Their CNN-GRU network reached 93% accuracy by analyzing repetitive movements such as hand-flapping and spinning instead of faces, so the newer approach now tops the 2024 score.

Rybner et al. (2022) warn that voice-only models fall apart when you switch languages. The face route looks safer for now, because video cues stay mostly the same across cultures.

Christensen et al. (2024) show that human clinicians also lean on reciprocity in the first five minutes. The machine and the trained eye are watching the same thing; one just does it faster and never gets tired.

04

Why it matters

The study relied on existing open-source computer-vision tools, so a similar pipeline could plausibly run on a tablet in your waiting room. A five-minute chat while intake forms are being completed could flag adults who need a full ADOS, saving clinician time and catching missed cases. Until Jabbar's 93% model moves out of the lab, 79.5% balanced accuracy from a webcam is still better than no screen at all.

→ Action — try this Monday

Record a five-minute conversation (with consent), run it through open-source facial-tracking and synchrony analysis, and add the printout to your intake folder for the evaluating clinician to review.

02 At a glance

Intervention: not applicable
Design: other
Sample size: 88
Population: autism spectrum disorder, neurotypical
Finding: positive
Magnitude: medium

03 Original abstract

Autism spectrum disorder is characterized by impaired social communication and interaction. As a neurodevelopmental disorder typically diagnosed during childhood, diagnosis in adulthood is preceded by a resource-heavy clinical assessment period. The ongoing developments in digital phenotyping give rise to novel opportunities within the screening and diagnostic process. Our aim was to quantify multiple non-verbal social interaction characteristics in autism and build diagnostic classification models independent of clinical ratings. We analyzed videos of naturalistic social interactions in a sample including 28 autistic and 60 non-autistic adults paired in dyads and engaging in two conversational tasks. We used existing open-source computer vision algorithms for objective annotation to extract information based on the synchrony of movement and facial expression. These were subsequently used as features in a support vector machine learning model to predict whether an individual was part of an autistic or non-autistic interaction dyad. The two prediction models based on reciprocal adaptation in facial movements, as well as individual amounts of head and body motion and facial expressiveness showed the highest precision (balanced accuracies: 79.5% and 68.8%, respectively), followed by models based on reciprocal coordination of head (balanced accuracy: 62.1%) and body (balanced accuracy: 56.7%) motion, as well as intrapersonal coordination processes (balanced accuracy: 44.2%). Combinations of these models did not increase overall predictive performance. Our work highlights the distinctive nature of non-verbal behavior in autism and its utility for digital phenotyping-based classification. Future research needs to both explore the performance of different prediction algorithms to reveal underlying mechanisms and interactions, as well as investigate the prospective generalizability and robustness of these algorithms in routine clinical care.

Translational Psychiatry, 2024 · doi:10.1038/s41398-024-02802-5