Assessment & Research

Reliably quantifying the severity of social symptoms in children with autism using ASDSpeech.

Eni et al. (2025) · Translational Psychiatry
★ The Verdict

A free, open-source AI tool can estimate ADOS-2 social-affect scores from brief speech recordings of autistic children.

✓ Read this if you're a BCBA who runs intake or re-evaluation clinics and needs quick severity checks.
✗ Skip if you work primarily with non-speaking clients or adults.

01 · Research in Context

01

What this study did

Eni et al. (2025) built a free, open-source AI tool called ASDSpeech. It analyzes short speech recordings from autistic children and estimates their ADOS-2 social-affect score.

The team tested the tool on children who completed two ADOS-2 assessments, 1–2 years apart, to see whether the estimates held up over time.

02

What they found

The AI estimates correlated significantly with actual ADOS-2 totals at both time points (r = 0.544 and 0.605, both P < 0.0001), and the tool was especially accurate at estimating social-affect scores. A machine can now rate social symptom severity from a simple voice recording.
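The headline numbers in the abstract below (r = 0.544 and 0.605) are Pearson correlations between estimated and actual ADOS-2 totals. For readers unfamiliar with the statistic, here is a minimal sketch of how such a correlation is computed; the score values are invented toy data, not study data:

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation: covariance of the two score lists
    # divided by the product of their standard deviations
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy example: hypothetical actual vs. AI-estimated ADOS-2 totals
actual    = [12, 18, 7, 21, 15, 9]
estimated = [10, 16, 9, 19, 14, 11]
print(round(pearson_r(actual, estimated), 3))  # prints 0.959
```

A value near 1 means the AI's ranking of children tracks the clinician-scored ranking closely; the study's 0.544–0.605 indicates moderate-to-strong agreement, not interchangeability with a full ADOS-2.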

03

How this fits with other research

Boorom et al. (2022) also used automated acoustic tools. They found parent–child conversation in ASD to be more rigid, supporting the idea that conversational timing carries clinical signal.

Kissine et al. (2019) and Lau et al. (2023) show that autistic speakers have distinct acoustic signatures. Eni's work extends these smaller studies toward a practical severity estimate.

A meta-analysis by Burrows et al. (2018) found that autistic facial expressions are fewer and shorter in duration. Together, these papers suggest that both face and voice carry measurable social signals.

04

Why it matters

You can record a brief speech sample and get an objective severity snapshot. Use it to track change between full ADOS-2 administrations or to flag children who may need deeper screening. The tool is open-source, so there is no licensing cost.

Free CEUs

Want CEUs on This Topic?

The ABA Clubhouse has 60+ free CEUs — live every Wednesday. Ethics, supervision & clinical topics.

Join Free →
→ Action — try this Monday

Record each child’s greeting during arrival and run the clip through ASDSpeech to flag large score changes over time.

02 · At a glance

Intervention
not applicable
Design
other
Sample size
197
Population
autism spectrum disorder
Finding
positive
Magnitude
medium

03 · Original abstract

Several studies have demonstrated that the severity of social communication problems, a core symptom of Autism Spectrum Disorder (ASD), is correlated with specific speech characteristics of ASD individuals. This suggests that it may be possible to develop speech analysis algorithms that can quantify ASD symptom severity from speech recordings in a direct and objective manner. Here we demonstrate the utility of a new open-source AI algorithm, ASDSpeech, which can analyze speech recordings of ASD children and reliably quantify their social communication difficulties across multiple developmental timepoints. The algorithm was trained and tested on the largest ASD speech dataset available to date, which contained 99,193 vocalizations from 197 ASD children recorded in 258 Autism Diagnostic Observation Schedule, Second edition (ADOS-2) assessments. ASDSpeech was trained with acoustic and conversational features extracted from the speech recordings of 136 children, who participated in a single ADOS-2 assessment, and tested with independent recordings of 61 additional children who completed two ADOS-2 assessments, separated by 1–2 years. Estimated total ADOS-2 scores in the test set were significantly correlated with actual scores when examining either the first (r(59) = 0.544, P < 0.0001) or second (r(59) = 0.605, P < 0.0001) assessment. Separate estimation of social communication and restricted and repetitive behavior symptoms revealed that ASDSpeech was particularly accurate at estimating social communication symptoms (i.e., ADOS-2 social affect scores). These results demonstrate the potential utility of ASDSpeech for enhancing basic and clinical ASD research as well as clinical management. We openly share both algorithm and speech feature dataset for use and further development by the community.

Translational Psychiatry, 2025 · doi:10.1038/s41398-025-03233-6