Assessment & Research

Automatic Cry Analysis: Deep Learning for Screening of Autism Spectrum Disorder in Early Childhood.

Laguna et al. (2025) · Journal of autism and developmental disorders 2025
★ The Verdict

A phone app that listens to toddler cries flagged autism with 90% accuracy, offering a fast, low-cost first screen.

✓ Read this if BCBAs who run toddler assessments or work in early-intervention clinics.
✗ Skip if Practitioners serving only school-age clients or those without intake assessment duties.

01Research in Context

01

What this study did

The team recorded toddler cries and fed the sounds into a deep-learning computer.

The computer learned to tell the difference between cries from kids later diagnosed with autism and cries from typically developing kids.

No extra toys, questions, or clinic time were needed—just a short audio clip.

02

What they found

The cry model reached 90% accuracy when sorting ASD from TD toddlers.

Key sound clues were rougher, shakier cries with lower harmonic clarity—easy for a phone mic to catch.

03

How this fits with other research

Hedley et al. (2015) showed the 10-minute ADEC play screen catches 93-94% of ASD toddlers. The cry tool matches that hit rate without any play materials.

Toh et al. (2018) found the parent M-CHAT misses many ASD cases under 21 months. Cry analysis may plug that early-age gap.

MHeald et al. (2020) already used AI to track vocal stereotypy in autism sessions. Laguna et al. (2025) move the same idea upstream—using AI on cries for screening instead of on repetitive sounds for progress tracking.

04

Why it matters

You can record a cry on your phone while the family waits and get a quick risk flag before the full evaluation. No extra parent forms, no extra clinic room. If future trials hold up, this five-second step could shorten the path from first concern to diagnosis and early intervention.

Free CEUs

Want CEUs on This Topic?

The ABA Clubhouse has 60+ free CEUs — live every Wednesday. Ethics, supervision & clinical topics.

Join Free →
→ Action — try this Monday

Start audio-recording brief cry samples during intake (with consent) and note any future validation studies you can join.

02At a glance

Intervention
not applicable
Design
other
Sample size
62
Population
autism spectrum disorder, neurotypical
Finding
positive
Magnitude
large

03Original abstract

PURPOSE: The objective of this study is to identify the acoustic characteristics of cries of Typically Developing (TD) and Autism Spectrum Disorder (ASD) children via Deep Learning (DL) techniques to support clinicians in the early detection of ASD. METHODS: We used an existing cry dataset that included 31 children with ASD and 31 TD children aged between 18 and 54 months. Statistical analysis was applied to find differences between groups for different voice acoustic features such as jitter, shimmer and harmonics-to-noise ratio (HNR). A DL model based on Recursive Convolutional Neural Networks (R-CNN) was developed to classify cries of ASD and TD children. RESULTS: We found a statistical significant increase in jitter and shimmer for ASD cries compared to TD, as well as a decrease in HNR for ASD cries. Additionally, the DL algorithm achieved an accuracy of 90.28% in differentiating ASD cries from TD. CONCLUSION: Empowering clinicians with automatic non-invasive Artificial Intelligence (AI) tools based on cry vocal biomarkers holds considerable promise in advancing early detection and intervention initiatives for children at risk of ASD, thereby improving their developmental trajectories.

Journal of autism and developmental disorders, 2025 · doi:10.1007/s00127-019-01674-1