Enhancing Developmental Language Disorder Identification with Artificial Intelligence: Development of an Explainable Screening App Using Real and Synthetic Data.
A new explainable AI app screens for DLD in 7- to 10-year-olds using only three quick language games on a phone, and its calls closely match clinicians' diagnoses.
01 Research in Context
What this study did
Researchers built an AI phone app that spots Developmental Language Disorder in 7- to 10-year-olds. Kids play three quick games: repeat a sentence, pick the right word ending, and tell a short story.
The app scores vocabulary, grammar, and sentence memory. It then shows a simple red-yellow-green risk flag plus a one-sentence reason you can share with parents.
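The study does not publish the app's scoring logic, but the flow it describes (three game scores in, a traffic-light flag plus a one-sentence reason out) can be sketched roughly as below. The score names, the 0-100 scale, and the cutoffs of 40 and 60 are hypothetical assumptions for illustration only.

```python
# Illustrative sketch only: score names, scale, and cutoffs are assumed,
# not taken from the published app.

def screen(vocab: float, grammar: float, sentence_rep: float) -> tuple[str, str]:
    """Map three language-game scores (0-100) to a risk flag and a reason."""
    scores = {"vocabulary": vocab, "grammar": grammar, "sentence memory": sentence_rep}
    low = [name for name, s in scores.items() if s < 40]         # assumed cutoff
    borderline = [name for name, s in scores.items() if 40 <= s < 60]
    if low:
        return "red", f"Scores well below the typical range in: {', '.join(low)}."
    if borderline:
        return "yellow", f"Borderline scores in: {', '.join(borderline)}."
    return "green", "All three language scores fall in the typical range."

flag, reason = screen(vocab=72, grammar=35, sentence_rep=55)
print(flag, "-", reason)  # red - Scores well below the typical range in: grammar.
```

The one-sentence reason is what makes the output shareable with parents: it names the specific measure driving the flag rather than reporting a bare probability.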
What they found
The app agreed with expert clinicians almost every time. It caught kids who really had DLD and rarely flagged typical talkers.
Because the AI is "explainable," you see the exact words the child missed. No black box.
How this fits with other research
Marsack-Topolewski et al. (2025) reported 97-99% accuracy for ASD with an ensemble model, while Georgiou (2025) reaches similar precision for DLD with a single lightweight app. Both rely on transparent, interpretable features, so the reasoning behind each flag can be inspected.
Ochi et al. (2024) also mined speech, but in adults and for ASD. Their 90% accuracy is strong, yet Georgiou (2025) shows child grammar markers work even better for DLD.
Kremkow et al. (2022) warn most digital autism screeners are still "proof of concept." Georgiou (2025) closes that gap with a finished, child-friendly tool built for clinical use.
Why it matters
Many BCBAs see kids whose problem behavior hides a language issue. A five-minute app can now tell you if DLD is part of the picture before you plan treatment. Slip it into intake, read the plain-language flag, and adjust your program or referral list on the spot.
Download the beta, run it with one client who has unclear language skills, and compare the flag to your current informal probe.
02 At a glance
03 Original abstract
PURPOSE: This study aims to evaluate key linguistic markers for distinguishing children with developmental language disorder (DLD) from their typically developing (TD) peers and to develop an artificial intelligence (AI)-based, explainable screening app.

METHOD: Thirty children aged 7-10 (15 with DLD and 15 TD) completed a verbal assessment battery measuring vocabulary production, morphosyntactic abilities, and sentence repetition. Based on these data, a random forest classifier was trained on synthetically generated datasets to develop an online, explainable screening app.

RESULTS: Bayesian analyses provided strong evidence for significant group differences across all three linguistic measures. The screening app, when validated on unseen cases, demonstrated high concordance with clinical diagnoses made by speech-language pathologists, indicating its reliability in identifying children with DLD.

CONCLUSION: These findings support the diagnostic value of specific linguistic indicators in identifying DLD and demonstrate the feasibility of an AI-driven screening solution. The app's interpretability and scalability offer practical advantages for detection, particularly in under-resourced settings, by reducing subjectivity and time demands in the diagnostic process. Moreover, this study highlights the potential of synthetic data augmentation to overcome limitations associated with small clinical datasets, thereby enhancing the robustness and generalizability of AI-based screening apps.
Journal of Autism and Developmental Disorders, 2025
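The abstract's core method, a random forest trained on synthetically augmented data from three linguistic measures, can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the group score profiles, the Gaussian synthetic-data generator, and the feature names are all assumptions, since the paper's exact augmentation scheme is not reproduced here.

```python
# Hedged sketch: random forest on three language features, with synthetic
# Gaussian samples standing in for the paper's (unpublished) augmentation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def synthesize(mean, sd, n):
    """Draw n synthetic [vocabulary, morphosyntax, sentence-repetition] scores."""
    return rng.normal(mean, sd, size=(n, 3))

# Hypothetical group profiles: DLD children score lower on all three measures.
X = np.vstack([synthesize([60, 55, 50], 8, 500),   # TD
               synthesize([40, 35, 30], 8, 500)])  # DLD
y = np.array([0] * 500 + [1] * 500)                # 1 = DLD

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# "Explainable" here means the model exposes which measures drive its calls.
for name, imp in zip(["vocabulary", "morphosyntax", "sentence repetition"],
                     clf.feature_importances_):
    print(f"{name}: {imp:.2f}")

# A child scoring low on all three measures screens as likely DLD.
print(clf.predict([[42, 33, 31]]))
```

Feature importances are the simplest form of the interpretability the abstract emphasizes; the published app additionally surfaces the specific items the child missed.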