Imputing cognitive impairment in SPARK, a large autism cohort.
Parent-completed surveys predict low IQ in children with autism accurately enough for quick research grouping or intake triage.
01 Research in Context
What this study did
The team wanted to know whether parent surveys can stand in for formal IQ tests. They studied children with autism in the SPARK cohort: parents filled out roughly 20-minute online forms about language, daily living skills, and medical history.
A machine learning model called elastic-net learned which survey answers tracked low IQ scores. It was trained on 521 children who already had clinical IQ data, then tested on an independent set of 1,346 children, even when up to 70 percent of their survey data was missing. The gold-standard cutoff for cognitive impairment was a full-scale IQ (FSIQ) below 80. No child took a new IQ test; the model only predicted what past tests had shown.
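To make "elastic-net" concrete, here is a minimal sketch of that kind of model: a logistic regression with an elastic-net penalty, which mixes L1 (drops uninformative survey items) and L2 (shrinks the rest). Everything here is made up for illustration; the paper's actual features, preprocessing, and hyperparameters are not shown.

```python
# Hypothetical sketch, NOT the paper's pipeline: toy data stands in for
# parent-survey answers, and the label stands in for FSIQ < 80.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-in for survey answers: 200 children x 10 items
X = rng.normal(size=(200, 10))
# Toy label (1 = FSIQ below 80), driven here by the first two items only
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Elastic-net penalty: l1_ratio blends L1 (feature selection) with L2
model = LogisticRegression(
    penalty="elasticnet", solver="saga", l1_ratio=0.5, C=1.0, max_iter=5000
)
model.fit(X, y)

# Coefficients near zero mean that survey item added little predictive value
print(model.coef_.round(2))
```

In a setup like this, the fitted coefficients play the role of the paper's "predictive features": items with large weights are the ones the model leans on to flag likely cognitive impairment.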
What they found
The parent survey model separated low from typical IQ with an AUC of 0.876, on a scale where 1.0 is perfect and 0.5 is chance. It caught 77 percent of kids who truly had low IQ (sensitivity) and correctly ruled out 80 percent of those who did not (specificity).
In plain words, roughly eight times out of ten the survey was right. That is good enough to split kids into research groups when a real IQ score is missing.
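The three numbers above come from standard classification metrics. This is a small illustration of how they are computed from a set of true labels and model scores; the data here is synthetic, not the study's predictions.

```python
# Hypothetical sketch of the reported metrics; toy data only.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(1)

# Toy ground truth (1 = cognitive impairment) and model probability scores
y_true = rng.integers(0, 2, size=500)
scores = np.clip(y_true * 0.6 + rng.normal(scale=0.3, size=500), 0, 1)

auc = roc_auc_score(y_true, scores)       # area under the ROC curve
y_pred = (scores >= 0.5).astype(int)      # threshold probabilities at 0.5
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)  # share of truly impaired children flagged
specificity = tn / (tn + fp)  # share of non-impaired children ruled out
print(f"AUC={auc:.3f} sens={sensitivity:.3f} spec={specificity:.3f}")
```

Note that AUC summarizes performance across all possible thresholds, while sensitivity and specificity depend on the single cutoff chosen, which is why the paper reports all three.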
How this fits with other research
Cohen et al. (2018) already showed parents spot early autism signs better than short clinic visits. Chang’s work keeps the parent-report tool but moves the goal from early signs to thinking skills.
Cary et al. (2024) used the same SPARK parent forms to predict social motivation. Together, the two papers show that one cheap survey can feed many research questions.
Schroeder et al. (2014) warned that parent reports do not always match real-life behavior in Williams syndrome. Chang avoids that trap by checking against hard IQ scores, not everyday actions.
Why it matters
You can now sort large caseloads without hunting down old IQ files. If a parent survey flags possible low IQ, you can prioritize those kids for full assessment or tailor goals to their learning speed. The tool is free and takes minutes, not hours.
Add the 20-minute SPARK parent surveys to your intake packet; use the scored results to flag kids who need full IQ testing first.
03 Original abstract
Diverse large cohorts are necessary for dissecting subtypes of autism, and intellectual disability is one of the most robust endophenotypes for analysis. However, current cognitive assessment methods are not feasible at scale. We developed five commonly used machine learning models to predict cognitive impairment (FSIQ<80 and FSIQ<70) and FSIQ scores among 521 children with autism using parent-reported online surveys in SPARK, and evaluated them in an independent set (n = 1346) with a missing data rate up to 70%. We assessed accuracy, sensitivity, and specificity by comparing predicted cognitive levels against clinical IQ data. The elastic-net model has good performance (AUC = 0.876, sensitivity = 0.772, specificity = 0.803) using 129 predictive features to impute cognitive impairment (FSIQ<80). Top-ranked predictive features included parent-reported language and cognitive levels, age at autism diagnosis, and history of services. Prediction of FSIQ<70 and FSIQ scores also showed good performance. We show cognitive levels can be imputed with high accuracy for children with autism, using commonly collected parent-reported data and standardized surveys. The current model offers a method for large-scale autism studies seeking estimates of cognitive ability when standardized psychometric testing is not feasible.

LAY SUMMARY: Children with autism who have more severe learning challenges or cognitive impairment have different needs that are important to consider in research studies. When children in our study were missing standardized cognitive testing scores, we were able to use machine learning with other information to correctly "guess" when they have cognitive impairment about 80% of the time. We can use this information in research in the future to develop more appropriate treatments for children with autism and cognitive impairment.
Autism research : official journal of the International Society for Autism Research, 2022 · doi:10.1002/aur.2622