Face-viewing patterns predict audiovisual speech integration in autistic children.
Less mouth looking predicts weaker audiovisual speech fusion in young autistic children.
Research in Context
What this study did
Feng et al. (2021) had autistic and typically developing 4- to 7-year-olds complete a McGurk task: each child heard one syllable while a videotaped face mouthed another, and an eye tracker recorded where the child looked on the screen.
They wanted to know whether less mouth gaze predicts weaker McGurk fusion in autism.
What they found
Autistic children fused the sounds less often when the speaker's eyes were open; with her eyes closed, the two groups looked similar. Kids who spent little time on the mouth showed the weakest fusion.
The pattern held only for the autism group; typical kids fused the syllables no matter where they looked.
How this fits with other research
Whitehouse et al. (2014) had already shown that the McGurk illusion is smaller in autism. The new study adds eye-tracking to explain why: reduced mouth looking predicts the size of the gap.
Taylor et al. (2010) found that audiovisual integration in autism catches up by adolescence. The children in the 2021 study were younger, so the deficit may fade with age.
Zhao et al. (2023) also found reduced mouth gaze in autistic children during live conversations conducted in Chinese. The two papers agree that autistic children under-sample the mouth region across cultures and tasks.
Why it matters
If a client rarely looks at your mouth, they may miss the lip cues that bind heard and seen speech together. Try adding gentle prompts to look at the mouth, or use videos with a large, clear view of the speaker's face during speech drills. Tracking gaze during baseline can flag kids who need extra visual supports before you start auditory discrimination work.
Place a small sticker near your mouth when saying target words and note whether the child glances at it.
Original abstract
Autistic children show audiovisual speech integration deficits, though the underlying mechanisms remain unclear. The present study examined how audiovisual speech integration deficits in autistic children could be affected by their looking patterns. We measured audiovisual speech integration in 26 autistic children and 26 typically developing (TD) children (4- to 7-year-old) employing the McGurk task (a videotaped speaker uttering phonemes with her eyes open or closed) and tracked their eye movements. We found that, compared with TD children, autistic children showed weaker audiovisual speech integration (i.e., the McGurk effect) in the open-eyes condition and similar audiovisual speech integration in the closed-eyes condition. Autistic children viewed the speaker's mouth less in non-McGurk trials than in McGurk trials in both conditions. Importantly, autistic children's weaker audiovisual speech integration could be predicted by their reduced mouth-looking time. The present study indicated that atypical face-viewing patterns could serve as one of the cognitive mechanisms of audiovisual speech integration deficits in autistic children. LAY SUMMARY: McGurk effect occurs when the visual part of a phoneme (e.g., "ga") and the auditory part of another phoneme (e.g., "ba") uttered by a speaker were integrated into a fused perception (e.g., "da"). The present study examined how McGurk effect in autistic children could be affected by their looking patterns for the speaker's face. We found that less looking time for the speaker's mouth in autistic children could predict weaker McGurk effect. As McGurk effect manifests audiovisual speech integration, our findings imply that we could improve audiovisual speech integration in autistic children by directing them to look at the speaker's mouth in future intervention.
Autism Research: Official Journal of the International Society for Autism Research, 2021 · doi:10.1002/aur.2598