Temporal synchrony and audiovisual integration of speech and object stimuli in autism.
Autistic teens lose speech cues when lips lag sound by only ~100 ms, so keep classroom A-V gear in tight sync.
01 Research in Context
What this study did
Johnston et al. (2017) measured how well autistic teens could match lip movements with spoken syllables (consonant-vowel sounds).
They also tested a simpler non-speech pairing: video of a bouncing ball matched with its sound.
The team wanted to know how wide the 'time window' is before the brain says the sights and sounds no longer fit together.
What they found
Autistic teens needed the sound and lips to line up within about 100 ms.
If the delay was bigger, they lost the match more often than typical teens.
Oddly, the same teens did fine with slightly out-of-sync ball videos and sounds.
Tighter speech windows went hand-in-hand with higher autism severity scores.
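The "time window" idea can be sketched in a few lines of code. This is an illustrative toy, not the authors' analysis: the data are made up, and the 75% accuracy criterion and the simple span-of-passing-points rule are assumptions for demonstration only.

```python
# Toy sketch of a temporal window of integration (TWI).
# Hypothetical data and criterion -- NOT the study's analysis code.

def twi_width(asynchronies_ms, accuracies, criterion=0.75):
    """Width (ms) of the asynchrony range where matching accuracy
    stays at or above `criterion` (span of passing sample points)."""
    passing = [a for a, acc in zip(asynchronies_ms, accuracies) if acc >= criterion]
    if not passing:
        return 0.0
    return float(max(passing) - min(passing))

# Made-up accuracies for identifying the synced video, by audio lag.
lags = [-300, -200, -100, 0, 100, 200, 300]               # ms (negative = audio leads)
speech_acc = [0.55, 0.60, 0.80, 0.95, 0.78, 0.58, 0.52]   # narrow window
object_acc = [0.60, 0.78, 0.88, 0.96, 0.90, 0.80, 0.55]   # wider window

print(twi_width(lags, speech_acc))   # 200.0 -> speech tolerated only to about +/-100 ms
print(twi_width(lags, object_acc))   # 400.0 -> object stimuli tolerated to about +/-200 ms
```

The pattern reported in the study would show up here as a narrower window for speech than for object stimuli in the autistic group, while controls would show similar widths for both.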
How this fits with other research
Porter et al. (2008) and Iarocci et al. (2010) saw the same lip-reading weakness in younger kids, suggesting the difficulty persists across ages.
Bao et al. (2017) adds that even whole-word recognition in noise drops for autistic children, showing the issue hurts real listening.
Jean-Wehman et al. (2017) seem to disagree: their autistic group quickly adjusted to delayed speech.
The gap is only in method: Jean-Wehman et al. tested fast recalibration after many trials, while Johnston et al. mapped the very first window where fusion fails.
Together they tell us autistic brains can learn timing shifts with practice, but start with a narrower gate.
Why it matters
Check the A-V sync on tablets, whiteboard videos, and Zoom before you teach.
A 150 ms lag that typical kids ignore may break comprehension for your autistic clients.
If you use video modeling, download files and play them offline to keep lips tight to sound.
For live teaching, face the student and speak at normal speed; adding extra pauses helps more than slowing your mouth.
Test the delay on your teaching iPad: play a talking-head video and watch for lip-sync mismatch; re-encode or switch apps if the sound trails by even a blink.
02 At a glance
03 Original abstract
BACKGROUND: Individuals with Autism Spectrum Disorders (ASD) have been shown to have multisensory integration deficits, which may lead to problems perceiving complex, multisensory environments. For example, understanding audiovisual speech requires integration of visual information from the lips and face with auditory information from the voice, and audiovisual speech integration deficits can lead to impaired understanding and comprehension. While there is strong evidence for an audiovisual speech integration impairment in ASD, it is unclear whether this impairment is due to low level perceptual processes that affect all types of audiovisual integration or if it is specific to speech processing. METHOD: Here, we measure audiovisual integration of basic speech (i.e., consonant-vowel utterances) and object stimuli (i.e., a bouncing ball) in adolescents with ASD and well-matched controls. We calculate a temporal window of integration (TWI) using each individual's ability to identify which of two videos (one temporally aligned and one misaligned) matches auditory stimuli. The TWI measures tolerance for temporal asynchrony between the auditory and visual streams, and is an important feature of audiovisual perception. RESULTS: While controls showed similar tolerance of asynchrony for the simple speech and object stimuli, individuals with ASD did not. Specifically, individuals with ASD showed less tolerance of asynchrony for speech stimuli compared to object stimuli. In individuals with ASD, decreased tolerance for asynchrony in speech stimuli was associated with higher ratings of autism symptom severity. CONCLUSIONS: These results suggest that audiovisual perception in ASD may vary for speech and object stimuli beyond what can be accounted for by stimulus complexity.
Research in Autism Spectrum Disorders, 2017 · doi:10.1523/jneurosci.3675-12.2013