Listen up! ADHD slows spoken-word processing in adverse listening conditions: Evidence from eye movements.
ADHD slows spoken-word recognition in noise when working memory is taxed—eye-tracking reveals the hidden pause.
01 Research in Context
What this study did
Lemel et al. (2023) watched young adults’ eyes while they listened to sentences in cafeteria noise.
Twenty-four participants had ADHD; 22 matched controls did not. The task added a memory twist: listeners followed spoken instructions to touch a named object while holding one or four digits in mind for later recall, all against competing voices.
Eye-tracking showed when each person locked onto the target word.
What they found
ADHD listeners took 140 ms longer to lock onto the word and shifted their gaze back and forth more often.
The slowdown appeared only when the memory load was high on top of the noise.
Standard paper tests would have missed this real-time slowdown.
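The condition-dependent gap can be illustrated with a toy calculation. Every latency below is invented for illustration (not the study's data); the point is just that group means diverge only once the working-memory load is high:

```python
# Hypothetical per-trial recognition latencies in ms, keyed by
# (group, working-memory load). Numbers are invented for illustration.
trials = {
    ("control", "low"):  [520, 540, 530],
    ("control", "high"): [560, 570, 565],
    ("adhd",    "low"):  [525, 545, 535],
    ("adhd",    "high"): [700, 710, 705],
}

def mean(xs):
    return sum(xs) / len(xs)

# ADHD-minus-control latency gap at each load level.
for load in ("low", "high"):
    gap = mean(trials[("adhd", load)]) - mean(trials[("control", load)])
    print(f"{load} load: ADHD slower by {gap:.0f} ms")
```

With these made-up values the low-load gap is negligible while the high-load gap is 140 ms, mirroring the interaction pattern the study reports.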
How this fits with other research
Türkan et al. (2016) saw the same eye-jumping pattern in kids with ADHD during a visual change-detection game. Both studies used eye-tracking and quasi-experimental designs, pointing to a cross-age, cross-modal attention hiccup.
Capodieci et al. (2018) found that handwriting in ADHD children slowed only when a verbal memory load was added. Lemel's auditory task mirrors that result: verbal load is the trigger, not the noise itself.
Donnadieu et al. (2015) offers a twist. They framed ADHD timing problems as a 3-year developmental lag and called the finding "positive" because kids caught up. Lemel's young adults still lagged, suggesting the gap may persist longer than Donnadieu's data implied.
Why it matters
If you test language with quiet one-on-one tasks, you can miss ADHD-related processing delays. Try adding low-level background chatter or a quick memory demand, then watch eye gaze or response latency. Even a 100 ms delay can snowball into missed instructions and off-task behavior.
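A minimal sketch of the kind of measurement this implies, assuming a made-up fixation log format (timestamped samples labeled with the currently fixated object); none of this comes from the study's actual analysis pipeline:

```python
# Sketch: given timestamped fixation samples, find when gaze first locks
# onto the target object and count gaze transitions before that point.
# The log format here is invented for illustration.

def gaze_metrics(samples, target, instruction_onset_ms):
    """samples: list of (timestamp_ms, object_label) fixation records."""
    transitions = 0
    prev = None
    for t, label in samples:
        if prev is not None and label != prev:
            transitions += 1  # gaze shifted to a different object
        prev = label
        if label == target:
            latency = t - instruction_onset_ms
            return latency, transitions
    return None, transitions  # target never fixated

# Example: the listener glances at a sound-sharing competitor first.
log = [(100, "candy"), (220, "candy"), (360, "candle"), (480, "candle")]
latency, shifts = gaze_metrics(log, "candle", instruction_onset_ms=0)
print(latency, shifts)  # 360 1: locked on at 360 ms, after 1 transition
```

Extra transitions before lock-on are exactly the "hesitations" the study counted, so even a simple log like this can surface a delay that an accuracy score alone would hide.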
Try it in practice: play a short instruction at normal volume with low cafeteria noise in the background, time the client's response, and note any extra eye shifts before the answer.
02 At a glance
03 Original abstract
BACKGROUND: Cognitive skills such as sustained attention, inhibition and working memory are essential for speech processing, yet are often impaired in people with ADHD. Offline measures have indicated difficulties in speech recognition against a multi-talker babble (MTB) background for young adults with ADHD (yaADHD). However, to date no study has directly tested online speech processing in adverse conditions for yaADHD.
AIMS: To gauge the effects of ADHD on segregating a spoken target word from its sound-sharing competitor, under MTB and working-memory (WM) load.
METHODS AND PROCEDURES: Twenty-four yaADHD and 22 matched controls, who differed in sustained attention (SA) but not in WM, were asked to follow spoken instructions presented in MTB to touch a named object, while retaining one (low-load) or four (high-load) digits for later recall. Their eye fixations were tracked.
OUTCOMES AND RESULTS: In the high-load condition, speech processing was less accurate and slowed by 140 ms for yaADHD. In the low-load condition, the processing advantage shifted from early perceptual to later cognitive stages. Fixation transitions (hesitations) were inflated for yaADHD.
CONCLUSIONS AND IMPLICATIONS: ADHD slows speech processing in adverse listening conditions and increases hesitation as speech unfolds in time. These effects, detected only by online eye-tracking, relate to attentional difficulties. We suggest online speech processing as a novel purview on ADHD.
WHAT THIS PAPER ADDS: We suggest speech processing in adverse listening conditions as a novel vantage point on ADHD. Successful speech recognition in noise is essential for performance across daily settings: academic, employment and social interactions. It involves several executive functions, such as inhibition and sustained attention. Impaired performance in these functions is characteristic of ADHD. However, to date there is only scant research on speech processing in ADHD.
The current study is the first to investigate online speech processing as the word unfolds in time, using eye-tracking, for young adults with ADHD (yaADHD). This method uncovered slower speech processing in multi-talker babble noise for yaADHD compared with matched controls. yaADHD also showed increased hesitation between the spoken word and sound-sharing alternatives (e.g., CANdle-CANdy). These delays and hesitations at the single-word level could accumulate in continuous speech to significantly impair communication in ADHD, with severe implications for quality of life and academic success. Interestingly, whereas yaADHD and controls were matched on standardized WM tests, WM load appears to affect speech processing for yaADHD more than for controls. This suggests that ADHD may lead to inefficient deployment of WM resources that may not be detected when WM is tested alone. Note that these intricate differences could not be detected using traditional offline accuracy measures, further supporting the use of eye-tracking in speech tasks. Finally, communication is vital for active living and wellbeing. We suggest paying attention to speech processing in ADHD in treatment and when considering accessibility and inclusion.
Research in developmental disabilities, 2023 · doi:10.1016/j.ridd.2022.104401