Modality effect on contextual integration in people with Williams syndrome.
People with Williams syndrome learn faster when sound and picture are presented together.
Research in Context
What this study did
The team worked with people who have Williams syndrome. They wanted to know if adding pictures to sounds helps them learn faster.
Each person heard a word, then saw or heard a second item. Some pairs matched, like "dog" followed by a dog photo. Some did not match. The researchers timed how fast and how accurately each person judged whether the pair fit together.
What they found
People with Williams syndrome responded to matching pairs faster and more accurately than to non-matching ones. Their scores looked like those of younger typically developing kids at the same mental age.
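The "faster and more accurate for matching pairs" finding is the congruency effect: the gap in mean reaction time between incongruent and congruent trials. A minimal sketch of that calculation, using made-up numbers (the data and function names here are hypothetical, not the authors' analysis code):

```python
# Illustrative only: congruency effect as the difference in mean
# reaction time (correct trials only) between incongruent and
# congruent trials. All trial data below are hypothetical.
from statistics import mean

# (condition, reaction_time_ms, responded_correctly)
trials = [
    ("congruent", 620, True),
    ("congruent", 650, True),
    ("congruent", 600, True),
    ("incongruent", 710, True),
    ("incongruent", 740, False),  # error trials are excluded from RT
    ("incongruent", 725, True),
]

def congruency_effect(trials):
    """Mean RT for incongruent minus congruent, correct trials only.
    A larger positive value means a stronger congruency effect."""
    def mean_rt(condition):
        return mean(rt for cond, rt, ok in trials if cond == condition and ok)
    return mean_rt("incongruent") - mean_rt("congruent")

print(round(congruency_effect(trials), 1))
```

In the study, a clear positive effect of this kind (with accuracy showing the same pattern) was read as evidence of successful contextual integration.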
The big surprise: adding pictures to sounds worked better than sound alone. Cross-modal learning beat single-mode learning.
How this fits with other research
Palikara et al. (2018) warned that almost no one studies teaching methods for Williams syndrome. This 2014 paper helps fill that gap by showing that multisensory lessons work.
Greiner de Magalhães et al. (2022) later found that systematic phonics plus spelling works best for reading. Together, the two studies say: teach sounds and letters together, and add pictures when you can.
Hippolyte et al. (2025) saw weak semantic links in the same group. So keep the pictures clear and meaningful, not abstract, to shore up the weak spots their study found.
Why it matters
You now have data to back up multisensory lessons for clients with Williams syndrome. Pair voice with photos, text with video, or signs with objects. The study says these learners can link sights and sounds just fine, and they do it better than sound alone. Use that strength when you teach vocabulary, social cues, or daily living skills.
Start each trial by showing the photo plus saying the word together, not voice first or picture first.
At a glance
Original abstract
In this study meaningful social stimuli were used as probes in a task requiring the judgment of semantic appropriateness to investigate contextual integration ability to test the ability of people with Williams syndrome (WS) to integrate information, as opposed to the use of meaningless syllables in audiovisual studies (the McGurk effect). Participants were presented with background auditory primes followed by targets that were either congruent or incongruent with the prime. Two modes of target were presented: a visual target (AV task) or an auditory target (AA task). Participants were asked to respond yes to contextually appropriate pairs and no to those that were contextually inappropriate. The congruency effect was measured as an index of successful central coherence. Similar to normally developing controls, people with WS showed shorter response latencies and greater accuracy in recognizing congruent pairs compared with incongruent pairs. Their performance did not differ from that of controls matched by mental age, but was inferior to that of controls matched by chronological age. The results revealed generalized contextual integration for auditory primes in both tasks, consistent with previous studies using visual presentation of social-related stimuli in people with WS (Hsu, 2013a, 2013c). Further demonstration of the presence of a modality effect on contextual coherence implies that cross-modal learning may be advantageous compared with unimodal learning.
Research in developmental disabilities, 2014 · doi:10.1016/j.ridd.2014.03.049