
Multisensory Integration of Audiovisual Speech in Noise in Autism Spectrum Disorders

Friday, 3 May 2013: 09:00-13:00
Banquet Hall (Kursaal Centre)
10:00
M. T. Wallace1, A. A. Shuster1, J. K. Siemann1, T. Woynaroski2, S. E. Greenberg1, S. M. Camarata3 and R. A. Stevenson4, (1)Vanderbilt University, Nashville, TN, (2)Vanderbilt University, Thompsons Stn, TN, (3)Vanderbilt University Kennedy Center, Nashville, TN, (4)Vanderbilt University Medical Center, Nashville, TN
Background: Individuals with ASD exhibit atypical sensory processing across multiple sensory modalities (Iarocci 2006), including deficits in combining information across modalities (i.e., multisensory integration). Integrating sensory inputs into a single, unified percept is an ethologically important process, yielding increased perceptual accuracy, faster response times, and higher rates of detection. These behavioral gains are particularly apparent when the signals are embedded in noise; the lower the signal-to-noise ratio (SNR), the greater the multisensory gain. Deficits in audiovisual speech integration have been demonstrated in ASD (Bebko 2006, Kwakye 2011) and may be related to deficits in language and communication.

Objectives: The current experiment was designed to address the following questions:

1) Relative to matched controls, to what extent do individuals with ASD benefit in auditory comprehension from the addition of visual speech (i.e., being able to see the speaker’s mouth articulations)?
2) Is the impact of visual speech apparent at the level of whole words, or can benefits also be observed at the phonemic level?

Methods: Children with (N=12) and without (N=12) ASD were presented with recordings of single, tri-syllabic spoken words in three sensory conditions: auditory-only, visual-only, and audiovisual. All presentations were embedded in eight-speaker, multi-talker babble at an SNR of -6 dB. Participants could identify the word either verbally or non-verbally (by typing it on a keyboard). Each response was scored on two measures (illustrated in the sketch following the list):

1) Word recognition: Was the word correctly identified?
2) Phoneme recognition: What proportion of the three phonemes making up the word was accurately identified?
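As a rough illustration of these two measures, the sketch below scores a single trial at the word and phoneme levels. This is a minimal sketch only; the function names, the toy phoneme segmentation, and the scoring details are assumptions made for illustration, not the authors’ actual scoring procedure.

```python
# Hypothetical illustration of the two scoring measures described above.
# Names and the phoneme representation are assumptions, not the study's code.

def score_word(target: str, response: str) -> int:
    """Word recognition: 1 if the whole word was correctly identified, else 0."""
    return int(target.strip().lower() == response.strip().lower())

def score_phonemes(target_phonemes: list[str], response_phonemes: list[str]) -> float:
    """Phoneme recognition: proportion of the target phonemes reproduced
    in the corresponding positions of the response."""
    correct = sum(t == r for t, r in zip(target_phonemes, response_phonemes))
    return correct / len(target_phonemes)

# Toy example (segmentation is illustrative only):
print(score_word("banana", "banana"))                           # 1
print(score_phonemes(["ba", "na", "na"], ["ba", "ma", "na"]))   # ~0.67
```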

Finally, a McGurk paradigm was administered, presenting unisensory and congruent audiovisual “ba” and “ga” utterances as well as a McGurk stimulus pairing a visual “ga” with an auditory “ba.” In the McGurk condition, a fused percept of “da” or “tha” is frequently reported, indicating integration.

Results: For typically developing (TD) individuals, recognition of audiovisual words in noise was significantly greater than recognition of auditory-only words (23% gain, p<0.001). Individuals with ASD showed less benefit from the addition of visual speech (11% gain, n.s.) than did their TD counterparts (p<0.001). This pattern persisted at the phonemic level, with TD individuals showing a 34% gain (p<0.001), ASD participants showing only a 1% difference (n.s.), and a significant difference between the two groups (p<0.001). The McGurk effect likewise showed greater integration in the TD group than in the ASD group (79% vs. 61%, p<0.05).
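The multisensory gain reported above is the difference between audiovisual and auditory-only accuracy. The sketch below shows one way such per-participant gains and a within-group comparison could be computed; the data values are invented placeholders, and the paired t-test is an illustrative choice rather than necessarily the analysis used in the study.

```python
# Hedged sketch: per-participant multisensory gain (audiovisual minus
# auditory-only accuracy) with a simple within-group paired comparison.
import numpy as np
from scipy import stats

auditory_only = np.array([0.40, 0.35, 0.50, 0.45])  # placeholder accuracies
audiovisual   = np.array([0.62, 0.58, 0.71, 0.69])  # placeholder accuracies

gain = audiovisual - auditory_only                   # multisensory gain per participant
t, p = stats.ttest_rel(audiovisual, auditory_only)   # benefit of adding visual speech

print(f"mean gain = {gain.mean():.2f}, p = {p:.3g}")
```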

Conclusions: Our everyday world is noisy, and an impaired ability to use multisensory cues to enhance the efficiency of linguistic and cognitive processing may have a significant impact on one’s ability to interact with the world. Specifically, weaknesses in the ability of individuals with ASD to bind auditory and visual speech signals may impact their language and communication abilities. Ongoing data collection at additional SNRs (-18 to 0 dB) may provide further insight into the differential impact that noisy environments have on individuals with ASD.
