Thursday, May 20, 2010
Franklin Hall B Level 4 (Philadelphia Marriott Downtown)
10:00 AM
Background:
Although information from the environment reaches us through several modalities, we perceive a unitary event. Newborn infants have been shown to integrate information reliably across two modalities, and this automatic integration of auditory and visual information is necessary for the development of speech and language.
One real-world situation that requires the integration of auditory and visual information is the understanding of speech in noisy social settings. In this situation, the difficulty in deciphering what is being said is routinely referred to as the cocktail party problem (Cherry, 1953).
Individuals with autism often exhibit ineffective sensory processing, and integration of information across auditory and visual modalities appears impaired (Iarocci & McDonald, 2006). Deficits in this sensory processing may be related to some of the language impairments that characterize autism. Further, there is limited research investigating the effects of background noise on the processing of speech in persons with an Autism Spectrum Disorder (ASD).
Objectives:
The objectives of the current study are twofold: first, to determine whether previously identified deficits in auditory-visual integration are specific to linguistic information; and second, to understand the impact of increasing levels of background noise on speech intelligibility.
Methods:
Fourteen children with an ASD were matched to fourteen typically developing (TD) children on chronological age and on verbal and non-verbal abilities. The present study used eye-tracking with a preferential-looking design, in which four identical videos, offset in time, were displayed with an auditory track synchronous with only one of the videos. Videos contained either linguistic (a person telling a story) or non-linguistic (a hand playing a piano) stimuli. Background noise was added to a portion of trials, and the signal-to-noise ratio (SNR) was manipulated.
Results:
For the conditions with no background noise, group membership predicted performance for the linguistic trials only: the TD participants were more likely than the ASD participants to show a preference for the synchronous screen. Group membership did not predict performance in the non-linguistic trials.
For the conditions with added background noise, rates of preferential looking decreased as the SNR increased for the TD participants only. No such trend was found for the ASD group.
Conclusions:
Typically developing children showed enhanced perception of speech stimuli compared with children with ASD. These results suggest that 1) some features of intermodal perception are intact in individuals with ASD, and 2) the ASD group is impaired in some intersensory process(es) advantageous for language processing.
The addition of background noise essentially equated performance across the two groups, suggesting the possibility that intersensory information is already "noisy" or degraded for children with ASD. Implications of these findings are discussed.