Friday, May 8, 2009
Northwest Hall (Chicago Hilton)
2:30 PM
Background:
Information from the environment reaches us through several modalities. For example, a dropped bowl is seen to break into many pieces and also heard to crash. Although information is received through different modalities, we perceive a unitary event. Newborn infants have been shown to reliably integrate information across two modalities. Spelke and Owsley (1979) showed that 3½-month-old infants are able to associate the sound of their mother's voice with her face. The automatic integration of auditory and visual information is necessary for the development of speech and language.
Impairment in communication is one of the characteristic deficits associated with autism (American Psychiatric Association, 1994). Individuals with autism often exhibit ineffective sensory processing, and in particular the integration of information across the auditory and visual modalities appears impaired (Iarocci & McDonald, 2006). These sensory-processing deficits may be related to some of the language impairments that characterize autism. However, efforts to replicate this language-specific deficit have yielded ambiguous results. Pilot testing presented at IMFAR in 2008 demonstrated that a modification of the experimental paradigm increased sensitivity and was appropriate for use with young children with an autism spectrum disorder.
Objectives:
The proposed study will investigate the language-specific deficit in auditory-visual intermodal processing observed in children with autism. It will attempt to resolve the ambiguity of previous research by adding eye-tracking and by using a more sensitive paradigm.
Methods:
The current study used a version of the preferential looking design adapted for children with autism and developmental disabilities aged 3-10 years. Four videos are displayed simultaneously on one screen, with an auditory track matching only one of the videos. Intermodal perception (the integration of auditory and visual information) is inferred if the child shows a visual preference for the matched display. Videos contained either linguistic (a person telling a story) or non-linguistic (a person playing the drums or tap dancing) stimuli.
Results:
Eye movements were video-recorded, and the eye-tracking data were analyzed as the proportion of time spent looking at each of the four quadrants. Analysis is ongoing.
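The looking-proportion measure described above can be illustrated with a minimal sketch. This is not the study's actual analysis pipeline; the screen dimensions, the mapping of gaze coordinates to quadrants, and the assumption of one gaze sample per video frame are all illustrative assumptions.

```python
def quadrant(x, y, width=1920, height=1080):
    """Map a gaze point (x, y) to one of four screen quadrants.

    0 = top-left, 1 = top-right, 2 = bottom-left, 3 = bottom-right.
    Screen size is an assumed parameter, not from the study.
    """
    col = 0 if x < width / 2 else 1
    row = 0 if y < height / 2 else 1
    return row * 2 + col


def looking_proportions(samples, width=1920, height=1080):
    """Proportion of gaze samples falling in each quadrant.

    samples: iterable of (x, y) gaze coordinates, assumed to be
    evenly spaced in time, so the proportion of samples in a
    quadrant approximates the proportion of looking time there.
    """
    counts = [0, 0, 0, 0]
    for x, y in samples:
        counts[quadrant(x, y, width, height)] += 1
    total = sum(counts) or 1  # avoid division by zero on empty input
    return [c / total for c in counts]


# A visual preference for the sound-matched display would show up as an
# above-chance (> 0.25) proportion in the matched quadrant.
gaze = [(100, 100), (200, 150), (1500, 200), (300, 900), (120, 80)]
props = looking_proportions(gaze)  # mostly top-left in this toy data
```

In this toy example, three of five samples fall in the top-left quadrant, so `props[0]` is 0.6; if the matched video occupied that quadrant, this would count as a looking preference for it.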
Conclusions:
A replication of previous results would corroborate and extend the notion of a language-specific deficit in intermodal processing in children with autism, a deficit associated with their language difficulties.