International Meeting for Autism Research (May 7 - 9, 2009)

Multisensory Integration of Visual and Vocal Emotional Cues in Autism

Thursday, May 7, 2009
Northwest Hall (Chicago Hilton)
12:00 PM
K. M. Dalton, University of Wisconsin, Madison, WI
R. J. Davidson, Psychiatry & Psychology, University of Wisconsin, Madison, WI
Background: Adaptive social functioning requires the exquisite integration of multiple environmental cues, both automatically and attentively. Deficits in multisensory integration can lead to poor or incorrect causal inferences about linked environmental cues. We propose that the deficits in social/emotion processing associated with autism are related to poor multisensory integration of external emotional cues, and that they have their basis in dysfunction of cortical and subcortical areas associated with social/emotional processing and multisensory integration.
Objectives: The aim of this study was to investigate unisensory processing of auditory prosody and multisensory integration of visual and auditory emotional cues in autism, together with the underlying brain activation patterns and the physiological and behavioral sequelae of these processes.
Methods: A sample of 23 individuals (17 male, 6 female; age: M = 15.5, SD = 4.9) with a diagnosis of autism spectrum disorder (ASD) participated in the study. A sample of 23 typically developing (TD) individuals (17 male, 6 female; age: M = 13.26, SD = 3.91) served as a comparison group. Functional brain images were acquired while participants performed an event-related facial emotion discrimination task. Images of emotional human faces and audio clips of emotional voices were presented simultaneously in the MRI scanner. The emotional expression of the face was crossed with the emotional prosody of the voice to produce four multisensory conditions (see the sketch below). Participants judged the emotional facial expression by pressing one of two buttons; in a separate task in the scanner, they identified the emotional prosody of the voices.
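A minimal Python sketch of the crossed design described above, assuming a 2 x 2 crossing. The emotion labels below are hypothetical, since the abstract does not name the specific categories used; the point is that crossing each facial expression with each vocal prosody yields the four multisensory conditions, two congruent and two incongruent.

```python
from itertools import product

# Illustrative sketch only (not the authors' code): the emotion labels
# are hypothetical; the abstract does not name the categories used.
FACE_EMOTIONS = ["happy", "fearful"]
VOICE_PROSODIES = ["happy", "fearful"]

# Crossing face expression with voice prosody yields 4 conditions:
# 2 congruent (face and voice match) and 2 incongruent.
conditions = [
    {"face": f, "voice": v, "congruent": f == v}
    for f, v in product(FACE_EMOTIONS, VOICE_PROSODIES)
]

for c in conditions:
    print(c)
```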
Results: The TD group performed significantly better (M = 97.4%) on the emotional face identification task than the ASD group (M = 87.8%; p = .016). The ASD group had marginally longer reaction times (RTs) across all conditions (M = 1758.1 ms, SD = 566.4) than the TD group (M = 1533.8 ms, SD = 394.9; p = .09). Interestingly, the ASD group also displayed significantly longer RTs on incongruent vs. congruent trials (p = .03); this effect was not found in the TD group. The TD group also performed significantly better (M = 90.3%) on the emotional prosody task than the ASD group (M = 75.9%; p = .002). The ASD group spent significantly more time per trial fixating the mouth region across all trials (M = 203 ms, SD = 177.97) than the TD group (M = 107.18 ms, SD = 98.87; p = .018). The ASD group had lower heart rate variability (HRV) during the faces-plus-voices task (M = 6.58, SD = 1.07) than the TD group (M = 7.43, SD = 0.93; p = .009). Analyses of the brain functional data, and of the relationship between brain and behavioral measures, are in progress and will be presented.
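The abstract does not state which statistical tests produced the p-values above. As one plausible reading, the sketch below shows how such comparisons are commonly computed: an independent-samples (Welch's) t-test for the between-group RT difference and a paired t-test for the within-group congruency effect. The data here are simulated placeholders loosely matched to the reported group means and SDs, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated per-participant mean RTs in ms (n = 23 per group), loosely
# matched to the group means/SDs reported above. NOT the study's data.
rt_asd = rng.normal(1758.1, 566.4, size=23)
rt_td = rng.normal(1533.8, 394.9, size=23)

# Between-group comparison: Welch's t-test (variances not assumed equal).
# The abstract does not specify the test actually used.
t, p = stats.ttest_ind(rt_asd, rt_td, equal_var=False)
print(f"group RT difference: t = {t:.2f}, p = {p:.3f}")

# Within-group congruency effect: paired t-test on each (simulated)
# participant's incongruent vs. congruent trial RTs.
rt_congruent = rng.normal(1700.0, 550.0, size=23)
rt_incongruent = rt_congruent + rng.normal(60.0, 80.0, size=23)
t, p = stats.ttest_rel(rt_incongruent, rt_congruent)
print(f"congruency effect: t = {t:.2f}, p = {p:.3f}")
```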

Conclusions: While the ASD group performed the emotional face recognition and emotional prosody recognition tasks above chance level, their performance as a whole was statistically below that of the TD group. These performance differences may be related to brain and peripheral physiological differences between the ASD and TD groups.
