International Meeting for Autism Research (May 7-9, 2009)

“Who Said That?” Affective Face and Voice Matching in Adolescents with Autism

Thursday, May 7, 2009
Northwest Hall (Chicago Hilton)
3:30 PM
R. B. Grossman, Anatomy and Neurobiology, Boston University School of Medicine, Boston, MA
M. Kennedy, Anatomy and Neurobiology, Boston University School of Medicine, Boston, MA
H. Tager-Flusberg, Anatomy and Neurobiology, Boston University School of Medicine, Boston, MA
Background: Individuals with autism spectrum disorders (ASD) have deficits in the core areas of social interaction, communication, and repetitive/stereotyped behaviors. Studies also describe difficulties interpreting affective information from facial expressions and tone of voice (prosody), a skill requiring communicative and social competence; however, the evidence is often contradictory. Some studies using basic, strong emotions found preserved competence (Grossman, Klin, Carter, & Volkmar, 2000), while others, using more subtle emotional states, described deficits in prosodic and facial affect recognition (Golan, Baron-Cohen, & Hill, 2006).

Objectives: The purpose of the present study was to reconcile these conflicting data by using prosodic stimuli and facial expression contrasts ranging from subtle to intense, in order to determine at which point on that continuum, if any, individuals with ASD show reduced competence in non-verbal affect recognition.

Methods: Participants were 22 adolescents with ASD and 22 typically developing (TD) peers matched on age, IQ, sex, and receptive vocabulary. We presented 8 semantically neutral sentences (e.g., “She bought a lot of soda”), spoken in two positive emotions (happy, surprise) and two negative emotions (anger, sadness), each at two intensity levels (weak, strong), for a total of 64 stimulus sentences (8 sentences × 4 emotions × 2 intensities). Following each sentence, participants saw two static emotional faces on a computer screen and were asked to determine which of the two could have spoken the sentence. The faces presented either a strong between-valence contrast, e.g., a sad sentence followed by a happy face (positive valence) and a sad face (negative valence), or a more subtle within-valence contrast, such as a sad sentence followed by an angry face and a sad face, both with negative valence. The contrast of two faces with positive valence (happy and positive surprise) resulted in chance-level accuracy for all participants and was not included in the final analysis.

Results: A 2 (group) × 2 (prosodic intensity) ANOVA revealed a main effect of intensity (F(1, 42) = 81, p < .001), with high-intensity prosody yielding higher accuracy, and an intensity-by-group interaction (F(1, 42) = 5.58, p = .023): the ASD group’s accuracy dropped off more sharply than the TD group’s for samples with weak prosody. A one-way ANOVA for each condition revealed a significant group difference for samples with weak intensity and the more subtle within-valence contrast (p = .007; see Table).

Group accuracy differences based on one-way ANOVA

                                 Prosody: Strong intensity           Prosody: Weak intensity
Face: Negative vs. Positive      No group difference (p = .998)      Trend toward group difference (p = .069)
Face: Negative vs. Negative      No group difference (p = .512)      Significant group difference (p = .007)

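To make the analysis design concrete, the following Python sketch reproduces the structure of the reported tests: a 2 (group, between-subjects) × 2 (prosodic intensity, within-subjects) mixed-design ANOVA followed by a per-condition one-way comparison. This is a minimal illustration only; the data are simulated, and all variable names, means, and spreads are hypothetical rather than taken from the study.

    # Sketch of the 2 (group) x 2 (intensity) mixed ANOVA described above.
    # Simulated data for illustration; not the study's actual scores.
    import numpy as np
    import pandas as pd
    import pingouin as pg
    from scipy.stats import f_oneway

    rng = np.random.default_rng(0)
    n = 22  # participants per group, as in the study

    # Long-format table: one accuracy score per subject per intensity level.
    rows = []
    for group, weak_mean in [("ASD", 0.65), ("TD", 0.75)]:  # illustrative means
        for s in range(n):
            sid = f"{group}{s}"
            for intensity, mean, sd in [("strong", 0.90, 0.05),
                                        ("weak", weak_mean, 0.08)]:
                acc = float(np.clip(rng.normal(mean, sd), 0.0, 1.0))
                rows.append({"subject": sid, "group": group,
                             "intensity": intensity, "accuracy": acc})
    df = pd.DataFrame(rows)

    # Mixed-design ANOVA: intensity within-subject, group between-subject.
    print(pg.mixed_anova(data=df, dv="accuracy", within="intensity",
                         subject="subject", between="group"))

    # Follow-up one-way ANOVA for one condition (with two groups this is
    # equivalent to an independent-samples t-test).
    weak = df[df["intensity"] == "weak"]
    F, p = f_oneway(weak.loc[weak["group"] == "ASD", "accuracy"],
                    weak.loc[weak["group"] == "TD", "accuracy"])
    print(f"Weak intensity: F(1, {2 * n - 2}) = {F:.2f}, p = {p:.3f}")

In this layout, a significant intensity-by-group interaction in the mixed ANOVA corresponds to the ASD group's steeper accuracy decline at weak intensity, and the per-condition one-way test mirrors the cell-wise comparisons in the table above.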
Conclusions: Individuals with ASD are as capable as their TD peers at matching sentence-length affective prosody to static facial expressions for basic emotions when prosodic intensity is strong. As prosodic intensity weakens, accuracy drops more sharply in individuals with ASD. When both prosodic intensity and facial expression valence contrast are subtle, adolescents with ASD are significantly less accurate at matching affective voices and faces than their TD peers.
