21052
Emotion Recognition Patterns from Facial Expressions in Children with ASD: Results from a Cross-Modal Matching Paradigm

Thursday, May 12, 2016: 11:30 AM-1:30 PM
Hall A (Baltimore Convention Center)
O. Golan1, I. Gordon1,2, K. Fichman3 and G. Keinan3, (1)Department of Psychology, Bar-Ilan University, Ramat-Gan, Israel, (2)Yale Child Study Center, Yale University, New-Haven, CT, (3)School of Psychological Sciences, Tel-Aviv University, Tel-Aviv, Israel
Background:  

Emotion Recognition (ER) deficits are considered a core characteristic of Autism Spectrum Disorder (ASD). These deficits have been described in specific modalities (e.g., facial expressions, voices) as well as in cross-modal settings. Although clinical definitions and experimental evidence point to a global ER deficit in ASD, several studies also argue for emotion-specific deficits. These conflicting findings raise the question of the specificity of ER deficits in ASD and point to the need for further comparisons of distinct emotions. In addition, inconsistent findings regarding ER in ASD could be related to paradigm requirements as well as to sample characteristics, such as age or level of functioning.

Objectives:  

The current study aimed to (1) assess ER patterns for four distinct emotions (happy, angry, sad, and surprised) in children with ASD, compared to typically developing (TD) children; and (2) examine the relative contribution of cues from several perceptual modalities to ER, using a matching paradigm in which cues from three different modalities (verbal, vocal, or facial) were presented alongside three options of facial expressions.

Methods:  

Twenty-nine children with medium-to-high-functioning ASD (5 girls), aged 8-12, were matched on verbal mental age (PPVT) and gender to a group of 34 TD children (7 girls), aged 3-6. In the ER assessment, participants were presented with target stimuli in three modalities: facial expressions (NimStim database), non-verbal emotional vocal cues (Montreal Affective Voices), or emotional verbal labels. Each target was presented alongside three photos of different facial emotional expressions, and participants were asked to select the facial expression that matched the target emotion. Each of the four emotions tested was represented by four items, comprising three 16-item conditions: face-face, voice-face, and word-face.

Results:  

Compared to the TD group, the ASD group had lower scores across all modalities, but group differences were most pronounced in the face-face condition, followed by the word-face and voice-face conditions.

For each of the four emotions tested, the ASD group had lower scores than the TD group. Within-group comparisons revealed different patterns: in the TD group, recognition of surprise was significantly lower than recognition of all other emotions, which did not differ from each other, whereas in the ASD group, recognition of anger and surprise was significantly poorer than recognition of sadness and happiness.

Verbal mental age had an effect on ER, but had no significant interaction with group.

Conclusions:  

Our findings demonstrate a developmental delay in the acquisition of ER skills in children with ASD. Deficits were found not only for emotions requiring mentalization, such as surprise, but also for more basic, situational emotions, such as anger. Our findings regarding the role of modality in ER — namely, that the ASD group performed best in the cross-modal (voice-face) condition and worst in the within-modality (face-face) condition — challenge previous findings on cross-modal integration difficulties in ASD and on visual compensatory mechanisms characteristic of ASD.