Inferring the Facial Expression From the Social Context in Children with Autism Spectrum Disorders

Friday, May 18, 2012
Sheraton Hall (Sheraton Centre Toronto)
2:00 PM
S. Matsuda and J. Yamamoto, Department of Psychology, Keio University, Tokyo, Japan
Background:

Individuals with autism have various difficulties with the cognition of facial expressions. Examining the cognition of faces and facial expressions therefore requires a comprehensive analysis of perception, conceptualization, comprehension, verbal naming, imitation, appreciation of the situation, prosodic inference, and self-other mapping. We have developed a comprehensive face and facial-expression learning support system, the Face-Expression Expert System (FEEP), which serves as both an assessment tool and an intervention tool. FEEP is designed to establish relationships among facial expressions, emotion words, prosody, actions, and descriptive sentences across a wide range of developmental ages.

Objectives:

In the present study, we assessed how children with autism selected facial expressions after watching actions between two people.

Methods:  

Ten boys (4 to 10 years old) diagnosed with autistic disorder or PDD-NOS participated in the study. A Japanese standardized scale of development was used to assess their developmental age, and their autism severity was rated with the CARS (Schopler, Reichler, DeVellis, & Daly, 1980). Participants watched movie clips of interactions between a man and a woman. In each clip, the woman acted so as to elicit an emotion from the man (e.g., taking his toy). The man’s face was then mosaic-masked. The clips had four types of ending: “happy,” “sad,” “angry,” and “surprised.” All clips were silent, and clip lengths ranged from 6 to 15 seconds. At the end of each clip, four colored pictures of facial expressions were presented. Participants were required to choose the picture whose facial expression corresponded to the man’s mosaic-masked face. In order to select the correct picture, participants had to take both the woman’s action and the man’s action into account.

Results:  

Total percentage correct was calculated for each participant and each emotion. The mean percentage of correct responses was 73.3%. There was a significant correlation between the percentage of correct responses and the participants’ developmental age, r = 0.71, p < 0.01. “Happy” was the most successfully recognized emotion (83.3%), and “sad” was the least (63.3%). However, the difference across the four types of ending was not statistically significant, F(3, 27) = 1.54, n.s. Error patterns were examined using a confusion matrix. Participants most often confused “angry” with “sad” (25.0%).
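The sketch below is an illustrative example, not the authors’ analysis code, of how statistics of this kind could be computed. It assumes a hypothetical placeholder data set of chosen versus correct emotion labels per trial and placeholder developmental ages; it computes percentage correct per participant, the Pearson correlation with developmental age, a one-way ANOVA across emotion types (a simplification of the repeated-measures design), and the confusion matrix of errors.

import numpy as np
from scipy import stats

emotions = ["happy", "sad", "angry", "surprised"]
n_participants, n_trials_per_emotion = 10, 3      # trial count per emotion is assumed
rng = np.random.default_rng(0)

# Placeholder data: the correct emotion and the chosen emotion for every trial,
# coded as indices into `emotions` (participants x trials).
correct = np.tile(np.repeat(np.arange(4), n_trials_per_emotion), (n_participants, 1))
chosen = np.where(rng.random(correct.shape) < 0.73,
                  correct, rng.integers(0, 4, correct.shape))
dev_age = rng.uniform(24, 96, n_participants)     # placeholder developmental ages (months)

# Mean percentage of correct responses per participant
pct_correct = (chosen == correct).mean(axis=1) * 100
print("mean % correct:", pct_correct.mean())

# Pearson correlation between accuracy and developmental age
r, p = stats.pearsonr(pct_correct, dev_age)
print(f"r = {r:.2f}, p = {p:.3f}")

# Comparison across the four emotion types (a one-way ANOVA here,
# simplifying the repeated-measures design of the original analysis)
per_emotion = np.stack([(chosen == correct)[:, correct[0] == e].mean(axis=1)
                        for e in range(4)], axis=1) * 100
F, p_anova = stats.f_oneway(*per_emotion.T)
print(f"F = {F:.2f}, p = {p_anova:.3f}")

# Confusion matrix: rows = correct emotion, columns = chosen emotion
conf = np.zeros((4, 4), dtype=int)
np.add.at(conf, (correct.ravel(), chosen.ravel()), 1)
print(conf)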

Conclusions:  

Our findings indicated that the ability to infer facial expressions from actions between two people correlated with developmental age.
