Modeling Dynamic Mental Representations of Facial Expressions of Emotion in Autism Spectrum Disorders
Autism Spectrum Disorders (ASDs) are characterized by a deficit in understanding others' emotions [Hobson, 1986a, JCPP, 27, 321-342]. However, there is debate about whether individuals with ASD are impaired in the perception of facial expressions of emotion [Harms et al., 2010, Neuropsychol Rev, 20, 290-322]. We addressed this issue using a unique 4D Generative Face Grammar [GFG; Yu et al., 2012, Comput. Graph., 36, 152-162], which combines state-of-the-art computer graphics technology with robust psychophysical techniques (reverse correlation) to model the mental representations of facial expressions of emotion in individual observers.
Objectives:
To identify differences in the mental representations of the six basic facial expressions of emotion between ASD observers and typically developed (TD) controls, with a specific focus on facial muscle groups and their temporal dynamics.
Methods:
Stimulus generation and task procedure were similar to those used in Jack et al. [2012, PNAS, 109, 7241-7244]. On each trial, the GFG randomly selects a set of Action Units [AUs; Ekman & Friesen, 1978, Consulting Psychologists Press] from 41 possible AUs, together with six temporal parameters. Combining these parameters produces a random but physiologically plausible facial animation, rather like pulling random strings on a facial puppet (http://www.psy.gla.ac.uk/~kirstya/emotions/example_stim2.html). Ten ASD observers categorized 2400 same-race stimuli according to the six basic emotions (happy, surprise, fear, disgust, anger, and sadness) or 'other', and rated the intensity of the emotion on a 5-point scale. Sixty TD controls performed the same task.
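For illustration, a minimal sketch of this random sampling step in Python. The function names, array shapes, per-AU parameterization, and parameter ranges below are assumptions for exposition, not the GFG's actual interface.

```python
# Hypothetical sketch of per-trial stimulus sampling: draw a random subset of
# the 41 AUs, then draw six temporal parameters for each selected AU. The
# per-AU parameterization and [0, 1] range are assumed, not the GFG's real API.
import numpy as np

rng = np.random.default_rng(seed=0)
N_AUS = 41        # possible Action Units
N_TEMPORAL = 6    # temporal parameters per selected AU (assumed)

def sample_trial(max_aus=5):
    """Return (selected AU indices, per-AU temporal parameters) for one trial."""
    n_active = rng.integers(1, max_aus + 1)
    aus = rng.choice(N_AUS, size=n_active, replace=False)
    params = rng.random((n_active, N_TEMPORAL))  # normalized to [0, 1]
    return aus, params

aus, params = sample_trial()
```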
Results:
To model the mental representation of each facial expression of emotion, we correlated each observer's responses with the AUs, intensity ratings, and temporal parameters presented on each trial. We derived 60 ASD facial expression models (10 observers x 6 emotions) and compared them with 360 TD models (60 observers x 6 emotions).
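A minimal sketch of one plausible reverse-correlation step (an assumption for exposition; the abstract does not specify the exact pipeline): for a single observer and target emotion, each AU's presence across trials is related to the observer's categorization responses, and significantly associated AUs form that observer's binary model.

```python
# Hypothetical reverse-correlation step: relate AU presence to responses.
import numpy as np
from scipy.stats import pearsonr

def fit_expression_model(au_presence, chose_emotion, alpha=0.05):
    """au_presence: (n_trials, n_aus) binary matrix of AUs shown per trial.
    chose_emotion: (n_trials,) binary vector (1 = observer chose the target
    emotion). Returns a binary model: AUs positively associated with the
    response at p < alpha."""
    n_aus = au_presence.shape[1]
    model = np.zeros(n_aus, dtype=bool)
    if chose_emotion.std() == 0:
        return model  # observer never (or always) chose this emotion
    for au in range(n_aus):
        col = au_presence[:, au]
        if col.std() == 0:
            continue  # AU never varied across trials; correlation undefined
        r, p = pearsonr(col, chose_emotion)
        model[au] = (r > 0) and (p < alpha)
    return model
```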
Spatial (i) – Similarity matrices
To identify similarities and differences between the mental representations of facial expressions in ASD and TD observers, we computed the similarity (Hamming distance) between ASD and TD models. ASD observers showed markedly different compositions for fear, anger, and sadness compared to TD observers.
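As a sketch, assuming each fitted model reduces to a binary vector over the 41 AUs (1 = AU present in the representation), the Hamming distance simply counts the AUs on which two models disagree:

```python
# Pairwise Hamming distances between binary AU models (representation assumed).
import numpy as np

def hamming(model_a, model_b):
    """Count the AUs on which two binary AU models disagree."""
    return int(np.sum(model_a != model_b))

def similarity_matrix(asd_models, td_models):
    """(n_asd, n_td) distance matrix for one emotion; lower = more similar."""
    return np.array([[hamming(a, t) for t in td_models] for a in asd_models])
```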
Spatial (ii) – Bayesian classifiers
Using Bayesian classifiers (split-half method, 1000 iterations), we classified the ASD models across time and found higher levels of inconsistency in the categorization of anger, fear, and sadness within the ASD group.
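A sketch of one way such a split-half classification could run. The specific classifier below (Bernoulli naive Bayes on binary AU vectors) is an assumption; the abstract names only "Bayesian classifiers" and the split-half procedure.

```python
# Hypothetical split-half procedure: train on one random half of the models,
# test on the held-out half, repeat n_iter times, and average accuracy.
import numpy as np
from sklearn.naive_bayes import BernoulliNB

def split_half_consistency(models, labels, n_iter=1000, seed=0):
    """models: (n_models, n_aus) binary array; labels: (n_models,) emotion ids.
    Low mean held-out accuracy indicates inconsistent models within a group."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(n_iter):
        idx = rng.permutation(len(models))
        half = len(models) // 2
        train, test = idx[:half], idx[half:]
        clf = BernoulliNB().fit(models[train], labels[train])
        accs.append(np.mean(clf.predict(models[test]) == labels[test]))
    return float(np.mean(accs))
```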
Spatio-temporal
We analysed the spatio-temporal dynamics of AUs and intensity ratings across time using a t-test on the difference in model intensity ratings over time (the "emotion gradient") between the ASD and TD groups. Results show that whereas TD observers are sensitive to intensity in the middle of the time course of the facial expression, ASD observers are not.
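A sketch of this comparison, assuming intensity ratings are binned over the time course of the expression and the "emotion gradient" is the bin-to-bin change in rated intensity; the group comparison is then an independent-samples t-test at each time step:

```python
# Hypothetical gradient comparison: per-time-step t-test between groups.
import numpy as np
from scipy.stats import ttest_ind

def compare_gradients(asd_intensity, td_intensity):
    """Inputs: (n_observers, n_time_bins) mean intensity ratings per group.
    Returns per-step t and p values on bin-to-bin intensity changes."""
    asd_grad = np.diff(asd_intensity, axis=1)  # (n_obs, n_bins - 1)
    td_grad = np.diff(td_intensity, axis=1)
    t, p = ttest_ind(asd_grad, td_grad, axis=0)
    return t, p
```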
Conclusions:
So far, our analysis of dynamic mental representations of facial expressions of emotion in ASD shows that (1) fear, anger, and sadness are represented differently in ASD compared to TD observers; (2) classification accuracy is lower for fear, anger, and sadness within the ASD group, indicating greater variance across ASD observers; and (3) ASD observers are less sensitive to intensity in the middle of the facial expression time course. This new approach provides significant insights into emotion perception in ASD and helps to explain inconsistencies found in previous studies.