19237
Emotion Processing in Adolescents with ASD: Using Multiple Measures and Varying Intensities

Thursday, May 14, 2015: 2:21 PM
Grand Ballroom D (Grand America Hotel)
R. Luyster1, C. A. Nelson2 and E. Auguste3, (1)Emerson College, Boston, MA, (2)Division of Developmental Medicine, Boston Children’s Hospital, Harvard Medical School, Boston, MA, (3)Mt. Holyoke College, South Hadley, MA
Background: Adolescents with ASD often have difficulty monitoring facial expressions in daily interactions. Despite these ‘real-world’ deficits, two previous studies reported no differences between children with ASD and control children on experimental measures of facial emotion processing (Hileman et al., 2011; O’Connor, Hamm & Kirk, 2005). One possible explanation for this discrepancy is the mismatch between the stimuli used in these studies, which are typically prototypical, exaggerated exemplars of emotion, and the subtler expressions encountered in everyday social interactions.

Objectives: To characterize the neural and behavioral response to low- and moderate-intensity facial expressions in adolescents with and without ASD.

Methods: Preliminary analyses include 29 12-year-olds: 15 typically developing (TD) children and 14 children with ASD. Children were fitted with high-density EEG/ERP sensor nets and presented with 250 trials of facial expressions of anger, fear and happiness at 20%, 40% and 60% intensity. The component of interest was the face-sensitive N170, measured over the left and right occipito-temporal regions. Participants also completed a behavioral sorting task: the child was presented with cards, each showing a facial expression, and asked to sort them by emotion. The faces showed the same emotions at 10% increments of intensity, from neutral to 100%.
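For concreteness, the following is a minimal sketch of how the N170 peak could be quantified from one participant’s averaged waveform over one hemisphere’s occipito-temporal channels (Python/NumPy; the 130–200 ms search window, 1 ms sampling step and simulated waveform are illustrative assumptions, not details taken from this study).

import numpy as np

def n170_peak(erp, times, window=(0.130, 0.200)):
    """Return N170 peak amplitude (microvolts) and latency (ms).

    erp   : 1-D array, ERP averaged over one hemisphere's occipito-temporal channels
    times : 1-D array of sample times in seconds, same length as erp
    """
    mask = (times >= window[0]) & (times <= window[1])
    segment = erp[mask]
    idx = np.argmin(segment)                  # N170 is a negative-going deflection
    return segment[idx], times[mask][idx] * 1000.0

# Illustrative example: a simulated negative deflection peaking near 170 ms.
times = np.arange(-0.1, 0.5, 0.001)
erp = -5.0 * np.exp(-((times - 0.170) ** 2) / (2 * 0.015 ** 2))
amplitude_uv, latency_ms = n170_peak(erp, times)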

Both the peak amplitude (in microvolts) and the latency (in milliseconds) of the N170 were measured. For the behavioral data, the variables of interest were “threshold” – the intensity at which the participant first identified an emotion in the face (i.e., no longer sorted it as ‘neutral’) – and the number of errors made in emotion identification.
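A minimal sketch of how these two behavioral variables could be scored for a single participant and emotion follows (Python; the response format and the treatment of mislabeled cards are assumptions, since the scoring procedure is not specified here).

def score_emotion(responses, target_emotion):
    """Return (threshold, n_errors) for one target emotion.

    responses      : list of (intensity_percent, chosen_label) tuples for that emotion's cards
    target_emotion : the emotion actually shown on the cards, e.g. "happy"
    """
    # Threshold: lowest intensity at which the face is no longer sorted as 'neutral'.
    non_neutral = [i for i, label in responses if label != "neutral" and i > 0]
    threshold = min(non_neutral) if non_neutral else None

    # Errors: responses naming an emotion other than the one shown.
    n_errors = sum(1 for _, label in responses
                   if label not in ("neutral", target_emotion))
    return threshold, n_errors

# Illustrative example: first non-neutral response at 30% intensity, one mislabel.
responses = [(0, "neutral"), (10, "neutral"), (20, "neutral"),
             (30, "happy"), (40, "angry"), (50, "happy")]
print(score_emotion(responses, "happy"))      # -> (30, 1)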

Results: [NOTE: Due to space limitations, only results pertaining to group will be elaborated.] Two 3 (emotion, within-subject) × 3 (intensity, within-subject) × 2 (region, within-subject) × 2 (group, between-subject) repeated-measures ANOVAs were run, one for N170 latency and one for peak amplitude, with Greenhouse-Geisser corrections applied to adjust degrees of freedom for violations of sphericity. For latency, main effects of emotion (F = 5.35, p = .01) and intensity (F = 9.04, p = .001) were found; there was no main effect of, or interaction with, group. For peak amplitude, an emotion × group × region interaction emerged (F = 4.32, p = .02). Follow-up analyses revealed that the peak amplitude to fear was significantly smaller over the right than the left hemisphere, but only for the ASD group (t = 2.27, p = .04). No group differences were found in the number of behavioral errors made, and although the groups had similar thresholds for anger and fear, the ASD group had a higher threshold for happiness than the TD group (t = 2.34, p = .03).
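For reference, the Greenhouse-Geisser epsilon that drives that correction can be estimated from the covariance of the repeated measures. The sketch below (Python/NumPy) shows the standard Box/Greenhouse-Geisser estimate for a single within-subject factor with simulated data; it is purely illustrative and not the analysis code used in this study.

import numpy as np

def orthonormal_contrasts(k):
    """(k-1) x k matrix with orthonormal rows, each orthogonal to the unit vector."""
    A = np.hstack([np.ones((k, 1)), np.eye(k)[:, 1:]])
    Q, _ = np.linalg.qr(A)                    # first column spans the unit vector
    return Q[:, 1:].T

def gg_epsilon(data):
    """Greenhouse-Geisser epsilon for one within-subject factor.

    data : (n_subjects, k_levels) array, one value per subject per factor level.
    """
    k = data.shape[1]
    S = np.cov(data, rowvar=False)            # k x k sample covariance of the levels
    C = orthonormal_contrasts(k)
    M = C @ S @ C.T                           # covariance of the orthonormal contrasts
    return np.trace(M) ** 2 / ((k - 1) * np.trace(M @ M))

# Illustrative example: 14 subjects x 3 intensity levels of simulated data.
rng = np.random.default_rng(0)
epsilon = gg_epsilon(rng.normal(size=(14, 3)))
# The ANOVA's numerator and denominator degrees of freedom are multiplied by epsilon
# before the F test is evaluated.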

Conclusions: These results provide a valuable follow-up to previous studies: for the first time, the stimuli included the low-intensity emotional expressions commonly encountered in daily interactions. These preliminary findings indicate that, even with subtle emotional expressions, responses in an experimental task may be remarkably intact, albeit with some variability across emotion types. This result is encouraging for clinical applications, because it suggests that there may not be a pervasive deficit in the detection of emotional expressions; rather, the impairment may lie in adapting one’s social response, an area that is a natural target for intervention.