Assessing and Training Emotion Recognition: A Comprehensive Facial Expression Training Program for Children with ASD

Thursday, May 12, 2016: 11:30 AM-1:30 PM
Hall A (Baltimore Convention Center)
S. Matsuda1,2 and J. Yamamoto3, (1)University of Tsukuba, Ibaraki, Japan, (2)Japan Society for the Promotion of Science, Tokyo, Japan, (3)Dept. of Psychology, Keio University, Tokyo, Japan
Background:   Deficits in emotion recognition have been well documented in children with autism spectrum disorders (ASD); therefore, previous studies have focused on improving emotion recognition through intervention programs (e.g., Bölte et al., 2002; Golan & Baron-Cohen, 2006; Silver & Oakes, 2001). However, these programs have been developed only in Western countries, even though there are cultural differences in emotion recognition (Elfenbein & Ambady, 2002). Over a period of several years in Japan, we have developed a comprehensive facial expression training program called the Face-Expression Expert Program (FEEP), which is based on Sidman’s stimulus equivalence analysis (Sidman, 1994). The program consists of 10 stimulus-response relations. In the current series of studies, we assessed and trained emotion recognition in children with ASD using FEEP.

Objectives:   This presentation reviews preliminary outcomes from studies comparing children with and without ASD, as well as the effects of training.

Methods:   The current FEEP assessment consisted of five stimulus-response relations: (1) “facial expressions – facial expressions (identical),” (2) “facial expressions – facial expressions (categorical),” (3) “words – facial expressions,” (4) “affective prosodies – facial expressions,” and (5) “descriptive images – facial expressions.” Thirteen children with ASD and 43 typically developing (TD) children, aged 3 to 10 years, participated in the series of group comparison studies. Seven children with ASD also participated in a series of intervention studies that used single-subject designs targeting (3) “words – facial expressions,” (4) “affective prosodies – facial expressions,” and (5) “descriptive images – facial expressions.”

Results:   Preliminary findings from the assessments suggest there were no differences between children with ASD and TD children in four stimulus-response relations: (1) “facial expressions – facial expressions (identical),” (2) “facial expressions – facial expressions (categorical),” (3) “words – facial expressions,” and (5) “descriptive images – facial expressions.” On the other hand, there was a difference between the two groups in (4) “affective prosodies – facial expressions.” Results of the interventions showed that the trained stimulus-response relations were established. The results of the intervention for (3) “words – facial expressions” indicated that the symmetrical relation “facial expressions – words” emerged. The results of the interventions for (4) “affective prosodies – facial expressions” and (5) “descriptive images – facial expressions” showed generalization to untrained stimuli.

Conclusions:   The current series of studies indicated the utility of the FEEP training program for both assessing and training emotion recognition in children with ASD. Although data collection is still ongoing, these findings extend past work by showing that children with ASD in Japan can be taught emotion recognition. Updates will be provided on the results of the ongoing iPad software study and a multi-site study.