
Detection of Animated Facial Expressions in a VR Game for Individuals with Autism

Friday, 3 May 2013: 09:00-13:00
Banquet Hall (Kursaal Centre)
11:00
J. A. Crittendon1, E. Bekele2, Z. Warren1, A. Swanson3, Z. Zheng2 and N. Sarkar2, (1)Vanderbilt Kennedy Center, Nashville, TN, (2)Vanderbilt University, Nashville, TN, (3)Vanderbilt Kennedy Center; Treatment and Research Institute for Autism Spectrum Disorders (TRIAD), Nashville, TN
Background:  

Autism is characterized by atypical patterns of behavior and impairments in social communication. Prevalence of this lifelong disorder is estimated at 1 in 88, which translates to tremendous costs to families and to society at large. Traditional behavioral intervention is intensive and limited in availability. Virtual reality (VR) has the potential to offer technology-enabled systems that help fill the gap between demand and supply. In VR, users can practice real-life scenarios in carefully controlled simulated settings. The importance of individualizing treatment remains one of the most robust findings for treatment success. VR technology can create a sophisticated, individualized user experience by monitoring user data and making dynamic adjustments in real time.
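
As a schematic illustration of the real-time adaptation mentioned above, the sketch below shows a minimal closed-loop session in Python that adjusts task difficulty from trial to trial based on performance and monitored user signals. It is not the authors' implementation; the function names and interfaces (present_trial, read_user_state, the difficulty mapping) are assumptions made for illustration only.

    # Minimal sketch of closed-loop adaptation, assuming hypothetical callbacks for
    # presenting a trial and reading monitored user signals (e.g., gaze, physiology).
    def adaptive_session(present_trial, read_user_state, n_trials=20):
        # Nudge task difficulty up or down after each trial based on the result.
        harder = {"low": "medium", "medium": "high", "high": "high"}
        easier = {"low": "low", "medium": "low", "high": "medium"}
        difficulty = "medium"
        for _ in range(n_trials):
            result = present_trial(difficulty)   # e.g., {"correct": True, "rt": 1.2}
            state = read_user_state()            # e.g., {"engaged": True}
            if result["correct"] and state.get("engaged", True):
                difficulty = harder[difficulty]
            else:
                difficulty = easier[difficulty]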

Objectives:  

We designed a controllable VR-based program that collects performance, physiological, and eye gaze data as users complete social and daily living skills tasks. One objective is to teach users to make fast and accurate decisions about nonverbal communication during conversations. Understanding facial expressions (FEs) is a major component of nonverbal communication and is the focus of this investigation.

Methods:  

Participants were children ages 13-17 in either an ASD group (n=10) or a typically developing (TD) control group (n=10). We created 7 avatars (4 boys, 3 girls) rigged with 7 facial expressions at 4 intensity levels (low, medium, high, extreme), for a total of 28 expression stimuli. Each avatar first made a statement, and then the expression animation was played. Statements were paired randomly with FEs so that users had to rely on the expression itself, rather than the intonation or content of the statement, to classify the emotion. An interactive demonstration of the program will be available during the presentation. Data collected throughout the experiment included response accuracy, response time, eye gaze, physiology (heart rate, GSR, pupil dilation, eye blink rate), and a confidence rating for each response.
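
As a rough illustration of the trial design described above (the abstract does not report implementation details), the following Python sketch pairs expression/intensity stimuli with randomly chosen statements and logs the response, response time, correctness, and confidence rating for each trial. The names here (Trial, present_avatar, get_response, get_confidence) are hypothetical stand-ins for the actual system components.

    import random
    import time
    from dataclasses import dataclass

    # The 7 expressions and 4 intensity levels from the Methods; 7 x 4 = 28 stimuli.
    EXPRESSIONS = ["joy", "sadness", "anger", "disgust", "surprise", "fear", "contempt"]
    INTENSITIES = ["low", "medium", "high", "extreme"]

    @dataclass
    class Trial:
        statement: str
        expression: str
        intensity: str
        response: str = ""
        correct: bool = False
        response_time: float = 0.0
        confidence: int = 0

    def build_trials(statements):
        # Pair every expression/intensity combination with a randomly chosen
        # statement, so statement content carries no cue about the emotion shown.
        stimuli = [(e, i) for e in EXPRESSIONS for i in INTENSITIES]
        random.shuffle(stimuli)
        return [Trial(random.choice(statements), e, i) for e, i in stimuli]

    def run_trial(trial, present_avatar, get_response, get_confidence):
        # Present the statement, play the expression animation, then record the
        # participant's classification, response time, and confidence rating.
        present_avatar(trial.statement, trial.expression, trial.intensity)
        start = time.monotonic()
        trial.response = get_response()            # forced choice among the 7 labels
        trial.response_time = time.monotonic() - start
        trial.correct = (trial.response == trial.expression)
        trial.confidence = get_confidence()        # e.g., a Likert-scale rating
        return trial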

Results:  

The system successfully presented the tasks and collected synchronized eye gaze and physiological data throughout. Participants in both groups identified sadness most accurately (90% accuracy in both groups), followed by anger (ASD=75%, TD=70%), disgust (ASD=73%, TD=63%), joy (ASD=58%, TD=55%), and surprise (ASD=55%, TD=48%). Participants were least accurate on expressions of fear (ASD=43%, TD=23%) and contempt (ASD=18%, TD=15%). These data indicate that children with ASD named facial expressions with accuracy similar to that of controls. Further examination of the data, however, revealed that participants in the ASD group had significantly longer response times and lower confidence ratings than the TD controls. These group differences, along with the physiological and eye gaze data, will be detailed.

Conclusions:  

We developed a VR-based system in which avatars were rigged with facial expressions at varying levels of intensity. The system collected response accuracy, response time, physiological data, and eye gaze patterns as participants completed the task. Data analyses revealed differences in the way children with ASD process and recognize emotional expressions compared to TD peers. Results will be used to inform refinement of the adaptive VR-based multimodal social interaction system. Implications of using VR as an adjunct to human-directed intervention will be discussed.
