
Investigating and Training Gaze Control Using Eye-Tracking and Virtual Humans

Friday, 3 May 2013: 14:00-18:00
Banquet Hall (Kursaal Centre)
15:00
J. Nadel1, J. C. Martin2 and O. Grynszpan3, (1)Centre Emotion USR 3246, La Salpetriere Hospital, French Centre of Scientific Research, Paris, France, (2)LIMSI, CNRS/Université Paris-Sud, Orsay, France, (3)Centre Emotion USR 3246, La Salpetriere Hospital, Université Pierre & Marie Curie/CNRS, Paris, France
Background: The use of gaze as a communication channel is considered to be altered in Autism Spectrum Disorders (ASD). Several bottom-up hypotheses currently link difficulties in communication and social interaction with atypicalities in social gaze. Multimedia interactions based on eye-tracking and virtual reality offer new opportunities for investigating and training gaze behavior in social contexts. In a previous study, we designed a novel system that placed participants face-to-face with a virtual human while providing real-time biofeedback on the position of the participant's gaze via eye-tracking technology. In the present study, our system displays short videos of real social interactions to test whether the previous findings remain valid in a context closer to real-life settings.

Objectives: The goal of the project presented here is to offer new tools and methods for assessing and training social gaze in people with Autism Spectrum Disorder, based on experimental data and adequate theoretical grounding.

Methods: The eye-tracking system that we designed simulates a gaze-contingent viewing window: the entire visual display is blurred in real time, except for a rectangular area centered on the participant's focal point. Thirteen participants with High Functioning Autism Spectrum Disorder (HFASD) were recruited. They watched two short movie extracts involving social interactions between three actors. In both extracts, two of the actors behaved hypocritically towards the third, displaying facial expressions and gaze behaviors that contradicted what they were saying. The video extracts were shown in two conditions: a control condition allowing free visual exploration and an experimental condition using the viewing window. After each extract, participants were asked to explain what they had seen. Their responses were coded by two independent judges who counted the number of mentalising verbs (e.g. think, believe, know…).
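
As a rough illustration of this mechanism (a minimal sketch, not the authors' actual implementation), the Python code below composites a gaze-contingent viewing window on each video frame, assuming OpenCV for image processing and gaze coordinates supplied by an eye tracker; the window size and blur strength are arbitrary placeholders.

    import cv2
    import numpy as np

    # Assumed size of the clear viewing window, in pixels (placeholder values).
    WINDOW_W, WINDOW_H = 320, 240

    def apply_viewing_window(frame, gaze_x, gaze_y):
        """Blur the whole frame except a rectangle centered on the gaze point."""
        h, w = frame.shape[:2]
        blurred = cv2.GaussianBlur(frame, (51, 51), 0)  # heavy blur everywhere
        # Clamp the clear window to the frame boundaries.
        x0, y0 = max(0, gaze_x - WINDOW_W // 2), max(0, gaze_y - WINDOW_H // 2)
        x1, y1 = min(w, gaze_x + WINDOW_W // 2), min(h, gaze_y + WINDOW_H // 2)
        # Copy the unblurred pixels back around the focal point.
        blurred[y0:y1, x0:x1] = frame[y0:y1, x0:x1]
        return blurred

    # Per frame, gaze_x and gaze_y would be read from the eye tracker in real
    # time, e.g.: cv2.imshow("stimulus", apply_viewing_window(frame, gx, gy))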

Results: The average duration of fixations was positively correlated with the ratio of mentalising verbs only when the gaze-contingent viewing window was used. This outcome confirms our previous conclusion (Grynszpan et al., 2012) that the system induces a situation in which the visual behavior of participants with HFASD is closely linked to their cognitive understanding of social exchanges.
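
For illustration only, a correlation of this kind could be computed as below, assuming a Pearson correlation over one mean fixation duration and one mentalising-verb ratio per participant; the values shown are placeholders, not the study's data.

    from scipy.stats import pearsonr

    # Placeholder values, one per participant (NOT the study's data).
    mean_fixation_ms = [210, 245, 198, 260, 233, 221, 270,
                        205, 240, 215, 255, 228, 248]
    mentalising_ratio = [0.08, 0.12, 0.05, 0.15, 0.10, 0.09, 0.16,
                         0.06, 0.11, 0.07, 0.14, 0.10, 0.13]

    r, p = pearsonr(mean_fixation_ms, mentalising_ratio)
    print(f"Pearson r = {r:.2f}, p = {p:.3f}")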

Conclusions: This link seems highly relevant for training gaze control in relation to the interpretation of social interactions. We are currently developing and testing an intervention based on our system.
