
Teaching Emotion Recognition From Facial Expressions Using a Realistic Robotic Head

Friday, 3 May 2013: 09:00-13:00
Banquet Hall (Kursaal Centre)
12:00
A. Adams and P. Robinson, Computer Laboratory, University of Cambridge, Cambridge, United Kingdom
Background:

Children with Autism Spectrum Conditions (ASCs) often have difficulty with face processing, including difficulty recognizing emotions from facial expressions.  Some children with ASC also struggle to express their own emotions through facial expressions.  Previous studies have attempted to address these issues by using a realistic robotic head in an intervention.  However, these studies have considered only a handful of emotions, while research suggests that there are hundreds of different affective states.  Furthermore, none of these studies has handed explicit control of the robot over to the child with ASC.  Other research has shown that children with ASC tend to have impaired imitation abilities, and that interventions which incorporate imitation can lead to improvements in social-communicative behaviour.

Objectives:  

To create an imitation system for children with ASC that provides three forms of facial expression interaction with the robot:
(1) observing the robot acting out various emotions,
(2) imitating the robot's facial expressions, and
(3) controlling the robot's facial expressions through mimicry.
Facial expressions produced by the robot should cover a wide range of emotions, and the ultimate goal of the interaction is to help teach emotion recognition to children with ASC.

Methods:  

The imitation system takes video of a human face as input.  In each video frame, it finds and tracks 66 feature points on the human's face in real time.  The system then translates the movement of those feature points into motor movements so that the robot's face mimics the video face in real time.  For the observation component of the interaction, the emotions and their corresponding facial expressions are animated on the robot using the short video clips from the Mindreading DVD developed by Golan et al.  The DVD covers 412 different emotions, and each emotion is acted out by six different actors.  For the imitation and control components of the interaction, a webcam is pointed at the person seated in front of the robot, and the live feed from the webcam is used to produce the robot's facial expressions.
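
As a rough illustration of this pipeline, the per-frame mimicry loop might look like the sketch below.  The tracker, robot interface, and mapping-function names are hypothetical stand-ins; the abstract does not specify the actual landmark tracker or motor API used.

```python
# Minimal sketch of the per-frame mimicry loop described above,
# assuming a hypothetical tracker and robot interface.
import cv2  # OpenCV, used here only for webcam capture


def mimicry_loop(tracker, robot, map_to_motors, camera_index=0):
    """Drive the robot's face from a live webcam feed, frame by frame.

    tracker       -- hypothetical object: .track(frame) -> 66x2 landmark
                     array, or None if no face is found
    robot         -- hypothetical object: .set_motor_positions(positions)
    map_to_motors -- function from landmarks to motor positions
    """
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Find and track the 66 facial feature points in this frame.
            landmarks = tracker.track(frame)
            if landmarks is None:
                continue  # no face found this frame; skip it
            # Translate feature-point movement into motor movements
            # (the mapping itself is the subject of the Results section).
            robot.set_motor_positions(map_to_motors(landmarks))
    finally:
        cap.release()
```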

Results:

Various techniques were investigated to achieve a realistic mapping from the movement of facial feature points in the video to motor movements on the robot.  The best-performing technique (linear regression on a training set of non-rotated feature points) has been implemented in the final version of the imitation system.  A controlled intervention study has yet to be run to determine the effectiveness of this imitation system in teaching emotion recognition to children with ASC.
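
As a hedged sketch of this technique, motor positions can be modelled as a linear function of the flattened, de-rotated feature-point coordinates and fitted by least squares.  The training-data format, the number of motors, and the bias term below are illustrative assumptions, not details reported in the study.

```python
# Sketch of "linear regression on a training set of non-rotated
# feature points": motor positions as a linear function of the
# 66 tracked (x, y) coordinates.  Shapes are assumptions.
import numpy as np


def fit_linear_mapping(X, Y):
    """Fit motors = X_aug @ W by least squares.

    X -- (n_frames, 132) flattened, de-rotated feature points
         (66 points x 2 coordinates per frame)
    Y -- (n_frames, n_motors) corresponding motor positions
    """
    X_aug = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
    W, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)
    return W  # shape (133, n_motors)


def apply_mapping(W, landmarks):
    """Map one frame's 66x2 landmark array to motor positions."""
    x = np.append(landmarks.reshape(-1), 1.0)  # flatten and add bias term
    return x @ W
```

At run time, a function like apply_mapping could serve as the feature-point-to-motor mapping in the mimicry loop sketched under Methods.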

Conclusions:

Our imitation system can perform facial expressions for 412 different emotions as well as real-time mimicry of facial expressions from a webcam.  The ASC-related experiments planned with the imitation system will allow us to explore whether a robotic head is useful in teaching recognition of complex emotions, whether algorithms can help to coach expression imitation, and whether giving the child control over the robot has any effect on his/her engagement or anxiety levels.
