Objectives: We presented preliminary data at last year’s IMFAR conference (2008) showing that children with autism performed differentially when presented with emotions and vowels in faces moving at different speeds. This year, we present data from a follow-up study exploring whether the amount and type of movement presented in emotional and non-emotional faces influences emotion recognition performance in the same sample of children with autism.
Methods: Children with autism aged 8 to 14 years; children with moderate learning difficulties matched to the autism group on chronological and verbal mental age; and verbally matched typically developing children aged 4 to 7 years took part in this experiment. Children undertook an emotion recognition task in which they matched dynamic videos of emotions with corresponding photographs. The amount and type of movement presented in the dynamic facial stimuli were manipulated. In the first condition, videos showed actors moving their facial features naturally to convey an emotional expression; in the second condition, the same videos were edited to produce a snapshot effect rather than smooth, natural movement. This snapshot effect is intended to mimic the blinking strategy that people with autism have reported using when “their world is moving too fast”. The third and fourth conditions were identical to the first and second but with an added motion element: actors moved their heads from one profile to the other whilst portraying the emotions, either in a continuous movement or with a snapshot effect. The non-emotional control task consisted of silent vowel production.
Results and Conclusions: Analyses are currently being conducted and will be ready for presentation at the IMFAR conference.