
Seeing and Saying in Different Language Phenotypes

Friday, 3 May 2013: 14:00-18:00
Banquet Hall (Kursaal Centre)
C. Norbury, Royal Holloway, University of London, Egham, United Kingdom
Background:  There is great variation within the autism spectrum both in how individuals view social scenes and in how they provide narrative accounts of depicted events. No previous research has attempted to link how individuals with autism spectrum disorders (ASD) view the social world and how they talk about it. We predicted that both eye-movements and verbal output would be influenced by different neurocognitive phenotypes within ASD.

Objectives:  We aimed to answer these key questions: How are the eye-movement patterns of individuals with ASD influenced by (a) the language status of those individuals and (b) the social and visual properties of the scenes? Are differences in eye-movement patterns associated with differences in verbal descriptions?

Methods:  In two experiments we recorded eye-movements on a Tobii-T120 eye-tracker while participants described simple cartoon events involving two characters. Participants included children with autism and language impairments (ALI: n = 14 and 13), children with autism and language scores within the normal range (ALN: n = 15 and 19), and typically developing age-matched peers (TD: n = 17 and 23). Verbal descriptions were transcribed and coded off-line; only accurate, active sentences were included in the eye-movement analysis (e.g. ‘the man is feeding the baby’). In Experiment 1, the event occurred in isolation, against a white background. In Experiment 2, the event was situated in a more complex, but contextually appropriate scene. In half of the images, the objects of highest visual salience were the scene characters, and therefore the most socially relevant. In the remaining images, the objects of highest visual salience were in the background and were not central to understanding the depicted event.

Results:  In Experiment 1, there were no significant group differences in either fixation sequences or accuracy of verbal responses. In Experiment 2, significant differences emerged in both fixation sequences and verbal responses. The ALI group exhibited significantly different fixation sequences relative to both ALN and TD peers. Similarly, the verbal descriptions of the ALI group were less accurate, reflecting more non-canonical utterances, more dysfluent utterances, and more references to irrelevant scene information. However, there was a significant interaction of group and image salience such that these differences were most pronounced when the socially relevant objects were not visually salient. When social and visual salience overlapped, group differences were attenuated. The ALN group did not differ from TD peers on any measure.

Conclusions:  Our visual world is often cluttered, and deciding which aspects of a visual scene are the most relevant for attention and comment is crucial both for learning and for pragmatic development. Children with ALI appear to be more prone to distraction, especially when visually salient but socially irrelevant objects are present. These distractions adversely affect language production, resulting in more laboured and less relevant output. These findings highlight the intimate relationship between language competence and executive control, and point to a therapeutic need to help children with autism and language impairments identify and attend to relevant aspects of their environment.