
The Ability to Integrate Audiovisual Speech at 8 Months of Age Is Associated with Later Receptive Language

Saturday, 4 May 2013: 09:00-13:00
Banquet Hall (Kursaal Centre)
E. Kushnerenko1, H. Ribeiro2, T. Gliga2, P. Tomalski3, T. Charman4 and M. H. Johnson2, (1)Institute for Research in Child Development, University of East London, London, United Kingdom, (2)Centre for Brain & Cognitive Development, Birkbeck, University of London, London, United Kingdom, (3)Faculty of Psychology, University of Warsaw, Warsaw, Poland, (4)Centre for Research in Autism & Education, Institute of Education, London, United Kingdom

Background: Deficits in crossmodal integration might play a role in the language and social difficulties of children with ASD (Mongillo et al., 2008). Integration of audiovisual speech information is often investigated with a McGurk paradigm, in which conflicting auditory and visual inputs are presented (McGurk & MacDonald, 1976). Although results are not always consistent across McGurk studies with children with ASD, our recent study demonstrated that infants at low risk for ASD looked longer at incongruent audiovisual displays, whereas the looking behaviour of infants at high risk (those with an older sibling with autism) did not differ between congruent and incongruent displays, suggesting difficulties in matching auditory and visual information (Guiraud et al., 2012). Little is known about whether audiovisual integration during infancy is consequential for vocabulary growth.


Objectives: To investigate whether the ability to match audiovisual information at 8 months is associated with later vocabulary development, as assessed by the Communicative Development Inventory (CDI), in infants at low and high risk for ASD.


Methods: Twenty-four low-risk (LR) and 64 high-risk (HR) participants took part in an eye-tracking study at 8 months. At a follow-up visit at 14 months, parents of 23 LR and 42 HR infants completed the Oxford Communicative Development Inventory (Oxford-CDI). At 8 months, stimuli were presented in two preferential-looking tasks: a Mismatch condition, in which an auditory /ba/ was presented with congruent lip movements on one side of the screen and lips mouthing /ga/ on the other side, and a Fusion condition, in which an auditory /ga/ was presented with articulation of /ga/ on one side of the screen (congruent face) and articulation of /ba/ on the other side (incongruent face). Total fixation length was calculated off-line for each infant for areas of interest around the mouth, the eyes, or the whole face oval, using the Tobii Studio software package.


Results: At 8 months, low-risk infants tended to look longer than high-risk infants at the audiovisually mismatched face (11.2 s vs 9.53 s, p = 0.06) and mouth (7.9 s vs 6.3 s, p = 0.08). At 14 months, LR infants had significantly higher CDI receptive vocabulary scores than HR infants (p = 0.026). Looking time to the incongruent mouth in the Mismatch condition, expressed as a percentage of looking time to both faces in that condition, correlated significantly with CDI receptive vocabulary at 14 months (r = 0.264, p = 0.034).


Conclusions: As reported previously, the audiovisual mismatch and fusion conditions are processed differently by infants, and this ability matures in the second half of the first year of life (Tomalski et al., 2012). The present data demonstrate that the ability to integrate auditory and visual speech information at 8 months might be predictive of later language outcome. These data are in line with recent reports showing that attention to the mouth might be indicative of subsequent language development (Kushnerenko et al., 2012; Young, Merin, Rogers, & Ozonoff, 2009).

*The BASIS Team: S. Baron-Cohen5, P. Bolton6, K. Davies2, M. Elsabbagh7, J. Fernandes1, J. Guiraud2, K. Hudry4, G. Pasco4, L. Tucker2

5University of Cambridge, 6Institute of Psychiatry, 7McGill University
