International Meeting for Autism Research (London, May 15-17, 2008): An ERP study of cross-modal integration of speech in typical development and autism spectrum disorder

Friday, May 16, 2008
Champagne Terrace/Bordeaux (Novotel London West)
O. Megnin, Behavioural and Brain Sciences Unit, UCL Institute of Child Health, London, United Kingdom
T. Charman, Behavioural and Brain Sciences Unit, UCL Institute of Child Health, London, United Kingdom
T. Baldeweg, Developmental Cognitive Neuroscience Unit, UCL Institute of Child Health, London, United Kingdom
M. De Haan, Developmental Cognitive Neuroscience Unit, UCL Institute of Child Health, London, United Kingdom
A. Flitton, Behavioural and Brain Sciences Unit, UCL Institute of Child Health, London, United Kingdom
C. Jones, Department of Psychology and Human Development, Institute of Education, London, United Kingdom
Background: Event-related potentials (ERPs) were previously recorded from 16 adult participants during video presentation of words in one of four conditions: auditory-only (A), visual-only (V), audio-visual with face (AVF), and audio-visual with scrambled face (AVS). Multi-sensory interactions were regarded as significant when [AVF – (A+V)] > 0 at a single electrode for a minimum duration of 24 ms. The interaction pattern observed was spatially and temporally consistent with a faster, attenuated auditory N1 component. Bimodal stimuli may speed up and facilitate auditory processing in sensory-specific cortices, as supported by faster reaction times to AVF targets and a shorter P3 peak latency. Increased negativity was also observed at electrode FP2 for AVF stimuli. A significant correlation between N1 attenuation and the earlier-onset frontal negativity implies that these responses may reflect top-down modulation, with lip movements (which precede auditory onset by a mean of 332 ms) being used to constrain predictions about the word to be produced. Importantly, the N1 attenuation and frontal negativity were not observed in the other audio-visual condition (auditory + scrambled face).
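The interaction criterion described above can be sketched in code. The sketch below is purely illustrative: the function name, the sampling rate, and the example waveforms are assumptions, not details from the study; only the contrast [AVF – (A+V)] > 0 and the 24 ms minimum duration come from the abstract.

```python
# Illustrative sketch (not the authors' analysis code) of the criterion:
# [AVF - (A + V)] > 0 at a single electrode for at least 24 ms.
# A 500 Hz sampling rate is assumed here, giving 24 ms = 12 samples.

ASSUMED_SRATE_HZ = 500
MIN_DURATION_MS = 24
MIN_SAMPLES = MIN_DURATION_MS * ASSUMED_SRATE_HZ // 1000  # 12 samples

def interaction_significant(avf, a, v, min_samples=MIN_SAMPLES):
    """Return True if the AVF - (A + V) difference wave at one
    electrode stays above zero for `min_samples` consecutive samples."""
    run = 0
    for f, x, y in zip(avf, a, v):
        run = run + 1 if (f - (x + y)) > 0 else 0
        if run >= min_samples:
            return True
    return False
```

A superadditive response (AVF exceeding the summed unimodal responses for a sustained run) satisfies the criterion; a brief or subadditive difference does not.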

Objectives: To examine audio-visual integration in a group of high-functioning adolescents with autism spectrum disorder (ASD) and an age- and IQ-matched control group.

Methods: We applied the same paradigm as used previously, with the addition of a visual-only scrambled-face condition (VS) to allow comparison of [AVF – (A+V)] and [AVS – (A+VS)].

Results: Preliminary data suggest differences in audio-visual integration in the adolescents with ASD.

Conclusions: There are a number of reasons why we might expect to see differences in an autistic population, including (but not limited to) findings of atypical unimodal auditory processing (e.g. Bomba & Pang, 2004), atypical unimodal visual processing, particularly with regard to face processing (e.g. McPartland et al., 2004), and multi-sensory processing differences (e.g. Bebko, Weiss, Demark, & Gomez, 2006).