Although auditory processing is commonly found to be atypical in ASD, the precise nature of its relationship to language and communication difficulties in ASD is unclear. Some researchers argue that differences in low-level auditory perception influence speech perception from early in development (Oram Cardy et al. 2005), contributing to language impairment in ASD. Others attribute language impairment and communicative deficits in ASD to reduced attention to speech stimuli, which are complex, rapidly changing, and social (Ceponiene et al. 2003; Dawson et al. 2004; Kuhl et al. 2005).
In EEG studies, auditory discrimination and attention are typically measured using an auditory oddball paradigm, in which infrequent deviant sounds occur within a train of standard speech or nonspeech sounds. The mismatch negativity (MMN), an automatic pre-attentive change-detection response, has previously been reported to be atypical in ASD. However, the nature of this atypicality is inconsistent across studies, with reports of both enhanced and diminished, and both earlier and delayed, responses. A more consistent finding is that the P3a, an index of involuntary orienting of attention that follows the MMN, is diminished in response to speech but not nonspeech stimuli in ASD (Lepisto et al. 2005; Ceponiene et al. 2003).
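The MMN is conventionally quantified as a difference wave (deviant ERP minus standard ERP), with peak latency and amplitude taken from a post-stimulus search window. A minimal NumPy sketch on simulated averages illustrates the computation; all waveform shapes, amplitudes, and the 100-250 ms window are illustrative assumptions, not values from this study:

```python
import numpy as np

# Simulated averaged responses, 1 ms resolution, 0-400 ms post-stimulus.
# Amplitudes and latencies are illustrative only.
t = np.arange(0, 400)  # time in ms
standard = 1.0 * np.exp(-((t - 100) ** 2) / (2 * 20 ** 2))            # early positive peak
deviant = standard - 2.0 * np.exp(-((t - 160) ** 2) / (2 * 25 ** 2))  # added negativity

# MMN difference wave: deviant minus standard
difference = deviant - standard

# Peak MMN within an illustrative 100-250 ms search window
win = (t >= 100) & (t <= 250)
peak_idx = np.argmin(difference[win])      # MMN is a negativity, so take the minimum
peak_latency = t[win][peak_idx]            # ms
peak_amplitude = difference[win][peak_idx]
print(peak_latency, peak_amplitude)        # → 160 -2.0
```

The same difference-wave logic applies to the MEG mismatch field (MMF), substituting event-related fields for ERPs.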
Magnetoencephalography (MEG) offers advantages over EEG in its ability to separate responses arising from different sources in the brain. In a recent MEG study, Roberts et al. (2011) reported a delayed mismatch field (MMF; the MEG equivalent of the MMN) in children with ASD, but did not investigate the P3a.
Objectives:
This study aims to further investigate the neural basis of speech perception and attention in children with ASD using MEG. In particular, we are interested in the MMF, the P3a, and potential differences between brain responses in the left and right hemispheres, and in how these differences relate to language ability in ASD.
Methods:
We are currently testing 20 children with ASD and 20 age-matched typically developing (TD) controls using an auditory oddball paradigm. Brain responses to standard and deviant (pitch-change) sounds are recorded using 160-channel MEG while the children watch a silent DVD. Speech and nonspeech stimuli are carefully matched to ensure that any differential responses are not attributable to the acoustic properties of the stimuli.
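An oddball sequence interleaves rare deviants among standards, usually with a constraint that deviants are separated by a minimum number of standards so that each deviant violates an established regularity. A minimal sketch of such a sequence generator follows; the deviant probability, gap constraint, and trial count are illustrative assumptions, not the parameters of this study:

```python
import random

def oddball_sequence(n_trials=500, p_deviant=0.15, min_gap=2, seed=0):
    """Generate a pseudo-random oddball trial sequence.

    Returns a list of 'standard'/'deviant' labels in which each deviant
    is preceded by at least `min_gap` standards (illustrative constraint).
    """
    rng = random.Random(seed)
    seq = []
    since_last = min_gap  # allow a deviant from the start of the run
    for _ in range(n_trials):
        if since_last >= min_gap and rng.random() < p_deviant:
            seq.append("deviant")
            since_last = 0
        else:
            seq.append("standard")
            since_last += 1
    return seq

seq = oddball_sequence()
print(seq.count("deviant") / len(seq))  # empirical deviant rate
```

The gap constraint slightly lowers the realised deviant rate below the nominal probability, which is why oddball designs typically report the achieved proportion of deviants rather than the sampling probability alone.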
Results:
Event-related beamforming will be used to extract the time series of brain responses to sounds measured from virtual sensors. Results of a pilot study with four neurotypical adults using this method indicate multiple bilateral sources in temporo-parietal regions. We will analyse differences in the amplitude and latency of event-related fields between autistic and non-autistic children, and consider individual differences in neuromagnetic responses as a function of performance on tests of language development and social communication.
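A beamformer reconstructs a "virtual sensor" time series at a source location by applying a spatial filter to the sensor array, with unit gain at the target location. The following NumPy sketch shows the core of an LCMV-style filter on simulated data; the single-source leadfield, noise level, and regularisation value are toy assumptions standing in for a full MEG forward model and covariance pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_times = 32, 300

# Toy leadfield: fixed sensor topography of one simulated source (assumption)
leadfield = rng.standard_normal(n_sensors)
leadfield /= np.linalg.norm(leadfield)

# Simulated 10 Hz source activity projected to sensors, plus sensor noise
source = np.sin(2 * np.pi * 10 * np.arange(n_times) / 1000.0)
data = np.outer(leadfield, source) + 0.1 * rng.standard_normal((n_sensors, n_times))

# LCMV weights: w = C^-1 L / (L^T C^-1 L), with a regularised data covariance
cov = data @ data.T / n_times
cov += 0.05 * np.trace(cov) / n_sensors * np.eye(n_sensors)  # diagonal loading
cinv_l = np.linalg.solve(cov, leadfield)
weights = cinv_l / (leadfield @ cinv_l)

# Virtual-sensor time series at the modelled source location
virtual = weights @ data

# Reconstruction quality: correlation with the true simulated source
r = np.corrcoef(virtual, source)[0, 1]
print(round(r, 2))
```

The unit-gain constraint (weights applied to the leadfield equal 1) passes activity from the target location unattenuated while the minimum-variance criterion suppresses interference from elsewhere, which is what lets virtual sensors separate bilateral temporo-parietal sources.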
Conclusions:
The results of this study will provide insights into the relationship between auditory processing and language and communication impairment in ASD. In particular, we will determine whether individual differences in language and communication impairment are better predicted by MMF or by P3a responses.