Auditory Stream Segregation in Verbal and Minimally Verbal Adolescents with Autism Spectrum Disorder

Saturday, May 16, 2015: 11:30 AM-1:30 PM
Imperial Ballroom (Grand America Hotel)
L. Wang1, S. Meyer1, E. Sussman2, H. Tager-Flusberg1 and B. Shinn-Cunningham3, (1)Boston University, Boston, MA, (2)Neuroscience, Albert Einstein College of Medicine, Bronx, NY, (3)Biomedical Engineering, Boston University, Boston, MA
Background:  Impairment in the use of language in social interaction is a hallmark feature of autism spectrum disorder (ASD). In severe cases, children with ASD are minimally verbal, yet research on this population has been very limited: there have been no studies of auditory perception and processing that might help explain their very limited language. Previous studies have shown that auditory stream segregation, the ability to segregate sounds produced by distinct sources, is impaired in Asperger’s Syndrome and high-functioning ASD; however, none of those studies included minimally verbal participants.

Objectives:  Our goal was to compare auditory stream segregation ability in verbal and minimally verbal adolescents with ASD and in age-matched typically developing controls.

Methods:  Participants were adolescents in three groups: 1) typical development, 2) verbal ASD, and 3) minimally verbal ASD. ASD was diagnosed using the ADI-R and ADOS, and all participants were assessed on a range of measures. In the experiment, participants watched a silent movie of their choice while passively listening to a traditional oddball stream, either in isolation (oddball condition) or in the presence of an interfering stream that was either spectrally distant from the oddball stream (segregated condition) or spectrally close to it (integrated condition). In the oddball stream, deviant stimuli differed from the standards only in intensity. The interfering streams were engineered so that the deviants were not unexpected if the two streams were heard as perceptually integrated. Event-related potentials (ERPs) in response to the deviants were extracted and analyzed. Resting-state EEG was also recorded for 10 seconds at both the beginning and the end of each experimental block.
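The intensity-deviant oddball stream described above can be sketched in a few lines. This is a toy illustration, not the study's actual stimulus code; the sampling rate, tone frequency, tone duration, and deviant probability are all assumed values chosen for the example.

```python
import numpy as np

FS = 16_000          # sampling rate in Hz (assumed for illustration)
TONE_HZ = 1_000      # standard-tone frequency in Hz (assumed)
TONE_DUR = 0.05      # tone duration in seconds (assumed)
DEVIANT_PROB = 0.1   # proportion of intensity deviants (assumed)

def make_oddball_stream(n_tones, deviant_gain_db=-10.0, seed=0):
    """Build a tone sequence whose rare deviants differ from the
    standards only in intensity.

    Returns (waveform, is_deviant), where is_deviant flags each tone.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(FS * TONE_DUR)) / FS
    tone = np.sin(2 * np.pi * TONE_HZ * t)           # standard pure tone
    is_deviant = rng.random(n_tones) < DEVIANT_PROB  # random deviant positions
    # Deviants are attenuated by deviant_gain_db relative to standards.
    gain = np.where(is_deviant, 10 ** (deviant_gain_db / 20), 1.0)
    waveform = np.concatenate([g * tone for g in gain])
    return waveform, is_deviant
```

In a real paradigm the tones would also be gated with onset/offset ramps and separated by silent inter-stimulus intervals; those details are omitted here for brevity.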

Results:  The P1-N1 amplitude in the oddball condition was similar across groups; although group differences appeared in the integrated and segregated conditions, the main effect of group was not significant. We also computed the inter-trial coherence (ITC) of the ERP, a measure of phase consistency across trials. No group difference in ITC was observed in the oddball condition; in the segregated condition, however, ITC was significantly lower in the minimally verbal group than in the other two groups. In addition, resting-state EEG power between 2 and 20 Hz was significantly higher in the minimally verbal ASD group than in the other two groups. Lastly, the phase-locking value (PLV), a measure of how strongly the ERP phase-locks to the stimulus envelope, was similar across all groups, suggesting that temporal encoding is largely intact in the ASD groups.
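Both phase metrics reported above have compact definitions: ITC is the magnitude of the mean unit phase vector across trials, ITC(t) = |(1/N) Σ_n exp(iφ_n(t))|, and PLV applies the same magnitude to the phase difference between response and stimulus envelope. The following is a minimal sketch, assuming band-limited epochs stored as a NumPy array and instantaneous phase estimated via the analytic (Hilbert) signal; it is not the study's analysis code.

```python
import numpy as np
from scipy.signal import hilbert

def inter_trial_coherence(trials):
    """ITC: magnitude of the mean unit phase vector across trials.

    trials: array (n_trials, n_samples) of band-limited ERP epochs.
    Returns ITC per time point, in [0, 1]; 1 = perfectly consistent phase.
    """
    phases = np.angle(hilbert(trials, axis=1))  # instantaneous phase per trial
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

def phase_locking_value(response, envelope):
    """PLV between a response and the stimulus envelope.

    Both inputs: 1-D arrays of equal length. Returns a scalar in [0, 1].
    """
    dphi = np.angle(hilbert(response)) - np.angle(hilbert(envelope))
    return np.abs(np.mean(np.exp(1j * dphi)))
```

Identical epochs yield ITC = 1 at every sample, while epochs with random phases drive ITC toward 0 as the trial count grows; PLV behaves analogously for the response-envelope phase difference.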

Conclusions:  These results demonstrate the feasibility of using EEG to measure stream segregation in a minimally verbal ASD population. The ITC results indicate that the cortical response to deviant sounds in the minimally verbal ASD group is normal in simple acoustic environments. When multiple sound streams were present simultaneously, however, this cortical response was substantially degraded only in the minimally verbal ASD group. This degradation was not due to impaired temporal encoding fidelity; rather, it reflects a deficit in auditory stream segregation ability, accompanied by elevated resting-state EEG power.