Objectives: To compare the semantic integration of environmental sounds with the semantic integration of emotional information from faces and voices in children with autism.
Methods: Participants were fifteen 3- to 6-year-old high-functioning children with ASD and fifteen typically developing control children, matched on chronological age, developmental age, and gender. We recorded event-related potentials (ERPs) while the children viewed pictures of instruments (drums, guitars) followed shortly by matching or mismatching nonverbal sounds (drum sounds, guitar sounds), and while they viewed pictures of emotional faces (happy, fearful) paired with matching or mismatching voices (happy voice, fearful voice). Face stimuli were drawn from the MacBrain standardised emotional expression dataset, and the emotional voice stimuli (the nonsense words “gopper sarla”) were validated as accurate representations of happy and fearful prosody in a rating study with twelve typically developing adults, who rated the recordings against six emotion categories. We analysed two ERP components involved in semantic and cognitive integration: the N400 and the Late Positive Component (LPC).
Results: An analysis of variance (ANOVA) with match and hemisphere as within-subjects factors and participant group as a between-subjects factor revealed a main effect of match on the LPC in the environmental sounds condition: LPC amplitude was larger for mismatching than for matching stimuli (p < 0.01). This match/mismatch effect was significant in the ASD group alone (p = 0.035) and showed a similar but non-significant trend in the control children alone (p = 0.10; match × hemisphere p = 0.09). No main effects or group interactions emerged in the ANOVAs on the N400 in the environmental sounds condition, or on either the N400 or the LPC in the emotional face/voice condition; neither group showed a significant match/mismatch effect for the emotional face/voice pairs.
Conclusions: From these results, we conclude that the automatic semantic integration of nonverbal, environmental sound information is intact in children with autism. Because neither group of children showed semantic integration effects for the emotional face/voice pairs, we were unable to assess emotional integration effectively in children with autism in the current study. Future research might use emotional stimuli drawn from more dissimilar emotion categories, such as happy and disgusted faces and voices.