Intact within-Modal and Cross-Modal Integration of Low-Level Sensory Features in Autism Spectrum Disorder
The brain’s ability to integrate information from one or several sensory modalities is critical for perceiving the world as a unified and coherent whole. These processes, referred to as within-modal and cross-modal integration, ultimately allow us to interact with our surroundings and with others in an adaptive manner. We recently demonstrated that individuals with autism spectrum disorder (ASD) do not benefit from the presence of a facilitatory, temporally relevant tone during a demanding visual search task (Collignon et al., 2012), and display a reduced ability to integrate visual and auditory representations of emotional expressions (Charbonneau et al., 2013), suggestive of a decreased multisensory gain in this population. However, controversy remains about multisensory integration in ASD, as alterations in this process were mainly observed for more complex tasks and stimuli (e.g., those involving top-down control, or linguistic or social material), with a putatively intact ability to integrate simple low-level information (de Boer-Schellekens, Keetels, Eussen, & Vroomen, 2013). Regarding the comparison between multisensory and unisensory integration, redundancy gain (RG), the behavioral outcome of sensory integration, is known to be greater for cross-modal than for within-modal targets in typically developing (TD) individuals (Girard et al., 2013). However, within- and cross-modal integration have not been directly compared in individuals with ASD.
The current study was designed to explore whether the alteration in multisensory integration in ASD observed in our previous experiments generalises to within-modal and cross-modal integration of low-level, non-social stimuli.
Twelve individuals diagnosed with ASD and 12 individuals in a typically developing comparison group, matched for full-scale IQ, were asked to respond as fast as possible to (1) lateralized visual or tactile targets presented alone, (2) double stimulation within the same modality (within-modal condition) or (3) double stimulation across modalities (cross-modal condition). Each combination was either delivered within the same hemi-space (spatially aligned) or in different hemi-spaces (spatially misaligned).
In contrast with previous reports, no difference was found between the ASD and TD groups in their ability to integrate low-level visual, tactile, and visuo-tactile stimuli. In both groups, the multisensory gains obtained in the cross-modal conditions were greater than those obtained from the combination of two visual or two tactile targets.
These results clearly demonstrate that individuals with ASD integrate low-level visual and tactile information as efficiently as TD individuals. Moreover, redundancy gain in ASD was greater for cross-modal than for within-modal targets, extending to ASD for the first time the notion that more independent estimates of the same event produce enhanced integrative gains. Overall, these findings suggest that the multisensory integration alterations previously reported in ASD are probably contingent on the type of information being integrated and/or the paradigm used, and may be restricted to more complex tasks involving either socially laden information or top-down processes during sensory integration.