How Multitalker Environments Affect Speech Understanding in Autism

Friday, May 15, 2015: 11:30 AM-1:30 PM
Imperial Ballroom (Grand America Hotel)
L. C. Anderson, E. J. Wood, E. Redcay and R. S. Newman, University of Maryland, College Park, MD
Background: Individuals with ASD have well-documented language deficits, including difficulties with speech processing (Ceponiene et al., 2003; Tager-Flusberg, Paul, & Lord, 2005). One factor that may contribute to these deficits is difficulty focusing on a sound while filtering out irrelevant sounds in the environment (Teder-Sälejärvi et al., 2005). Compared to neurotypical (NT) adults, adults with ASD have trouble recognizing speech in noisy environments (Alcántara et al., 2004). Typically developing infants and adults use the speaker’s face to help them separate streams of speech in such environments (Sumby & Pollack, 1954). Yet presentation of the speaker’s face does not facilitate perception of speech in noise for adults with ASD as it does for NT adults (Smith & Bennetto, 2007). It is unclear at what age these group differences emerge.

Objectives: We explored (1) whether children with ASD show greater difficulty than NT children in understanding familiar words when background noise is present, and (2) whether children with ASD benefit from concurrent visual and auditory cues.

Methods: To date, participants include 13 NT children and 11 children with ASD aged 2–5 years. ASD diagnoses were confirmed with the ADOS-2 and SCQ, and receptive language ability was assessed with the Mullen Scales of Early Learning. In a preferential-looking paradigm, participants sat on a parent’s lap viewing a screen. On each trial, two familiar objects (e.g., a ball and a flower) appeared while children heard audio naming the target object (e.g., “Look at the ball. Where’s the ball?”). The 24 test trials followed a 2 × 2 design crossing face presence and background noise: in half of the trials a woman’s face was presented concurrently with the audio, with no face in the other half, and half of the trials included concurrent background noise (a woman reading from a book), with no background noise in the other half. Visual stimuli and the location of the target object were counterbalanced across the four conditions (face quiet, no-face quiet, face noise, no-face noise). Looking time to the objects was coded frame-by-frame.
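The abstract does not describe the gaze-coding pipeline in detail; as a loose illustration only, the Python sketch below shows one way per-trial looking proportions could be derived from frame-by-frame codes. The coding scheme ("T"/"D"/"A"), function names, and data layout are all hypothetical assumptions, not the study's actual procedure.

```python
# Hypothetical sketch: deriving per-trial proportion-of-target-looking
# from frame-by-frame gaze codes. The coding scheme and all names here
# are illustrative assumptions, not the study's pipeline.
from statistics import mean

def target_proportion(frames):
    """Share of on-object looking frames directed at the target.

    frames: one code per video frame, "T" = target, "D" = distractor,
    "A" = away/off-screen.
    """
    on_object = [f for f in frames if f in ("T", "D")]
    if not on_object:
        return None  # no usable looking on this trial
    return sum(f == "T" for f in on_object) / len(on_object)

# 2 x 2 design: face presence crossed with background noise.
CONDITIONS = ("face quiet", "no-face quiet", "face noise", "no-face noise")

def condition_means(trials_by_condition):
    """Average looking proportion per condition for one child.

    trials_by_condition: dict mapping a condition name to a list of
    frame-code sequences (one sequence per trial).
    """
    means = {}
    for cond in CONDITIONS:
        scores = [p for seq in trials_by_condition.get(cond, [])
                  if (p := target_proportion(seq)) is not None]
        means[cond] = mean(scores) if scores else None
    return means
```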

Results: Averaging across conditions, both NT and ASD children looked proportionally more towards the target than the distractor object, suggesting that both groups understood the task (NT: t(12) = 12.65, p < 0.0001; ASD: t(10) = 2.11, p = 0.06). However, for children with ASD, the only condition with significantly longer looking to the target than the distractor was face quiet (t(10) = 2.73, p = 0.02). In contrast, NT children showed a significant difference in looking time to the target vs. the distractor in all four conditions (ps < 0.001). Trends toward poorer performance in noise and without a face did not reach significance given the small sample size. NT children performed better than children with ASD (i.e., looked significantly more towards the target) on both face quiet and no-face quiet trials (ps < 0.05), with face noise approaching significance (p = 0.06).
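The abstract reports t-tests on looking times but does not specify the exact test form; as a hedged illustration, the sketch below runs a one-sample t-test of target-looking proportion against chance (0.5 with two objects on screen) and an independent-samples comparison of the two groups within a condition, using scipy. All numeric values are placeholders, not the study's data.

```python
# Illustrative statistics only: placeholder values, not the study's data.
# With two objects on screen, chance-level looking to the target is 0.5,
# so a one-sample t-test against 0.5 asks whether looking exceeds chance;
# an independent-samples t-test compares the groups within one condition.
import numpy as np
from scipy import stats

# Hypothetical per-child mean target-looking proportions (face-quiet condition).
asd_face_quiet = np.array([0.62, 0.55, 0.71, 0.58, 0.49, 0.66,
                           0.60, 0.53, 0.68, 0.57, 0.64])       # n = 11
nt_face_quiet = np.array([0.78, 0.72, 0.81, 0.69, 0.75, 0.80, 0.74,
                          0.77, 0.70, 0.79, 0.73, 0.76, 0.71])  # n = 13

t, p = stats.ttest_1samp(asd_face_quiet, popmean=0.5)
print(f"ASD face quiet vs. chance: t({asd_face_quiet.size - 1}) = {t:.2f}, p = {p:.3f}")

t, p = stats.ttest_ind(nt_face_quiet, asd_face_quiet)
print(f"NT vs. ASD, face quiet: t = {t:.2f}, p = {p:.3f}")
```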

Conclusions: Preliminary results suggest that children with ASD comprehend language most successfully in quiet environments when given both visual and auditory information. Background noise appears to affect children with ASD and NT children in a similar way.