20631
Optimizing Stimulus Selection for Early Detection of ASD Based on Preferential Attention to Audiovisual Synchrony in Toddlers

Thursday, May 14, 2015: 5:30 PM-7:00 PM
Imperial Ballroom (Grand America Hotel)
A. Abraham1, A. Trubanova2, J. B. Northrup3, D. Lin4, P. Lewis5, A. Klin1, W. Jones1 and G. J. Ramsay1, (1)Marcus Autism Center, Children's Healthcare of Atlanta and Emory University School of Medicine, Atlanta, GA, (2)Psychology, Virginia Polytechnic Institute and State University, Blacksburg, VA, (3)University of Pittsburgh, Pittsburgh, PA, (4)Department of Neurology, Massachusetts General Hospital, Boston, MA, (5)Marcus Autism Center, Atlanta, GA
Background: Our previous studies of two-year-olds with ASD demonstrated increased orientation to physical contingencies relative to social contingencies in a preferential looking paradigm. Physical contingencies were defined by audiovisual synchrony between sound and light; social contingencies contrasted faces and voices with moving objects. Typically developing (TD) toddlers preferentially attended to social contingencies. We previously demonstrated a likelihood-ratio-based classifier that discriminated between ASD and TD toddlers using eye-tracking measures of behavioral response to social and physical contingencies across a variety of stimulus types, but we did not test for the effect of clip type.

Objectives: This study applies a permutation-based stimulus selection technique to optimize the performance of eye-tracking measures of behavioral response to social and physical contingencies in discriminating between ASD and TD toddlers.

Methods: Using a preferential looking task, TD and ASD toddlers (Table 1) were presented with videos of audiovisual stimuli that paired faces and geometric shapes with tones or speech. A second cohort of TD and ASD toddlers was presented with naturalistic videos of a caregiver paired with one of four toys – rocking horse, light-up toy, mobile, or train – exhibiting different types of motion synchronized with the caregiver's speech. Using eye-tracking measures of fixation on regions of interest, the optimal classifier for discriminating ASD from TD participants was created using a likelihood ratio test. Receiver operating characteristic (ROC) analysis with leave-one-out cross-validation (LOOCV) was used to evaluate classifier performance. To optimize the stimulus set for classification, each possible permutation of the stimulus clips was used to train the classifier, and the corresponding area under the curve (AUC) was used to identify the combination of clips with the best classification performance.
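The pipeline described above can be sketched in code. This is a minimal illustrative sketch, not the authors' implementation: it assumes each participant is summarized by one fixation fraction per clip, models each group's per-clip values as Gaussian (the abstract specifies only a likelihood ratio test, so the Gaussian class-conditional model is an assumption), computes AUC as a normalized Mann-Whitney U statistic, and searches clip combinations exhaustively. All data structures and function names are hypothetical.

```python
from itertools import combinations
from statistics import mean, stdev
from math import log, pi

def gaussian_loglik(x, mu, sd):
    # Log-likelihood of x under a normal distribution (sd floored to avoid /0)
    sd = max(sd, 1e-6)
    return -0.5 * log(2 * pi * sd * sd) - (x - mu) ** 2 / (2 * sd * sd)

def llr_score(subject, asd_train, td_train, clips):
    # Summed per-clip log-likelihood ratio; positive values favor the ASD model.
    # `subject` maps clip name -> fixation fraction on the region of interest.
    score = 0.0
    for c in clips:
        asd_vals = [s[c] for s in asd_train]
        td_vals = [s[c] for s in td_train]
        score += gaussian_loglik(subject[c], mean(asd_vals), stdev(asd_vals))
        score -= gaussian_loglik(subject[c], mean(td_vals), stdev(td_vals))
    return score

def auc(pos_scores, neg_scores):
    # AUC as the normalized Mann-Whitney U statistic (ties count as 0.5)
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

def loocv_auc(asd, td, clips):
    # Leave-one-out: score each held-out subject against models fit on the rest
    pos = [llr_score(s, asd[:i] + asd[i + 1:], td, clips)
           for i, s in enumerate(asd)]
    neg = [llr_score(s, asd, td[:i] + td[i + 1:], clips)
           for i, s in enumerate(td)]
    return auc(pos, neg)

def best_clip_subset(asd, td, all_clips, k):
    # Exhaustive search over size-k clip combinations, keeping the best LOOCV AUC
    return max(combinations(all_clips, k),
               key=lambda clips: loocv_auc(asd, td, clips))
```

On toy data in which groups differ only on a "mobile" clip, `best_clip_subset` recovers that clip, mirroring how the combination search in the study surfaced the most discriminative stimuli.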

Results: For the preferential looking task, the classifier distinguished TD from ASD toddlers with a maximum AUC = 0.968 and sensitivity and specificity of 91%. The top ten clip combinations had AUC values centered on 0.967 ± 0.0001. After LOOCV, the best clip combination yielded an AUC = 0.967 ± 0.004, sensitivity = 90.8 ± 0.9%, and specificity = 90.4 ± 0.9%. Using the naturalistic stimuli, we obtained a maximum AUC = 0.944 with sensitivity and specificity of 88% (Figure 1). Furthermore, the mobile toy clip accounted for 50% of the clips in the highest-AUC combination, indicating that the rotational motion of the mobile toy exhibited the greatest difference in perceptual salience between the ASD and TD groups.

Conclusions: Unlike traditional strategies of pooling stimuli for classification analysis, this study used a permutation-based stimulus selection strategy to select the video clips that maximized the AUC. This significantly improved discrimination between ASD and TD toddlers and inherently sorted the stimuli by clip features. In addition to maximizing classifier performance, permutation-based clip selection also revealed which type of toy motion showed the greatest difference in salience between groups.