Social Scene Manipulation through Gaze-Contingent Interfaces: Towards Automated Gaze Strategy Instruction for Young Children with ASD

Saturday, May 14, 2016: 11:30 AM-1:30 PM
Hall A (Baltimore Convention Center)
Q. Wang, E. S. Kim, C. A. Wall, E. C. Barney, Y. A. Ahn, C. Foster, M. Mademtzi, M. G. Perlmutter, S. Macari, K. Chawarska and F. Shic, Yale Child Study Center, Yale University School of Medicine, New Haven, CT
Background: Eye tracking has been used to examine gaze patterns in studies of autism spectrum disorder (ASD). However, most of these studies have recorded eye movements only during passive viewing, have used static images, and have relied on manually defined boundaries of areas of interest (AOIs). In the present study, we investigate how typically developing (TD) children and children with ASD attend to dynamic social scenes in an interactive gaze paradigm.

Objectives: 1) To construct normative gaze models of dynamic social scene viewing by translating gaze patterns from TD controls into probability heatmaps and applying these models as implicit AOI maps for each video frame of the dynamic stimuli. 2) To evaluate whether the looking behavior of children with ASD can be modified to resemble the normative gaze pattern.
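The abstract does not describe how the probability heatmaps are computed; a minimal sketch of one plausible per-frame construction, assuming pooled TD gaze samples per video frame and Gaussian smoothing (the frame size, kernel width, and function names below are illustrative, not the study's implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normative_heatmap(gaze_points, frame_shape=(1080, 1920), sigma=40.0):
    """Build a normative gaze probability map for one video frame.

    gaze_points : iterable of (x, y) pixel coordinates pooled across
                  TD viewers for this frame
    sigma       : Gaussian kernel width in pixels (hypothetical value,
                  on the order of 1-2 degrees of visual angle)
    """
    h, w = frame_shape
    counts = np.zeros((h, w), dtype=float)
    for x, y in gaze_points:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < w and 0 <= yi < h:
            counts[yi, xi] += 1.0          # accumulate gaze hits per pixel
    heat = gaussian_filter(counts, sigma=sigma)
    total = heat.sum()
    return heat / total if total > 0 else heat  # normalize to a probability map
```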

Methods: Each toddler viewed four categories of dynamic video stimuli: Dyadic Speech (Motherese), Body Movements, Activity with Objects, and Singing Songs. The stimuli were divided into 5 blocks, with all categories represented in each block. Participants included TD children (n = 31, Mage = 37.4 ± 14.29 months) and children with ASD (n = 9, Mage = 33.4 ± 7.07 months). All TD children were assigned to the regular, non-Gaze-Contingent (non-GC) viewing condition, and their data were used to derive the normative gaze pattern. Among the children with ASD, five were assigned to the Gaze-Contingent (GC) condition and four to the non-GC condition. Dynamic heatmaps of the normative gaze pattern were then used to construct a corresponding set of attention-redirecting videos in which areas the TD children did not look at were darkened and blurred. When a participant in the GC condition looked away from the normative attention areas, the next video frame switched to the attention-redirecting video, whose bright, sharp regions corresponded to the TD heatmap distribution. This GC adaptive training method was designed to automatically attract the visual attention of children with ASD while they viewed the videos.
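The frame-switching rule can be summarized in a short sketch. This is a simplified reading of the paradigm, not the study's code: the cutoff defining the "normative attention area" is a hypothetical criterion, since the abstract does not specify how the boundary was set.

```python
def select_next_frame(gaze_xy, heatmap, original_frame, redirecting_frame,
                      fraction_of_max=0.1):
    """Pick which version of the next video frame to display.

    gaze_xy         : latest gaze sample (x, y) in pixels, or None if
                      gaze was lost
    heatmap         : 2-D NumPy array, the normative probability map
                      for this frame
    fraction_of_max : pixels at or above this fraction of the peak
                      density count as the normative attention area
                      (hypothetical threshold)
    """
    if gaze_xy is None:
        return redirecting_frame      # no valid gaze: redirect attention
    x, y = int(round(gaze_xy[0])), int(round(gaze_xy[1]))
    h, w = heatmap.shape
    if not (0 <= x < w and 0 <= y < h):
        return redirecting_frame      # gaze off-screen: redirect attention
    cutoff = fraction_of_max * heatmap.max()
    if heatmap[y, x] >= cutoff:
        return original_frame         # gaze inside normative area
    return redirecting_frame          # gaze outside: darkened/blurred frame
```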

Results: In all four categories of videos, children with ASD had significantly shorter looking times than TD children (p < 0.001). Furthermore, a linear mixed model analysis showed that children with ASD looked significantly less at the Motherese dyadic speech videos than at the other three categories of social scenes (p < 0.001); no such difference was found in the TD group (p > 0.4). With GC adaptive training, children with ASD maintained their attention better than in the non-GC condition (p < 0.05). Neither Verbal DQ (p = 0.605) nor Nonverbal DQ (p = 0.163) contributed significantly to the model.
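For context, a linear mixed model of this general form could be specified in Python with statsmodels; the data file and column names below are hypothetical stand-ins, not the study's actual analysis:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format table: one row per child x video, with hypothetical columns
# looking_time, category, group ('ASD'/'TD'), verbal_dq, nonverbal_dq, child_id.
df = pd.read_csv("looking_times.csv")   # hypothetical data file

model = smf.mixedlm(
    "looking_time ~ category * group + verbal_dq + nonverbal_dq",
    data=df,
    groups=df["child_id"],              # random intercept per child
)
result = model.fit()
print(result.summary())
```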

Conclusions: These preliminary data support GC training as a viable option for modifying the gaze behavior of children with ASD. Given that visual attention gates learning, this paradigm offers a highly promising avenue for developing new therapeutic interventions. By improving the looking strategies of children with ASD, we hope to broaden their future access to social learning opportunities during this period of heightened neuroplasticity.