Exploring Visual Social Attention in Minimally Verbal Children and Adolescents with ASD

Saturday, May 14, 2016: 3:16 PM
Room 307 (Baltimore Convention Center)
D. Plesa-Skwerer1, A. Chu2, B. Brukilacchio3 and H. Tager-Flusberg1, (1)Boston University, Boston, MA, (2)Psychological and Brain Sciences, Boston University, Boston, MA, (3)Harvard Graduate School of Education, Somerville, MA
Background: Attending preferentially to social information in the environment is important for developing socio-communicative skills and language. Research using eye-tracking to explore how individuals with autism spectrum disorders (ASD) orient to and engage attention toward various stimuli has surged over the last decade (Guillon et al., 2014). However, these studies have rarely included nonverbal/minimally verbal individuals (MV_ASD); investigating their spontaneous viewing patterns could provide insight into possible connections between social attention deficits and failure to acquire spoken language.

Objectives: We used eye-tracking to examine whether distinctive patterns of attending to and processing social information differentiate minimally verbal from verbal individuals with ASD (V_ASD) when viewing naturalistic dynamic scenes.

Methods: Participants were 38 MV_ASD and 19 V_ASD children and adolescents between 5;8 and 18 years of age, matched on age. The eye-tracking task (modeled after Chawarska et al., 2012) involved short video clips showing an actor seated behind a table, making a snack or wrapping a book. Four interesting objects surrounded the actor. Areas of interest (AOIs) were defined around each object and around the actor's face, eyes, and hands. The videos were divided into six episodes based on the actor's behavior. Three episodes were most salient: one in which the actor looked toward the camera and addressed the viewer directly; one in which a toy spider moved on the table and the actor's gaze followed the spider's movement (expected gaze shift); and one in which the spider moved again but the actor looked toward the unmoving object diagonally opposite the toy spider (unexpected gaze shift). The variables of interest were the proportion of looking time in each AOI, by episode, relative to total time spent looking at the screen.

Results: Relative to total movie duration, both V_ASD and MV_ASD participants spent proportionally more time looking at the actor than at the objects (71.3% vs. 28.7% and 65% vs. 35%, respectively) and increased their attention toward the actor's face when she started talking. However, in the segments that required interpreting the actor's gaze shifts toward and away from a surprising moving object, fewer MV_ASD participants fixated on the actor's face (26.3% in segment 3 and 39.5% in segment 5) than V_ASD participants (61.5% and 67%, respectively). Only 15.8% of the MV_ASD group followed the actor's unexpected gaze shift, compared to 39% of the V_ASD participants, who displayed a triadic pattern of visually scanning the scene involving the toy spider, the actor's face/line of regard, and the object toward which the actor shifted her gaze.

Conclusions: These findings underscore the need to qualify the widely held assumption that individuals with ASD distribute attention between objects and persons in atypical ways, by examining the social-inferential demands of the scenes viewed. The differences in visual scanning patterns between MV_ASD and V_ASD participants reflect decreased attention to behaviors that require inferring the actor's underlying intentions. Consequently, minimally verbal children with ASD may be less able to learn from interactive opportunities involving joint attention, which may further impair their ability to detect and interpret social cues and affect their acquisition of language.