International Meeting for Autism Research: Comparison of Child-Human and Child-Computer Interactions for Children with ASD

Friday, May 21, 2010
Franklin Hall B Level 4 (Philadelphia Marriott Downtown)
10:00 AM
M. P. Black, Electrical Engineering, University of Southern California, Los Angeles, CA
E. Flores, USC University Center for Excellence in Developmental Disabilities at Childrens Hospital Los Angeles, Los Angeles, CA
E. Mower, Electrical Engineering, University of Southern California, Los Angeles, CA
S. Narayanan, Electrical Engineering, University of Southern California, Los Angeles, CA
M. E. Williams, Keck School of Medicine, University of Southern California, Los Angeles, CA
Background: Children with autism spectrum disorders (ASD) exhibit qualitative and quantitative differences in social communication. Interventions targeting these differences require intensive professional time and are often expensive. Interactive computer technology has the potential to provide a consistent and affordable way to elicit verbal and nonverbal social interaction in children with ASD. Animated computer characters that act as social partners are referred to as embodied conversational agents (ECAs). By adapting the design of an ECA to the specific needs of a child, ECAs could become a powerful tool for clinicians, therapists, and teachers. The ECA platform also allows for detailed analysis of behavioral patterns within a standardized situation, enabling researchers to study communication approaches and changes over time.

Objectives: In this pilot study, seven children diagnosed with autism interacted with both a psychologist and an ECA. Our goal is to compare the children's verbal and nonverbal behavior across the two conditions (child-human and child-ECA) using our proposed multimodal audio-visual coding scheme. The planned analyses will determine whether ECAs provide an engaging and socially stimulating interaction platform for children with ASD, and will help identify appropriate modifications for future intervention studies.

Methods: Participants were 7 children (6 boys, 1 girl) ranging in age from 5 to 9 years (mean age = 6.9 years). All were recruited from the Autism Genetic Resource Exchange (AGRE) database and had been diagnosed with autism by AGRE researchers using the ADOS and ADI-R measures. Each child had short conversations (1-5 minutes) with a psychologist and, separately, with an ECA; a parent remained in the room throughout. The ECA, a teenage-looking boy named "Josh," moved his mouth and head while speaking pre-recorded utterances. Both the human and computer sessions were scripted, and the ECA's responses were controlled by an engineer in an adjoining room. We recorded all interactions with multiple audio-video sensors. We developed a coding scheme to mark relevant social communicative cues of the children (e.g., prosody of speech, turn-taking in conversation, head orientation, use of pointing and other gestures, initiation of joint attention with their parent). These codes, together with quantitative measures of spoken and gestural behavior derived directly from the audio and video, are used to compare communicative behaviors across the child-human and child-ECA conditions.
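As an illustration only (not the authors' actual coding pipeline), timed behavior codes of the kind described above could be represented as simple annotation records and tallied per condition; the record fields and cue names below are assumptions for the sketch:

```python
# Hypothetical sketch: multimodal behavior codes as timed annotations,
# tallied separately for the child-human and child-ECA conditions.
# Field and cue names are illustrative, not the study's actual scheme.
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class CodedEvent:
    child_id: str
    condition: str   # "human" or "ECA"
    cue: str         # e.g. "turn_taking", "pointing", "joint_attention"
    start_s: float   # onset within the session, in seconds
    end_s: float     # offset within the session, in seconds

def cue_counts_by_condition(events):
    """Count occurrences of each cue, keyed by (condition, cue)."""
    counts = Counter()
    for ev in events:
        counts[(ev.condition, ev.cue)] += 1
    return counts

events = [
    CodedEvent("c01", "human", "turn_taking", 12.4, 14.0),
    CodedEvent("c01", "human", "pointing", 20.1, 21.0),
    CodedEvent("c01", "ECA", "turn_taking", 8.3, 9.9),
]
print(cue_counts_by_condition(events)[("human", "turn_taking")])  # 1
```

Keeping onset/offset times (rather than counts alone) would also allow duration-based comparisons between the two dyadic conditions.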

Results: Six of the seven subjects were verbally fluent and, consistent with the study design, engaged in interactions with both the psychologist and the ECA. This suggests that the ECA we developed is effective in eliciting natural communication from verbally fluent children with ASD. The remaining subject rarely used verbal language to communicate with the psychologist, parent, or ECA. We are currently coding and analyzing the multimodal data and will report differences between the two dyadic conditions at the conference.

Conclusions: We will discuss our hypotheses regarding modifications to the ECA to elicit social communication from children at varying levels of functioning within the autism spectrum, and conclude with planned future experiments. Work supported by Autism Speaks and NSF.