International Meeting for Autism Research: Signal Processing Tools for the Automatic Analysis of Child-Psychologist Interactions

Friday, May 13, 2011
Elizabeth Ballroom E-F and Lirenta Foyer Level 2 (Manchester Grand Hyatt)
10:00 AM
M. P. Black1, D. Bone1, T. Chaspari1, A. Tsiartas1, P. Gorrindo2, M. E. Williams3, P. Levitt2 and S. S. Narayanan1, (1)Signal Analysis and Interpretation Laboratory (SAIL), University of Southern California, Los Angeles, CA, (2)Zilkha Neurogenetic Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, (3)University Center for Excellence in Developmental Disabilities at Children’s Hospital Los Angeles, University of Southern California Keck School of Medicine, Los Angeles, CA
Background:

The Autism Diagnostic Observation Schedule (ADOS) is one of the most useful clinical instruments for diagnosing and assessing autism spectrum disorders (ASD) in at-risk individuals with varying verbal abilities.  The semi-structured 30-60 minute interaction provides a trained psychologist with behavioral evidence that can be evaluated along dimensions pertinent to an autism diagnosis.  One challenge in using the ADOS (and observational methods in general) is the subjectivity inherent in the rating system.  Moreover, many of the sub-assessments have qualitative descriptions, making the instrument less useful for population stratification.

Objectives:

Technology can assist with this process in a number of ways.  Audio-video sensors can record the child-clinician interaction, and state-of-the-art signal processing methods can facilitate quantitative data collection, analysis, and modeling using objective audio-video signals.  Behavioral cues (e.g., speech prosody, hand gestures) can be automatically estimated and quantified in a consistent fashion and could provide researchers with an orthogonal source of information.  We have two immediate goals for this work: 1) to collect a large audio-video corpus (>100 subjects) of ADOS sessions, and 2) to automatically process and analyze the data by extracting multimodal behavioral cues.  Our ultimate goal is to develop tools and signal processing algorithms that produce quantitative data on social interactions and support psychologists' analysis and decision-making.

Methods:  

We designed a portable audio-video recording setup, consisting of two high-definition camcorders and two high-quality shotgun microphones.  All sensors operate unobtrusively within the clinical space to ensure that the experiments are ecologically valid.  Our initial analysis of the corpus will focus on automatically detecting when the subjects are speaking and automatically assessing the subjects' use of prosody (e.g., rate, rhythm, intonation).  Speech abnormalities (e.g., atypical prosody) are among the behaviors coded in the ADOS for verbal children, and they are particularly difficult for human observers to quantify.
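To make the speech-detection step concrete, the following is a minimal sketch of one common family of voice activity detectors: short-time energy thresholding.  This is an illustrative assumption, not the detector actually built for this corpus (whose design is not detailed in the abstract); the function name, frame sizes, and threshold are all hypothetical choices.

```python
import numpy as np

def frame_energy_vad(signal, sr, frame_ms=25, hop_ms=10, threshold_db=-30.0):
    """Label each frame as speech (True) or non-speech (False) by
    comparing its short-time energy, in dB relative to the loudest
    frame, against a fixed threshold.  Hypothetical sketch."""
    frame_len = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    energies = np.array([
        np.mean(signal[i * hop : i * hop + frame_len] ** 2)
        for i in range(n_frames)
    ])
    ref = energies.max() + 1e-12          # reference: peak frame energy
    energy_db = 10.0 * np.log10(energies / ref + 1e-12)
    return energy_db > threshold_db

# Synthetic example: 0.5 s silence, 0.5 s of a 220 Hz tone, 0.5 s silence.
sr = 16000
t = np.arange(sr // 2) / sr
tone = 0.5 * np.sin(2 * np.pi * 220.0 * t)
signal = np.concatenate([np.zeros(sr // 2), tone, np.zeros(sr // 2)])
labels = frame_energy_vad(signal, sr)      # one boolean per 10 ms hop
```

A production detector for a reverberant clinical room would need noise-adaptive thresholds and smoothing of the frame labels, but the framing/energy/decision pipeline above is the common skeleton.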

Results:

To date, we have collected data from 51 subjects (14 module 1, 10 module 2, 25 module 3, 2 module 4; subjects' ages range from 5 to 17 years, with a mean age of 9.3 years).  A detailed description of the recruited subjects and data will be provided at the conference.  We have designed an automatic voice activity detector, optimized for the acoustic conditions of the clinical space.  We will provide a statistical analysis of pitch- and intensity-related speech cues extracted from the audio signal, which relate to the perception of prosody.
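The pitch- and intensity-related cues mentioned above can be illustrated with a per-frame sketch: an autocorrelation-based F0 estimate restricted to a plausible pitch-lag range, plus RMS intensity in dB.  This is a simplified stand-in, not the feature extractor used in the study; the function name and parameter values are assumptions for illustration.

```python
import numpy as np

def frame_pitch_intensity(frame, sr, fmin=75.0, fmax=400.0):
    """Estimate F0 (Hz) from the autocorrelation peak within the
    [fmin, fmax] lag range, and intensity as RMS level in dB.
    Hypothetical sketch of per-frame prosodic feature extraction."""
    frame = frame - frame.mean()                      # remove DC offset
    rms = np.sqrt(np.mean(frame ** 2))
    intensity_db = 20.0 * np.log10(rms + 1e-12)
    # Autocorrelation for non-negative lags only.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)           # plausible pitch lags
    lag = lo + np.argmax(ac[lo:hi])
    f0 = sr / lag
    return f0, intensity_db

# Synthetic voiced frame: 40 ms of a 220 Hz tone at 16 kHz.
sr = 16000
t = np.arange(int(0.040 * sr)) / sr
frame = 0.3 * np.sin(2 * np.pi * 220.0 * t)
f0, intensity_db = frame_pitch_intensity(frame, sr)   # f0 near 220 Hz
```

Contours of such per-frame values over an ADOS session are the raw material for the statistical analysis of prosody perception described above.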

Conclusions:

Incorporating quantitative computational methods in the domain of autism research and practice could lead to a more consistent assessment framework across subjects and over time.  Technology has the potential to help observational psychology research and practice by offering tools for analysis of important behavioral phenomena.  This approach can play a critical role in autism research through the collection and automatic analysis of a large corpus of ADOS interactions.  This work is supported by the National Science Foundation, Autism Speaks, and the Marino Autism Research Institute.
