Naturalistic Daylong Audio Monitoring Using LENA: Current and Potential Applications

Friday, May 18, 2012
Sheraton Hall (Sheraton Centre Toronto)
11:00 AM
J. A. Richards, D. Xu and J. Gilkerson, LENA Research Foundation, Boulder, CO
Background:  In-depth research on children’s behavior in their regular environments, critical to a comprehensive understanding of children with autism, typically incorporates audio and video sampling and transcription, and is resource-intensive.  However, recent studies have demonstrated that, with some limitations, audio recording alone can provide richly detailed information and that such data can be obtained efficiently and unobtrusively using wearable recorders.  The LENA framework utilizes lightweight audio recorders and incorporates signal processing and pattern recognition technologies to achieve automatic information extraction and analysis of daylong recordings in the home and other environments. This massive-sampling approach provides stable, reliable and accurate macrostatistical characterizations of child and caregiver behavior and environments. Even so, current implementations constitute only first approaches; there remain numerous domains into which this technology may be extended.

Objectives:  We summarize empirical findings from current research utilizing LENA methodology, with an emphasis on strengths and limitations. We discuss potential approaches for extending the utility of this technology currently being explored, as well as future applications, including: the use of multiple recorders in one environment; enhanced recognition of “key-adult” speech; identification of specific key words, music, etc.; vocal emotion detection; and the synchronization of audio data with multichannel sources of physiological and other information. We provide an interactive demonstration of the complexity of information obtainable using these technologies.

Methods:  The current LENA framework consists of a single digital recorder worn by a key child plus processing algorithms that discriminate human speakers from other environmental sound. It generates phonetic-based characterizations to estimate adult word and child vocalization frequencies, quantify interaction patterns and identify unique acoustic features of child vocalizations shown to relate to developmental disorders such as autism. This technology has been used successfully in home, classroom and other environments with a variety of child populations (e.g., typically developing, language-delayed, hard-of-hearing, autism) to provide a deeper understanding of these environments’ impact on child development.
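The automatic segmentation the Methods describe can be illustrated in miniature. The sketch below is not LENA’s proprietary algorithm; it is a hypothetical, minimal stand-in that splits a signal into short frames, labels each frame as sound or silence by short-time energy, and counts contiguous sound segments, roughly analogous to how a daylong-recording pipeline might isolate vocalization events before further classification. All names, thresholds and frame sizes are illustrative assumptions.

```python
# Illustrative sketch only (not LENA's actual algorithms): a toy
# energy-based segmenter. It frames the signal, marks frames whose
# short-time energy exceeds a threshold, and counts contiguous runs
# of active frames as "segments" -- a crude analogue of isolating
# vocalization events in a long recording.
import math

FRAME = 160          # samples per frame (10 ms at a 16 kHz rate; assumed)
THRESHOLD = 0.01     # energy cutoff separating sound from silence (assumed)

def frame_energies(signal, frame=FRAME):
    """Mean squared amplitude for each non-overlapping frame."""
    return [sum(x * x for x in signal[i:i + frame]) / frame
            for i in range(0, len(signal) - frame + 1, frame)]

def count_segments(signal, threshold=THRESHOLD):
    """Count contiguous runs of above-threshold frames."""
    active = [e > threshold for e in frame_energies(signal)]
    return sum(1 for i, a in enumerate(active)
               if a and (i == 0 or not active[i - 1]))

# Synthetic signal: two 100 ms tone bursts separated by silence.
sr = 16000
tone = [0.5 * math.sin(2 * math.pi * 440 * t / sr) for t in range(1600)]
silence = [0.0] * 1600
signal = silence + tone + silence + tone + silence

print(count_segments(signal))  # detects 2 sound segments
```

A real pipeline would, of course, go well beyond energy thresholds, using trained acoustic models to distinguish adult speech, child vocalizations, overlap, noise and electronic media; the point here is only the frame-label-aggregate structure of such processing.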

Results:  Beyond its use as a parent-training and feedback tool, current applications of this technology include autism screening research, home and classroom monitoring, intervention evaluation and monitoring, social interaction in geriatric populations, assessment of hearing-aid efficacy, and acoustic evaluation of classroom design. It has been utilized in collaborative studies with senior investigators representing numerous research sites across the US (e.g., IBIS Network, University of Kansas, UNC-Chapel Hill, Johns Hopkins, Omaha BoysTown, University of Chicago, University of Colorado, UCLA, JFK-Partners Denver) and the globe (Riyadh, Saudi Arabia; Shanghai, China). Preliminary work suggests this framework may be extended and enhanced by bolstering algorithmic recognition of keywords and emotional valence and by exploring multichannel approaches that incorporate multiple recorders and sensors geared toward other modalities (e.g., physiological signals such as skin conductance). In addition, the development of utilities to transmit audio recordings via broadband internet for “cloud-based” processing will simplify data collection in remote areas.

Conclusions:  This demonstration illustrates current applications of naturalistic audio recording and automated analysis using existing LENA technology and introduces the potential for expansion into multichannel explorations.  Audio-based macrostatistical data can be a valuable adjunct to traditional measures.
