Friday, May 21, 2010
Franklin Hall B Level 4 (Philadelphia Marriott Downtown)
11:00 AM
Background:
In typical child development, speech and communication skills appear to unfold effortlessly. For many children with ASD, these basic skills remain a lifelong struggle. Without treatment, speech skills, as well as other forms of interpersonal interaction, may be substantially impaired (Lovaas, 1977), leaving individuals with limited means of communicating basic wants and needs. Computer-based visualization systems have given users new and faster ways to understand large quantities of complex data (Heer, 2007). Work on awareness displays has demonstrated the benefit of abstract representations and their ability to provide users with continuous streams of data about their world. Visualizations have also improved the teaching of complex concepts through visual and interactive representations (Leung, 2006). Our approach to visualizing audio departs from traditional waveform or spectrogram displays in favor of a simpler perspective: with perfect speech recognition out of technical reach, much can still be learned from the basic vocal parameters of volume, pitch, rate of speech, syllable count, and history. Research on abstract graphical representations of voice for children with ASD has been promising (Hailpern, 2009), though it has focused on vocal production rather than word formation and construction.
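To make these vocal parameters concrete, the following is a minimal sketch (an illustration, not the project's implementation) of estimating two of them, loudness and pitch, from a single frame of microphone samples; the sample rate, frame handling, and pitch search range are assumptions.

```python
# Sketch only: estimating loudness (RMS energy in dB) and pitch
# (autocorrelation peak) from one frame of audio. Not the study's code.
import numpy as np

SAMPLE_RATE = 16000  # Hz; assumed capture rate

def loudness_db(frame: np.ndarray) -> float:
    """Root-mean-square energy of the frame, in decibels."""
    rms = np.sqrt(np.mean(frame ** 2))
    return 20 * np.log10(rms + 1e-12)  # epsilon avoids log(0) on silence

def pitch_hz(frame: np.ndarray, fmin: float = 75.0, fmax: float = 400.0) -> float:
    """Crude pitch estimate: the lag of the autocorrelation peak,
    searched within an assumed plausible range for a child's voice."""
    frame = frame - np.mean(frame)                       # remove DC offset
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(SAMPLE_RATE / fmax)                         # smallest lag to search
    hi = int(SAMPLE_RATE / fmin)                         # largest lag to search
    lag = lo + int(np.argmax(ac[lo:hi]))
    return SAMPLE_RATE / lag
```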
Objectives:
We are developing software that provides real-time visual feedback in response to vocal pitch, loudness, duration, and syllables. This re-interpretation of voice affords both a new understanding of one's own vocalization, through combined audio/visual feedback, and a tangible comparison to models presented by a clinician. In other words, a child with ASD will be able to see and hear word models produced by a clinician, and then use real-time visualization of his or her own voice to assist in learning to appropriately produce multi-syllabic targets. Our interfaces reinforce accurate behavior: when a child produces an accurate response, the software presents an audio and graphical reward.
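As a rough illustration of the reward contingency described above, the sketch below compares a child's production to a clinician's model on syllable count and loudness; the tolerance value and the matching rule are hypothetical assumptions, not the study's criteria.

```python
# Hypothetical reward decision: syllable counts must agree and loudness
# must fall within a tolerance of the clinician's model. Illustrative only.
def matches_model(child_syllables: int, model_syllables: int,
                  child_db: float, model_db: float,
                  db_tolerance: float = 6.0) -> bool:
    """Treat an attempt as accurate if syllable counts match and
    loudness is within db_tolerance of the model utterance."""
    return (child_syllables == model_syllables
            and abs(child_db - model_db) <= db_tolerance)

# Example: a three-syllable attempt close in loudness to the model
if matches_model(child_syllables=3, model_syllables=3,
                 child_db=-18.0, model_db=-20.0):
    print("Reward!")  # stands in for the audio and graphical reward
```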
Methods:
We will create software tools to facilitate speech acquisition by following the Task Centered User Interface Design (TCUID) process (Lewis & Rieman, 1994). This process involves working with children with ASD throughout the development phase, emphasizing building what the intended users demonstrate they need. In effect, the users become part of the development team. This approach will help us design and develop software that accounts for the needs and strengths of three groups (children with speech delays, children with ASD, and typically developing children), and determine which aspects of the software are most universally understood.
Results:
This project is currently in progress. This poster will focus on the fully functional software and its design. The experiment is slated to begin in January.
Conclusions:
The aim of this project is to develop, implement, and measure the effectiveness of software tools for facilitating multi-syllabic speech production in children with speech-language disabilities, specifically low- and moderate-functioning children with autism. In addition to the novel software and empirical data we plan to generate, this cross-disciplinary project will provide an innovative and important approach to developing treatments for children with speech and language disabilities.