University of Illinois, Urbana-Champaign
The goal of this work is to develop and evaluate a set of voice visualization software tools that specifically target multi-syllabic word production in children with autism spectrum disorders (ASD). These tools "paint" an individual's voice on the screen, showing volume and pitch changes as well as when syllables occur. The use of technology has the potential to alleviate some of the apprehension many children with ASD experience when interacting with humans, and it can provide teachers and therapists with new techniques to complement or supplement their existing approaches. Our preliminary studies lead us to believe there is potential for a new direction of research that uses real-time visualizations to teach language skills. We aim to develop these tools using the Task-Centered User Interface Design method, in which members of the target population are involved with the software development from the beginning to the end of the design phase. This approach emphasizes building not simply what engineers think is needed, but what the intended users demonstrate they need; in effect, the users become part of the development team. Interactions with computer technology can also help motivate children with ASD. Finally, given the profile of strengths and weaknesses of children with ASD, introducing human-computer interaction may shed new light on a field that has been almost exclusively experienced through human-to-human interaction.
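The abstract does not specify how the visualizations are computed, but the two quantities it names (volume and pitch) are commonly extracted per audio frame. The following is a minimal illustrative sketch, not the authors' implementation: volume as frame RMS and pitch via an autocorrelation peak search, the kind of features a real-time voice "painting" could plot. All function and parameter names here are assumptions for illustration.

```python
import numpy as np

def frame_features(signal, sr, frame_len=1024, hop=512):
    """Compute per-frame volume (RMS) and pitch (autocorrelation estimate)
    for a mono signal -- the two quantities a voice 'painting' would plot.
    This is a hypothetical sketch, not the system described in the abstract."""
    volumes, pitches = [], []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        # Volume: root-mean-square amplitude of the frame.
        volumes.append(np.sqrt(np.mean(frame ** 2)))
        # Pitch: strongest autocorrelation lag within a plausible
        # vocal range (80-400 Hz), converted back to frequency.
        corr = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        lo, hi = int(sr / 400), int(sr / 80)
        lag = lo + np.argmax(corr[lo:hi])
        pitches.append(sr / lag)
    return np.array(volumes), np.array(pitches)

# Synthetic 220 Hz tone as a stand-in for a recorded voice sample.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)
vol, pitch = frame_features(tone, sr)
```

A real-time tool would run this per incoming audio buffer and map volume to brush size and pitch to vertical position; syllable onsets could then be flagged as sharp rises in the volume track.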