ASHA 2007 Zemlin Memorial Award Lecture: The Neural Control of Speech
Abstract

Speech production involves coordinated processing in many regions of the brain. To better understand these processes, our research team has designed, tested, and refined a neural network model whose components correspond to brain regions involved in speech. Babbling and imitation phases are used to train neural mappings between phonological, articulatory, auditory, and somatosensory representations. After learning, the model can produce combinations of the sounds it has learned by commanding movements of an articulatory synthesizer. Computer simulations of the model account for a wide range of experimental findings, including data on acquisition of speaking skills, articulatory kinematics, and brain activity during speech. The model is also being used to investigate speech motor disorders, such as stuttering, apraxia of speech, and ataxic dysarthria. These projects compare the effects of damage to particular regions of the model to the kinematics, acoustics, or brain activation patterns of speakers with similar damage. Finally, insights from the model are being used to guide the design of a brain-computer interface for providing prosthetic speech to profoundly paralyzed individuals.