Selective phonological impairment: a case of apraxia of speech

Phonology ◽  
1998 ◽  
Vol 15 (2) ◽  
pp. 143-188 ◽  
Author(s):  
Grzegorz Dogil ◽  
Jörg Mayer

The present study proposes a new interpretation of the underlying distortion in APRAXIA OF SPEECH. Apraxia of speech, in its pure form, is the only neurolinguistic syndrome for which it can be argued that phonological structure is selectively distorted.

Apraxia of speech is a nosological entity in its own right which co-occurs with aphasia only occasionally. This…conviction rests on detailed descriptions of patients who have a severe and lasting disorder of speech production in the absence of any significant impairment of speech comprehension, reading or writing as well as of any significant paralysis or weakness of the speech musculature. (Lebrun 1990: 380)

Based on the experimental investigation of poorly coarticulated speech of patients from two divergent languages (German and Xhosa), it is argued that apraxia of speech has to be seen as a defective implementation of phonological representations at the phonology–phonetics interface. We contend that phonological structure exhibits neither a homogeneously auditory pattern nor a motor pattern, but a complex encoding of sequences of speech sounds. Specifically, it is maintained that speech is encoded in the brain as a sequence of distinctive feature configurations. These configurations are specified with differing degrees of detail depending on the role the speech segments they underlie play in the phonological structure of a language. The transfer between phonological and phonetic representation encodes speech sounds as a sequence of vocal tract configurations. Like the distinctive feature representation, these configurations may be more or less specified. We argue that the severe and lasting disorders in speech production observed in apraxia of speech are caused by the distortion of this transfer between phonological and phonetic representation. The characteristic production deficits of apraxic patients are explained in terms of overspecification of phonetic representations.
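As a rough illustration of the under- versus overspecification contrast described above, the sketch below uses hypothetical feature names and values (it is not the authors' formal model): an underspecified phonological segment leaves some features open to coarticulatory fill-in, while an overspecified phonetic implementation, of the kind proposed for apraxia of speech, fixes every parameter and blocks that contextual adaptation.

```python
# Illustrative sketch only, with hypothetical feature names; not the authors' model.

# Phonological representations: distinctive features; None = unspecified.
phon_t = {"coronal": True, "voice": False, "round": None}   # /t/: rounding left open
phon_u = {"coronal": None, "voice": True, "round": True}    # /u/: rounding specified

def normal_transfer(segment, context):
    """Fill unspecified features from a neighbouring segment (coarticulation)."""
    return {f: (v if v is not None else context.get(f)) for f, v in segment.items()}

def overspecified_transfer(segment, default=False):
    """Apraxic-style transfer: every parameter gets a fixed value, context is ignored."""
    return {f: (v if v is not None else default) for f, v in segment.items()}

# /t/ before /u/: the normal transfer lets /t/ inherit rounding from /u/,
# while the overspecified transfer freezes /t/ as unrounded regardless of context.
print(normal_transfer(phon_t, phon_u))   # {'coronal': True, 'voice': False, 'round': True}
print(overspecified_transfer(phon_t))    # {'coronal': True, 'voice': False, 'round': False}
```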

Author(s):  
Linda Polka ◽  
Matthew Masapollo ◽  
Lucie Ménard

Purpose: Current models of speech development argue for an early link between speech production and perception in infants. Recent data show that young infants (at 4–6 months) preferentially attend to speech sounds (vowels) with infant vocal properties compared to those with adult vocal properties, suggesting the presence of special “memory banks” for one's own nascent speech-like productions. This study investigated whether the vocal resonances (formants) of the infant vocal tract are sufficient to elicit this preference and whether this perceptual bias changes with age and emerging vocal production skills. Method: We selectively manipulated the fundamental frequency (f0) of vowels synthesized with formants specifying either an infant or adult vocal tract, and then tested the effects of those manipulations on the listening preferences of infants who were slightly older than those previously tested (at 6–8 months). Results: Unlike findings with younger infants (at 4–6 months), slightly older infants in Experiment 1 displayed a robust preference for vowels with infant formants over adult formants when f0 was matched. The strength of this preference was also positively correlated with age among infants between 4 and 8 months. In Experiment 2, this preference favoring infant over adult formants was maintained when f0 values were modulated. Conclusions: Infants between 6 and 8 months of age displayed a robust and distinct preference for speech with resonances specifying a vocal tract that is similar in size and length to their own. This finding, together with data indicating that this preference is not present in younger infants and appears to increase with age, suggests that nascent knowledge of the motor schema of the vocal tract may play a role in shaping this perceptual bias, lending support to current models of speech development. Supplemental Material: https://doi.org/10.23641/asha.17131805
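To make the stimulus manipulation concrete, here is a minimal source-filter sketch, not the study's synthesizer: an impulse train at a chosen f0 is passed through second-order resonators at fixed formant frequencies, so f0 can be varied independently of the vocal-tract resonances. The formant and bandwidth values are illustrative placeholders, not the study's stimuli.

```python
# Minimal source-filter sketch; formant/bandwidth values are placeholders.
import numpy as np
from scipy.signal import lfilter

def synthesize_vowel(f0, formants, bandwidths, fs=16000, dur=0.5):
    n = int(fs * dur)
    # Glottal source: impulse train at the fundamental frequency f0.
    source = np.zeros(n)
    source[::int(fs / f0)] = 1.0
    out = source
    # Vocal-tract filter: cascade of two-pole resonators, one per formant.
    for F, B in zip(formants, bandwidths):
        r = np.exp(-np.pi * B / fs)
        theta = 2 * np.pi * F / fs
        a = [1.0, -2 * r * np.cos(theta), r ** 2]
        out = lfilter([1.0 - r], a, out)
    return out / np.max(np.abs(out))

# Same formants, two different f0 values: the vocal-tract resonances stay
# constant while pitch changes.
low_pitch  = synthesize_vowel(120, [730, 1090, 2440], [60, 80, 100])
high_pitch = synthesize_vowel(400, [730, 1090, 2440], [60, 80, 100])
```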


2012 ◽  
Vol 107 (1) ◽  
pp. 442-447 ◽  
Author(s):  
Takayuki Ito ◽  
David J. Ostry

Interactions between auditory and somatosensory information are relevant to the neural processing of speech, since speech processing, and certainly speech production, involves both auditory information and inputs that arise from the muscles and tissues of the vocal tract. We previously demonstrated that somatosensory inputs associated with facial skin deformation alter the perceptual processing of speech sounds. We show here that the reverse is also true: speech sounds alter the perception of facial somatosensory inputs. As a somatosensory task, we used a robotic device to create patterns of facial skin deformation that would normally accompany speech production. We found that the perception of the facial skin deformation was altered by speech sounds in a manner that reflects the way in which auditory and somatosensory effects are linked in speech production. The modulation of orofacial somatosensory processing by auditory inputs was specific to speech and likewise to facial skin deformation. Somatosensory judgments were not affected when the skin deformation was delivered to the forearm or palm or when the facial skin deformation accompanied nonspeech sounds. The perceptual modulation that we observed in conjunction with speech sounds shows that speech sounds specifically affect neural processing in the facial somatosensory system and suggests the involvement of the somatosensory system in both the production and perceptual processing of speech.


1966 ◽  
Vol 2 (2) ◽  
pp. 135-158 ◽  
Author(s):  
Robert L. Cheng

This paper attempts to present the Mandarin phonological system after the generative fashion. We find it convenient to treat this part of the grammar in two components: namely, a syllable grammar and morphophonemics. The former attempts to designate the structure of the basic syllables independently of the syntactic component of the grammar. It consists of a set of P-rules to generate strings of phonemes for the basic syllables. The latter operates on sequences of these syllables with intrasyllabic information designated by the syllable grammar, and with category and intersyllabic information which can be given by the syntactic component. It consists of T-rules and gives a phonetic representation of sentences as its output.
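A toy illustration of this two-component organisation, with simplified placeholder rules rather than Cheng's actual rule set: a "syllable grammar" generates basic syllables from a template, and a morphophonemic T-rule then operates on syllable sequences using intersyllabic information (here, Mandarin third-tone sandhi).

```python
# Toy illustration of the two-component organisation; not Cheng's rule set.
import itertools

# Syllable grammar: P-rule-like expansions of a basic syllable template.
INITIALS = ["m", "n", "l"]
FINALS   = ["a", "i", "ao"]
TONES    = ["1", "2", "3", "4"]

def basic_syllables():
    """Generate the strings admitted by the (toy) syllable grammar."""
    return [i + f + t for i, f, t in itertools.product(INITIALS, FINALS, TONES)]

# Morphophonemics: a T-rule mapping syllable sequences to phonetic output.
def third_tone_sandhi(syllables):
    """Tone 3 becomes tone 2 when the following syllable also carries tone 3."""
    out = list(syllables)
    for k in range(len(out) - 1):
        if out[k].endswith("3") and out[k + 1].endswith("3"):
            out[k] = out[k][:-1] + "2"
    return out

print(third_tone_sandhi(["ni3", "lao3"]))   # ['ni2', 'lao3']
```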


1976 ◽  
Vol 41 (1) ◽  
pp. 23-39 ◽  
Author(s):  
Frank Parker

Distinctive feature is not a unique concept within linguistic theory. It has two distinct theoretical bases: phonemic theory and generative theory. Phonemic theory assumes a direct correspondence between distinctive features (the elements of phonemes) and the speech signal. Although this assumption can be shown to be incorrect, it seems to be the one most widely held in speech science. Generative theory, on the other hand, assumes no such direct relation and consequently can account for certain linguistic phenomena that phonemic theory cannot. This theory then seems to be preferable to phonemic theory for a featural analysis of misarticulation. However, there is a problem. Chomsky and Halle’s system (generative theory) as it stands does not deal with the link between what it conceives to be the lowest level of linguistic structure (the phonetic matrix) and speech production. Therefore, Chomsky and Halle’s distinctive features cannot be applied fruitfully to all instances of misarticulation. The discrepancy that exists between phonological structure and the speech signal must be accounted for in a theory of speech production. This can be accomplished by recognizing a production matrix below the phonetic matrix, where segments are described in terms of production features. The crucial point is that no one-to-one relationship necessarily exists between distinctive features and production features.
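The contrast between the phonetic matrix and a lower-level production matrix can be pictured with a small sketch; the feature names and values below are made up for illustration and are not Parker's formalism. The point it encodes is that a single distinctive feature may rest on several production features, so the two levels need not be related one-to-one.

```python
# Illustrative sketch with hypothetical feature values; not Parker's formalism.

phonetic_matrix = {
    "s": {"continuant": "+", "coronal": "+", "strident": "+", "voice": "-"},
}

production_matrix = {
    "s": {"groove_width": "narrow", "constriction_site": "alveolar",
          "airflow": "high", "glottal_state": "abducted"},
}

# Which production features realize each distinctive feature (hypothetical).
depends_on = {
    "strident":   ["groove_width", "airflow"],
    "continuant": ["groove_width", "constriction_site"],
    "voice":      ["glottal_state"],
}

one_to_one = all(len(v) == 1 for v in depends_on.values())
print("distinctive-to-production mapping is one-to-one:", one_to_one)   # False
```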


2005 ◽  
Vol 40 ◽  
pp. 63-78
Author(s):  
Ian S. Howard ◽  
Mark A. Huckvale

The goal of our current project is to build a system that can learn to imitate a version of a spoken utterance using an articulatory speech synthesiser. The approach is informed and inspired by knowledge of early infant speech development. Thus we expect our system to reproduce and exploit the utility of infant behaviours such as listening, vocal play, babbling and word imitation. We expect our system to develop a relationship between the sound-making capabilities of its vocal tract and the phonetic/phonological structure of imitated utterances. At the heart of our approach is the learning of an inverse model that relates acoustic and motor representations of speech. The acoustic-to-auditory mapping uses an auditory filter bank and a self-organizing phase of learning. The inverse model from auditory to vocal tract control parameters is estimated using a babbling phase, in which the vocal tract is essentially driven in a random manner, much like the babbling phase of speech acquisition in infants. The complete system can be used to imitate simple utterances through a direct mapping from sound to control parameters. Our initial results show that this procedure works well for sounds generated by its own voice. Further work is needed to build a phonological control level and achieve better performance with real speech.
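A minimal sketch of the babbling-phase inverse-model idea, with stand-in components: a toy linear forward model replaces the articulatory synthesiser and auditory filter bank, and ordinary least squares plays the role of the learned inverse model mapping auditory features back to vocal tract control parameters. None of these components are the authors' system.

```python
# Minimal sketch with stand-in components; not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

W = np.array([[1.2, -0.4, 0.1],
              [0.3,  0.9, -0.2],
              [0.0,  0.5,  1.1]])

def toy_forward_model(motor_params):
    """Stand-in for synthesiser + auditory front end (linear here for brevity)."""
    return W @ motor_params

# Babbling phase: random motor commands paired with the sounds they produce.
motor = rng.uniform(-1.0, 1.0, size=(2000, 3))
audio = motor @ W.T

# Fit the inverse model by least squares: auditory features -> motor commands.
A = np.hstack([audio, np.ones((len(audio), 1))])        # add a bias column
inverse_model, *_ = np.linalg.lstsq(A, motor, rcond=None)

# Imitation: hear a sound, recover motor parameters that would reproduce it.
target_audio = toy_forward_model(np.array([0.2, -0.5, 0.7]))
recovered = np.append(target_audio, 1.0) @ inverse_model
print(np.round(recovered, 2))    # close to [ 0.2, -0.5,  0.7]
```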


1988 ◽  
Vol 53 (3) ◽  
pp. 232-238 ◽  
Author(s):  
Samuel G. Fletcher

Changes in the dimensions and patterns of articulation used by three speakers to compensate for different amounts of tongue tissue excised during partial glossectomy were investigated. Place of articulation was shifted to parts of the vocal tract congruent with the speakers' surgically altered lingual morphology. Certain metrical properties of the articulatory gestures, such as width of the sibilant groove, were maintained. Intelligibility data indicated that perceptually acceptable substitute sounds could be produced by such transposed gestures.


2003 ◽  
Vol 46 (3) ◽  
pp. 689-701 ◽  
Author(s):  
Steve An Xue ◽  
Grace Jianping Hao

This investigation used a derivation of acoustic reflection (AR) technology to make cross-sectional measurements of changes due to aging in the oral and pharyngeal lumina of male and female speakers. The purpose of the study was to establish preliminary normative data for such changes and to obtain acoustic measurements of changes due to aging in the formant frequencies of selected spoken vowels and their long-term average spectra (LTAS) analysis. Thirty-eight young men and women and 38 elderly men and women were involved in the study. The oral and pharyngeal lumina of the participants were measured with AR technology, and their formant frequencies were analyzed using the Kay Elemetrics Computerized Speech Lab. The findings have delineated specific and similar patterns of aging changes in human vocal tract configurations in speakers of both genders. Namely, the oral cavity length and volume of elderly speakers increased significantly compared to their young cohorts. The total vocal tract volume of elderly speakers also showed a significant increment, whereas the total vocal tract length of elderly speakers did not differ significantly from their young cohorts. Elderly speakers of both genders also showed similar patterns of acoustic changes of speech production, that is, consistent lowering of formant frequencies (especially F1) across selected vowel productions. Although new research models are still needed to succinctly account for the speech acoustic changes of the elderly, especially for their specific patterns of human vocal tract dimensional changes, this study has innovatively applied the noninvasive and cost-effective AR technology to monitor age-related human oral and pharyngeal lumina changes that have direct consequences for speech production.
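As an illustration of what an LTAS computation involves (the study itself used the Kay Elemetrics Computerized Speech Lab), the sketch below estimates a long-term average spectrum with scipy's Welch estimator; the file name is a placeholder and a mono recording is assumed.

```python
# Illustration of an LTAS computation; not the Kay Elemetrics CSL procedure.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

fs, x = wavfile.read("sustained_vowel.wav")       # placeholder recording (mono assumed)
x = x.astype(float) / np.max(np.abs(x))           # normalise amplitude

# LTAS: average the power spectrum over the whole utterance (Welch estimate),
# expressed in dB. Age-related lowering of formants would appear as spectral
# peaks shifted toward lower frequencies.
freqs, psd = welch(x, fs=fs, nperseg=1024)
ltas_db = 10 * np.log10(psd + 1e-12)

f1_band = (freqs > 200) & (freqs < 1000)          # rough F1 search region
print("Approximate F1 peak (Hz):", freqs[f1_band][np.argmax(ltas_db[f1_band])])
```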

