Consonant Errors and Remediation in Sensorineural Hearing Loss

1978 ◽  
Vol 43 (3) ◽  
pp. 331-347 ◽  
Author(s):  
Elmer Owens

An analysis of consonant errors for hearing-impaired subjects in a multiple-choice format revealed that about 14 consonants caused most of the difficulty in consonant recognition. For a given consonant, error probability was typically lower in the initial position of the stimulus word than in the final position. When errors were made, the substitutions were limited typically to two or three other consonants, with a greater variety occurring for consonants in the final position. Substitutions tended to be the same over a wide range of pure-tone configurations. Place errors were predominant, but manner errors also occurred. In only a few instances did specific relationships occur between particular stimulus consonants and pure-tone configurations. With knowledge of the error consonants and typical substitutions, auditory recognition of consonants can be improved by programmed instruction methods. Shaping can be accomplished by a manipulation of the response foils (choices). Since it has been shown that visual recognition of consonants can also be improved, advantage can be taken of both the visual and auditory modalities in remedial procedures. Frequency of usage in the language should be considered in the ordering of consonants for retraining purposes. Work in consonant recognition should be beneficial to the hearing-impaired patient as part of a total rehabilitation program.

1974 ◽  
Vol 17 (2) ◽  
pp. 270-278 ◽  
Author(s):  
Brian E. Walden ◽  
Robert A. Prosek ◽  
Don W. Worthington

The redundancy between the auditory and visual recognition of consonants was studied in 100 hearing-impaired subjects who demonstrated a wide range of speech-discrimination abilities. Twenty English consonants, recorded in CV combination with the vowel /a/, were presented to the subjects for auditory, visual, and audiovisual identification. There was relatively little variation among subjects in the visual recognition of consonants. A measure of the expected degree of redundancy between an observer’s auditory and visual confusions among consonants was used in an effort to predict audiovisual consonant recognition ability. This redundancy measure was based on an information analysis of an observer’s auditory confusions among consonants and expressed the degree to which his auditory confusions fell within categories of visually homophenous consonants. The measure was found to have moderate predictive value in estimating an observer’s audiovisual consonant recognition score. These results suggest that the degree of redundancy between an observer’s auditory and visual confusions of speech elements is a determinant in the benefit that visual cues offer to that observer.
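To make the redundancy idea concrete, the sketch below computes a simplified within-viseme measure from an auditory confusion matrix: the proportion of auditory confusions that fall inside the stimulus consonant's viseme (visually homophenous) class. The original study used an information analysis; the proportion-based measure, the consonant set, and the viseme grouping here are illustrative assumptions, not the 1974 procedure.

```python
# A simplified sketch, not the 1974 information analysis: it computes the
# proportion of auditory confusion responses that fall within the stimulus
# consonant's viseme class. Consonant labels and viseme groupings below are
# illustrative assumptions.

def within_viseme_proportion(confusions, visemes):
    """confusions: dict mapping (stimulus, response) -> auditory error count.
    visemes: dict mapping consonant -> viseme class label."""
    within = total = 0
    for (stim, resp), count in confusions.items():
        if stim == resp:
            continue  # correct responses are not confusions
        total += count
        if visemes[stim] == visemes[resp]:
            within += count
    return within / total if total else 0.0

# Illustrative data: /p b m/ share one viseme, /f v/ another.
visemes = {"p": "bilabial", "b": "bilabial", "m": "bilabial",
           "f": "labiodental", "v": "labiodental"}
confusions = {("p", "b"): 12, ("p", "f"): 3, ("b", "m"): 7, ("f", "v"): 9}
print(within_viseme_proportion(confusions, visemes))  # about 0.90
```

A high value means most auditory errors stay within groups that look alike on the face, so visual cues would add little new information for that listener.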


1981 ◽  
Vol 24 (2) ◽  
pp. 207-216 ◽  
Author(s):  
Brian E. Walden ◽  
Sue A. Erdman ◽  
Allen A. Montgomery ◽  
Daniel M. Schwartz ◽  
Robert A. Prosek

The purpose of this research was to determine some of the effects of consonant recognition training on the speech recognition performance of hearing-impaired adults. Two groups of ten subjects each received seven hours of either auditory or visual consonant recognition training, in addition to a standard two-week, group-oriented, inpatient aural rehabilitation program. A third group of fifteen subjects received the standard two-week program, but no supplementary individual consonant recognition training. An audiovisual sentence recognition test, as well as tests of auditory and visual consonant recognition, were administered both before and following training. Subjects in all three groups significantly increased in their audiovisual sentence recognition performance, but subjects receiving the individual consonant recognition training improved significantly more than subjects receiving only the standard two-week program. A significant increase in consonant recognition performance was observed in the two groups receiving the auditory or visual consonant recognition training. The data are discussed from varying statistical and clinical perspectives.


Author(s):  
Stephan Schmid

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Linguistics. From a typological perspective, the phoneme inventories of Romance languages are of medium size: For instance, most consonant systems contain between 20 and 23 phonemes. An innovation with respect to Latin is the appearance of palatal and palato-alveolar consonants such as /ɲ ʎ/ (Italian, Spanish, Portuguese), /ʃ ʒ/ (French, Portuguese), and /tʃ dʒ/ (Italian, Romanian); a few varieties (e.g., Romansh and a number of Italian dialects) also show the palatal stops /c ɟ/. Besides palatalization, a number of lenition processes (both sonorization and spirantization) have characterized the diachronic development of plosives in Western Romance languages (cf. the French word chèvre “goat” < lat. CĀPRA(M)). Diachronically, both sonorization and spirantization occurred in postvocalic position, where the latter can still be observed as an allophonic rule in present-day Spanish and Sardinian. Sonorization, on the other hand, occurs synchronically after nasals in many southern Italian dialects. The most fundamental change in the diachrony of the Romance vowel systems derives from the demise of contrastive Latin vowel quantity. However, some Raeto-Romance and northern Italo-Romance varieties have developed new quantity contrasts. Moreover, standard Italian displays allophonic vowel lengthening in open stressed syllables (e.g., /ˈka.ne/ “dog” → [ˈkaːne]). The stressed vowel systems of most Romance varieties contain either five phonemes (Spanish, Sardinian, Sicilian) or seven phonemes (Portuguese, Catalan, Italian, Romanian). Larger vowel inventories are typical of “northern Romance” and appear in dialects of Northern Italy as well as in Raeto- and Gallo-Romance languages. The most complex vowel system is found in standard French with its 16 vowel qualities, comprising the 3 rounded front vowels /y ø œ/ and the 4 nasal vowel phonemes /ɑ̃ ɔ̃ ɛ̃ œ̃/. Romance languages differ in their treatment of unstressed vowels. Whereas Spanish displays the same five vowels /i e a o u/ in both stressed and unstressed syllables (except for unstressed /u/ in word-final position), many southern Italian dialects have a considerably smaller inventory of unstressed vowels as opposed to their stressed vowels. The phonotactics of most Romance languages is strongly determined by their typological character as “syllable languages.” Indeed, the phonological word only plays a minor role, as very few phonological rules or phonotactic constraints refer, for example, to the word-initial position (such as Italian consonant doubling or the distribution of rhotics in Ibero-Romance) or to the word-final position (such as obstruent devoicing in Raeto-Romance). Instead, a wide range of assimilation and lenition processes apply across word boundaries in French, Italian, and Spanish. In line with their fundamental typological nature, Romance languages tend to allow syllable structures of only moderate complexity. Inventories of syllable types are smaller than, for example, those of Germanic languages, and the segmental makeup of syllable constituents mostly follows universal preferences of sonority sequencing. Moreover, many Romance languages display a strong preference for open syllables, as reflected in the token frequency of syllable types. Nevertheless, antagonistic forces aiming at profiling the prominence of stressed syllables are visible in several Romance languages as well. Within the Ibero-Romance domain, more complex syllable structures and vowel reduction processes are found in the periphery, that is, in Catalan and Portuguese. Similarly, northern Italian and Raeto-Romance dialects have experienced apocope and/or syncope of unstressed vowels, yielding marked syllable structures in terms of both constituent complexity and sonority sequencing.


1977 ◽  
Vol 20 (1) ◽  
pp. 130-145 ◽  
Author(s):  
Brian E. Walden ◽  
Robert A. Prosek ◽  
Allen A. Montgomery ◽  
Charlene K. Scherr ◽  
Carla J. Jones

Visual recognition of consonants was studied in 31 hearing-impaired adults before and after 14 hours of concentrated, individualized, speechreading training. Confusions were analyzed via a hierarchical clustering technique to derive categories of visual contrast among the consonants. Pretraining and posttraining results were compared to reveal the effects of the training program. Training caused an increase in the number of visemes consistently recognized and an increase in the percentage of within-viseme responses. Analysis of the responses made revealed that most changes in consonant recognition occurred during the first few hours of training.
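As an illustration of the clustering step, the hedged sketch below derives viseme-like groups from a visual confusion matrix with agglomerative hierarchical clustering (SciPy, average linkage). The confusion values, the consonant set, and the cut into three clusters are invented for the example and do not reproduce the 1977 analysis.

```python
# A hedged sketch of deriving viseme-like clusters from a visual confusion
# matrix via hierarchical clustering. The matrix, the consonant set, and the
# choice of three clusters are invented for illustration.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

consonants = ["p", "b", "m", "f", "v", "t"]
# Row-normalized confusion proportions (stimulus x response), made up here.
C = np.array([
    [0.50, 0.25, 0.20, 0.02, 0.02, 0.01],
    [0.25, 0.50, 0.20, 0.02, 0.02, 0.01],
    [0.20, 0.20, 0.55, 0.02, 0.02, 0.01],
    [0.02, 0.02, 0.02, 0.55, 0.35, 0.04],
    [0.02, 0.02, 0.02, 0.35, 0.55, 0.04],
    [0.01, 0.01, 0.01, 0.04, 0.04, 0.89],
])

S = (C + C.T) / 2.0          # symmetrize into a similarity matrix
D = 1.0 - S                  # convert similarity to a crude distance
np.fill_diagonal(D, 0.0)     # zero self-distance, as squareform requires
Z = linkage(squareform(D), method="average")
labels = fcluster(Z, t=3, criterion="maxclust")
print(dict(zip(consonants, labels)))  # e.g. {p, b, m}, {f, v}, {t} group together
```

Consonants that are frequently confused with one another end up in the same cluster, which is the sense in which a "viseme" category is derived from the response data.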


Author(s):  
Amin Ebrahimi ◽  
Mohammad Ebrahim Mahdavi ◽  
Hamid Jalilvand

Background and Aim: Digits are suitable speech materials for evaluating recognition of speech in noise in clients with a wide range of language abilities. The Farsi Auditory Recognition of Digit-in-Noise (FARDIN) test has been developed and validated in learning-disabled children showing a dichotic listening deficit. This study was conducted for further validation of FARDIN and to examine the effect of noise type on recognition performance in individuals with sensorineural hearing impairment. Methods: Persian monosyllabic digits 1−10 were extracted from the audio file of the FARDIN test. Ten lists were compiled using a random order of digit triplets. The first five lists were mixed with multi-talker babble noise (MTBN) and the second five with speech-spectrum noise (SSN). The signal-to-noise ratio (SNR) varied from +5 to −15 dB in 5 dB steps. Twenty normal-hearing and 19 hearing-impaired individuals participated in the study. Results: Both types of noise differentiated hearing-impaired from normal-hearing listeners. The hearing-impaired group showed weaker digit-recognition performance in MTBN and SSN and needed a 4−5.6 dB higher SNR (50%) than the normal-hearing group. MTBN was more challenging than SSN for normal-hearing listeners. Conclusion: The Farsi Auditory Recognition of Digit-in-Noise test is a validated test for estimating SNR (50%) in clients with hearing loss. SSN appears more appropriate as a background noise for testing auditory recognition of digits in noise. Keywords: Auditory recognition; hearing loss; speech perception in noise; digit recognition in noise
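The core signal-preparation step, mixing digit triplets with noise at SNRs from +5 to −15 dB in 5 dB steps, can be sketched as follows. This is a minimal illustration under assumed NumPy conventions, not the FARDIN production code.

```python
# A minimal sketch, under assumed NumPy conventions, of mixing speech with
# noise at a target SNR -- the kind of step used to prepare the digit-triplet
# lists at +5 to -15 dB SNR. This is not the FARDIN production code.

import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`."""
    noise = noise[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_noise_power = p_speech / (10.0 ** (snr_db / 10.0))
    return speech + noise * np.sqrt(target_noise_power / p_noise)

# Placeholder signals standing in for a digit triplet and MTBN or SSN noise.
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)
noise = rng.standard_normal(16000)
mixtures = {snr: mix_at_snr(speech, noise, snr) for snr in range(5, -20, -5)}
print(sorted(mixtures))  # [-15, -10, -5, 0, 5]
```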


1975 ◽  
Vol 18 (2) ◽  
pp. 272-280 ◽  
Author(s):  
Brian E. Walden ◽  
Robert A. Prosek ◽  
Don W. Worthington

Auditory and audiovisual consonant recognition were studied in 98 hearing-impaired adults, who demonstrated a wide range of consonant-recognition abilities. Information transfer analysis was used to describe the performance of the subjects on the auditory and audiovisual tasks in terms of a set of articulatory features. Visual cues substantially enhanced the transmission of duration, place-of-articulation, frication, and nasality features, but had considerably less effect on transmission of the liquid-glide and voicing features. The improvement in transmission resulting from visual cues was relatively constant across a wide range of auditory performance levels.
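A hedged sketch of the feature information transfer computation is given below: the consonant confusion matrix is collapsed onto one articulatory feature, and transmitted information is expressed relative to stimulus entropy, in the style commonly used for such feature analyses. The feature split and the counts are made up for illustration and are not the study's data.

```python
# A hedged sketch of relative information transfer for one articulatory
# feature, computed from a confusion matrix collapsed onto that feature
# (here, voicing). The counts and the feature split are illustrative.

import numpy as np

def relative_info_transfer(counts):
    """counts: 2-D array of stimulus-by-response counts for one feature."""
    p = counts / counts.sum()
    px = p.sum(axis=1, keepdims=True)   # stimulus marginal
    py = p.sum(axis=0, keepdims=True)   # response marginal
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log2(p / (px * py)), 0.0)
    mutual_info = terms.sum()
    stimulus_entropy = -(px * np.log2(px)).sum()
    return mutual_info / stimulus_entropy

# Consonant confusions collapsed onto voiceless vs. voiced (made-up counts).
voicing_counts = np.array([[80.0, 20.0],
                           [15.0, 85.0]])
print(round(relative_info_transfer(voicing_counts), 2))  # roughly 0.33
```

Comparing this ratio for auditory-only versus audiovisual confusion matrices, feature by feature, is how one quantifies which features benefit most from visual cues.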


2021 ◽  
pp. 101-107
Author(s):  
Mohammad Alshehri

Precise localization and tracking have become essential for smartphone-assisted navigation to maximize accuracy in real-time environments. Fingerprint-based localization is the most commonly used approach for achieving effective outcomes. With this motivation, this study focuses on designing an efficient smartphone-assisted indoor localization and tracking model using the glowworm swarm optimization algorithm (ILT-GSO). The ILT-GSO algorithm builds on GSO, which models the light-emissive behavior of glowworms, to determine location. In addition, a Kalman filter is applied to refine the estimates and update the initial positions of the glowworms. A wide range of experiments was carried out, and the results were examined in terms of distinct evaluation metrics. The simulation outcomes demonstrated considerable enhancement in the real-time environment and reduced computational complexity. The ILT-GSO algorithm achieved increased localization performance with minimal error compared with recent techniques.
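For orientation, the following is a simplified, hypothetical sketch of one GSO search loop of the kind the abstract describes: luciferin decays and is recharged by the fitness at each glowworm's position, and each glowworm steps toward a brighter neighbor within its sensing range. The objective function, the parameter values, and the omission of the Kalman filter and fingerprint data are assumptions made for illustration only.

```python
# A simplified, hypothetical sketch of one GSO search loop: luciferin decays
# and is recharged by the fitness at each glowworm's position, and every
# glowworm steps toward a brighter neighbor within its sensing range. The
# objective, the parameters, and the 2-D search area are assumptions; the
# Kalman-filter stage of ILT-GSO is omitted.

import numpy as np

rng = np.random.default_rng(1)
TARGET = np.array([3.0, 4.0])  # assumed "true" location for the toy objective

def fitness(pos):
    return -np.linalg.norm(pos - TARGET)  # higher is better (closer to target)

def gso_step(positions, luciferin, rho=0.4, gamma=0.6, step=0.05, radius=2.0):
    luciferin = (1 - rho) * luciferin + gamma * np.array([fitness(p) for p in positions])
    new_positions = positions.copy()
    for i, p in enumerate(positions):
        dists = np.linalg.norm(positions - p, axis=1)
        brighter = (dists > 1e-12) & (dists < radius) & (luciferin > luciferin[i])
        if brighter.any():
            j = rng.choice(np.flatnonzero(brighter))
            direction = positions[j] - p
            new_positions[i] = p + step * direction / np.linalg.norm(direction)
    return new_positions, luciferin

positions = rng.uniform(0.0, 6.0, size=(20, 2))  # 20 glowworms in a 2-D area
luciferin = np.full(20, 5.0)
for _ in range(200):
    positions, luciferin = gso_step(positions, luciferin)
print(positions.mean(axis=0))  # the swarm drifts toward the target region
```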


Phonology ◽  
2018 ◽  
Vol 35 (1) ◽  
pp. 79-114 ◽  
Author(s):  
Alessandro Vietti ◽  
Birgit Alber ◽  
Barbara Vogt

In the Southern Bavarian variety of Tyrolean, laryngeal contrasts undergo a typologically interesting process of neutralisation in word-initial position. We undertake an acoustic analysis of Tyrolean stops in word-initial, word-medial intersonorant and word-final contexts, as well as in obstruent clusters, investigating the role of the acoustic parameters VOT, prevoicing, closure duration and F0 and H1–H2* on following vowels in implementing contrast, if any. Results show that stops contrast word-medially via [voice] (supported by the acoustic cues of closure duration and F0), and are neutralised completely in word-final position and in obstruent clusters. Word-initially, neutralisation is subject to inter- and intraspeaker variability, and is sensitive to place of articulation. Aspiration plays no role in implementing laryngeal contrasts in Tyrolean.


2009 ◽  
Vol 126 (5) ◽  
pp. 2683-2694 ◽  
Author(s):  
Sandeep A. Phatak ◽  
Yang-soo Yoon ◽  
David M. Gooler ◽  
Jont B. Allen

1974 ◽  
Vol 17 (2) ◽  
pp. 194-202 ◽  
Author(s):  
Norman P. Erber

A recorded list of 25 spondaic words was administered monaurally through earphones to 72 hearing-impaired children to evaluate their comprehension of “easy” speech material. The subjects ranged in age from eight to 16 years, and their average pure-tone thresholds (500-1000-2000 Hz) ranged in level from 52 to 127 dB (ANSI, 1969). Most spondee-recognition scores were either high (70 to 100% correct) or low (0 to 30% correct). The degree of overlap in thresholds between the high-scoring and the low-scoring groups differed as a function of the method used to describe the audiogram. The pure-tone average of 500-1000-2000 Hz was a good, but not perfect, predictor of spondee-recognition ability. In general, children with average pure-tone thresholds better than about 85 dB HTL (ANSI, 1969) scored high, and those with thresholds poorer than about 100 dB scored low. Spondee-recognition scores, however, could not be predicted with accuracy for children whose audiograms fell between 85 and 100 dB HTL.
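The reported relationship between the three-frequency pure-tone average and spondee recognition can be restated as a simple rule of thumb. The sketch below encodes only the cutoffs given in the abstract; the function itself is illustrative, not a clinical tool.

```python
# A small sketch restating the reported rule of thumb: the cutoffs (85 and
# 100 dB HTL for the 500-1000-2000 Hz pure-tone average) come from the
# abstract; the function itself is only illustrative, not a clinical tool.

def predict_spondee_outcome(pta_db_htl):
    """pta_db_htl: average threshold at 500, 1000 and 2000 Hz (dB HTL, ANSI 1969)."""
    if pta_db_htl < 85:
        return "high score expected (roughly 70-100% correct)"
    if pta_db_htl > 100:
        return "low score expected (roughly 0-30% correct)"
    return "not predictable from the pure-tone average alone"

for pta in (60, 92, 110):
    print(pta, "->", predict_spondee_outcome(pta))
```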

