The effect of hearing impairment on the production of prominences: The case of French-speaking school-aged children using cochlear implants and children with normal hearing

2018, Vol 39 (2), pp. 200-219
Author(s): Bénédicte Grandon, Anne Vilain, Steven Gillis

This study explores the use of F0, intensity, and duration in the production of two types of prominence in French: the primary accent, whose main acoustic cue is duration, and the secondary accent, cued by F0 and intensity. These parameters were studied in 13 children using a cochlear implant (CI) and 17 children with normal hearing (NH), aged 5 to 10 years. Words were recorded in two tasks, word repetition and picture naming, to compare the repetition of an audio model with spontaneous production. NH children were able to produce both types of prominence, using duration on the one hand and the combination of F0 and intensity on the other, in line with what has been described for French-speaking adults. NH children showed a more stable use of prominences than CI children, who demonstrated more variability across tasks, more evenly timed duration patterns, and less modulation of F0 and intensity at the vowel and word levels than their NH peers.
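
For a concrete picture of the measurements involved, the sketch below extracts the three cues named above (duration, mean F0, mean intensity) for a single vowel using Praat through the parselmouth library. This is a minimal illustration: the file name and the vowel interval boundaries are placeholders (in practice they would come from forced alignment or manual annotation), not material from the study.

```python
# Minimal sketch of per-vowel cue extraction with Praat via parselmouth.
# File name and vowel boundaries below are hypothetical placeholders.
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("child_word.wav")   # hypothetical recording of one word
vowel_start, vowel_end = 0.12, 0.26         # hypothetical vowel boundaries (s)

pitch = snd.to_pitch()          # default pitch settings; a child-appropriate
intensity = snd.to_intensity()  # floor/ceiling could be set in practice

duration_ms = (vowel_end - vowel_start) * 1000.0
mean_f0_hz = call(pitch, "Get mean", vowel_start, vowel_end, "Hertz")
mean_db = call(intensity, "Get mean", vowel_start, vowel_end, "energy")

print(f"duration = {duration_ms:.0f} ms, "
      f"F0 = {mean_f0_hz:.0f} Hz, intensity = {mean_db:.1f} dB")
```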

2020, Vol 10 (5), pp. 110
Author(s): Kejuan Cheng, Xiaoxiang Chen

Many previous studies have examined the influence of external cues on speech perception, yet little is known about the role of intrinsic cues in the categorical perception of Mandarin vowels and tones by children with cochlear implants (CIs). This study investigated the effects of intrinsic acoustic cues on categorical perception in children with CIs, compared to normal-hearing (NH) children. A categorical perception paradigm was used to evaluate their identification and discrimination abilities when perceiving /i/-/u/, cued by static intrinsic formants, and Tone 1 (T1)-Tone 2 (T2), cued by dynamic intrinsic fundamental frequency (F0) contours. Results for the NH group showed that the /i/-/u/ vowel continuum was perceived less categorically than the T1-T2 continuum, with a significantly wider boundary width and poorer alignment between the discrimination peak and the boundary position. The CI group showed a different pattern, perceiving both /i/-/u/ and T1-T2 less categorically. These findings indicate that intrinsic acoustic cues shaped categorical perception in the normal-hearing children but not in the hearing-impaired children with cochlear implants. In conclusion, dynamic acoustic cues can facilitate categorical perception of speech in NH children, whereas this benefit is limited in CI users by difficulties in processing spectral F0 information.
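
To make the boundary measures concrete, the sketch below fits a logistic psychometric function to identification data and derives the boundary position and boundary width under common definitions (50% crossover for the position; the 25%-75% distance for the width). The continuum steps and response proportions are purely illustrative, and the paper's exact width definition may differ.

```python
# Sketch: boundary position and width from a logistic fit to identification data.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Proportion of one-category responses along the continuum."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

steps = np.arange(1, 8)                                         # 7-step continuum (assumed)
p_resp = np.array([0.02, 0.05, 0.10, 0.45, 0.85, 0.95, 0.98])   # illustrative proportions

(x0, k), _ = curve_fit(logistic, steps, p_resp, p0=[4.0, 1.0])

boundary_position = x0                    # 50% crossover point
boundary_width = 2.0 * np.log(3.0) / k    # distance between the 25% and 75% points

print(f"boundary at step {boundary_position:.2f}, width {boundary_width:.2f} steps")
```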


2004, Vol 15 (10), pp. 678-691
Author(s): Erin C. Schafer, Linda M. Thibodeau

Speech recognition was evaluated for ten adults with normal hearing and eight adults with Nucleus cochlear implants (CIs) at several signal-to-noise ratios (SNRs) and with three frequency-modulated (FM) system arrangements: desktop, body-worn, and miniature direct-connect. Participants were asked to repeat Hearing in Noise Test (HINT) sentences presented with speech noise in a classroom setting, and percent correct word repetition was determined. Performance was evaluated for both the normal-hearing and CI participants with the desktop sound-field system. In addition, speech recognition for the CI participants was evaluated using two FM systems electrically coupled to their speech processors. When comparing the desktop sound-field and no-FM conditions, only the listeners with normal hearing made significant improvements in speech recognition in noise. When comparing performance across the three FM conditions for the CI listeners, the two electrically coupled FM systems yielded significantly greater improvements in speech recognition in noise than the desktop sound-field system.
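
As a rough illustration of the outcome measure, the sketch below scores percent correct word repetition for a single sentence by matching response words against target words. The exact scoring rules used in the study (e.g., how morphological variants or substitutions are treated) are not specified here, so whole-word exact matching is an assumption.

```python
# Sketch: percent-correct word-repetition scoring for one sentence.
def percent_words_correct(target: str, response: str) -> float:
    """Score the proportion of target words repeated by the listener."""
    target_words = target.lower().split()
    response_words = response.lower().split()
    correct = 0
    for word in target_words:
        if word in response_words:
            correct += 1
            response_words.remove(word)   # each response word credits one target word
    return 100.0 * correct / len(target_words)

# Hypothetical example sentence and response
print(percent_words_correct("the boy fell from the window",
                            "the boy fell from a window"))   # -> 83.3
```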


Author(s): Leroy Holman Siahaan, Ali Hussin

Sociolinguistics is the study of the connection between language and society. People use different language styles when they interact with one another, and these variations make it possible for them to mix languages within an utterance; in sociolinguistics, mixing one language with another is called code-mixing. Code-mixing has become common in society, including among public figures such as Mr. Nadiem Makariem. This research therefore focuses on the code-mixing that appears in a video of Mr. Nadiem Makariem. The objective of this research is to identify the types and levels of code-mixing that appear in the video. The research used a descriptive qualitative method, with the researchers acting as the main instrument. Data were collected through documentation, and content analysis was used to classify the types of code-mixing as defined by Hoffman and the levels of code-mixing as proposed by Suwito. The results for the types and levels of code-mixing were then quantified using Walizer's formula. The results show 134 instances of code-mixing across types and levels. Among the types, intra-sentential code-mixing was the most frequent (88.8%) and a change of pronunciation was the least frequent (0%). Among the levels, word level was dominant (44.8%), while word repetition (3%) and idiom (1.5%) were the lowest.
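
The percentages above are relative frequencies; assuming Walizer's formula amounts to P = F / N x 100, the counting step can be sketched as below. The category labels follow Hoffman's types as named in the abstract, and the example annotations are hypothetical, not the study's data.

```python
# Sketch: tallying annotated code-mixing instances and computing percentages
# (P = F / N * 100), assuming that is what Walizer's formula reduces to.
from collections import Counter

type_annotations = ["intra-sentential"] * 8 + ["intra-lexical"] * 2   # hypothetical tallies
type_counts = Counter(type_annotations)
total = sum(type_counts.values())

for cm_type, freq in type_counts.most_common():
    print(f"{cm_type}: {100.0 * freq / total:.1f}%")
```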


2020, Vol 51 (3), pp. 544-560
Author(s): Kimberly A. Murphy, Emily A. Diehm

Purpose: Morphological interventions promote gains in morphological knowledge and in other oral and written language skills (e.g., phonological awareness, vocabulary, reading, and spelling), yet we have a limited understanding of critical intervention features. In this clinical focus article, we describe a relatively novel approach to teaching morphology that considers its role as the key organizing principle of English orthography. We also present a clinical example of such an intervention delivered during a summer camp at a university speech and hearing clinic.
Method: Graduate speech-language pathology students provided a 6-week morphology-focused orthographic intervention to children in first through fourth grade (n = 10) who demonstrated word-level reading and spelling difficulties. The intervention focused children's attention on morphological families, teaching how morphology is interrelated with phonology and etymology in English orthography.
Results: Comparing pre- and posttest scores, children demonstrated improvement in reading and/or spelling abilities, with the largest gains observed in spelling affixes within polymorphemic words. Children and their caregivers reacted positively to the intervention. Therefore, data from the camp offer preliminary support for teaching morphology within the context of written words, and the intervention appears to be a feasible approach for simultaneously increasing morphological knowledge, reading, and spelling.
Conclusion: Children with word-level reading and spelling difficulties may benefit from a morphology-focused orthographic intervention, such as the one described here. Research on the approach is warranted, and clinicians are encouraged to explore its possible effectiveness in their practice.
Supplemental Material: https://doi.org/10.23641/asha.12290687


2019, Vol 28 (4), pp. 986-992
Author(s): Lisa R. Park, Erika B. Gagnon, Erin Thompson, Kevin D. Brown

Purpose: The aims of this study were to (a) determine a metric for describing full-time use (FTU), (b) establish whether age at FTU in children with cochlear implants (CIs) predicts language at 3 years of age better than age at surgery, and (c) describe the extent of FTU and the length of time it took to establish FTU in this population.
Method: This retrospective analysis examined receptive and expressive language outcomes at 3 years of age for 40 children with CIs. Multiple linear regression analyses were run with age at surgery and age at FTU as predictor variables. FTU definitions included 8 hr of device use and 80% of the average waking hours of a typically developing child. Descriptive statistics were used to describe the establishment and degree of FTU.
Results: Although 8 hr of daily wear is typically considered FTU in the literature, the 80% hearing hours percentage metric accounts for more variability in outcomes. For both receptive and expressive language, age at FTU was found to be a better predictor of outcomes than age at surgery. It took an average of 17 months for children in this cohort to establish FTU, and only 52.5% reached this milestone by the time they were 3 years old.
Conclusions: Children with normal hearing can access spoken language whenever they are awake, and the amount of time young children are awake increases with age. A metric that incorporates the percentage of time that children with CIs have access to sound, as compared to their same-aged peers with normal hearing, accounts for more variability in outcomes than an arbitrary number of hours. Although early FTU is not possible without surgery occurring at a young age, device placement does not guarantee use and does not predict language outcomes as well as age at FTU.
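
The two FTU definitions compared above can be captured in a few lines. The sketch below contrasts a fixed 8 hr/day criterion with the hearing hours percentage (HHP), i.e., device-on hours as a percentage of a typically developing child's average waking hours, with FTU taken as HHP >= 80%. The waking-hours value in the example is a placeholder, not a norm taken from the article.

```python
# Sketch: fixed-hours vs. hearing-hours-percentage definitions of full-time use.
def hearing_hours_percentage(device_hours_per_day: float,
                             typical_waking_hours: float) -> float:
    return 100.0 * device_hours_per_day / typical_waking_hours

def full_time_use(device_hours_per_day: float,
                  typical_waking_hours: float) -> dict:
    hhp = hearing_hours_percentage(device_hours_per_day, typical_waking_hours)
    return {
        "ftu_8hr": device_hours_per_day >= 8.0,   # fixed-hours definition
        "ftu_hhp": hhp >= 80.0,                   # percentage-of-waking-hours definition
        "hhp": round(hhp, 1),
    }

# Hypothetical toddler awake ~11.5 hr/day, wearing the device 8.5 hr/day:
# meets the 8-hr criterion but reaches only ~73.9% HHP.
print(full_time_use(8.5, 11.5))
```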


2015, Vol 54 (06), pp. 500-504
Author(s): A. G. Maglione, A. Scorpecci, P. Malerba, P. Marsella, S. Giannantonio, ...

Summary
Objectives: The aim of the present study is to investigate variations of the electroencephalographic (EEG) alpha rhythm in order to measure the appreciation of a musical cartoon by bilateral and unilateral young cochlear implant users. The cartoon was modified to generate three experimental conditions: one with the original audio, one with distorted sound, and, finally, a mute version.
Methods: EEG data were recorded during the observation of the cartoons in the three experimental conditions. The frontal alpha EEG imbalance was calculated as a measure of motivation and pleasantness, to be compared across experimental populations and conditions.
Results: The EEG frontal imbalance of the alpha rhythm showed significant variations during the perception of the different cartoons. In particular, the pattern of activation of the normal-hearing children was very similar to that elicited in the bilaterally implanted patients. In contrast, results for the unilaterally implanted subjects did not show significant variations of the imbalance index across the three cartoons.
Conclusion: These results suggest that the unilaterally implanted patients could not appreciate the differences in the audio format as well as the bilaterally implanted and normal-hearing subjects. The frontal alpha EEG imbalance is a useful tool for detecting differences in the appreciation of audiovisual stimuli in cochlear implant patients.
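
A frontal alpha imbalance index of the kind described is commonly computed as the difference in log alpha-band power between a right and a left frontal electrode. The sketch below illustrates that computation; the channel pair (F4/F3), the 8-12 Hz band, and the Welch parameters are conventional assumptions for illustration, and the article's exact computation may differ.

```python
# Sketch: frontal alpha imbalance as ln(right alpha power) - ln(left alpha power).
import numpy as np
from scipy.signal import welch

def alpha_band_power(x, fs, band=(8.0, 12.0)):
    """Approximate alpha-band power from Welch's PSD."""
    f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
    mask = (f >= band[0]) & (f <= band[1])
    return np.sum(pxx[mask]) * (f[1] - f[0])

def frontal_alpha_imbalance(eeg_right, eeg_left, fs):
    """Higher values are conventionally read as greater approach
    motivation / pleasantness (assumed interpretation)."""
    return np.log(alpha_band_power(eeg_right, fs)) - np.log(alpha_band_power(eeg_left, fs))

# Hypothetical 10 s of 250 Hz EEG noise for two frontal channels (F4, F3)
rng = np.random.default_rng(0)
fs = 250
f4, f3 = rng.standard_normal(10 * fs), rng.standard_normal(10 * fs)
print(frontal_alpha_imbalance(f4, f3, fs))
```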


1976, Vol 19 (2), pp. 279-289
Author(s): Randall B. Monsen

Although it is well known that the speech produced by the deaf is generally of low intelligibility, the sources of this low intelligibility have generally been ascribed either to aberrant articulation of phonemes or to inappropriate prosody. This study was designed to determine to what extent a nonsegmental aspect of speech, formant transitions, may differ between the speech of the deaf and that of the normal hearing. The initial second-formant transitions of the vowels /i/ and /u/ after labial and alveolar consonants (/b, d, f/) were compared in the speech of six normal-hearing and six hearing-impaired adolescents. In the speech of the hearing-impaired subjects, the second-formant transitions may be reduced both in time and in frequency. At its onset, the second formant may be nearer to its eventual target frequency than in the speech of the normal-hearing subjects. Since formant transitions are important acoustic cues for the adjacent consonants, reduced F2 transitions may be an important factor in the low intelligibility of the speech of the deaf.
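
To show what measuring an F2 transition involves in practice, the sketch below estimates the frequency and temporal extent of the transition in a CV token with Praat via parselmouth. The file name and the two time points (vowel onset and steady state) are placeholders; the original study's exact measurement points are not reproduced here.

```python
# Sketch: extent of the second-formant (F2) transition in a CV token.
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("bi_token.wav")   # hypothetical /bi/ token
formants = snd.to_formant_burg()          # default Burg formant tracking

t_onset = 0.050    # hypothetical vowel onset (s)
t_target = 0.120   # hypothetical vowel steady state (s)

f2_onset = call(formants, "Get value at time", 2, t_onset, "Hertz", "Linear")
f2_target = call(formants, "Get value at time", 2, t_target, "Hertz", "Linear")

extent_hz = f2_target - f2_onset              # frequency extent of the transition
duration_ms = (t_target - t_onset) * 1000.0   # temporal extent of the transition
print(f"F2 transition: {extent_hz:.0f} Hz over {duration_ms:.0f} ms")
```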


Author(s): Luodi Yu, Jiajing Zeng, Suiping Wang, Yang Zhang

Purpose: This study aimed to examine whether abstract knowledge of word-level linguistic prosody is independent of or integrated with phonetic knowledge.
Method: Event-related potential (ERP) responses were measured from 18 adult listeners while they listened to native and nonnative word-level prosody in speech and in nonspeech. The prosodic phonology (speech) conditions included disyllabic pseudowords spoken in Chinese and in English, matched for syllabic structure, duration, and intensity. The prosodic acoustic (nonspeech) conditions were hummed versions of the speech stimuli, which eliminated the phonetic content while preserving the acoustic prosodic features.
Results: We observed a language-specific effect on the ERP: native stimuli elicited a larger late negative response (LNR) amplitude than nonnative stimuli in the prosodic phonology conditions. No such effect was observed in the phoneme-free prosodic acoustic control conditions.
Conclusions: The results support the integration view: word-level linguistic prosody likely relies on the phonetic content in which the acoustic cues are embedded. It remains to be examined whether the LNR may serve as a neural signature for language-specific processing of prosodic phonology, beyond auditory processing of the critical acoustic cues at the suprasyllabic level.
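
To illustrate how an LNR amplitude of this kind is typically quantified, the sketch below averages epoched single-channel EEG over trials and takes the mean amplitude in a late time window. The 300-700 ms window, the array shapes, and the random placeholder data are assumptions for illustration, not the study's parameters.

```python
# Sketch: mean ERP amplitude in a late window as a simple LNR measure.
import numpy as np

def mean_amplitude(epochs: np.ndarray, times: np.ndarray,
                   window=(0.300, 0.700)) -> float:
    """epochs: (n_trials, n_times) single-channel data in volts;
    times: (n_times,) in seconds, time-locked to word onset."""
    erp = epochs.mean(axis=0)                              # trial-averaged ERP
    mask = (times >= window[0]) & (times <= window[1])     # late window
    return float(erp[mask].mean())

# Hypothetical epochs for native vs. nonnative prosody (random placeholders)
rng = np.random.default_rng(1)
times = np.linspace(-0.2, 1.0, 601)
native = rng.normal(-2e-6, 1e-6, size=(60, times.size))
nonnative = rng.normal(-1e-6, 1e-6, size=(60, times.size))
print(mean_amplitude(native, times), mean_amplitude(nonnative, times))
```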

