Ages of Speech Sound Acquisition

1986 ◽  
Vol 17 (3) ◽  
pp. 175-186 ◽  
Author(s):  
Ann B. Smit

Published investigations of normative ages of speech sound acquisition vary in the kind of speech sample on which they are based and in the methods used. Review of norms based on elicited single-word responses shows that the methods used can influence the resulting ages of acquisition for specific speech sounds. Review of the investigations which sample children's spontaneous speech indicates that these data have characteristics that make them difficult to use as norms. Desirable characteristics of normative investigations of speech sound acquisition are proposed.

Author(s):  
Li-Li Yeh ◽  
Chia-Chi Liu

Purpose Speech-language pathologists (SLPs) face the challenge of quickly and accurately distinguishing children with speech sound disorders (SSD) from typically developing (TD) children. The goal of this study was to compare the clinical relevance of two speech sampling methods (single-word vs. connected speech samples) in terms of their sensitivity to atypical speech sound development, and to determine whether single-word samples are sufficiently representative of children's overall speech sound performance. Method We compared the speech sound performance of 37 preschool children with SSD (mean age = 4;11) and 37 age- and sex-matched TD children (mean age = 5;0) by eliciting speech in two ways: (a) a picture-naming task to elicit single words and (b) a story-retelling task to elicit connected speech. Four speech measures were compared across sample type (single words vs. connected speech) and across groups (SSD vs. TD): intelligibility, speech accuracy, phonemic inventory, and phonological patterns. Results Interaction effects between sample type and group were found on several speech sound performance measures. Single-word samples differentiated the SSD group from the TD group and were more sensitive than connected speech samples across various measures; their effect sizes were consistently larger for three measures: intelligibility, speech accuracy, and phonemic inventory. This gap in informativeness may be attributed to salience and avoidance effects, given that children tend to avoid producing unfamiliar phonemes in connected speech. The number of phonological patterns produced was the only measure that showed no gap between the two sampling types for either group. Conclusions On measures of intelligibility, speech accuracy, and phonemic inventory, a single-word sample proved more informative for differentiating children with SSD from TD children than a connected speech sample. This finding may guide SLPs in their choice of sampling type when under time pressure. We discuss how children's performance on connected speech samples may be biased by salience and avoidance effects and/or task design, and may therefore not necessarily be poorer than on single-word samples, particularly for intelligibility, speech accuracy, and the number of phonological patterns, if these task limitations are circumvented. Our findings show that the performance gap typically observed between the two sampling types largely depends on which performance measures are evaluated with the speech sample. This study is the first to address sampling-type differences between children with SSD and TD children, and it has significant clinical implications for SLPs seeking sampling types and measures that reliably identify SSD in preschool-aged children.
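
The abstract's key quantitative claim concerns effect sizes per sample type. As a purely illustrative sketch (synthetic data, and a simplified two-group comparison rather than the study's full interaction analysis), the following compares how strongly each sampling method separates SSD from TD children:

```python
# Illustrative sketch (not the authors' analysis): compare how well
# single-word vs. connected-speech accuracy separates SSD from TD groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical percent-consonants-correct scores for 37 children per group.
scores = {
    "single_word": {"TD": rng.normal(90, 5, 37), "SSD": rng.normal(70, 8, 37)},
    "connected":   {"TD": rng.normal(88, 6, 37), "SSD": rng.normal(78, 9, 37)},
}

def cohens_d(a, b):
    """Effect size using the pooled standard deviation."""
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled

for sample_type, groups in scores.items():
    t, p = stats.ttest_ind(groups["TD"], groups["SSD"])
    d = cohens_d(groups["TD"], groups["SSD"])
    print(f"{sample_type}: t = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")
```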


2013 ◽  
Vol 56 (2) ◽  
pp. 531-541 ◽  
Author(s):  
Frances M. Pomaville ◽  
Chris N. Kladopoulos

Purpose In this study, the authors examined the treatment efficacy of a behavioral speech therapy protocol for adult cochlear implant recipients. Method The authors used a multiple-baseline, across-behaviors and -participants design to examine the effectiveness of a therapy program based on behavioral principles and methods to improve the production of target speech sounds in 3 adults with cochlear implants. The authors included probe items in a baseline protocol to assess generalization of target speech sounds to untrained exemplars. Pretest and posttest scores from the Arizona Articulation Proficiency Scale, Third Revision (Arizona-3; Fudala, 2000) and measurements of speech errors during spontaneous speech were compared, providing additional measures of target behavior generalization. Results The results of this study provided preliminary evidence supporting the overall effectiveness and efficiency of a behavioral speech therapy program in increasing percent correct speech sound production in adult cochlear implant recipients. Generalization of newly trained speech skills to untrained words and to spontaneous speech was demonstrated. Conclusion These preliminary findings support the application of behavioral speech therapy techniques for training speech sound production in adults with cochlear implants. Implications for future research and for the development of aural rehabilitation programs for adult cochlear implant recipients are discussed.
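
The outcome measure in this design is percent correct speech sound production tracked across baseline and treatment phases. A minimal sketch with hypothetical session data:

```python
# Illustrative sketch: summarize percent-correct speech sound production
# across baseline and treatment phases of a multiple-baseline design.
# All session data below are hypothetical.

def percent_correct(correct, attempts):
    return 100.0 * correct / attempts

# (correct, attempts) per session for one target sound of one participant.
baseline  = [(4, 20), (5, 20), (3, 20)]
treatment = [(9, 20), (13, 20), (16, 20), (18, 20)]

base_scores  = [percent_correct(c, n) for c, n in baseline]
treat_scores = [percent_correct(c, n) for c, n in treatment]

print("baseline mean:  %.1f%%" % (sum(base_scores) / len(base_scores)))
print("treatment mean: %.1f%%" % (sum(treat_scores) / len(treat_scores)))
```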


2018 ◽  
Vol 15 (2) ◽  
pp. 104-110 ◽  
Author(s):  
Shohei Kato ◽  
Akira Homma ◽  
Takuto Sakuma

Objective: This study presents a novel approach for early detection of cognitive impairment in the elderly, incorporating speech sound analysis, multivariate statistics, and data-mining techniques. We have developed a speech prosody-based cognitive impairment rating (SPCIR) that can distinguish cognitively normal controls from elderly people with mild Alzheimer's disease (mAD) or mild cognitive impairment (MCI), using prosodic signals extracted from speech recorded while a questionnaire was administered. Two hundred seventy-three Japanese subjects (73 males and 200 females, aged 65 to 96) participated in this study. The authors collected speech sounds from segments of dialogue during a revised Hasegawa's dementia scale (HDS-R) examination and from conversation about topics related to hometown, childhood, and school. The segments correspond to answers to questions regarding birthdate (T1), the name of the subject's elementary school (T2), time orientation (Q2), and backward repetition of three-digit numbers (Q6). As many prosodic features as possible were extracted from each speech sound, including fundamental frequency, formant, and intensity features and mel-frequency cepstral coefficients; these were refined using principal component analysis and/or feature selection. The authors calculated the SPCIR using multiple linear regression analysis. Conclusion: In addition, this study proposes a binary discrimination model based on SPCIR, built with multivariate logistic regression and model selection and evaluated with receiver operating characteristic (ROC) curve analysis, and reports the sensitivity and specificity of SPCIR for diagnosis (control vs. MCI/mAD). The model discriminates well, suggesting that the proposed approach may be an effective tool for screening the elderly for mAD and MCI.
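
A rough sketch of the kind of pipeline described (prosodic feature extraction, dimensionality reduction, logistic discrimination, ROC analysis). The library choices (librosa, scikit-learn), parameters, and synthetic data are assumptions for illustration, not the authors' implementation:

```python
# Illustrative pipeline (not the authors' code): prosodic features ->
# PCA -> logistic regression -> ROC analysis.
import numpy as np
import librosa
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def prosodic_features(path):
    """Summarize one recording as a fixed-length vector of prosodic features."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # spectral shape
    f0, _, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)  # pitch contour
    f0 = f0[~np.isnan(f0)]                                # drop unvoiced frames
    rms = librosa.feature.rms(y=y)                        # intensity
    return np.hstack([mfcc.mean(axis=1), mfcc.std(axis=1),
                      f0.mean(), f0.std(), rms.mean(), rms.std()])

# In practice, X would be built by calling prosodic_features() on each
# subject's recording; synthetic data stands in here so the sketch runs.
rng = np.random.default_rng(0)
X = rng.normal(size=(273, 30))
y = rng.integers(0, 2, size=273)   # 0 = control, 1 = MCI/mild AD

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = make_pipeline(StandardScaler(), PCA(n_components=10),
                      LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
print("ROC AUC: %.2f" % roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```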


2019 ◽  
Vol 42 (1) ◽  
pp. 31-39
Author(s):  
Krystal L. Werfel ◽  
Marren C. Brooks ◽  
Lisa Fitton

Although speech-language pathologists increasingly make use of tablets in clinical practice, little research to date has evaluated the effectiveness or efficiency of tablet use for targeting speech sound goals. The twofold purpose of this study was to compare (a) the effectiveness and (b) the efficiency of speech sound intervention using tablets versus flashcards. Four kindergarten students, each with at least two similar speech sound errors, participated in this adapted alternating treatments single-subject design study, which explored the functional relation between intervention modality (tablet vs. flashcards) and gains in speech sound skill. Flashcards and tablets were both effective single-word speech sound intervention modalities; however, for three of the four participants, flashcards were more efficient than tablets.
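
Efficiency in alternating treatments designs is often operationalized as sessions needed to reach a performance criterion. A hypothetical sketch of that comparison (the criterion, scores, and two-consecutive-sessions rule are all illustrative assumptions):

```python
# Illustrative sketch: quantify "efficiency" as sessions needed to reach
# a criterion.  All session data below are hypothetical.

CRITERION = 80.0   # percent correct required on two consecutive sessions

def sessions_to_criterion(scores, criterion=CRITERION):
    """Return the 1-based session at which two consecutive scores meet
    the criterion, or None if it is never reached."""
    for i in range(1, len(scores)):
        if scores[i - 1] >= criterion and scores[i] >= criterion:
            return i + 1
    return None

flashcards = [40, 55, 70, 85, 90, 95]
tablet     = [35, 45, 60, 70, 80, 85]

print("flashcards:", sessions_to_criterion(flashcards), "sessions")
print("tablet:    ", sessions_to_criterion(tablet), "sessions")
```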


1982 ◽  
Vol 47 (2) ◽  
pp. 181-189 ◽  
Author(s):  
Carl A. Binnie ◽  
Raymond G. Daniloff ◽  
Hugh W. Buckingham

The speech of a five-year-old boy who suffered a profound hearing loss following meningitis was sampled at two-week intervals for nine months. Speech samples were subjected to phonetic transcription, spectrographic analysis, and intelligibility testing. Immediately post-trauma, the child displayed slightly slower, F0-elevated, acoustically intense speech in which phonemic distortion and syllabification of consonants occurred occasionally; single-word intelligibility was depressed 20-30% below normal. By the 18th week, a sudden decline in intelligibility, increasing monotony of pitch, and a pattern of strongly emphatic, prolonged, aspirated, syllabified, and increasingly distorted consonants were manifest. At year's end, the child's speech bore some resemblance to the speech of the deaf in terms of suprasegmentals, intonation, and intelligibility, but differed in that the child rarely, if ever, deleted speech sounds or strongly diphthongized vowels. It is speculated that phonetic processes such as diphthongization, syllabification, and prolonged duration may be strategies for enhancing feedback during speech.
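
One of the acoustic observations above, elevated F0 and later pitch monotony, can be quantified from recordings. A minimal sketch using librosa's pYIN pitch tracker (the file name and frequency range are assumptions):

```python
# Illustrative sketch: track F0 level and variability (a proxy for pitch
# monotony) in a speech sample.  "sample.wav" is a placeholder file name.
import numpy as np
import librosa

y, sr = librosa.load("sample.wav", sr=16000)
f0, voiced, _ = librosa.pyin(y, fmin=80, fmax=500, sr=sr)  # child voice range
f0 = f0[~np.isnan(f0)]                                     # voiced frames only

print("mean F0: %.1f Hz" % f0.mean())   # elevated relative to pre-trauma?
print("F0 std:  %.1f Hz" % f0.std())    # low std suggests monotone pitch
```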


Author(s):  
Chunxia Kong

The article discusses unprepared reading in a non-native language and shows it to have all the signs of spontaneity that are traditionally considered integral characteristics of any spontaneous speech: hesitation pauses, both physical (silent) and filled with non-speech sounds (uh, m-m), word breaks, reading a whole word or part of it by syllables, vocalization of a consonant, and so forth. The material for the analysis included 40 monologues of reading M. Zoshchenko's story Fantasy Shirt and a non-plot excerpt from V. Korolenko's story The Blind Musician, recorded from 20 Chinese informants. All the monologues are included in the block of Russian interfering speech of Chinese speakers in the monologic speech corpus Balanced Annotated Text Library. As the analysis showed, there is usually not one sign of spontaneity but a whole complex of such signs; together they fill hesitation pauses and help the speaker control the quality of speech or correct what was said. In addition, the occurrence of various signs of spontaneity in the course of unprepared reading is closely related to the individual characteristics of the speaker/reader. In general, we found more signs of spontaneity in the speech of men (3,244 cases; 40.7%) than of women (2,049; 27.7%), in the speech of informants with the lower level of proficiency in Russian, B2 (2,993; 37.9%), than of those with the higher level, C1 (2,300; 30.8%), and in the speech of extroverts (1,521; 38.0%) than of ambiverts (1,694; 35.2%) and introverts (2,078; 31.7%). As to the type of source text, there were more signs of spontaneity in monologues reading the plot text than the non-plot text (3,031; 40.3% vs. 2,283; 31.0%). The paper concludes that reading should be recognized as a spontaneous type of speech activity.
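
The reported group contrasts lend themselves to a simple frequency test. The sketch below runs a chi-square test on the men-vs-women counts, back-calculating group totals from the reported percentages, which is an assumption made here purely for illustration:

```python
# Illustrative sketch: chi-square test of "signs of spontaneity" frequency
# in men's vs. women's reading monologues.  The totals are back-calculated
# from the reported percentages (3,244 / 0.407 and 2,049 / 0.277), an
# assumption made here for illustration only.
from scipy.stats import chi2_contingency

men_signs, men_total = 3244, round(3244 / 0.407)      # ~7972 positions
women_signs, women_total = 2049, round(2049 / 0.277)  # ~7397 positions

table = [
    [men_signs, men_total - men_signs],
    [women_signs, women_total - women_signs],
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```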


1974 ◽  
Vol 17 (3) ◽  
pp. 352-366 ◽  
Author(s):  
Lorraine M. Monnin ◽  
Dorothy A. Huntington

Normal-speaking and speech-defective children were compared on a speech-sound identification task that included sounds the speech-defective subjects misarticulated and sounds they articulated correctly. The identification task included four tests: [r]-[w] contrasts, acoustically similar contrasts, acoustically dissimilar contrasts, and vowel contrasts. The speech sounds were presented on a continuum from undistorted to severely distorted signals, under conditions that have caused confusion among adults. Subjects included 15 normal-speaking kindergarten children, 15 kindergarten children with defective [r]s, and 15 preschool-age children. The procedure was designed to test each sound under study in depth and to minimize extraneous variables. The speech-sound identification deficit of the speech-defective subjects was found to be specific rather than general, indicating a positive relationship between production and identification ability.


Author(s):  
Aidan Kehoe ◽  
Flaithri Neff ◽  
Ian Pitt

There are numerous challenges to accessing user assistance information in mobile and ubiquitous computing scenarios. For example, there may be little or no display real estate on which to present information visually, the user's eyes may be busy with another task (e.g., driving), and it can be difficult to read text while moving. Speech, together with non-speech sounds and haptic feedback, can be used to make assistance information available to users in these situations. Non-speech sounds and haptic feedback can cue information that is about to be presented via speech, ensuring that the listener is prepared and that leading words are not missed. In this chapter, we report on two studies that examine user perception of the duration of the pause between a cue (a non-speech sound, a haptic effect, or a combined non-speech sound plus haptic effect) and the subsequent delivery of assistance information using speech. Based on these user studies, we recommend cue pause intervals in the range of 600 ms to 800 ms.
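
A minimal sketch of the cue-pause-speech sequencing the chapter recommends; play_cue() and speak() are hypothetical stand-ins for platform audio and text-to-speech calls:

```python
# Illustrative sketch: sequence a non-speech cue, a pause in the
# recommended 600-800 ms range, then spoken assistance.
import time

CUE_PAUSE_MS = 700  # within the 600-800 ms range recommended above

def play_cue():
    print("[earcon / haptic pulse]")   # stand-in for real cue playback

def speak(text):
    print(f"[TTS] {text}")             # stand-in for a real TTS engine

def deliver_assistance(text, pause_ms=CUE_PAUSE_MS):
    """Cue the listener, wait so leading words are not missed, then speak."""
    play_cue()
    time.sleep(pause_ms / 1000.0)
    speak(text)

deliver_assistance("Turn left in 200 meters.")
```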

