The Acquisition of Articulatory Timing for Liquids: Evidence From Child and Adult Speech

Author(s):  
Phil J. Howson ◽  
Melissa A. Redford

Purpose: Liquids are among the last sounds to be acquired by English-speaking children. The current study considers their acquisition from an articulatory timing perspective by investigating anticipatory posturing for /l/ versus /ɹ/ in child and adult speech. Method: In Experiment 1, twelve 5-year-old, twelve 8-year-old, and 11 college-aged speakers produced carrier phrases with penultimate stress on monosyllabic words that had /l/, /ɹ/, or /d/ (control) as singleton onsets and /æ/ or /u/ as the vowel. Short-domain anticipatory effects were acoustically investigated based on schwa formant values extracted from the preceding determiner (= the) and dynamic formant values across the /ə#LV/ sequence. In Experiment 2, long-domain effects were perceptually indexed using a previously validated forward-gated audiovisual speech prediction task. Results: Experiment 1 results indicated that all speakers distinguished /l/ from /ɹ/ along F3. Adults distinguished /l/ from /ɹ/ with a lower F2. Older children produced subtler versions of the adult pattern; their anticipatory posturing was also more influenced by the following vowel. Younger children did not distinguish /l/ from /ɹ/ along F2, but both liquids were distinguished from /d/ in the domains investigated. Experiment 2 results indicated that /ɹ/ was identified earlier than /l/ in gated adult speech; both liquids were identified equally early in 5-year-olds' speech. Conclusions: The results are interpreted to suggest a pattern of early tongue–body retraction for liquids in /ə#LV/ sequences in children's speech. More generally, it is suggested that children must learn to inhibit the influence of vowels on liquid articulation to achieve an adultlike contrast between /l/ and /ɹ/ in running speech.

2018 ◽  
Author(s):  
Charles Kalish ◽  
Nigel Noll

Existing research suggests that adults and older children experience a tradeoff where instruction and feedback help them solve a problem efficiently, but lead them to ignore currently irrelevant information that might be useful in the future. It is unclear whether young children experience the same tradeoff. Eighty-seven children (ages five to eight years) and 42 adults participated in supervised feature prediction tasks either with or without an instructional hint. Follow-up tasks assessed learning of feature correlations and feature frequencies. Younger children tended to learn frequencies of both relevant and irrelevant features without instruction, but not the diagnostic feature correlation needed for the prediction task. With instruction, younger children did learn the diagnostic feature correlation, but then failed to learn the frequencies of irrelevant features. Instruction helped older children learn the correlation without limiting attention to frequencies. Adults learned the diagnostic correlation even without instruction, but with instruction no longer learned about irrelevant frequencies. These results indicate that young children do show some costs of learning with instruction characteristic of older children and adults. However, they also receive some of the benefits. The current study illustrates just what those tradeoffs might be, and how they might change over development.


2021 ◽  
pp. 1-12
Author(s):  
Hedieh Hashemi Hosseinabad ◽  
Karla N. Washington ◽  
Suzanne E. Boyce ◽  
Noah Silbert ◽  
Ann W. Kummer

<b><i>Purpose:</i></b> The purpose of this study was to investigate the clinical application of the Intelligibility in Context Scale (ICS) instrument in children with velopharyngeal insufficiency (VPI). This study investigated the relationship between clinical speech outcomes and parental reports of speech intelligibility across various communicative partners. <b><i>Methods:</i></b> The ICS was completed by the parents of 20 English-speaking children aged 4–12 years diagnosed with VPI. The parents were asked to rate their children’s speech intelligibility across communication partners using a 5-point scale. Clinical metrics obtained using standard clinical transcription on the Picture-Cued SNAP-R Test were: (1) percentage of consonants correct (PCC), (2) percentage of vowels correct (PVC), and (3) percentage of phonemes correct (PPC). Nasalance from nasometer data was included as an indirect measure of nasality. Intelligibility scores obtained from naive listeners’ transcriptions and speech-language pathologists’ (SLP) ratings were compared with the ICS results. <b><i>Results:</i></b> Greater PCC, PPC, PVC, and transcription-based intelligibility values were significantly associated with higher ICS values (<i>r</i>[20] = 0.84, 0.82, 0.51, and 0.70, respectively; <i>p</i> < 0.05 in all cases). There was a negative and significant correlation between ICS mean scores and SLP ratings of intelligibility (<i>r</i> = –0.74; <i>p</i> < 0.001). There was no significant correlation between ICS values and nasalance scores (<i>r</i>[20] = –0.28; <i>p</i> = 0.22). <b><i>Conclusion:</i></b> The high correlations obtained between the ICS and the PCC and PPC measures indicate that articulation accuracy has a strong impact on parents’ decision-making regarding intelligibility in this population. Significant agreement between ICS scores and both naive listener transcriptions and clinical ratings supports use of the ICS in practice.


2021 ◽  
pp. 105566562098574
Author(s):  
Miriam Seifert ◽  
Amy Davies ◽  
Sam Harding ◽  
Sharynne McLeod ◽  
Yvonne Wren

Objective: To provide comparison data on the Intelligibility in Context Scale (ICS) for a sample of 3-year-old English-speaking children born with any cleft type. Design: Questionnaire data from the Cleft Collective Cohort Study were used. Descriptive and inferential statistics were carried out to determine difference according to children’s cleft type and syndromic status. Participants: A total of 412 children born with cleft lip and/or palate whose mothers had completed the ICS when their child was 3 years old. Main Outcome Measure(s): Mothers’ rating of their children’s intelligibility using the ICS. Results: The average ICS score for the total sample was 3.75 (sometimes–usually intelligible; standard deviation [SD] = 0.76, 95% CIs = 3.68-3.83) of a possible score of 5 (always intelligible). Children’s speech was reported to be most intelligible to their mothers (mean = 4.33, SD = 0.61, 95% CIs = 4.27-4.39) and least intelligible to strangers (mean = 3.36, SD = 1.00, 95% CIs = 3.26-3.45). There was strong evidence (P < .001) for a difference in intelligibility between children with cleft lip only (n = 104, mean = 4.13, SD = 0.62, 95% CIs = 4.01-4.25) and children with any form of cleft palate (n = 308, mean = 3.63, SD = 0.76, 95% CIs = 3.52-3.71). Children born with cleft palate with or without cleft lip and an identified syndrome were rated as less intelligible (n = 63, mean = 3.28, SD = 0.85, 95% CIs = 3.06-3.49) compared to children who did not have a syndrome (n = 245, mean = 3.72, SD = 0.71, 95% CIs = 3.63-3.81). Conclusions: These results provide preliminary comparative data for clinical services using the outcome measures recommended by the International Consortium for Health Outcomes Measurement.


Author(s):  
Tristan J. Mahr ◽  
Visar Berisha ◽  
Kan Kawabata ◽  
Julie Liss ◽  
Katherine C. Hustad

Purpose: Acoustic measurement of speech sounds requires first segmenting the speech signal into relevant units (words, phones, etc.). Manual segmentation is cumbersome and time consuming. Forced-alignment algorithms automate this process by aligning a transcript and a speech sample. We compared the phoneme-level alignment performance of five available forced-alignment algorithms on a corpus of child speech. Our goal was to document aligner performance for child speech researchers. Method: The child speech sample included 42 children between 3 and 6 years of age. The corpus was force-aligned using the Montreal Forced Aligner with and without speaker adaptive training, triphone alignment from the Kaldi speech recognition engine, the Prosodylab-Aligner, and the Penn Phonetics Lab Forced Aligner. The sample was also manually aligned to create gold-standard alignments. We evaluated alignment algorithms in terms of accuracy (whether the interval covers the midpoint of the manual alignment) and difference in phone-onset times between the automatic and manual intervals. Results: The Montreal Forced Aligner with speaker adaptive training showed the highest accuracy and smallest timing differences. Vowels were consistently the most accurately aligned class of sounds across all the aligners, and alignment accuracy increased with age for fricative sounds across the aligners too. Conclusion: The best-performing aligner fell just short of human-level reliability for forced alignment. Researchers can use forced alignment with child speech for certain classes of sounds (vowels, fricatives for older children), especially as part of a semi-automated workflow where alignments are later inspected for gross errors. Supplemental Material https://doi.org/10.23641/asha.14167058


Author(s):  
Timothy B. Jay

This chapter investigates the emergence of English-speaking children’s taboo lexicon (taboo words, swear words, insults, and offensive words) between one and twelve years of age. It describes how the lexicon of taboo words children use shifts over time to become more adult-like by age twelve. Less is reported regarding the question of what these taboo words mean to the children who say them. Judgments of ‘good’ words versus ‘bad’ words demonstrate that young children are more likely than older children and adults to judge mild words as bad. The methodological and ethical problems related to research on children’s use of taboo words are outlined, as are suggestions for conducting meaningful research with children in the future.


1977 ◽  
Vol 4 (1) ◽  
pp. 67-86 ◽  
Author(s):  
Ben G. Blount ◽  
Elise J. Padgug

ABSTRACT Parents employ a special register when speaking to young children, containing features that mark it as appropriate for children who are beginning to acquire their language. Parental speech in English to 5 children (ages 0;9–1;6) and in Spanish to 4 children (ages 0;8–1;1 and 1;6–1;10) was analysed for the presence and distribution of these features. Thirty-four paralinguistic, prosodic, and interactional features were identified, and rate measures and proportions indicated developmental patterns and differences across languages. Younger children received a higher rate of features that marked affect; older children were addressed with more features that marked semantically meaningful speech. English-speaking parents relied comparatively more on paralinguistic and affective features, whereas Spanish-speaking parents used comparatively more interactional features. Despite these differences, there was a high degree of similarity across parents and languages for the most frequently occurring features.


1986 ◽  
Vol 13 (2) ◽  
pp. 275-292 ◽  
Author(s):  
M. J. Demetras ◽  
Kathryn Nolan Post ◽  
Catherine E. Snow

ABSTRACT The conclusion that information regarding the grammaticality of children's speech is unavailable in parental input has recently been challenged (Moerk 1983a, b; Hirsh-Pasek, Treiman & Schneiderman 1984). The present study expanded on this research by broadening the definition of ‘negative feedback’ and by describing individual styles of mother–child dialogues. The purpose was to investigate whether mothers of four 2-year-old children responded differentially to their children's well-formed or ill-formed utterances with explicit and implicit feedback. The middle-class, English-speaking, mother–child dyads were recorded in a naturalistic context at home during play and eating activities. Explicit and implicit feedback differed in terms of the proportion of responses available to the child and their relation to well-formed and ill-formed utterances. The style of response was similar for most analyses across the four mothers.


2008 ◽  
Vol 35 (4) ◽  
pp. 809-822 ◽  
Author(s):  
SABINE VAN LINDEN ◽  
JEAN VROOMEN

ABSTRACT In order to examine whether children adjust their phonetic speech categories, children of two age groups, five-year-olds and eight-year-olds, were exposed to a video of a face saying /aba/ or /ada/ accompanied by an auditory ambiguous speech sound halfway between /b/ and /d/. The effect of exposure to these audiovisual stimuli was measured on subsequently delivered auditory-only speech identification trials. Results were compared to a control condition in which the audiovisual exposure stimuli contained non-ambiguous and congruent sounds /aba/ or /ada/. The older children learned to categorize the initially ambiguous speech sound in accord with the previously seen lip-read information (i.e. recalibration), but this was not the case for the younger age group. Moreover, all children displayed a tendency to report the stimulus that they were exposed to during the exposure phase. Methodological improvements for correcting such a response bias are discussed.


2012 ◽  
Vol 35 (1) ◽  
pp. 71-95 ◽  
Author(s):  
WING-CHEE SO ◽  
JIA-YI LIM ◽  
SEOK-HUI TAN

ABSTRACT This paper explores whether English–Mandarin bilingual children have mastered discourse skills and whether they show sensitivity to the discourse principle of information status of referents in their speech and gestures. We compare the speech and gestures produced by bilingual children to those produced by English- and Mandarin-speaking monolingual children. Six English-speaking and six Mandarin-speaking monolingual children, and nine English–Mandarin bilingual children (who were more dominant in English) were videotaped while interacting with their caregivers. Monolingual Mandarin- and English-speaking children produced null arguments and pronouns respectively to indicate given third-person referents, and nouns to indicate new third-person referents. They also gestured new third-person referents more often than given third-person referents. Thus, monolinguals’ speech and gestures followed the discourse principle. English–Mandarin bilingual children's speech and gestures also followed the discourse principle, but only when they were speaking in English. In Mandarin, they produced nouns more often to indicate given third-person referents than to indicate new third-person referents, violating the discourse principle. It is interesting that they nonetheless gestured new third-person referents more often than given third-person referents in Mandarin. Thus, our findings suggest that gesture precedes language development at the discourse level in bilinguals' less-dominant language.

