The challenges and uncertainties when hand measuring F1/F2 values in child speech: Towards an automated approach

2021 ◽ Vol 150 (4) ◽ pp. A152-A152
Author(s): Maame Agyarko ◽ Marisha Speights Atkins
Author(s): Stefanie Shattuck-Hufnagel ◽ Ada Ren ◽ Mili Mathew ◽ Ivan Yuen ◽ Katherine Demuth

Author(s): Margaret Cychosz ◽ Benjamin Munson ◽ Jan R. Edwards

2004 ◽ Vol 31 (4) ◽ pp. 855-873
Author(s): Marc H. Bornstein ◽ Diane B. Leach ◽ O. Maurice Haynes

We explored vocabulary competence in 55 firstborn and secondborn sibling pairs when each child reached 1;8 using multiple measures of maternal report, child speech, and experimenter assessment. Measures from each of the three sources were interrelated. Firstborns' vocabulary competence exceeded secondborns' only in maternal reports, not in child speech or in experimenter assessments. Firstborn girls outperformed boys on all vocabulary competence measures, and secondborn girls outperformed boys on most measures. Vocabulary competence was independent of the gender composition and, generally, of the age difference in sibling pairs. Vocabulary competence in firstborns and secondborns was only weakly related.


2011 ◽ Vol 47 (1/2) ◽ pp. 49-60
Author(s): Maria Fausta Pereira Castro

This work deals with argument construction in child speech, beginning by revisiting some of the questions the author has addressed throughout her research on the subject. To highlight the theoretical moves that have been made, two main questions are brought into the discussion: the presence of arguments from adult speech in the child's utterances, and the effect of argumentative utterances of the type x connective y in restraining deviation in dialogue, thereby securing both meaning and unity. However, the cohesive force of arguments is not immune to disruption by dispersion and unpredictability. The unfolding of this theoretical perspective opens the way to a hypothesis about both language functioning and the subjectivity that is constituted in it.


Author(s): Tristan J. Mahr ◽ Visar Berisha ◽ Kan Kawabata ◽ Julie Liss ◽ Katherine C. Hustad

Purpose: Acoustic measurement of speech sounds requires first segmenting the speech signal into relevant units (words, phones, etc.). Manual segmentation is cumbersome and time consuming. Forced-alignment algorithms automate this process by aligning a transcript with a speech sample. We compared the phoneme-level alignment performance of five available forced-alignment algorithms on a corpus of child speech. Our goal was to document aligner performance for child speech researchers.

Method: The child speech sample included 42 children between 3 and 6 years of age. The corpus was force-aligned using the Montreal Forced Aligner with and without speaker adaptive training, triphone alignment from the Kaldi speech recognition engine, the Prosodylab-Aligner, and the Penn Phonetics Lab Forced Aligner. The sample was also manually aligned to create gold-standard alignments. We evaluated the alignment algorithms in terms of accuracy (whether the automatic interval covers the midpoint of the manual interval) and the difference in phone-onset times between the automatic and manual intervals.

Results: The Montreal Forced Aligner with speaker adaptive training showed the highest accuracy and the smallest timing differences. Vowels were consistently the most accurately aligned class of sounds across all aligners, and alignment accuracy for fricatives increased with child age across aligners.

Conclusion: The best-performing aligner fell just short of human-level reliability for forced alignment. Researchers can use forced alignment with child speech for certain classes of sounds (vowels, and fricatives for older children), especially as part of a semi-automated workflow in which alignments are later inspected for gross errors.

Supplemental Material: https://doi.org/10.23641/asha.14167058
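The two evaluation criteria described in this abstract (midpoint-coverage accuracy and phone-onset timing differences) are straightforward to compute once both alignments are available as phone intervals. Below is a minimal Python sketch of those metrics; the `Phone` class, the example interval values, and the assumption that the automatic and manual phone sequences correspond one-to-one are illustrative assumptions, not code or data from the study.

```python
# Minimal sketch (not from the article) of midpoint-coverage accuracy and
# phone-onset timing error between an automatic and a manual alignment.
# Interval format and example values are assumptions for illustration only.

from dataclasses import dataclass


@dataclass
class Phone:
    label: str    # phone label, e.g. "AH0"
    start: float  # onset time in seconds
    end: float    # offset time in seconds


def midpoint_accuracy(auto: list[Phone], manual: list[Phone]) -> float:
    """Fraction of phones whose automatic interval covers the midpoint
    of the corresponding manual (gold-standard) interval."""
    hits = 0
    for a, m in zip(auto, manual):
        midpoint = (m.start + m.end) / 2
        if a.start <= midpoint <= a.end:
            hits += 1
    return hits / len(manual)


def onset_differences(auto: list[Phone], manual: list[Phone]) -> list[float]:
    """Absolute differences in phone-onset times (seconds) between the
    automatic and manual alignments, phone by phone."""
    return [abs(a.start - m.start) for a, m in zip(auto, manual)]


# Hypothetical example: three phones from one utterance.
manual = [Phone("DH", 0.10, 0.18), Phone("AH0", 0.18, 0.26), Phone("K", 0.26, 0.40)]
auto = [Phone("DH", 0.09, 0.17), Phone("AH0", 0.17, 0.28), Phone("K", 0.28, 0.41)]

print(midpoint_accuracy(auto, manual))   # 1.0 for this example
print(onset_differences(auto, manual))   # roughly [0.01, 0.01, 0.02]
```

In practice the intervals would be read from the aligner's output (e.g. Praat TextGrids) rather than typed by hand, and the onset differences would be summarized per phone class to mirror the comparisons reported in the abstract.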

