Non-referential gestures in adult and child speech: Are they prosodic?

Author(s):  
Stefanie Shattuck-Hufnagel ◽  
Ada Ren ◽  
Mili Mathew ◽  
Ivan Yuen ◽  
Katherine Demuth

Author(s):  
Margaret Cychosz ◽  
Benjamin Munson ◽  
Jan R. Edwards

2004 ◽  
Vol 31 (4) ◽  
pp. 855-873 ◽  
Author(s):  
MARC H. BORNSTEIN ◽  
DIANE B. LEACH ◽  
O. MAURICE HAYNES

We explored vocabulary competence in 55 firstborn and secondborn sibling pairs when each child reached 1;8, using multiple measures of maternal report, child speech, and experimenter assessment. Measures from each of the three sources were interrelated. Firstborns' vocabulary competence exceeded secondborns' only in maternal reports, not in child speech or in experimenter assessments. Firstborn girls outperformed boys on all vocabulary competence measures, and secondborn girls outperformed boys on most measures. Vocabulary competence was independent of the gender composition and, generally, of the age difference in sibling pairs. Vocabulary competence in firstborns and secondborns was only weakly related.


2011 ◽  
Vol 47 (1/2) ◽  
pp. 49-60
Author(s):  
Maria Fausta Pereira Castro

This work deals with argument construction in child speech, beginning by re-examining some of the questions the author has addressed throughout her research on the subject. To highlight the theoretical moves that have been made, two main questions are brought into discussion: the presence of arguments from adult speech in child utterances, and the effect of argumentative utterances of the type x connective y in restraining deviation in dialogue, thereby ensuring both meaning and unity. However, the cohesive force of arguments is not immune to disruption by dispersion and unpredictability. The unfolding of this theoretical perspective opens the way to a hypothesis about both language functioning and the subjectivity that is constituted in it.


Author(s):  
Tristan J. Mahr ◽  
Visar Berisha ◽  
Kan Kawabata ◽  
Julie Liss ◽  
Katherine C. Hustad

Purpose: Acoustic measurement of speech sounds requires first segmenting the speech signal into relevant units (words, phones, etc.). Manual segmentation is cumbersome and time consuming. Forced-alignment algorithms automate this process by aligning a transcript and a speech sample. We compared the phoneme-level alignment performance of five available forced-alignment algorithms on a corpus of child speech. Our goal was to document aligner performance for child speech researchers.

Method: The child speech sample included 42 children between 3 and 6 years of age. The corpus was force-aligned using the Montreal Forced Aligner with and without speaker adaptive training, triphone alignment from the Kaldi speech recognition engine, the Prosodylab-Aligner, and the Penn Phonetics Lab Forced Aligner. The sample was also manually aligned to create gold-standard alignments. We evaluated alignment algorithms in terms of accuracy (whether the automatic interval covers the midpoint of the manual interval) and the difference in phone-onset times between the automatic and manual intervals.

Results: The Montreal Forced Aligner with speaker adaptive training showed the highest accuracy and the smallest timing differences. Vowels were consistently the most accurately aligned class of sounds across all aligners, and alignment accuracy for fricatives increased with age across aligners as well.

Conclusion: The best-performing aligner fell just short of human-level reliability for forced alignment. Researchers can use forced alignment with child speech for certain classes of sounds (vowels, and fricatives for older children), especially as part of a semi-automated workflow in which alignments are later inspected for gross errors.

Supplemental Material: https://doi.org/10.23641/asha.14167058
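To make the two evaluation criteria concrete, here is a minimal sketch in Python, assuming the paired manual and automatic phone intervals are already available as (label, onset, offset) tuples in seconds; the data structures and toy values are illustrative assumptions, not the authors' scoring code or data.

    def midpoint_accuracy(manual, auto):
        """Proportion of automatic intervals that cover the midpoint of the
        corresponding manual interval; arguments are position-paired lists
        of (label, onset, offset) tuples."""
        hits = 0
        for (m_lab, m_on, m_off), (a_lab, a_on, a_off) in zip(manual, auto):
            midpoint = (m_on + m_off) / 2.0
            if a_on <= midpoint <= a_off:
                hits += 1
        return hits / len(manual)

    def onset_differences(manual, auto):
        """Absolute phone-onset differences (in seconds, rounded to ms
        precision) between paired automatic and manual intervals."""
        return [round(abs(a[1] - m[1]), 3) for m, a in zip(manual, auto)]

    # Toy intervals for the word "cat" (/k ae t/), in seconds
    manual = [("k", 0.10, 0.18), ("ae", 0.18, 0.33), ("t", 0.33, 0.41)]
    auto = [("k", 0.09, 0.20), ("ae", 0.20, 0.31), ("t", 0.31, 0.42)]
    print(midpoint_accuracy(manual, auto))   # 1.0 -- every midpoint is covered
    print(onset_differences(manual, auto))   # [0.01, 0.02, 0.02]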


2007 ◽  
Vol 8 (1) ◽  
pp. 5-30 ◽  
Author(s):  
Marina Tzakosta

Consonant harmony (CH) is a phenomenon commonly found in child language. Cross-linguistically, Place of Articulation (PoA), specifically the Coronal Node, undergoes CH, while regressive harmony seems to be the preferred directionality that CH takes (cf. Goad 2001a, b; Levelt 1994; Rose 2000, 2001). In the present study, drawing on naturalistic data from nine children acquiring Greek L1, we place emphasis on the fact that multiple factors need to be considered in parallel, in order to account for CH patterns: Not only PoA, but also Manner of Articulation (MoA) contributes to CH; consequently, (de)voicing or continuity harmony emerges. Although regressive harmony is generally favoured, markedness scales and word stress highly affect directionality. Coronal, stop and voiceless segments trigger and undergo CH depending on their degree of prominence and their position in the word. Harmony can be partial or full, i.e. either place or manner or both place and manner of articulation are targeted. Progressive harmony emerges when the triggers belong to the stressed syllable or when they are stops. Cases of double, bidirectional and recursive harmony are also reported. In general, Greek CH patterns are the product of combined factors determined by phonological principles and input frequency in the ambient language. In other words, the degree to which Greek CH patterns are different from cross-linguistic findings depends on the combination of UG principles and language specific/environmental effects, as well as the prominence of certain of these factors over others.
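As a purely illustrative sketch of the directionality distinction discussed above (a toy example, not the coding scheme used in the study), the following Python snippet compares the consonants of a child production against the adult target and labels a place-of-articulation change as regressive when the new place is copied from a following target consonant, or progressive when it is copied from a preceding one; the segment-to-place mapping is a simplified assumption.

    # Toy place-of-articulation classes (simplified, illustrative only)
    PLACE = {"p": "labial", "b": "labial", "m": "labial",
             "t": "coronal", "d": "coronal", "n": "coronal", "s": "coronal",
             "k": "dorsal", "g": "dorsal"}

    def place_harmony_direction(target, production):
        """For each consonant whose place differs from the target, report
        whether it copied the place of a following target consonant
        (regressive) or a preceding one (progressive)."""
        reports = []
        for i, (t, p) in enumerate(zip(target, production)):
            if t in PLACE and p in PLACE and PLACE[t] != PLACE[p]:
                if any(PLACE.get(c) == PLACE[p] for c in target[i + 1:]):
                    reports.append((i, t, p, "regressive"))
                elif any(PLACE.get(c) == PLACE[p] for c in target[:i]):
                    reports.append((i, t, p, "progressive"))
        return reports

    # Classic toy case: coronal /d/ surfacing as dorsal [g] before a dorsal,
    # as in a target /d..g/ produced as [g..g]
    print(place_harmony_direction(["d", "g"], ["g", "g"]))
    # -> [(0, 'd', 'g', 'regressive')]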

