Bootstrapping the lexicon: A computational model of infant speech segmentation

Cognition ◽  
2002 ◽  
Vol 83 (2) ◽  
pp. 167-206 ◽  
Author(s):  
Eleanor Olds Batchelder

2009 ◽  
Vol 125 (4) ◽  
pp. 2762-2762 ◽  
Author(s):  
Suzanne Curtin ◽  
Linda Polka ◽  
Shani Abada ◽  
Sally-Joy Reaper

2020 ◽  
Vol 60 ◽  
pp. 101448 ◽  
Author(s):  
Caroline Junge ◽  
Emma Everaert ◽  
Lyan Porto ◽  
Paula Fikkert ◽  
Maartje de Klerk ◽  
...  

2012 ◽  
Author(s):  
Ellen Marklund ◽  
Francisco Lacerda ◽  
Iris-Corinna Schwarz ◽  
Ulla Sundberg

Motor Control ◽  
2011 ◽  
Vol 15 (1) ◽  
pp. 85-117 ◽  
Author(s):  
Ian S. Howard ◽  
Piers Messum

Pronunciation is an important part of speech acquisition, but little attention has been given to the mechanism or mechanisms by which it develops. Speech sound qualities, for example, have just been assumed to develop by simple imitation. In most accounts this is then assumed to be by acoustic matching, with the infant comparing his output to that of his caregiver. There are theoretical and empirical problems with both of these assumptions, and we present a computational model—Elija—that does not learn to pronounce speech sounds this way. Elija starts by exploring the sound making capabilities of his vocal apparatus. Then he uses the natural responses he gets from a caregiver to learn equivalence relations between his vocal actions and his caregiver’s speech. We show that Elija progresses from a babbling stage to learning the names of objects. This demonstrates the viability of a non-imitative mechanism in learning to pronounce.
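The learning loop described above (explorative babbling, caregiver reformulation, stored equivalence relations, then object naming) can be sketched in a few lines. This is a hypothetical toy simplification of the architecture the abstract describes, not the authors' implementation; the action inventory and the caregiver's response mapping are invented for illustration.

```python
import random

# Stage 1: the "infant" explores its motor space by producing vocal actions.
MOTOR_ACTIONS = ["a1", "a2", "a3", "a4"]

# Hypothetical caregiver: responds to each infant action with an adult
# speech form (a reformulation), e.g. action a1 -> sound "ba".
CAREGIVER_RESPONSE = {"a1": "ba", "a2": "da", "a3": "ma", "a4": "ga"}

def babble_and_learn(n_trials=100, seed=0):
    """Learn sound<->action equivalences from caregiver responses alone.

    Crucially, the infant never compares its own acoustics to the
    caregiver's: it only records which adult form followed which action.
    """
    rng = random.Random(seed)
    equivalences = {}
    for _ in range(n_trials):
        action = rng.choice(MOTOR_ACTIONS)  # explorative babbling
        heard = CAREGIVER_RESPONSE[action]  # caregiver's natural response
        equivalences[heard] = action        # store the equivalence pairing
    return equivalences

def name_object(object_word, equivalences):
    """Produce an object's name by chaining learned equivalences."""
    # Split the heard word into known syllables, emit the matching actions.
    syllables = [object_word[i:i + 2] for i in range(0, len(object_word), 2)]
    return [equivalences[s] for s in syllables]

eq = babble_and_learn()
print(name_object("bama", eq))  # -> ['a1', 'a3']
```

The point of the sketch is that the mapping is learned from the caregiver's reactions, not from acoustic self-matching, which is the non-imitative mechanism the abstract argues for.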


2010 ◽  
Vol 37 (3) ◽  
pp. 487-511 ◽  
Author(s):  
Daniel Blanchard ◽  
Jeffrey Heinz ◽  
Roberta Golinkoff

How do infants find the words in the speech stream? Computational models help us understand this feat by revealing the advantages and disadvantages of different strategies that infants might use. Here, we outline a computational model of word segmentation that aims both to incorporate cues proposed by language acquisition researchers and to establish the contributions different cues can make to word segmentation. We present experimental results from modified versions of Venkataraman's (2001) segmentation model that examine the utility of: (1) language-universal phonotactic cues; (2) language-specific phonotactic cues which must be learned while segmenting utterances; and (3) their combination. We show that the language-specific cue improves segmentation performance overall, but the language-universal phonotactic cue does not, and that their combination results in the most improvement. Not only does this suggest that language-specific constraints can be learned simultaneously with speech segmentation, but it is also consistent with experimental research that shows that there are multiple phonotactic cues helpful to segmentation (e.g. Mattys, Jusczyk, Luce & Morgan, 1999; Mattys & Jusczyk, 2001). This result also compares favorably to other segmentation models (e.g. Brent, 1999; Fleck, 2008; Goldwater, 2007; Johnson & Goldwater, 2009; Venkataraman, 2001) and has implications for how infants learn to segment.
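The core move in lexicon-driven segmentation models of this family is to score every possible parse of an unspaced phoneme string against a probabilistic lexicon and keep the best one. The following is a minimal toy sketch of that idea (a unigram Viterbi segmenter); the mini-lexicon and its probabilities are invented for illustration and this is not the Venkataraman (2001) model itself, which also learns its lexicon incrementally.

```python
import math

def segment(utterance, lexicon):
    """Return the highest-probability segmentation of an unspaced string.

    Dynamic programming: best[i] holds the best (log prob, word list)
    covering the prefix utterance[:i], built up left to right.
    """
    n = len(utterance)
    best = [(-math.inf, [])] * (n + 1)
    best[0] = (0.0, [])
    for i in range(1, n + 1):
        for j in range(i):
            word = utterance[j:i]
            if word in lexicon and best[j][0] > -math.inf:
                score = best[j][0] + math.log(lexicon[word])
                if score > best[i][0]:
                    best[i] = (score, best[j][1] + [word])
    return best[n][1]

# Hypothetical mini-lexicon with unigram probabilities (illustrative only).
lexicon = {"the": 0.4, "dog": 0.2, "do": 0.1, "g": 0.05, "gs": 0.05,
           "dogs": 0.2}
print(segment("thedogs", lexicon))  # -> ['the', 'dogs']
```

Phonotactic cues of the kind the abstract evaluates would enter this picture as an extra term in the word score, rewarding candidate words whose segment sequences are phonotactically well formed.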


2013 ◽  
pp. n/a-n/a ◽  
Author(s):  
Nicole Altvater-Mackensen ◽  
Nivedita Mani

Infancy ◽  
2008 ◽  
Vol 13 (1) ◽  
pp. 57-74 ◽  
Author(s):  
Leher Singh ◽  
Sarah S. Nestor ◽  
Heather Bortfeld

2010 ◽  
Vol 8 (2-3) ◽  
pp. 133-168 ◽  
Author(s):  
Daniel Duran ◽  
Hinrich Schütze ◽  
Bernd Möbius ◽  
Michael Walsh

1994 ◽  
Vol 96 (5) ◽  
pp. 3293-3293 ◽  
Author(s):  
Neil P. McAngus Todd ◽  
Guy Brown

Infancy ◽  
2018 ◽  
Vol 23 (6) ◽  
pp. 770-794 ◽  
Author(s):  
Evan Kidd ◽  
Caroline Junge ◽  
Tara Spokes ◽  
Lauren Morrison ◽  
Anne Cutler
