Sharing the beginning versus sharing the end: Spoken word recognition in the visual world paradigm in Japanese

2011 ◽  
Vol 130 (4) ◽  
pp. 2570-2570
Author(s):  
Hideko Teruya ◽  
Vsevolod Kapatsinski


2019 ◽  
Vol 72 (11) ◽  
pp. 2574-2583 ◽  
Author(s):  
Julie Gregg ◽  
Albrecht W Inhoff ◽  
Cynthia M Connine

Spoken word recognition models incorporate the temporal unfolding of word information by assuming that positional match constrains lexical activation. Recent findings challenge this linearity constraint. In the visual world paradigm, Toscano, Anderson, and McMurray observed that listeners preferentially viewed a picture of a target word’s anadrome competitor (e.g., competitor bus for target sub) compared with phonologically unrelated distractors (e.g., well) or competitors sharing an overlapping vowel (e.g., sun). Toscano et al. concluded that spoken word recognition relies on coarse-grain spectral similarity for mapping spoken input to a lexical representation. Our experiments aimed to replicate the anadrome effect and to test the coarse-grain similarity account using competitors without vowel position overlap (e.g., competitor leaf for target flea). The results confirmed the original effect: anadrome-competitor fixation curves diverged from unrelated distractors approximately 275 ms after the onset of the target word. In contrast, the no-vowel-position-overlap competitor did not show an increase in fixations compared with the unrelated distractors. The contrasting results for the anadrome and no-vowel-position-overlap items are discussed in terms of the theoretical implications of sequential-match versus coarse-grain similarity accounts of spoken word recognition. We also discuss design issues (repetition of stimulus materials and display parameters) concerning the use of the visual world paradigm in making inferences about online spoken word recognition.
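As a rough illustration of how such a divergence point can be estimated from visual-world data, the sketch below bins gaze samples into fixation-proportion curves and finds the first sustained separation between the anadrome-competitor and unrelated-distractor curves. The column names (time_ms, aoi), the 50 ms bin size, and the consecutive-bin criterion are assumptions for illustration, not the authors' published analysis.

```python
# Minimal sketch of a fixation-proportion / divergence-point analysis for
# visual-world eye-tracking data. Column names, bin size, and the divergence
# criterion are assumptions, not the published analysis.
import pandas as pd

def fixation_proportions(samples: pd.DataFrame, bin_ms: int = 50) -> pd.DataFrame:
    """Bin gaze samples and return, per time bin, the proportion of samples
    on each area of interest (e.g., 'target', 'anadrome', 'unrelated')."""
    samples = samples.copy()
    samples["bin"] = (samples["time_ms"] // bin_ms) * bin_ms
    counts = samples.groupby(["bin", "aoi"]).size().unstack(fill_value=0)
    return counts.div(counts.sum(axis=1), axis=0)

def divergence_point(props: pd.DataFrame, competitor: str = "anadrome",
                     baseline: str = "unrelated", run: int = 4):
    """Earliest time bin at which the competitor curve exceeds the baseline
    curve and stays above it for `run` consecutive bins."""
    above = (props[competitor] - props[baseline]) > 0
    for i in range(len(above) - run + 1):
        if above.iloc[i:i + run].all():
            return props.index[i]
    return None  # no sustained divergence found
```

With data binned this way, a sustained separation beginning in the 250–300 ms bins would correspond to the roughly 275 ms divergence reported above.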


2019 ◽  
Author(s):  
Debra Titone ◽  
Jason Gullifer ◽  
Shari Baum

We investigated whether bilingual older adults experience within- and cross-language competition during spoken word recognition similarly to younger adults matched on age of second language (L2) acquisition, objective and subjective L2 proficiency, and current L2 exposure. In a visual world eye-tracking paradigm, older and younger adults, who were French-dominant or English-dominant English-French bilinguals, listened to English words and looked at pictures that included the target (field), a within-language competitor (feet) or cross-language (French) competitor (fille, “girl”), and unrelated filler pictures while their eye movements were monitored. Older adults showed evidence of greater within-language competition as a function of increased target and competitor phonological overlap. There was some evidence of age-related differences in cross-language competition; however, this effect was quite small overall and varied as a function of target-language proficiency. These results suggest that greater within- and possibly cross-language lexical competition during spoken word recognition may underlie some of the communication difficulties encountered by healthy bilingual older adults.
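The within-language effect above is described as scaling with target-competitor phonological overlap; the abstract does not specify how overlap was quantified, so the snippet below is only a hypothetical illustration of one simple metric, the number of phonemes shared from word onset (e.g., field /f i l d/ and feet /f i t/ share /f i/).

```python
# Hypothetical overlap metric for illustration only; the article does not
# specify how phonological overlap was scored.
def onset_overlap(target_phones: list[str], competitor_phones: list[str]) -> int:
    """Number of phonemes shared from word onset."""
    shared = 0
    for t, c in zip(target_phones, competitor_phones):
        if t != c:
            break
        shared += 1
    return shared

# field /f i l d/ vs feet /f i t/ -> 2 shared onset phonemes
print(onset_overlap(["f", "i", "l", "d"], ["f", "i", "t"]))
```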


2018 ◽  
Vol 1 (2) ◽  
pp. 221-250 ◽  
Author(s):  
Keerthana Kapiley ◽  
Ramesh Kumar Mishra

Two experiments using the visual-world paradigm examined whether culture-specific images influence the activation of translation equivalents during spoken-word recognition in bilinguals. In Experiment 1, participants performed a visual-world task in which they were asked to click on the target after hearing the spoken word (L1 or L2). In Experiment 2, participants were presented with culture-specific images (faces representing L1, L2, and Neutral) during the visual-world task. Time-course analysis of Experiment 1 revealed a significantly higher number of looks to the translation-equivalent (TE) cohort member than to distractors only when participants heard L2 words. In Experiment 2, when the culture-specific images were congruent with the spoken word’s language, participants made more looks to the TE-cohort member than to distractors. This effect was observed in both language directions, but not when the culture-specific images were incongruent with the spoken word’s language. The eye-tracking data suggest that culture-specific images influence cross-linguistic activation of semantics during bilingual audio-visual language processing.
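The time-course analyses summarized above compare looks to the TE-cohort member against looks to distractors across congruency conditions. As a sketch of one common way to set up such a comparison (not necessarily the authors' method), the code below applies an empirical-logit transform to binned fixation counts and computes the TE-minus-distractor advantage per condition and time bin; the column names (condition, time_bin, looks_te, looks_distractor, n_samples) are assumptions.

```python
# Sketch only: assumed column names and an empirical-logit transform that is
# common for visual-world time-course data, not necessarily what was used here.
import numpy as np
import pandas as pd

def empirical_logit(k: pd.Series, n: pd.Series) -> pd.Series:
    """Empirical logit of k looks out of n samples."""
    return np.log((k + 0.5) / (n - k + 0.5))

def te_advantage_by_bin(trials: pd.DataFrame) -> pd.DataFrame:
    """Mean empirical-logit advantage for the TE-cohort member over
    distractors, per congruency condition and time bin."""
    trials = trials.copy()
    trials["advantage"] = (
        empirical_logit(trials["looks_te"], trials["n_samples"])
        - empirical_logit(trials["looks_distractor"], trials["n_samples"])
    )
    return (trials.groupby(["condition", "time_bin"])["advantage"]
                  .mean()
                  .unstack("condition"))
```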


2009 ◽  
Author(s):  
Julie Mercier ◽  
Irina Pivneva ◽  
Corinne Haigh ◽  
Debra A. Titone
