Iconic culture-specific images influence language non-selective translation activation in bilinguals

2018 ◽  
Vol 1 (2) ◽  
pp. 221-250 ◽  
Author(s):  
Keerthana Kapiley ◽  
Ramesh Kumar Mishra

Abstract: Two experiments using the visual-world paradigm examined whether culture-specific images influence the activation of translation equivalents during spoken-word recognition in bilinguals. In Experiment 1, participants performed a visual-world task in which they were asked to click on the target after hearing the spoken word (L1 or L2). In Experiment 2, participants were presented with culture-specific images (faces representing L1, L2, and Neutral) during the visual-world task. Time-course analysis of Experiment 1 revealed a significantly higher number of looks to the TE-cohort member compared with distractors only when participants heard L2 words. In Experiment 2, when the culture-specific images were congruent with the spoken word’s language, participants deployed a higher number of looks to the TE-cohort member compared with distractors. This effect was seen in both language directions but not when the culture-specific images were incongruent with the spoken word. The eye-tracking data suggest that culture-specific images influence cross-linguistic activation of semantics during bilingual audio-visual language processing.
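The time-course analyses described in abstracts like this one typically compare the proportion of fixations on each interest area within small time bins after spoken-word onset. As an illustration only (not the authors' analysis code; the column names and region labels below are hypothetical), a minimal sketch of that computation might look like this:

```python
# Minimal sketch of a visual-world time-course analysis: proportion of
# fixation samples on each region of interest per time bin after word onset.
# Column names ('trial', 'time_ms', 'roi') and labels are assumptions.
import pandas as pd

BIN_MS = 50  # width of each time bin in milliseconds


def fixation_proportions(samples: pd.DataFrame) -> pd.DataFrame:
    """samples: one row per eye-tracking sample, with 'time_ms' relative to
    spoken-word onset and 'roi' in {'te_cohort', 'distractor', ...}."""
    samples = samples.copy()
    samples["bin"] = (samples["time_ms"] // BIN_MS) * BIN_MS
    # Count samples on each region of interest within each time bin ...
    counts = (samples.groupby(["bin", "roi"]).size()
                     .unstack(fill_value=0))
    # ... then convert counts to proportions of all samples in the bin.
    return counts.div(counts.sum(axis=1), axis=0)


if __name__ == "__main__":
    demo = pd.DataFrame({
        "trial":   [1, 1, 1, 1, 2, 2, 2, 2],
        "time_ms": [0, 60, 120, 180, 0, 60, 120, 180],
        "roi": ["distractor", "distractor", "te_cohort", "te_cohort",
                "distractor", "te_cohort", "te_cohort", "te_cohort"],
    })
    print(fixation_proportions(demo))
```

In practice one would also average within participants or items before comparing conditions, but the binning-and-normalizing step above is the core of the measure.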


2019 ◽  
Vol 72 (11) ◽  
pp. 2574-2583 ◽  
Author(s):  
Julie Gregg ◽  
Albrecht W Inhoff ◽  
Cynthia M Connine

Spoken word recognition models incorporate the temporal unfolding of word information by assuming that positional match constrains lexical activation. Recent findings challenge the linearity constraint. In the visual world paradigm, Toscano, Anderson, and McMurray observed that listeners preferentially viewed a picture of a target word’s anadrome competitor (e.g., competitor bus for target sub) compared with phonologically unrelated distractors (e.g., well) or competitors sharing an overlapping vowel (e.g., sun). Toscano et al. concluded that spoken word recognition relies on coarse-grain spectral similarity for mapping spoken input to a lexical representation. Our experiments aimed to replicate the anadrome effect and to test the coarse-grain similarity account using competitors without vowel-position overlap (e.g., competitor leaf for target flea). The results confirmed the original effect: anadrome competitor fixation curves diverged from unrelated distractors approximately 275 ms after the onset of the target word. In contrast, the no-vowel-position-overlap competitor did not show an increase in fixations compared with the unrelated distractors. The contrasting results for the anadrome and no-vowel-position-overlap items are discussed in terms of the theoretical implications of sequential-match versus coarse-grain similarity accounts of spoken word recognition. We also discuss design issues (repetition of stimulus materials and display parameters) concerning the use of the visual world paradigm in making inferences about online spoken word recognition.
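The reported divergence of the competitor and distractor fixation curves at roughly 275 ms is the kind of estimate often obtained with a simple consecutive-bins heuristic. The sketch below is a generic, assumed illustration of that idea, not the procedure used in the paper; all inputs are made up:

```python
# Assumed, generic divergence-point heuristic: the first time bin at which
# competitor fixation proportions exceed distractor proportions and stay
# higher for `run_length` consecutive bins. Inputs are hypothetical.
from typing import Optional, Sequence


def divergence_point(times_ms: Sequence[float],
                     competitor: Sequence[float],
                     distractor: Sequence[float],
                     run_length: int = 4) -> Optional[float]:
    """Return the onset (ms) of the first run of `run_length` bins where
    competitor > distractor, or None if no such run exists."""
    run_start = None
    run = 0
    for t, c, d in zip(times_ms, competitor, distractor):
        if c > d:
            if run == 0:
                run_start = t
            run += 1
            if run >= run_length:
                return run_start
        else:
            run = 0
            run_start = None
    return None


# Example with made-up 50 ms bins: the estimated divergence point is 250 ms.
times = list(range(0, 600, 50))
comp = [.10, .10, .11, .12, .12, .18, .24, .30, .35, .38, .40, .41]
dist = [.10, .11, .11, .12, .12, .13, .13, .12, .12, .11, .11, .10]
print(divergence_point(times, comp, dist))
```

Published analyses often use bootstrap or model-based versions of this idea; the consecutive-bins rule is shown only because it is the simplest to state.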





2020 ◽  
Author(s):  
Keith S Apfelbaum ◽  
Jamie Klein-Packard ◽  
Bob McMurray

A common critique of the Visual World Paradigm (VWP) in psycholinguistic studies is that what is designed as a measure of language processes is corrupted by the visual context of the task. This is crucial, particularly in studies of spoken word recognition, where the displayed images are usually seen as just a part of the measure and are not of fundamental interest. Many variants of the VWP allow participants to sample the visual scene before a trial begins. However, this could bias their interpretations of the later speech or even lead to abnormal processing strategies (e.g., comparing the input to only preactivated working memory representations). Prior work has focused only on whether preview duration changes fixation patterns. However, preview could affect a number of processes, such as visual search, that would not challenge the interpretation of the VWP. The present study uses a series of targeted manipulations of the preview period to ask if preview alters looking behavior during a trial, and why. Results show that standard psycholinguistic effects seen in the VWP are not dependent on preview, and are not enhanced by explicit phonological prenaming. Moreover, some forms of preview can eliminate nuisance variance deriving from object recognition and visual search demands in order to produce a more sensitive measure of linguistic processing. These results deepen our understanding of how the visual scene interacts with language processing to drive fixation patterns in the VWP, and reinforce the value of the VWP as a tool for measuring real-time language processing.



Author(s):  
Llorenç Andreu ◽  
Mònica Sanz-Torrent

Eye movements have become a commonly used response measure in studies of spoken language processing. These studies fall within the so-called ‘visual world paradigm’, in which participants’ eye movements are monitored during scene viewing in language comprehension and production activities. This chapter reviews the most important aspects of running eye-tracking studies with children. Developmental studies using eye movements, from infants to adolescents, have increased over the last ten years. However, only a handful of papers based on the ‘visual world paradigm’ analyze spoken language in children with language disorders. These eye-movement studies have explored spoken word recognition; verb argument and thematic relations; and narrative comprehension and production. The results have shown eye tracking to be an effective tool for understanding language representation and processing in children with language disorders.



1997 ◽  
Author(s):  
Paul D. Allopenna ◽  
James S. Magnuson ◽  
Michael K. Tanenhaus


1998 ◽  
Vol 38 (4) ◽  
pp. 419-439 ◽  
Author(s):  
Paul D. Allopenna ◽  
James S. Magnuson ◽  
Michael K. Tanenhaus


Author(s):  
Christina Blomquist ◽  
Rochelle S. Newman ◽  
Yi Ting Huang ◽  
Jan Edwards

Purpose: Children with cochlear implants (CIs) are more likely to struggle with spoken language than their age-matched peers with normal hearing (NH), and new language processing literature suggests that these challenges may be linked to delays in spoken word recognition. The purpose of this study was to investigate whether children with CIs use language knowledge via semantic prediction to facilitate recognition of upcoming words and help compensate for uncertainties in the acoustic signal. Method: Five- to 10-year-old children with CIs heard sentences with an informative verb (draws) or a neutral verb (gets) preceding a target word (picture). The target referent was presented on a screen, along with a phonologically similar competitor (pickle). Children's eye gaze was recorded to quantify efficiency of access of the target word and suppression of phonological competition. Performance was compared to both an age-matched group and a vocabulary-matched group of children with NH. Results: Children with CIs, like their peers with NH, demonstrated use of informative verbs to look more quickly to the target word and less to the phonological competitor. However, children with CIs demonstrated less efficient use of semantic cues relative to their peers with NH, even when matched for vocabulary ability. Conclusions: Children with CIs use semantic prediction to facilitate spoken word recognition but do so to a lesser extent than children with NH. Children with CIs experience challenges in predictive spoken language processing above and beyond limitations from delayed vocabulary development. Children with CIs with better vocabulary ability demonstrate more efficient use of lexical-semantic cues. Clinical interventions focusing on building knowledge of words and their associations may support efficiency of spoken language processing for children with CIs. Supplemental Material: https://doi.org/10.23641/asha.14417627


