Relative Contribution of Auditory and Visual Information to Mandarin Chinese Tone Identification by Native and Tone-naïve Listeners

2019
Vol 63 (4)
pp. 856-876
Author(s):  
Yueqiao Han ◽  
Martijn Goudbeek ◽  
Maria Mos ◽  
Marc Swerts

Speech perception is a multisensory process: what we hear can be affected by what we see. For instance, the McGurk effect occurs when auditory speech is presented in synchrony with discrepant visual information. A large number of studies have targeted the McGurk effect at the segmental level of speech (mainly consonant perception), where visual cues are salient and lip-reading based; the present study extends the existing body of literature to the suprasegmental level by investigating a McGurk effect for the identification of tones in Mandarin Chinese. Previous studies have shown that visual information does play a role in Chinese tone perception and that the different tones correlate with variable movements of the head and neck. We constructed congruent and incongruent auditory-visual materials (10 syllables with 16 tone combinations each) and presented them to native speakers of Mandarin Chinese and to speakers of tone-naïve languages. In line with our previous work, we found that tone identification varied by tone, with tone 3 (the low-dipping tone) the easiest to identify and tone 4 (the high-falling tone) the most difficult. Both groups of participants relied mainly on auditory input rather than visual input, and this auditory reliance was even stronger for the Chinese subjects. The results showed no evidence of auditory-visual integration among native participants, whereas visual information was helpful for tone-naïve participants. Even for this group, however, visual information only marginally increased accuracy in the tone identification task, and this increase depended on the tone in question.
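The 16 tone combinations per syllable follow from fully crossing four auditory tones with four visual tones (4 × 4 = 16, of which 4 are congruent and 12 incongruent). A minimal sketch of how such a stimulus grid might be enumerated; the syllable list is an illustrative assumption, not the authors' actual materials:

```python
from itertools import product

# Placeholder syllables; the abstract specifies 10 syllables but not which ones.
SYLLABLES = ["ma", "yi", "wu", "xu", "bo", "fa", "de", "li", "gu", "ke"]
TONES = [1, 2, 3, 4]  # Mandarin tones: high-level, rising, low-dipping, high-falling

def build_stimuli(syllables=SYLLABLES, tones=TONES):
    """Pair every auditory tone with every visual tone for each syllable,
    flagging each pairing as congruent or incongruent."""
    stimuli = []
    for syllable in syllables:
        for audio_tone, visual_tone in product(tones, tones):
            stimuli.append({
                "syllable": syllable,
                "audio_tone": audio_tone,
                "visual_tone": visual_tone,
                "congruent": audio_tone == visual_tone,
            })
    return stimuli

stimuli = build_stimuli()
print(len(stimuli))  # 10 syllables x 16 tone combinations = 160 trials
```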

1996
Vol 18 (4)
pp. 403-432
Author(s):  
Nobuko Chikamatsu

This paper examines the effects of a first-language (L1) orthographic system on second-language (L2) word recognition strategies. Lexical judgment tests using Japanese kana (a syllabic script consisting of hiragana and katakana) were given to native English and native Chinese learners of Japanese. The visual familiarity and length of test words were controlled to examine the involvement of phonological or visual coding in word recognition strategies. The responses of the English and Chinese subjects were compared on the basis of observed reaction times. The results indicated that (a) Chinese subjects relied more on visual information in L2 Japanese kana words than did English subjects and (b) English subjects utilized phonological information in Japanese kana words more than did Chinese subjects. These findings demonstrate that native speakers of English and Chinese use different word recognition strategies as a result of L1 orthographic characteristics, and that such L1 strategies transfer into L2 Japanese kana word recognition.
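The design implies a group-by-condition comparison of reaction times: a larger visual-familiarity effect in one group points to heavier reliance on visual coding. A minimal sketch of that comparison, with made-up records standing in for the experiment's data:

```python
from statistics import mean

# Toy lexical-decision records; fields are L1 group, visual-familiarity
# condition of the kana word, and reaction time in milliseconds.
trials = [
    {"l1": "English", "condition": "unfamiliar", "rt_ms": 712},
    {"l1": "English", "condition": "familiar", "rt_ms": 655},
    {"l1": "Chinese", "condition": "unfamiliar", "rt_ms": 804},
    {"l1": "Chinese", "condition": "familiar", "rt_ms": 690},
]

def mean_rt(trials, l1, condition):
    """Average reaction time for one L1 group in one familiarity condition."""
    rts = [t["rt_ms"] for t in trials if t["l1"] == l1 and t["condition"] == condition]
    return mean(rts)

# A larger slowdown for visually unfamiliar words suggests that the group
# leans on visual coding rather than phonological coding.
for l1 in ("English", "Chinese"):
    familiarity_effect = mean_rt(trials, l1, "unfamiliar") - mean_rt(trials, l1, "familiar")
    print(l1, familiarity_effect)
```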


2020
Vol 21 (1)
pp. 349-358
Author(s):  
O. Brendel

This article considers a problematic issue that frequently arises in the examination of video and audio recordings: the visual and auditory perception of oral speech, that is, establishing the content of a conversation from its image (lip reading). Its purpose is to analyze the possibility and feasibility of examining visual-auditory perception of oral speech within the framework of the examination of video and sound recordings, taking into account the peculiarities of such research, and to assess whether visual information can serve either as an independent object of examination (lip reading) or as a supplement to the auditory analysis of a particular message. The main components of the lip-reading process, and the possibility of jointly examining visual and auditory information to establish the content of a conversation, are considered. Attention is paid to the features of visual and auditory perception of oral speech, and the factors that contribute most to the informativeness of the overall picture of speech perception by image are analyzed: active articulation, facial expressions, head movement, position of the teeth, gestures, and so on. Besides image quality, the duration of the speech fragment also affects the perception of oral speech by image: a fully uttered expression is usually read better than its individual parts. The article also draws attention to the ambiguity of the articulatory images of sounds, and considers the McGurk effect, a perceptual phenomenon that demonstrates the interaction between hearing and vision during speech perception.


2015
Vol 233 (9)
pp. 2581-2586
Author(s):  
John F. Magnotti ◽  
Debshila Basu Mallick ◽  
Guo Feng ◽  
Bin Zhou ◽  
Wen Zhou ◽  
...  

2015
Author(s):  
Nancy F. Chen ◽  
Rong Tong ◽  
Darren Wee ◽  
Peixuan Lee ◽  
Bin Ma ◽  
...  

2021
Author(s):  
Shachar Sherman ◽  
Koichi Kawakami ◽  
Herwig Baier

The brain is assembled during development by both innate and experience-dependent mechanisms [1-7], but the relative contribution of these factors is poorly understood. Axons of retinal ganglion cells (RGCs) connect the eye to the brain, forming a bottleneck for the transmission of visual information to central visual areas. RGCs secrete molecules from their axons that control the proliferation, differentiation and migration of downstream components [7-9]. Spontaneously generated waves of retinal activity, but also intense visual stimulation, can entrain the responses of RGCs [10] and central neurons [11-16]. Here we asked how the cellular composition of central targets is altered in a vertebrate brain that is depleted of retinal input throughout development. For this, we first established a molecular catalog [17] and gene expression atlas [18] of neuronal subpopulations in the retinorecipient areas of larval zebrafish. We then searched for changes in lakritz (atoh7-) mutants, in which RGCs do not form [19]. Although individual forebrain-expressed genes are dysregulated in lakritz mutants, the complete set of 77 putative neuronal cell types in thalamus, pretectum and tectum is present. While neurogenesis and differentiation trajectories are overall unaltered, a greater proportion of cells remains in an uncommitted progenitor stage in the mutant. Optogenetic stimulation of a pretectal area [20,21] evokes a visual behavior in blind mutants that is indistinguishable from wild type. Our analysis shows that, in this vertebrate visual system, neurons are produced more slowly but are specified and wired up in the proper configuration in the absence of any retinal signals.
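The cell-composition claim reduces to comparing the fraction of progenitor-annotated cells between genotypes in the single-cell atlas. A toy sketch of that comparison, with fabricated labels standing in for real cluster annotations:

```python
from collections import Counter

def progenitor_fraction(cell_labels):
    """Fraction of cells annotated as uncommitted progenitors in one genotype."""
    counts = Counter(cell_labels)
    return counts["progenitor"] / sum(counts.values())

# Made-up annotations for illustration; real input would be per-cell cluster
# labels from the molecular catalog of retinorecipient areas.
wildtype = ["progenitor"] * 20 + ["neuron"] * 80
lakritz  = ["progenitor"] * 35 + ["neuron"] * 65

print(progenitor_fraction(wildtype))  # 0.20
print(progenitor_fraction(lakritz))   # 0.35: more cells held at the progenitor stage
```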


Pragmatics
2017
Vol 27 (4)
pp. 479-506
Author(s):  
Binmei Liu

Abstract: Previous studies have found that but and so occur frequently in the speech of native and non-native English speakers and are easy for non-native speakers to acquire. The current study compared the ideational and pragmatic functions of but and so in native and non-native speakers of English. Data were gathered through individual sociolinguistic interviews with five native English speakers and ten L1 Chinese speakers. The results suggest that even though the Chinese speakers of English acquired the ideational functions of but and so as well as the native English speakers did, they underused their pragmatic functions. The findings indicate that a gap remains between native and non-native English speakers' communicative competence in the use of but and so. The study also suggests that speakers' L1 (Mandarin Chinese) and overall oral proficiency affect their use of but and so.


2021
pp. 002383092199872
Author(s):  
Solène Inceoglu

The present study investigated native (L1) and non-native (L2) speakers' perception of the French vowels /ɔ̃, ɑ̃, ɛ̃, o/. Thirty-four American-English learners of French and 33 native speakers of Parisian French were asked to identify 60 monosyllabic words produced by a native speaker in three modalities of presentation: auditory-only (A-only), audiovisual (AV), and visual-only (V-only). The L2 participants also completed a vocabulary knowledge test of the words presented in the perception experiment, which aimed to explore whether subjective word familiarity affected speech perception. Results showed that overall performance was better in the AV and A-only conditions for both groups, with the pattern of confusion differing across modalities. The lack of an audiovisual benefit was not due to the vowel contrasts not being visually salient enough, as shown by the native group's performance in the V-only modality, but to the L2 group's weaker sensitivity to visual information. Additionally, a significant relationship was found between subjective word familiarity and AV and A-only (but not V-only) perception of non-native contrasts.
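A confusion pattern of this kind is naturally summarized as one target-by-response count table per modality (A-only, AV, V-only). A minimal sketch with fabricated responses, not the study's data:

```python
from collections import defaultdict

VOWELS = ["ɔ̃", "ɑ̃", "ɛ̃", "o"]

def confusion_matrix(responses):
    """Build a target -> response count table for one presentation modality.

    `responses` is a list of (target_vowel, response_vowel) pairs.
    """
    matrix = {target: defaultdict(int) for target in VOWELS}
    for target, response in responses:
        matrix[target][response] += 1
    return matrix

# Illustrative trials for the visual-only modality.
v_only = [("ɔ̃", "ɔ̃"), ("ɔ̃", "o"), ("ɑ̃", "ɑ̃"), ("ɛ̃", "ɑ̃"), ("o", "o")]
matrix = confusion_matrix(v_only)
for target in VOWELS:
    print(target, dict(matrix[target]))  # off-diagonal counts are confusions
```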

