Shifting Perceptual Weights in L2 Vowel Identification after Training

PLoS ONE ◽  
2016 ◽  
Vol 11 (9) ◽  
pp. e0162876 ◽  
Author(s):  
Wei Hu ◽  
Lin Mi ◽  
Zhen Yang ◽  
Sha Tao ◽  
Mingshuang Li ◽  
...

2019 ◽  
Vol 62 (12) ◽  
pp. 4534-4543
Author(s):  
Wei Hu ◽  
Sha Tao ◽  
Mingshuang Li ◽  
Chang Liu

Purpose The purpose of this study was to investigate how the distinctive establishment of 2nd language (L2) vowel categories (e.g., how distinctively an L2 vowel is established from nearby L2 vowels and from the native language counterpart in the 1st formant [F1] × 2nd formant [F2] vowel space) affected L2 vowel perception. Method Identification of 12 natural English monophthongs, and categorization and rating of synthetic English vowels /i/ and /ɪ/ in the F1 × F2 space were measured for Chinese-native (CN) and English-native (EN) listeners. CN listeners were also examined with categorization and rating of Chinese vowels in the F1 × F2 space. Results As expected, EN listeners significantly outperformed CN listeners in English vowel identification. Whereas EN listeners showed distinctive establishment of 2 English vowels, CN listeners had multiple patterns of L2 vowel establishment: both, 1, or neither established. Moreover, CN listeners' English vowel perception was significantly related to the perceptual distance between the English vowel and its Chinese counterpart, and the perceptual distance between the adjacent English vowels. Conclusions L2 vowel perception relied on listeners' capacity to distinctively establish L2 vowel categories that were distant from the nearby L2 vowels.
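The perceptual-distance measure described above treats each vowel as a point in the F1 × F2 plane. A minimal sketch of that idea follows; the formant values are illustrative placeholders, not measurements from the study.

```python
import math

def vowel_distance(v1, v2):
    """Euclidean distance between two vowels in the F1 x F2 plane (Hz)."""
    return math.hypot(v1[0] - v2[0], v1[1] - v2[1])

# Hypothetical (F1, F2) category centers in Hz -- illustrative only.
english_i  = (280, 2250)  # English /i/
english_ih = (400, 1920)  # English /ɪ/
chinese_i  = (290, 2200)  # Mandarin counterpart of /i/

# Distance between adjacent L2 vowels, and between an L2 vowel
# and its L1 counterpart -- the two predictors related to CN
# listeners' identification performance.
print(vowel_distance(english_i, english_ih))
print(vowel_distance(english_i, chinese_i))
```

A larger separation on either measure would correspond, on the study's account, to a more distinctively established L2 category.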


2016 ◽  
Vol 13 (118) ◽  
pp. 20160057 ◽  
Author(s):  
Erin E. Sutton ◽  
Alican Demir ◽  
Sarah A. Stamper ◽  
Eric S. Fortune ◽  
Noah J. Cowan

Animal nervous systems resolve sensory conflict for the control of movement. For example, the glass knifefish, Eigenmannia virescens, relies on visual and electrosensory feedback as it swims to maintain position within a moving refuge. To study how signals from these two parallel sensory streams are used in refuge tracking, we constructed a novel augmented reality apparatus that enables the independent manipulation of visual and electrosensory cues to freely swimming fish (n = 5). We evaluated the linearity of multisensory integration, the change to the relative perceptual weights given to vision and electrosense in relation to sensory salience, and the effect of the magnitude of sensory conflict on sensorimotor gain. First, we found that tracking behaviour obeys superposition of the sensory inputs, suggesting linear sensorimotor integration. In addition, fish rely more on vision when electrosensory salience is reduced, suggesting that fish dynamically alter sensorimotor gains in a manner consistent with Bayesian integration. However, the magnitude of sensory conflict did not significantly affect sensorimotor gain. These studies lay the theoretical and experimental groundwork for future work investigating multisensory control of locomotion.
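The Bayesian-style reweighting described above, in which vision gains weight as electrosensory salience drops, is commonly modeled as inverse-variance (reliability-weighted) cue combination. A minimal sketch, with cue variances chosen purely for illustration:

```python
def fuse_estimates(mu_v, var_v, mu_e, var_e):
    """Inverse-variance fusion of visual and electrosensory estimates.

    Returns the fused estimate and the weight given to vision.
    """
    w_v = (1 / var_v) / (1 / var_v + 1 / var_e)
    w_e = 1 - w_v
    return w_v * mu_v + w_e * mu_e, w_v

# Hypothetical refuge-position estimates (cm) from the two senses.
# Equal reliability -> vision weighted 0.5.
fused, w_v = fuse_estimates(1.0, 0.2, 1.4, 0.2)
# Degraded electrosense (larger variance) -> visual weight rises to 0.8.
fused2, w_v2 = fuse_estimates(1.0, 0.2, 1.4, 0.8)
print(w_v, w_v2)
```

Under this scheme the less reliable cue is automatically down-weighted, which is the qualitative pattern the tracking experiments found.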


2017 ◽  
Vol 60 (9) ◽  
pp. 2537-2550 ◽  
Author(s):  
Irena Vincent

Purpose Research on language planning in adult stuttering is relatively sparse and offers diverging arguments about a potential causative relationship between semantic and phonological encoding and fluency breakdowns. This study further investigated semantic and phonological encoding efficiency in adults who stutter (AWS) by means of silent category and phoneme identification, respectively. Method Fifteen AWS and 15 age- and sex-matched adults who do not stutter (ANS) participated. The groups were compared on the basis of the accuracy and speed of superordinate category (animal vs. object) and initial phoneme (vowel vs. consonant) decisions, which were indicated manually during silent viewing of pictorial stimuli. Movement execution latency was accounted for, and no other cognitive, linguistic, or motor demands were posed on participants' responses. Therefore, category identification accuracy and speed were considered indirect measures of semantic encoding efficiency and phoneme identification accuracy and speed of phonological encoding efficiency. Results For category decisions, AWS were slower but not less accurate than ANS, with objects eliciting more errors and slower responses than animals in both groups. For phoneme decisions, the groups did not differ in accuracy, with consonant errors outnumbering vowel errors in both groups, and AWS were slower than ANS in consonant but not vowel identification, with consonant response time lagging behind vowel response time in AWS only. Conclusions AWS were less efficient than ANS in semantic encoding, and they might harbor a consonant-specific phonological encoding weakness. Future independent studies are warranted to discover if these positive findings are replicable and a marker for persistent stuttering.


2018 ◽  
Vol 104 (5) ◽  
pp. 922-925
Author(s):  
Samuel S. Smith ◽  
Ananthakrishna Chintanpalli ◽  
Michael G. Heinz ◽  
Christian J. Sumner

2013 ◽  
Vol 133 (5) ◽  
pp. EL391-EL397 ◽  
Author(s):  
Lin Mi ◽  
Sha Tao ◽  
Wenjing Wang ◽  
Qi Dong ◽  
Su-Hyun Jin ◽  
...  

1997 ◽  
Vol 40 (6) ◽  
pp. 1434-1444 ◽  
Author(s):  
Kathryn Hoberg Arehart ◽  
Catherine Arriaga King ◽  
Kelly S. McLean-Mudgett

This study compared the ability of listeners with normal hearing and listeners with moderate to moderately-severe sensorineural hearing loss to use fundamental frequency differences (ΔF0) in the identification of monotically presented simultaneous vowels. Two psychophysical procedures, double vowel identification and masked vowel identification, were used to measure identification performance as a function of ΔF0 (0 through 8 semitones) between simultaneous vowels. Performance in the double vowel identification task was measured by the percentage of trials in which listeners correctly identified both vowels in a double vowel. The masked vowel identification task yielded thresholds representing signal-to-noise ratios at which listeners could just identify target vowels in the presence of a masking vowel. In the double vowel identification task, both listeners with normal hearing and listeners with hearing loss showed significant ΔF0 benefit: Between 0 and 2 semitones, listeners with normal hearing showed an 18.5% average increase in performance; listeners with hearing loss showed a 16.5% average increase. In the masked vowel identification task, both groups showed significant ΔF0 benefit. However, the mean benefit associated with ΔF0 differences in the masked vowel task was more than twice as large in listeners with normal hearing (9.4 dB) when compared to listeners with hearing loss (4.4 dB), suggesting less ΔF0 benefit in listeners with hearing loss. In both tasks, overall performance of listeners with hearing loss was significantly worse than performance of listeners with normal hearing. Possible reasons for reduced ΔF0 benefit and decreased overall performance in listeners with hearing loss include reduced audibility of vowel sounds and deficits in spectro-temporal processing.
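The semitone separations reported above translate into frequency ratios via the standard 12-tone equal-temperament relation, ratio = 2^(n/12). A small sketch, using an illustrative 100 Hz base F0 (not a value from the study):

```python
def semitones_to_ratio(n):
    """Frequency ratio corresponding to n semitones (12-TET)."""
    return 2 ** (n / 12)

base_f0 = 100.0  # Hz, illustrative base fundamental for one vowel
# F0 of the second vowel at the separations used in the experiment.
for n in (0, 2, 8):
    print(n, round(base_f0 * semitones_to_ratio(n), 1))
```

At 2 semitones the competing vowel's F0 is about 12% higher than the target's, which was already enough separation for the reported identification benefit in both listener groups.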

