lexical segmentation
Recently Published Documents

TOTAL DOCUMENTS: 39 (five years: 3)
H-INDEX: 11 (five years: 1)

2020 · Vol 10 (4) · pp. 723-749
Author(s): Kriss Lange, Joshua Matthews

The capacity to perceive and meaningfully process foreign or second language (L2) words from the aural modality is a fundamentally important aspect of successful L2 listening. Despite this, the relationship between L2 listening and learners' capacity to process aural input at the lexical level has received relatively little research focus. This study explores the relationships between measures of aural vocabulary, lexical segmentation, and two measures of L2 listening comprehension (i.e., TOEIC and Eiken Pre-2) among a cohort of 130 tertiary-level English as a foreign language (EFL) Japanese learners. Multiple regression modelling indicated that, in combination, aural knowledge of vocabulary at the first 1,000-word level and lexical segmentation ability could predict 34% and 38% of the total variance observed in TOEIC and Eiken Pre-2 listening scores, respectively. The findings are used to provide some preliminary recommendations for building the capacity of EFL learners to process aural input at the lexical level.
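As a rough illustration of the regression analysis reported above, the following is a minimal sketch in Python using scikit-learn. The data are synthetic placeholders, not the study's measurements; only the cohort size (n = 130), the two predictors, and the R-squared interpretation come from the abstract.

```python
# Minimal sketch of a two-predictor multiple regression of the kind
# described in the abstract. All data are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 130  # cohort size reported in the abstract

# Hypothetical predictor scores for each learner.
aural_vocab_1k = rng.uniform(0, 30, n)    # aural knowledge at the 1,000-word level
lex_segmentation = rng.uniform(0, 20, n)  # lexical segmentation ability

# Hypothetical listening comprehension outcome (e.g., TOEIC listening).
listening = 2.0 * aural_vocab_1k + 1.5 * lex_segmentation + rng.normal(0, 25, n)

X = np.column_stack([aural_vocab_1k, lex_segmentation])
model = LinearRegression().fit(X, listening)

# R^2 is the proportion of variance the two predictors explain jointly,
# analogous to the 34% / 38% figures reported in the abstract.
print(f"R^2 = {model.score(X, listening):.2f}")
```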


2019 · Vol 62 (9) · pp. 3359-3366
Author(s): Yishan Jiao, Amy LaCross, Visar Berisha, Julie Liss

Purpose: Subjective speech intelligibility assessment is often preferred over more objective approaches that rely on transcript scoring. This is, in part, because of the intensive manual labor associated with extracting objective metrics from transcribed speech. In this study, we propose an automated approach to scoring transcripts that provides a holistic and objective representation of intelligibility degradation stemming from both segmental and suprasegmental contributions, and that corresponds with human perception.
Method: Phrases produced by 73 speakers with dysarthria were orthographically transcribed by 819 listeners via Mechanical Turk, resulting in 63,840 phrase transcriptions. A protocol was developed to filter the transcripts, which were then automatically analyzed using novel algorithms developed for measuring phoneme and lexical segmentation errors. The results were compared with manual labels on a randomly selected sample set of 40 transcribed phrases to assess validity. A linear regression analysis was conducted to examine how well the automated metrics predict a perceptual rating of severity and word accuracy.
Results: On the sample set, the automated metrics achieved correlation coefficients of 0.90 with manual labels for phoneme error measurement, and 100% accuracy in identifying and coding lexical segmentation errors. Linear regression models found that the estimated metrics could predict a significant portion of the variance in perceptual severity and word accuracy.
Conclusions: The results show the promising development of an objective speech intelligibility assessment that identifies intelligibility degradation on multiple levels of analysis.
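For readers unfamiliar with alignment-based transcript scoring, the sketch below shows the simplest form of a phoneme error metric: an edit distance between target and transcribed phoneme sequences. This is a generic illustration of the approach, not the authors' algorithm, and the phoneme sequences are invented examples.

```python
# Minimal sketch of automated transcript scoring via sequence alignment:
# phoneme-level Levenshtein distance between a target phrase and a
# listener's transcription. Illustrative only; not the study's algorithm.
def edit_distance(ref, hyp):
    """Levenshtein distance between two symbol sequences."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)]

# Hypothetical ARPAbet-style phoneme sequences, space-delimited.
target = "DH AH B OY R AE N HH OW M".split()
transcript = "DH AH B OY G R AE N OW M".split()

errors = edit_distance(target, transcript)
print(f"phoneme errors: {errors}, error rate: {errors / len(target):.2f}")
```

Lexical segmentation errors would additionally require comparing word-boundary placements in the aligned sequences; the edit distance above covers only the segmental (phoneme) level.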


2018
Author(s): Christian Brodbeck, L. Elliot Hong, Jonathan Z. Simon

Summary: During speech perception, a central task of the auditory cortex is to analyze complex acoustic patterns to allow detection of the words that encode a linguistic message. It is generally thought that this process includes at least one intermediate, phonetic, level of representations [1–6], localized bilaterally in the superior temporal lobe [7–10]. Phonetic representations reflect a transition from acoustic to linguistic information, classifying acoustic patterns into linguistically meaningful units, which can serve as input to mechanisms that access abstract word representations [11–13]. While recent research has identified neural signals arising from successful recognition of individual words in continuous speech [14–17], no explicit neurophysiological signal has been found demonstrating the transition from acoustic/phonetic to symbolic, lexical representations. Here we report a response reflecting the incremental integration of phonetic information for word identification, dominantly localized to the left temporal lobe. The short response latency, approximately 110 ms relative to phoneme onset, suggests that phonetic information is used for lexical processing as soon as it becomes available. Responses also tracked word boundaries, confirming previous reports of immediate lexical segmentation [18,19]. These new results were further investigated using a cocktail-party paradigm [20,21] in which participants listened to a mix of two talkers, attending to one and ignoring the other. Analysis indicates neural lexical processing of only the attended, but not the unattended, speech stream. Thus, while responses to acoustic features reflect attention through selective amplification of attended speech, responses consistent with a lexical processing model reveal categorically selective processing.
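Continuous-speech analyses of this kind are commonly framed as temporal response function (TRF) estimation: a kernel is fit that maps a continuous predictor (here, e.g., word-onset impulses) to the recorded neural signal. The sketch below uses ridge regression over time-lagged copies of the predictor on fully synthetic data; it is a schematic stand-in for the source-localized MEG pipeline described above, and the sampling rate, lag range, and all values are assumptions.

```python
# Minimal TRF sketch: recover a response kernel peaking near 110 ms
# (the latency reported in the abstract) from synthetic data.
import numpy as np
from numpy.linalg import solve

fs = 100                 # sampling rate in Hz (assumed), so 1 sample = 10 ms
n = 6000                 # 60 s of data
lags = np.arange(0, 30)  # lags of 0-290 ms in 10 ms steps

rng = np.random.default_rng(1)
word_onsets = (rng.random(n) < 0.02).astype(float)  # sparse onset impulses

# Synthetic "neural" response: onsets convolved with a Gaussian kernel
# peaking at 110 ms, plus noise.
true_trf = np.exp(-0.5 * ((lags * 10 - 110) / 30) ** 2)
response = np.convolve(word_onsets, true_trf)[:n] + rng.normal(0, 0.5, n)

# Design matrix of lagged predictors: X[t, k] = word_onsets[t - k].
X = np.column_stack([np.roll(word_onsets, k) for k in lags])
X[: lags.max()] = 0  # zero out wrap-around samples from np.roll

# Ridge-regularized least squares for the TRF coefficients.
ridge = 1.0
trf = solve(X.T @ X + ridge * np.eye(len(lags)), X.T @ response)

print("estimated peak latency: %d ms" % (lags[np.argmax(trf)] * 10))
```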


2017 · Vol 61 (1) · pp. 3-30
Author(s): Odile Bagou, Ulrich Hans Frauenfelder

This study examines how French listeners segment and learn new words of artificial languages varying in the presence of different combinations of sublexical segmentation cues. The first experiment investigated the contribution of three different types of sublexical cues (acoustic-phonetic, phonological, and prosodic) to word learning. The second experiment explored how participants specifically exploited sublexical prosodic cues. Whereas complementary cues signaling word-initial and word-final boundaries had synergistic effects on word learning in the first experiment, the two prosodic cues manipulated in the second experiment, which redundantly signaled word-final boundaries, were rank-ordered: final pitch variations were weighted more heavily than final lengthening. These results are discussed in light of the notions of cue type, cue position, and cue efficiency.
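For concreteness, here is a minimal sketch of how a continuous artificial-language stream carrying a word-final prosodic cue might be generated for a segmentation experiment. The lexicon and the symbolic cue marker are invented for illustration and are not the materials used in the study.

```python
# Minimal sketch of artificial-language stream generation: words from a
# small CV-syllable lexicon are concatenated in random order, and each
# word-final syllable is tagged with a marker standing in for a prosodic
# cue (e.g., final lengthening or a pitch rise). Hypothetical materials.
import random

lexicon = ["pagu", "tibo", "mudeka", "rofi"]  # invented words
random.seed(0)

def syllabify(word):
    """Split a strictly CV word into CV syllables."""
    return [word[i:i + 2] for i in range(0, len(word), 2)]

stream = []
for word in random.choices(lexicon, k=20):
    syllables = syllabify(word)
    syllables[-1] += ":"  # cue the word-final boundary
    stream.extend(syllables)

# Listeners hear the unbroken syllable stream; the ":" marks where a
# word-final prosodic cue would be realized acoustically.
print(" ".join(stream))
```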


2017 · Vol 44 (6) · pp. 1516-1538
Author(s): Naomi Havron, Inbal Arnon

Abstract: Can emergent literacy impact the size of the linguistic units children attend to? We examined children's ability to segment multiword sequences before and after they learned to read, in order to disentangle the effect of literacy and age on segmentation. We found that early readers were better at segmenting multiword units (after controlling for age, cognitive, and linguistic variables), and that improvement in literacy skills between the two sessions predicted improvement in segmentation abilities. Together, these findings suggest that literacy acquisition, rather than age, enhanced segmentation. We discuss implications for models of language learning.
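The covariate-controlled analysis described above can be illustrated schematically: regress segmentation improvement on literacy improvement while holding age constant. All variables and values in the sketch below are synthetic placeholders, not the study's data.

```python
# Minimal sketch of "controlling for a covariate" with OLS regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 60
age = rng.uniform(5.0, 7.0, n)        # years, hypothetical
literacy_gain = rng.normal(1.0, 0.5, n)
segmentation_gain = 0.8 * literacy_gain + 0.1 * age + rng.normal(0, 0.4, n)

# Include age as a covariate so the literacy coefficient is age-adjusted.
X = sm.add_constant(np.column_stack([literacy_gain, age]))
fit = sm.OLS(segmentation_gain, X).fit()
print(fit.params)  # [const, literacy_gain, age]; x1 is the adjusted effect
```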


2016 · Vol 59 (6) · pp. 1505-1519
Author(s): Michael F. Dorman, Julie Liss, Shuai Wang, Visar Berisha, Cimarron Ludwig, et al.

Purpose: Five experiments probed auditory-visual (AV) understanding of sentences by users of cochlear implants (CIs).
Method: Sentence material was presented in auditory (A), visual (V), and AV test conditions to listeners with normal hearing and CI users.
Results:
(a) Most CI users report that most of the time, they have access to both A and V information when listening to speech.
(b) CI users did not achieve better scores on a task of speechreading than did listeners with normal hearing.
(c) Sentences that are easy to speechread provided 12 percentage points more gain to speech understanding than did sentences that were difficult.
(d) Ease of speechreading for sentences is related to phrase familiarity.
(e) Users of bimodal CIs benefit from low-frequency acoustic hearing even when V cues are available, and a second CI adds to the benefit of a single CI when V cues are available.
(f) V information facilitates lexical segmentation by improving the recognition of the number of syllables produced and the relative strength of these syllables.
Conclusions: Our data are consistent with the view that V information improves CI users' ability to identify syllables in the acoustic stream and to recognize their relative juxtaposed strengths. Enhanced syllable resolution allows better identification of word onsets, which, when combined with place-of-articulation information from visible consonants, improves lexical access.


2016 · Vol 41 (7) · pp. 1988-2021
Author(s): Çağrı Çöltekin
