monosyllabic words
Recently Published Documents

TOTAL DOCUMENTS: 221 (five years: 58)
H-INDEX: 27 (five years: 2)

Author(s): Chan Huey Jien

Cantonese is widely spoken among the Malaysian Chinese community, not only by native speakers but also by non-native speakers. One of the more difficult aspects of learning Cantonese is its lexical tones. This study therefore provides an acoustic analysis of the Cantonese lexical tones produced by Chinese youths in Seremban, Negeri Sembilan, investigating their acoustic characteristics through the duration and pitch features of monosyllabic words. Six female speakers participated: three native Cantonese speakers and three non-native Cantonese speakers. Data analysis was conducted in Praat. In terms of duration, T2 and T6 were the shortest smooth tones, and T7 was the shortest checked tone. In terms of pitch, T3 and T4 showed greater changes than reported in previous work. All lexical tones produced by the non-native speakers, with the exception of T2, were level tones. Moreover, in both groups the vowel duration and pitch values of T2 were close to those of T6, suggesting a trend toward merger of the two tones.
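As a rough illustration of this kind of Praat-based measurement, the sketch below extracts a word's duration and basic F0 statistics through the parselmouth Python interface to Praat. The file name, pitch floor/ceiling, and choice of summary statistics are illustrative assumptions, not the study's actual settings.

```python
# Minimal sketch: duration ("length feature") and F0 ("pitch feature") of one
# recorded monosyllabic word, using parselmouth (a Python interface to Praat).
# File name and pitch range below are illustrative assumptions.
import numpy as np
import parselmouth

snd = parselmouth.Sound("cantonese_T2_token.wav")   # hypothetical recording
duration = snd.duration                             # total duration in seconds

pitch = snd.to_pitch(pitch_floor=75.0, pitch_ceiling=500.0)
f0 = pitch.selected_array["frequency"]
f0 = f0[f0 > 0]                                     # keep voiced frames only

print(f"duration = {duration:.3f} s")
print(f"mean F0 = {f0.mean():.1f} Hz, onset {f0[0]:.1f} Hz, offset {f0[-1]:.1f} Hz")
```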


2021, pp. 1-38
Author(s): Erik Witte, Jonas Ekeroot, Susanne Köbler

Abstract The speech perception ability of people with hearing loss can be measured efficiently using phonemic-level scoring. We aimed to develop linguistic stimuli suitable for a closed-set phonemic discrimination test in Swedish, called the Situated Phoneme (SiP) test. The SiP test stimuli consist of real monosyllabic words with minimal phonemic contrast, realised by phonetically similar phones. The lexical and sublexical factors of word frequency, phonological neighbourhood density, phonotactic probability, and orthographic transparency were similar across all contrasting words. Each test word was recorded five times by two different speakers, one male and one female. The accuracy of the test-word recordings was evaluated by 28 normal-hearing subjects in a closed-set listening experiment against a silent background. With a few exceptions, all test words could be correctly discriminated. We discuss the results in terms of their content- and construct-validity implications for the Swedish SiP test.
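To make the stimulus-construction idea concrete, the sketch below finds candidate minimal pairs: real words whose phonemic transcriptions differ in exactly one segment. The toy Swedish lexicon is a hypothetical placeholder rather than the SiP material, and matching the pairs on frequency, neighbourhood density, and the other factors would be an additional filtering step not shown here.

```python
# Sketch: find word pairs differing in exactly one phoneme.
# The toy lexicon (word -> phoneme list) is a hypothetical placeholder.
from itertools import combinations

lexicon = {
    "bil": ["b", "i:", "l"],   # "car"
    "pil": ["p", "i:", "l"],   # "arrow"
    "sil": ["s", "i:", "l"],   # "sieve"
    "sal": ["s", "ɑ:", "l"],   # "hall"
}

def differs_in_one_segment(a, b):
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

minimal_pairs = [
    (w1, w2) for (w1, p1), (w2, p2) in combinations(lexicon.items(), 2)
    if differs_in_one_segment(p1, p2)
]
print(minimal_pairs)  # [('bil', 'pil'), ('bil', 'sil'), ('pil', 'sil'), ('sil', 'sal')]
```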


2021
Author(s): Changfu Pei, Yuan Qiu, Fali Li, Xunan Huang, Yajing Si, et al.

Human linguistic units are hierarchical, and the brain responds differently when processing them during sentence comprehension, especially when the modality of the received signal differs (auditory, visual, or audio-visual). However, it is unclear how the brain processes and integrates language information at different linguistic levels (words, phrases, and sentences) when it is provided simultaneously in the audio and visual modalities. To address this issue, we presented participants with sequences of short Chinese sentences through auditory, visual, or combined audio-visual modalities while electroencephalographic responses were recorded. Using a frequency-tagging approach, we analysed the neural representations of basic linguistic units (i.e., characters/monosyllabic words) and higher-level linguistic structures (i.e., phrases and sentences) in each of the three modalities. We found that audio-visual integration occurs at all linguistic levels and that the brain areas involved in the integration vary across levels; in particular, the integration of sentences activated the left prefrontal area. We therefore used continuous theta-burst stimulation (cTBS) to verify that the left prefrontal cortex plays a vital role in the audio-visual integration of sentence-level information. Our findings point to an advantage for bimodal language comprehension at hierarchical stages of language-related information processing and provide evidence for a causal role of the left prefrontal regions in processing audio-visual sentence information.
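The frequency-tagging logic can be illustrated with a short spectral-analysis sketch: cortical tracking of each linguistic level shows up as a power peak at that level's presentation rate. The 4/2/1 Hz rates and the synthetic "EEG" signal below are assumptions chosen for demonstration, not the study's actual stimulation parameters or data.

```python
# Sketch of a frequency-tagging readout: spectral peaks at the presentation
# rates of words, phrases, and sentences. Rates and signal are illustrative.
import numpy as np

fs = 250.0                      # sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)    # one minute of data

rng = np.random.default_rng(0)
eeg = (0.8 * np.sin(2 * np.pi * 4 * t)      # monosyllabic-word rate (assumed 4 Hz)
       + 0.4 * np.sin(2 * np.pi * 2 * t)    # phrase rate (assumed 2 Hz)
       + 0.2 * np.sin(2 * np.pi * 1 * t)    # sentence rate (assumed 1 Hz)
       + rng.normal(scale=1.0, size=t.size))

spectrum = np.abs(np.fft.rfft(eeg)) ** 2 / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

for label, f in [("word", 4.0), ("phrase", 2.0), ("sentence", 1.0)]:
    idx = np.argmin(np.abs(freqs - f))
    neighbours = spectrum[max(idx - 5, 1):idx].mean()  # crude noise baseline
    print(f"{label:8s} {f:.1f} Hz: power {spectrum[idx]:.1f}, neighbour mean {neighbours:.1f}")
```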


2021, Vol. 42 (04), pp. 331-341
Author(s): Teresa A. Zwolan, Gregory Basura

Abstract The safety, efficacy, and success of cochlear implants (CIs) are well established and have led to changes in the criteria clinicians use to determine who should receive a CI. These changes in clinical decision-making have outpaced the slower-moving changes in regulatory bodies' and insurers' indications. We review the historical development of indications for CIs, including those of the U.S. Food and Drug Administration (FDA), Medicare, Medicaid, and private insurers. We report on the expansion of candidacy to patients with greater residual hearing, such as those who receive Hybrid and EAS devices, and on recent FDA approvals that place less emphasis on the patient's best-aided condition and greater emphasis on the ear to be treated, including the expansion of CIs to patients with single-sided deafness and asymmetric hearing loss. We review changes in the test materials used to determine candidacy, including the transition from sentences in quiet to sentences in noise and, more recently, to monosyllabic words and cognitive screening measures. Importantly, we discuss the recent trend of recommending CIs even when a patient does not meet FDA or insurers' indications (a practice known as "off-label" use), which serves as attestation that current indications need to be updated.


2021, pp. 002383092199872
Author(s): Solène Inceoglu

The present study investigated native (L1) and non-native (L2) speakers' perception of the French vowels /ɔ̃, ɑ̃, ɛ̃, o/. Thirty-four American-English learners of French and 33 native speakers of Parisian French were asked to identify 60 monosyllabic words produced by a native speaker in three modalities of presentation: auditory-only (A-only), audiovisual (AV), and visual-only (V-only). The L2 participants also completed a vocabulary knowledge test of the words presented in the perception experiment, which aimed to explore whether subjective word familiarity affected speech perception. Results showed that overall performance was better in the AV and A-only conditions for both groups, with the pattern of confusions differing across modalities. The lack of audiovisual benefit was not due to the vowel contrasts being insufficiently visually salient, as shown by the native group's performance in the V-only modality, but to the L2 group's weaker sensitivity to visual information. Additionally, a significant relationship was found between subjective word familiarity and AV and A-only (but not V-only) perception of the non-native contrasts.
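For readers unfamiliar with how such closed-set identification data are scored, the sketch below tallies percent correct per modality and collects vowel confusions. The trial records are hypothetical placeholders, not the study's data.

```python
# Sketch: score a closed-set vowel identification task per modality.
# Trial tuples (modality, target, response) are hypothetical placeholders.
from collections import Counter, defaultdict

trials = [
    ("AV", "ɔ̃", "ɔ̃"), ("AV", "ɑ̃", "ɑ̃"), ("A-only", "ɛ̃", "ɑ̃"),
    ("A-only", "o", "o"), ("V-only", "ɔ̃", "o"), ("V-only", "ɛ̃", "ɛ̃"),
]

correct, total = Counter(), Counter()
confusions = defaultdict(Counter)
for modality, target, response in trials:
    total[modality] += 1
    correct[modality] += (target == response)
    confusions[modality][(target, response)] += 1

for modality in ("A-only", "AV", "V-only"):
    pct = 100 * correct[modality] / total[modality]
    errors = {pair: n for pair, n in confusions[modality].items() if pair[0] != pair[1]}
    print(f"{modality:7s}: {pct:.0f}% correct, confusions: {errors}")
```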


2021, Vol. 6
Author(s): Wei Zhang, John M. Levis

Southwestern Mandarin is one of the most important modern Chinese dialects, with over 270 million speakers. One of its most noticeable phonological features is an inconsistent distinction between the pronunciation of (n) and (l), a feature shared with Cantonese. However, while the /n/-/l/ distinction in Cantonese has been studied extensively, especially for its effect on English pronunciation, it has not been widely studied in Southwestern Mandarin speakers. Many speakers of Southwestern Mandarin learn Standard Mandarin as a second language when they begin formal schooling, and English as a third language later. Their lack of an /l/-/n/ distinction is largely a marker of regional accent. In English, however, the lack of this distinction risks loss of intelligibility because of the high functional load of /l/-/n/. This study is a phonetic investigation of initial and medial (n) and (l) production in English and Standard Mandarin by speakers of Southwestern Mandarin (SWM). Our goal is to identify how SWM speakers produce (n) and (l) in their additional languages, thus providing evidence for variation within Southwestern Mandarin and identifying likely difficulties for L2 learning. Twenty-five SWM speakers recorded English words with word-initial (n) and (l), medial <ll> or <nn> spellings (e.g., swallow, winner), and word-medial (nl) combinations (e.g., only) and (ln) combinations (e.g., walnut). They also read Standard Mandarin monosyllabic words with initial (l) and (n), and Standard Mandarin disyllabic words with (l) or (n). Of the 25 subjects, 18 showed difficulty producing (n) and (l) consistently where required, while seven (all speakers of the same regional variety) showed no such difficulty. The results indicate that SWM speakers had more difficulty with initial nasal sounds in Standard Mandarin, mirroring their performance on Standard Mandarin monosyllabic words. In English, production of (l) was significantly less accurate than (n), and (l) production in English was significantly worse than in Standard Mandarin. When the two sounds occurred next to each other, there was a tendency to produce only one sound, suggesting that the speakers assimilated production toward a single phonological target. Overall, the results suggest that L1 influence may differ for the L2 and the L3.


2021, Vol. 11 (7), pp. 922
Author(s): Audrey Mazur-Palandre, Matthieu Quignard, Agnès Witko

The main goal of this paper is to analyze written texts produced by monolingual French university students with and without dyslexia. More specifically, we were interested in the linguistic characteristics of the words used in written production and in the types of word errors. Previous studies have shown that students with dyslexia have difficulties in written production, whether in terms of the number of spelling errors, certain syntactic aspects, identification of errors, confusion of monosyllabic words, omission of words in sentences, or use of unexpected or inappropriate vocabulary. For the present study, students with dyslexia and control students were asked to produce written and spoken narrative and expository texts. The written texts (N = 86) were collected using Eye and Pen© software with digitizing tablets. Results reveal that students with dyslexia do not censor themselves in their choice of words in written production: they use the same types of words as the control students. Nevertheless, they make many more errors than the control students on all types of words, regardless of their linguistic characteristics (length, frequency, grammatical class, etc.). Finally, these quantitative analyses help to target a rather unexpected subset of errors: short words, and in particular determiners and prepositions.


2021, Vol. 13
Author(s): Gina Na, Sang Hyun Kwak, Seung Hyun Jang, Hye Eun Noh, Jungghi Kim, et al.

To investigate the effect of choline alfoscerate (CA) on hearing amplification in patients with age-related hearing loss, we performed a prospective case-control observational study from March 2016 to September 2020. We enrolled patients with a bilateral word recognition score (WRS) <50% on monosyllabic words. The patients were 65–85 years old, without any history of dementia, Alzheimer's disease, parkinsonism, or depression. After enrollment, all patients started using hearing aids (HAs). The CA group received a daily dose of 800 mg CA for 11 months. We performed between-group comparisons of audiological data after treatment, including pure-tone audiometry, WRS, HA fitting data obtained using real-ear measurement (REM), and Abbreviated Profile of Hearing Aid Benefit scores. After CA administration, the WRS improved significantly in the CA group (4.2 ± 8.3%) but deteriorated in the control group (−0.6 ± 8.1%, p = 0.035). However, there was no significant between-group difference in the change in pure-tone thresholds or in the aided speech intelligibility index calculated from REM. These findings suggest that the difference in WRS reflects central speech understanding rather than peripheral audibility. Therefore, oral CA administration could effectively enrich listening comprehension in older HA users.
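The between-group comparison of WRS change scores can be sketched as a two-sample test on per-patient improvement (post minus pre, in percentage points). The arrays below are hypothetical placeholders rather than the study's data, and Welch's t-test is only one reasonable choice of test, not necessarily the one the authors used.

```python
# Sketch: compare WRS change between a CA + hearing aid group and a hearing
# aid only group. Values are hypothetical placeholders, not study data.
import numpy as np
from scipy import stats

wrs_change_ca = np.array([4, 8, -2, 12, 6, 0, 5, 9])         # CA group
wrs_change_control = np.array([-1, 2, -6, 1, -3, 0, -4, 3])  # control group

t_stat, p_value = stats.ttest_ind(wrs_change_ca, wrs_change_control, equal_var=False)
print(f"mean change CA: {wrs_change_ca.mean():.1f}%, "
      f"control: {wrs_change_control.mean():.1f}%, "
      f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
```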

