word onset
Recently Published Documents

TOTAL DOCUMENTS: 59 (FIVE YEARS: 22)
H-INDEX: 16 (FIVE YEARS: 2)

2021 ◽  
pp. 002383092110460
Author(s):  
Martin Ho Kwan Ip ◽  
Anne Cutler

Many different prosodic cues can help listeners predict upcoming speech. However, no research to date has assessed listeners’ processing of preceding prosody from different speakers. The present experiments examine (1) whether individual speakers (of the same language variety) are likely to vary in their production of preceding prosody; (2) to the extent that there is talker variability, whether listeners are flexible enough to use any prosodic cues signaled by the individual speaker; and (3) whether types of prosodic cues (e.g., F0 versus duration) vary in informativeness. Using a phoneme-detection task, we examined whether listeners can entrain to different combinations of preceding prosodic cues to predict where focus will fall in an utterance. We used unsynthesized sentences recorded by four female native speakers of Australian English who happened to have used different preceding cues to produce sentences with prosodic focus: a combination of pre-focus overall duration cues, F0 and intensity (mean, maximum, range), and longer pre-target interval before the focused word onset (Speaker 1), only mean F0 cues, mean and maximum intensity, and longer pre-target interval (Speaker 2), only pre-target interval duration (Speaker 3), and only pre-focus overall duration and maximum intensity (Speaker 4). Listeners could entrain to almost every speaker’s cues (the exception being Speaker 4’s use of only pre-focus overall duration and maximum intensity), and could use whatever cues were available even when one of the cue sources was rendered uninformative. Our findings demonstrate both speaker variability and listener flexibility in the processing of prosodic focus.
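
As an informal illustration of the cue dimensions listed above (pre-focus duration, F0 and intensity mean/maximum/range, and the pre-target interval), here is a minimal Python sketch that summarises such cues from already-extracted contours. The function name, input arrays, and toy values are assumptions for illustration only, not the authors' actual analysis pipeline.

```python
import numpy as np

def prosodic_cue_summary(f0_hz, intensity_db, frame_dur_s, pre_target_interval_s):
    """Illustrative only: summarise pre-focus cue dimensions (assumed inputs).

    f0_hz / intensity_db: per-frame contours over the pre-focus region
    (unvoiced frames as np.nan); frame_dur_s: frame step in seconds;
    pre_target_interval_s: silent interval before the focused word onset.
    """
    f0 = f0_hz[~np.isnan(f0_hz)]                      # keep voiced frames only
    inten = intensity_db[~np.isnan(intensity_db)]
    return {
        "duration_s": len(f0_hz) * frame_dur_s,       # overall pre-focus duration
        "f0_mean": f0.mean(), "f0_max": f0.max(), "f0_range": f0.max() - f0.min(),
        "int_mean": inten.mean(), "int_max": inten.max(), "int_range": inten.max() - inten.min(),
        "pre_target_interval_s": pre_target_interval_s,
    }

# Toy example: a 0.8 s pre-focus region sampled every 10 ms (invented values)
rng = np.random.default_rng(0)
f0 = 200 + 20 * np.sin(np.linspace(0, 3, 80)) + rng.normal(0, 2, 80)
inten = 65 + rng.normal(0, 3, 80)
print(prosodic_cue_summary(f0, inten, 0.01, 0.12))
```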


2021 ◽  
Author(s):  
Mara De Rosa ◽  
Davide Crepaldi

Reading requires the successful encoding of letter identity and position within a visual display, a process that relies on both visual and linguistic resources. In a series of experiments, we investigate whether readers’ lifelong experience with letter co-occurrence regularities supports letter processing. Skilled readers were briefly exposed to strings of five consonants; critically, letters in positions 2 and 4 were embedded in either high (B in MBL) or low (B in PBG) transitional probability (TP) triplets. When presented with two strings differing by the critical letter (e.g., MBLSD vs. MCLSD), participants identified the correct option more often in high-TP than low-TP contexts, regardless of position. Experiment 2 featured both a Same-Different and a Reicher-Wheeler task with response time constraints, and further qualified the contextual facilitation effect, with high-TP contexts eliciting faster ‘same’ judgements only for letters in position 2. In a third experiment, context had no effect on Same-Different judgements with strings of pseudo-characters sharing the letters’ low-level visual features. Our results indicate that co-occurrence statistics affect letter recognition in tasks that emphasize whole-string processing. This effect is genuinely orthographic, as it is conditional on intact letter identities, and with increasing task demands it only surfaces for letters close to word onset.
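
For readers unfamiliar with transitional probabilities over letters, the following minimal sketch shows how a forward TP, P(next letter | current letter), can be estimated from bigram counts in a word list. The toy lexicon and function name are hypothetical stand-ins for a real frequency-weighted corpus.

```python
from collections import Counter

def letter_forward_tp(words):
    """Forward transitional probability P(next letter | current letter),
    estimated from letter bigram counts in a word list (illustrative sketch)."""
    bigrams, singles = Counter(), Counter()
    for w in words:
        for a, b in zip(w, w[1:]):
            bigrams[(a, b)] += 1
            singles[a] += 1
    return {pair: n / singles[pair[0]] for pair, n in bigrams.items()}

# Toy lexicon standing in for a real corpus (invented for illustration)
lexicon = ["marble", "mobile", "amble", "cable", "maple", "pebble"]
tp = letter_forward_tp(lexicon)
# A triplet like "MBL" counts as high-TP if both of its transitions are frequent
print(tp.get(("m", "b"), 0.0), tp.get(("b", "l"), 0.0))
```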


Author(s):  
Aleksandra Tomić ◽  
Jorge R. Valdés Kroff

Abstract Despite the prominent use of code-switching (CS) among bilinguals, psycholinguistic studies have reported code-switch processing costs (e.g., Meuter & Allport, 1999). This paradox may partly be due to the focus on the code-switch itself instead of its potential subsequent benefits. Motivated by corpus studies on CS patterns and on the sociopragmatic functions of CS, we asked whether bilinguals use code-switches as a cue to the lexical characteristics of upcoming speech. We report a visual world study testing whether code-switching facilitates the anticipation of lower-frequency words. Results confirm that US Spanish–English bilinguals (n = 30) use minority (Spanish) to majority (English) language code-switches in real-time language processing as a cue that a less frequent word would ensue, as indexed by increased looks at images representing lower- vs. higher-frequency words in the code-switched condition, prior to target word onset. These results highlight the need to further integrate sociolinguistic and corpus observations into the experimental study of code-switching.
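
The dependent measure described here (looks to the lower-frequency image before target-word onset) is typically computed as a fixation proportion per condition within a pre-onset time window. A minimal pandas sketch follows; the column names, conditions, and toy samples are assumptions for illustration, not the authors' actual data or analysis.

```python
import pandas as pd

# Hypothetical long-format eye-tracking samples: one row per sample, with the
# region of interest currently fixated and time aligned to target-word onset (ms).
samples = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "condition":   ["code-switch", "code-switch", "unilingual",
                    "code-switch", "unilingual", "unilingual"],
    "time_ms":     [-400, -200, -300, -250, -350, -150],   # negative = before onset
    "roi":         ["low_freq", "low_freq", "high_freq",
                    "low_freq", "high_freq", "low_freq"],
})

# Proportion of looks to the low-frequency image in the pre-onset window, by
# condition -- the comparison described in the abstract (illustrative only).
pre = samples[samples["time_ms"].between(-500, 0)]
prop = (pre.assign(on_low=pre["roi"].eq("low_freq"))
           .groupby("condition")["on_low"].mean())
print(prop)
```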


2021 ◽  
Vol 11 (7) ◽  
pp. 898
Author(s):  
Nadezhda Mkrtychian ◽  
Daria Gnedykh ◽  
Evgeny Blagovechtchenski ◽  
Diana Tsvetova ◽  
Svetlana Kostromina ◽  
...  

Abstract and concrete words differ in their cognitive and neuronal underpinnings, but the exact mechanisms underlying these distinctions are unclear. We investigated differences between these two semantic types by analysing brain responses to newly learnt words with fully controlled psycholinguistic properties. Experimental participants learned 20 novel abstract and concrete words in the context of short stories. After the learning session, event-related potentials (ERPs) to newly learned items were recorded, and acquisition outcomes were assessed behaviourally in a range of lexical and semantic tasks. Behavioural results showed better performance on newly learnt abstract words in lexical tasks, whereas semantic assessments showed a tendency for higher accuracy for concrete words. ERPs to novel abstract and concrete concepts differed early on, ~150 ms after the word onset. Moreover, differences between novel words and control untrained pseudowords were observed earlier for concrete (~150 ms) than for abstract (~200 ms) words. Distributed source analysis indicated bilateral temporo-parietal activation underpinning newly established memory traces, suggesting a crucial role of Wernicke’s area and its right-hemispheric homologue in word acquisition. In sum, we report behavioural and neurophysiological processing differences between concrete and abstract words evident immediately after their controlled acquisition, confirming distinct neurocognitive mechanisms underpinning these types of semantics.
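
As a rough illustration of how ERPs time-locked to word onset are obtained, here is a minimal NumPy sketch that cuts epochs around onset samples, baseline-corrects them, and averages. The function name, sampling rate, and toy data are assumptions, not the study's actual preprocessing pipeline.

```python
import numpy as np

def erp_average(eeg, onsets, sfreq, tmin=-0.1, tmax=0.6):
    """Illustrative sketch: epoch a continuous recording around word-onset
    samples, baseline-correct on the pre-onset interval, and average.

    eeg: (n_channels, n_samples) continuous recording
    onsets: word-onset positions in samples
    """
    pre, post = int(-tmin * sfreq), int(tmax * sfreq)
    epochs = np.stack([eeg[:, t - pre: t + post] for t in onsets])
    baseline = epochs[:, :, :pre].mean(axis=2, keepdims=True)
    return (epochs - baseline).mean(axis=0)              # (n_channels, n_times) ERP

# Toy data: 32 channels, 10 s at 500 Hz, word onsets at 1 s, 3 s, 5 s (invented)
rng = np.random.default_rng(1)
eeg = rng.normal(0, 1, (32, 5000))
erp = erp_average(eeg, onsets=[500, 1500, 2500], sfreq=500)
print(erp.shape)   # the ~150 ms post-onset sample sits at erp[:, 50 + int(0.15 * 500)]
```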


2021 ◽  
Author(s):  
K. Segaert ◽  
C. Poulisse ◽  
R. Markiewicz ◽  
L. Wheeldon ◽  
D. Marchment ◽  
...  

Abstract Mild cognitive impairment (MCI) is the term used to identify individuals with subjective and objective cognitive decline but with preserved activities of daily living and an absence of dementia. While MCI can impact functioning in different cognitive domains, most notably episodic memory, relatively little is known about language comprehension in MCI. In this study we used around-the-ear electrodes (cEEGrids) to identify impairments during language comprehension in MCI patients. In a group of 23 MCI patients and 23 age-matched controls, language comprehension was tested in a two-word phrase paradigm. We examined the oscillatory changes following word onset as a function of lexical retrieval (e.g. swrfeq versus swift) and semantic binding (e.g. horse preceded by swift versus preceded by swrfeq). Electrophysiological signatures (as measured by the cEEGrids) were significantly different between MCI patients and controls. In controls, lexical retrieval was associated with a rebound in the alpha/beta range, and semantic binding was associated with a post-word alpha/beta suppression. In contrast, both the lexical retrieval and semantic binding signatures were absent in the MCI group. The signatures observed using cEEGrids in controls were comparable to those obtained with a full-cap EEG set-up. Importantly, our findings suggest that MCI patients have impaired electrophysiological signatures for comprehending single words and multi-word phrases. Moreover, cEEGrid set-ups provide a non-invasive and sensitive clinical tool for detecting early impairments in language comprehension in MCI.
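
The alpha/beta signatures described here are amplitude changes in roughly the 8–25 Hz range following word onset. Below is a minimal SciPy sketch of one common way to obtain such an envelope (zero-phase band-pass filtering plus the Hilbert transform); the band limits, single-channel toy signal, and parameter values are illustrative assumptions, not the cEEGrid pipeline used in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_envelope(signal, sfreq, low=8.0, high=25.0):
    """Alpha/beta (8-25 Hz) amplitude envelope of a single EEG channel,
    via a zero-phase band-pass filter and the Hilbert transform (sketch only)."""
    b, a = butter(4, [low / (sfreq / 2), high / (sfreq / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    return np.abs(hilbert(filtered))

# Toy single-channel trace: alpha suppression after a "word onset" at 1 s (invented)
sfreq = 250
t = np.arange(0, 2, 1 / sfreq)
amp = np.where(t < 1.0, 1.0, 0.4)                       # post-onset suppression
trace = amp * np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(2).normal(size=t.size)
env = band_envelope(trace, sfreq)
print(env[:sfreq].mean(), env[sfreq:].mean())           # pre- vs post-onset envelope
```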


2021 ◽  
Author(s):  
Florian Hintz ◽  
Cesko Voeten ◽  
James McQueen ◽  
Odette Scharenborg

Using the visual-world paradigm, the present study investigated the effects of word onset and offset masking on the time course of non-native spoken-word recognition in the presence of background noise. In two experiments, Dutch non-native listeners heard English target words, preceded by carrier sentences that were noise-free (Experiment 1) or contained intermittent noise (Experiment 2). Target words were either onset-masked, offset-masked, or not masked at all. Results showed that onset masking delayed target word recognition more than offset masking did, suggesting that – like native listeners – non-native listeners rely strongly on word onset information during word recognition in noise.


2021 ◽  
pp. 1-19
Author(s):  
Julien MILLASSEAU ◽  
Ivan YUEN ◽  
Laurence BRUGGEMAN ◽  
Katherine DEMUTH

Abstract While voicing contrasts in word-onset position are acquired relatively early, much less is known about how and when they are acquired in word-coda position, where accurate production of these contrasts is also critical for distinguishing words (e.g., dog vs. dock). This study examined how the acoustic cues to coda voicing contrasts are realized in the speech of 4-year-old Australian English-speaking children. The results showed that children used similar acoustic cues to those of adults, including longer vowel duration and more frequent voice bar for voiced stops, and longer closure and burst durations for voiceless stops along with more frequent irregular pitch periods. This suggests that 4-year-olds have acquired productive use of the acoustic cues to coda voicing contrasts, though implementations are not yet fully adult-like. The findings have implications for understanding the development of phonological contrasts in populations for whom these may be challenging, such as children with hearing loss.
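
The duration cues examined here are simple interval measurements. As a hedged illustration only, the sketch below derives vowel, closure, and burst durations from hypothetical hand-labelled segment boundaries; the dictionary layout and the example values are invented for illustration, not taken from the study's recordings.

```python
# Illustrative sketch only: duration cues to coda voicing, assuming each token
# has hand-labelled segment boundaries in seconds (invented layout and values).
def coda_voicing_cues(segments):
    """segments: {'vowel': (start, end), 'closure': (start, end), 'burst': (start, end)}"""
    dur = {name: end - start for name, (start, end) in segments.items()}
    return {
        "vowel_duration": dur["vowel"],       # longer before voiced codas (e.g., 'dog')
        "closure_duration": dur["closure"],   # longer for voiceless codas (e.g., 'dock')
        "burst_duration": dur["burst"],       # longer for voiceless codas
    }

dog  = {"vowel": (0.10, 0.28), "closure": (0.28, 0.34), "burst": (0.34, 0.36)}
dock = {"vowel": (0.10, 0.22), "closure": (0.22, 0.32), "burst": (0.32, 0.36)}
print(coda_voicing_cues(dog))
print(coda_voicing_cues(dock))
```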


2021 ◽  
pp. 002383092097901
Author(s):  
Katja Stärk ◽  
Evan Kidd ◽  
Rebecca L. A. Frost

To acquire language, infants must learn to segment words from running speech. A significant body of experimental research shows that infants use multiple cues to do so; however, little research has comprehensively examined the distribution of such cues in naturalistic speech. We conducted a comprehensive corpus analysis of German child-directed speech (CDS) using data from the Child Language Data Exchange System (CHILDES) database, investigating the availability of word stress, transitional probabilities (TPs), and lexical and sublexical frequencies as potential cues for word segmentation. Seven hours of data (~15,000 words) were coded, representing around an average day of speech to infants. The analysis revealed that for 97% of words, primary stress was carried by the initial syllable, implicating stress as a reliable cue to word onset in German CDS. Word identity was also marked by TPs between syllables, which were higher within than between words, and higher for backwards than forwards transitions. Words followed a Zipfian-like frequency distribution, and over two-thirds of words (78%) were monosyllabic. Of the 50 most frequent words, 82% were function words, which accounted for 47% of word tokens in the entire corpus. Finally, 15% of all utterances comprised single words. These results give rich novel insights into the availability of segmentation cues in German CDS, and support the possibility that infants draw on multiple converging cues to segment their input. The data, which we make openly available to the research community, will help guide future experimental investigations on this topic.
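
To make the corpus measures concrete, here is a minimal sketch of how word frequencies and forward/backward syllable TPs can be computed from syllabified utterances. The toy German-like utterances, the data layout, and the syllabification are assumptions for illustration, not the CHILDES data or the authors' coding scheme.

```python
from collections import Counter

# Hypothetical pre-syllabified CDS utterances as (word, [syllables]) pairs; the toy
# data only illustrates the computation, not the German corpus itself.
utterances = [
    [("guck", ["guck"]), ("mal", ["mal"]), ("da", ["da"])],
    [("der", ["der"]), ("hase", ["ha", "se"]), ("da", ["da"])],
    [("guck", ["guck"]), ("der", ["der"]), ("hase", ["ha", "se"])],
]

word_freq = Counter(w for utt in utterances for w, _ in utt)

bigrams, first, second = Counter(), Counter(), Counter()
for utt in utterances:
    syllables = [s for _, sylls in utt for s in sylls]
    for a, b in zip(syllables, syllables[1:]):
        bigrams[(a, b)] += 1
        first[a] += 1
        second[b] += 1

forward_tp  = {p: n / first[p[0]]  for p, n in bigrams.items()}   # P(syll2 | syll1)
backward_tp = {p: n / second[p[1]] for p, n in bigrams.items()}   # P(syll1 | syll2)

print(word_freq.most_common(3))
print(forward_tp[("ha", "se")], backward_tp[("ha", "se")])        # within-word transition
```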


2020 ◽  
Author(s):  
Sanne Ten Oever ◽  
Andrea E. Martin

Abstract Neuronal oscillations putatively track speech in order to optimize sensory processing. However, it is unclear how isochronous brain oscillations can track pseudo-rhythmic speech input. Here we investigate how top-down predictions flowing from internal language models interact with oscillations during speech processing. We show that word-to-word onset delays are shorter when words are spoken in predictable contexts. A computational model including oscillations, feedback, and inhibition is able to track the natural pseudo-rhythmic word-to-word onset differences. As the model processes the input, it generates temporal phase codes, which are a candidate mechanism for carrying information forward in time in the system. Intriguingly, the model’s response is more rhythmic for non-isochronous than for isochronous speech when onset times are proportional to predictions from the internal model. These results show that oscillatory tracking of temporal speech dynamics relies not only on the input acoustics but also on the linguistic constraints flowing from knowledge of language.
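
The sketch below illustrates only the general intuition, not the authors' computational model: if recognition requires an input-plus-prediction drive to cross a threshold that rides on an oscillating excitability cycle, then more predictable words cross the threshold earlier in the cycle, yielding shorter word-to-word onset delays. The function, parameter values, and oscillation frequency are all hypothetical.

```python
import numpy as np

# Illustrative sketch of the general idea only (not the authors' model): an
# oscillating excitability cycle plus a word-predictability boost determines
# when the drive crosses threshold, so predictable words are recognised
# earlier in the cycle than unpredictable ones.
def crossing_time(predictability, freq_hz=4.0, threshold=1.0, sfreq=1000):
    t = np.arange(0, 1 / freq_hz, 1 / sfreq)           # one oscillatory cycle
    excitability = np.sin(2 * np.pi * freq_hz * t)     # phase of the tracking oscillation
    drive = excitability + predictability              # top-down boost from the language model
    above = np.nonzero(drive >= threshold)[0]
    return t[above[0]] if above.size else None

print(crossing_time(predictability=0.8))   # predictable word: early threshold crossing
print(crossing_time(predictability=0.2))   # unpredictable word: later crossing
```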


Author(s):  
Seema Prasad ◽  
Ramesh Kumar Mishra

Abstract Does a concurrent verbal working memory (WM) load constrain cross-linguistic activation? In a visual world study, participants listened to Hindi (L1) or English (L2) spoken words and viewed a display containing the phonological cohort of the translation equivalent (TE cohort) of the spoken word and three distractors. Experiment 1 was administered without a load. Participants then maintained two or four letters (Experiment 2) or two, six, or eight letters (Experiment 3) in WM and were tested on backward sequence recognition after the visual world display. Greater looks towards TE cohorts were observed in both language directions in Experiment 1. With a load, TE cohort activation was inhibited in the L2–L1 direction and observed only in the early stages after word onset in the L1–L2 direction, suggesting a critical role of language direction. These results indicate that cross-linguistic activation, as seen through eye movements, depends on cognitive resources such as WM.

