Use of a phoneme monitoring task to examine lexical access in adults who do and do not stutter

2018, Vol 57, pp. 65-73
Author(s): Timothy A. Howell, Nan Bernstein Ratner

2011, Vol 15 (1), pp. 173-180
Author(s): Jihye Moon, Nan Jiang

Lexical access in bilinguals is known to be largely non-selective. However, most studies in this area have involved bilinguals whose two languages share the same script. This study aimed to examine bilingual lexical access among bilinguals whose two languages have distinct scripts. Korean–English bilinguals were tested in a phoneme monitoring task in their first or second language. The results showed simultaneous activation of the non-target language in a monolingual task, suggesting non-selective lexical access even among bilinguals whose two languages do not share the same script. Language dominance did not affect the pattern of results.


1991, Vol 89 (4B), pp. 2010-2011
Author(s): Scott E. Lively, David B. Pisoni

2019, Vol 48 (6), pp. 836-845
Author(s): Lisa Thorpe, Margaret Cousins, Ros Bramwell

The phoneme monitoring task is a musical priming paradigm demonstrating that both musicians and non-musicians acquire an implicit understanding of prevalent harmonic structures. Little research has compared implicit music learning in musicians and non-musicians. The current study investigated whether the phoneme monitoring task would reveal implicit memory differences between musicians and non-musicians, focusing on both implicit knowledge of musical structure and implicit memory for specific musical sequences. Thirty-two musicians and non-musicians (19 female, 13 male) were asked to listen to a seven-chord sequence and decide as quickly as possible whether the final chord ended on the syllable /di/ or /du/. Overall, musicians were faster at the task, though non-musicians made greater gains across the blocks of trials. Implicit memory for musical sequences was evident in both groups: participants responded more quickly to sequences they had heard more than once, yet showed no explicit knowledge of the familiar sequences.
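
To make the paradigm concrete, here is a minimal sketch of such a trial loop in plain Python. Everything in it is an illustrative assumption rather than the authors' materials or software: the sequence labels, the repeated/novel tags, the d/u console response, and the timings. A real experiment would use calibrated audio playback and millisecond-accurate response collection instead of time.sleep() and input().

```python
import random
import time

# Hypothetical materials: each sequence records which syllable the final
# chord carries and whether the sequence recurs across blocks (the
# "repeated" tag is how implicit memory for a sequence would be probed).
SEQUENCES = [
    {"id": "seq_01", "final_syllable": "di", "repeated": True},
    {"id": "seq_02", "final_syllable": "du", "repeated": False},
]

def play_sequence(seq):
    """Stand-in for playing a seven-chord audio sequence."""
    time.sleep(0.1)  # placeholder for the sequence's actual duration

def run_trial(seq):
    """Play one sequence, then time the /di/ vs. /du/ decision."""
    play_sequence(seq)
    onset = time.perf_counter()
    response = input("Final syllable? d = /di/, u = /du/: ").strip().lower()
    rt_s = time.perf_counter() - onset
    correct = (response == "d") == (seq["final_syllable"] == "di")
    return {"id": seq["id"], "repeated": seq["repeated"],
            "rt_s": rt_s, "correct": correct}

if __name__ == "__main__":
    for seq in random.sample(SEQUENCES, k=len(SEQUENCES)):
        print(run_trial(seq))
    # Implicit memory would surface as faster correct responses on
    # repeated sequences, even without explicit recognition of them.
```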


2012, Vol 36 (6), pp. 457-467
Author(s): Mathilde Fort, Elsa Spinelli, Christophe Savariaux, Sonia Kandel

The goal of this study was to explore whether viewing the speaker’s articulatory gestures contributes to lexical access in children (ages 5–10) and in adults. We conducted a vowel monitoring task with words and pseudo-words in audio-only (AO) and audiovisual (AV) contexts, with white noise masking the acoustic signal. The results indicated that children clearly benefited from visual speech from ages 6–7 onwards. However, unlike in adults, the word superiority effect in children was no greater in the AV than in the AO condition, suggesting that visual speech contributes mostly to phonemic, rather than lexical, processing during childhood, at least until the age of 10.


1999, Vol 27 (3), pp. 413-421
Author(s): Jean Vroomen, Beatrice De Gelder

2020, Vol 41 (4), pp. 933-961
Author(s): Rebecca Holt, Laurence Bruggeman, Katherine Demuth

Processing speech can be slow and effortful for children, especially in adverse listening conditions, such as the classroom. This can have detrimental effects on children’s academic achievement. We therefore asked whether primary school children’s speech processing could be made faster and less effortful via the presentation of visual speech cues (speaker’s facial movements), and whether any audio-visual benefit would be modulated by the presence of noise or by characteristics of individual participants. A phoneme monitoring task with concurrent pupillometry was used to measure 7- to 11-year-old children’s speech processing speed and effort, with and without visual cues, in both quiet and noise. Results demonstrated that visual cues to speech can facilitate children’s speech processing, but that these benefits may also be subject to variability according to children’s motivation. Children showed faster processing and reduced effort when visual cues were available, regardless of listening condition. However, examination of individual variability revealed that the reduction in effort was driven by the children who performed better on a measure of phoneme isolation (used to quantify how difficult they found the phoneme monitoring task).
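
As a rough illustration of how pupillometry can quantify listening effort, the sketch below computes a simple baseline-corrected dilation index. The function name, window boundaries, and sample trace are assumptions for illustration only; the study's actual pupillometric pipeline (blink correction, time-locking, sampling rate) is not described here.

```python
import statistics

# Assumed windows: samples 0-2 form the pre-stimulus baseline, samples 3-6
# span the response period. This shows only the arithmetic of the index,
# not a full preprocessing pipeline.
def effort_index(pupil_trace_mm, baseline_window, response_window):
    """Peak pupil dilation in the response window, relative to baseline."""
    baseline = statistics.mean(pupil_trace_mm[slice(*baseline_window)])
    peak = max(pupil_trace_mm[slice(*response_window)])
    return peak - baseline  # larger dilation = greater processing effort

trace = [3.10, 3.11, 3.12, 3.30, 3.42, 3.38, 3.25]  # pupil diameter in mm
print(round(effort_index(trace, (0, 3), (3, 7)), 2))  # -> 0.31
```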


1981, Vol 72 (4), pp. 471-477
Author(s): J. Segui, U. Frauenfelder, J. Mehler

2019, Vol 63 (1), pp. 3-30
Author(s): Odette Scharenborg, Sofoklis Kakouros, Brechtje Post, Fanny Meunier

This paper investigates whether sentence accent detection in a non-native language depends on the (relative) similarity of the prosodic cues to accent between the non-native and the native language, and whether cross-linguistic differences in the use of local versus more widely distributed (i.e., non-local) cues lead to differential effects of background noise on sentence accent detection in a non-native language. We compared Dutch, Finnish, and French non-native listeners of English, whose cueing and use of prosodic prominence are progressively further removed from English, against native listeners on a phoneme monitoring task in a quiet condition and at different levels of noise. Overall phoneme detection performance was high for both the native and the non-native listeners, and it deteriorated to the same extent in the presence of background noise. Crucially, the relative similarity between the prosodic cues to sentence accent of one’s native language and those of a non-native language does not determine the ability to perceive and use sentence accent for speech perception in that non-native language. Moreover, proficiency in the non-native language is not a straightforward predictor of sentence accent perception, although high proficiency can seemingly overcome certain prosodic differences between the native and non-native language. Instead, performance is determined by the extent to which listeners rely on local cues (English and Dutch) versus more distributed cues (Finnish and French), as more distributed cues survive the presence of background noise better.


2000, Vol 23 (3), pp. 349-350
Author(s): Jean Vroomen, Beatrice de Gelder

Norris, McQueen & Cutler present a detailed account of the decision stage of the phoneme monitoring task. However, we question whether this contributes to our understanding of the speech recognition process itself, and we fail to see why phonotactic knowledge should play a role in phoneme recognition.


2017, Vol 60 (10), pp. 2792-2807
Author(s): Jayanthi Sasisekaran, Shriya Basu

Purpose: The aim of the present study was to examine dual-task performance in children who stutter (CWS) and children who do not, to determine whether the groups differed in the ability to attend and to allocate cognitive resources effectively during task performance. Method: Participants were 24 children (12 CWS), with the groups matched for age and sex. For the primary task, participants performed phoneme monitoring in a picture–written word interference task. For the secondary task, they made pitch judgments on tones presented at varying (short, long) stimulus onset asynchrony (SOA) from the onset of the picture. Results: The CWS were comparable to the children who do not stutter on the monitoring task, although the SOA-based performance differences in this task were more variable in the CWS. The CWS were also significantly slower in making tone decisions at the short SOA and showed a trend toward making more errors in this task. Conclusions: The findings suggest higher dual-task costs in CWS. A potential explanation, requiring further testing and confirmation, is that CWS are less efficient at attending to the tone stimuli while simultaneously prioritizing attention to the phoneme monitoring task.
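
The following sketch illustrates the dual-task trial structure this design implies, with the secondary tone scheduled at a short or long SOA from picture onset. The SOA values, stimuli, and field names are illustrative assumptions; the paper's exact timings and materials may differ.

```python
import random

# Assumed SOA values, not taken from the paper.
SOAS_MS = {"short": 100, "long": 700}

def build_trial(picture, distractor_word, target_phoneme, tone_pitch):
    """Assemble one dual-task trial: phoneme monitoring plus tone judgment."""
    soa_label = random.choice(list(SOAS_MS))
    return {
        "picture": picture,              # picture to be named silently
        "distractor": distractor_word,   # written word superimposed on it
        "target": target_phoneme,        # phoneme to monitor for in the name
        "tone": tone_pitch,              # "high" or "low" pitch judgment
        "soa": soa_label,
        "tone_onset_ms": SOAS_MS[soa_label],  # tone lags picture onset by SOA
    }

trial = build_trial("dog", "cat", "/d/", "high")
print(trial)
# A dual-task cost shows up as slower tone judgments at the short SOA,
# where the two decisions compete for the same attentional resources.
```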

