phoneme monitoring
Recently Published Documents


TOTAL DOCUMENTS: 39 (five years: 4)

H-INDEX: 15 (five years: 0)

2020, Vol 41 (4), pp. 933-961
Author(s): Rebecca Holt, Laurence Bruggeman, Katherine Demuth

Abstract: Processing speech can be slow and effortful for children, especially in adverse listening conditions, such as the classroom. This can have detrimental effects on children’s academic achievement. We therefore asked whether primary school children’s speech processing could be made faster and less effortful via the presentation of visual speech cues (speaker’s facial movements), and whether any audio-visual benefit would be modulated by the presence of noise or by characteristics of individual participants. A phoneme monitoring task with concurrent pupillometry was used to measure 7- to 11-year-old children’s speech processing speed and effort, with and without visual cues, in both quiet and noise. Results demonstrated that visual cues to speech can facilitate children’s speech processing, but that these benefits may also be subject to variability according to children’s motivation. Children showed faster processing and reduced effort when visual cues were available, regardless of listening condition. However, examination of individual variability revealed that the reduction in effort was driven by the children who performed better on a measure of phoneme isolation (used to quantify how difficult they found the phoneme monitoring task).


2019, Vol 48 (6), pp. 836-845
Author(s): Lisa Thorpe, Margaret Cousins, Ros Bramwell

The phoneme monitoring task is a musical priming paradigm that demonstrates that both musicians and non-musicians have gained implicit understanding of prevalent harmonic structures. Little research has focused on implicit music learning in musicians and non-musicians. The current study aimed to investigate whether the phoneme monitoring task would identify any implicit memory differences between musicians and non-musicians. It focuses on both implicit knowledge of musical structure and implicit memory for specific musical sequences. Thirty-two musicians and non-musicians (19 female, 13 male) were asked to listen to a seven-chord sequence and decide as quickly as possible whether the final chord ended on the syllable /di/ or /du/. Overall, musicians were faster at the task, though non-musicians made greater gains across the blocks of trials. Implicit memory for musical sequence was evident in both musicians and non-musicians. Both groups responded more quickly to sequences that they had heard more than once, but showed no explicit knowledge of the familiar sequences.


2019, Vol 63 (1), pp. 3-30
Author(s): Odette Scharenborg, Sofoklis Kakouros, Brechtje Post, Fanny Meunier

This paper investigates whether sentence accent detection in a non-native language depends on the (relative) similarity of the prosodic cues to accent between the non-native and the native language, and whether cross-linguistic differences in the use of local versus more widely distributed (i.e., non-local) cues to sentence accent lead to differential effects of background noise on sentence accent detection in a non-native language. We tested Dutch, Finnish, and French non-native listeners of English, whose cueing and use of prosodic prominence are increasingly distant from those of English, and compared their results on a phoneme monitoring task at different noise levels and in a quiet condition to those of native listeners. Overall phoneme detection performance was high for both native and non-native listeners, and deteriorated to the same extent in the presence of background noise. Crucially, the relative similarity between the prosodic cues to sentence accent in one’s native language and those of a non-native language does not determine the ability to perceive and use sentence accent for speech perception in that non-native language. Moreover, proficiency in the non-native language is not a straightforward predictor of sentence accent perception performance, although high proficiency in a non-native language can seemingly overcome certain differences at the prosodic level between the native and non-native language. Instead, performance is determined by the extent to which listeners rely on local cues (English and Dutch) versus more distributed cues (Finnish and French), as distributed cues survive the presence of background noise better.


2018, Vol 13 (1), pp. 38-73
Author(s): Clara Cohen, Shinae Kang

Abstract: Pronunciation variation is in many ways systematic, yielding patterns that a canny listener could exploit to aid perception. This work asks whether listeners actually do draw upon these patterns during speech perception. We focus in particular on a phenomenon known as paradigmatic enhancement, in which suffixes are phonetically enhanced in verbs that are frequent in their inflectional paradigms. In a set of four experiments, we found that listeners do not seem to attend to paradigmatic enhancement patterns. They do, however, attend to the distributional properties of a verb’s inflectional paradigm when the experimental task encourages attention to sublexical detail, as is the case with phoneme monitoring (Experiments 1a-b). When tasks require more holistic lexical processing, as with lexical decision (Experiment 2), the effect of paradigmatic probability disappears. If stimuli are presented in full sentences, such that the surrounding context provides richer contextual and semantic information (Experiment 3), even otherwise robust influences like lexical frequency disappear. We propose that these findings are consistent with a perceptual system that is flexible and devotes processing resources to exploiting only those patterns that provide a sufficient cognitive return on investment.


2018, Vol 23 (2), pp. 37-54
Author(s): Geoffrey A. Coalson, Courtney T. Byrd
2017, Vol 60 (10), pp. 2792-2807
Author(s): Jayanthi Sasisekaran, Shriya Basu

Purpose: The aim of the present study was to investigate dual-task performance in children who stutter (CWS) and children who do not stutter, and to determine whether the groups differed in their ability to attend and allocate cognitive resources effectively during task performance.
Method: Participants were 24 children (12 CWS, 12 who do not stutter), with groups matched for age and sex. For the primary task, participants performed phoneme monitoring in a picture-written word interference task. For the secondary task, participants made pitch judgments on tones presented at varying (short, long) stimulus onset asynchronies (SOAs) from the onset of the picture.
Results: The CWS were comparable to the children who do not stutter on the monitoring task, although SOA-based performance differences on this task were more variable in the CWS. The CWS were also significantly slower in making tone decisions at the short SOA and showed a trend toward making more errors on this task.
Conclusions: The findings suggest higher dual-task costs in CWS. A potential explanation, requiring further testing and confirmation, is that CWS are less efficient at attending to the tone stimuli while simultaneously prioritizing attention to the phoneme-monitoring task.


2015, Vol 58 (3), pp. 601-621
Author(s): Geoffrey A. Coalson, Courtney T. Byrd

Purpose: The purpose of this study was to explore metrical aspects of phonological encoding (i.e., stress and syllable boundary assignment) in adults who do and do not stutter (AWS and AWNS, respectively).
Method: Participants monitored nonwords for target sounds during silent phoneme monitoring tasks across two distinct experiments. In Experiment 1, 22 participants (11 AWNS, 11 AWS) silently monitored target phonemes in nonwords with initial stress. In Experiment 2, an additional cohort of 22 participants (11 AWNS, 11 AWS) silently monitored phonemes in nonwords with noninitial stress.
Results: In Experiment 1, AWNS and AWS silently monitored target phonemes in initial-stress stimuli with similar speed and accuracy. In Experiment 2, AWS demonstrated a within-group effect that was not present for AWNS: they required additional time when monitoring phonemes immediately following syllable boundary assignment in stimuli with noninitial stress. There was also a between-groups effect, with AWS making significantly more errors than AWNS when identifying phonemes in nonwords with noninitial stress.
Conclusions: Findings suggest that metrical properties may affect the time course of phonological encoding in AWS in a manner distinct from AWNS. Specifically, in the absence of initial stress, metrical encoding of the syllable boundary may delay speech planning in AWS and contribute to breakdowns in fluent speech production.

