Speech Monitoring: Recently Published Documents

Total documents: 44 (last five years: 9)
H-index: 11 (last five years: 0)

2021 · Vol 12
Author(s): Ziedune Degutyte, Arlene Astell

Eye gaze plays an important role in communication, but understanding of its actual function or functions, and of the methods used to elucidate them, has varied considerably. This systematized review was undertaken to summarize both the proposed functions of eye gaze in conversations of healthy adults and the methodological approaches employed. The eligibility criteria were restricted to a healthy adult population and excluded studies that manipulated eye gaze behavior. A total of 29 articles (quantitative, qualitative, and mixed methods) were returned, with a wide range of methodological designs. The main areas of variability related to the number of conversants, their familiarity and status, the conversation topic, the data collection tools (video and eye tracking), and the definitions of eye gaze. The findings confirm that eye gaze facilitates turn yielding, plays a role in speech monitoring, prevents and repairs conversation breakdowns, and facilitates intentional and unintentional speech interruptions. These findings were remarkably consistent given the variability in methods across the 29 articles. However, in relation to turn initiation, the results were less consistent, requiring further investigation. This review provides a starting point for future studies to make informed decisions about methods for examining eye gaze and selecting variables of interest.


2020 · Vol 50
Author(s): Szymon Bręński

Speech production is a complex and highly organized process. It starts with an intention, which in subsequent steps is transformed into an articulated, audible form. Sometimes speech production fails and results in a speech error (a slip of the tongue), which changes the meaning of the utterance and disturbs the realization of the intention. However, speech monitoring helps to detect and repair the error with respect to the speaker's original intention. Speaking thus appears as a way of realizing an intention, and the intention plays an integrative function in the process of speaking. According to Frydrychowicz (1999), the process of realizing an intention can be divided into several phases, distinguished by the psychophysical features of speaking. He found that voice intensity is highest when the speaker is close to fully realizing the intention. The aim of the current study is to examine this voice intensity effect in relation to speech error repairs, understood as speaking units that re-establish the realization of the intention. The question is how these corrections of the course of speech are reflected in voice intensity. The results, obtained from errors and repairs induced in a dual-task paradigm, show that voice intensity rises when the speaker makes a self-repair by producing the correct word.
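To make the measured quantity concrete, the following is a minimal sketch (not the study's actual pipeline) of extracting a frame-based voice intensity contour from a recording and comparing mean intensity just before and just after a self-repair. The file name, frame settings, and repair timestamp are illustrative assumptions.

```python
import numpy as np
from scipy.io import wavfile

def intensity_contour(samples, rate, frame_ms=25, hop_ms=10):
    """Frame-based RMS intensity in dB (arbitrary reference)."""
    x = samples.astype(np.float64)
    frame = int(rate * frame_ms / 1000)
    hop = int(rate * hop_ms / 1000)
    times, db = [], []
    for start in range(0, len(x) - frame, hop):
        rms = np.sqrt(np.mean(x[start:start + frame] ** 2))
        db.append(20 * np.log10(rms + 1e-12))   # guard against log(0)
        times.append((start + frame / 2) / rate)
    return np.array(times), np.array(db)

rate, samples = wavfile.read("utterance.wav")    # hypothetical mono recording
t, db = intensity_contour(samples, rate)

repair_onset = 1.8                               # hypothetical repair time (s)
pre = db[(t >= repair_onset - 0.3) & (t < repair_onset)].mean()
post = db[(t >= repair_onset) & (t < repair_onset + 0.3)].mean()
print(f"intensity before repair: {pre:.1f} dB, after: {post:.1f} dB")
```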


2020
Author(s): Elin Runnqvist, Valérie Chanoine, Kristof Strijkers, Chotiga Patamadilok, Mireille Bonnard, ...

An fMRI study examined how speakers inspect their own speech for errors. In a word production task, we observed enhanced involvement of the right posterior cerebellum on trials that were correct but on which participants were more likely to make a word as compared to a non-word error. Furthermore, comparing errors to correctly produced utterances, we observed increased activation of the same cerebellar region, in addition to temporal and medial frontal regions. Within the framework associating the cerebellum with forward modelling of upcoming actions, this indicates that forward models of verbal actions contain information about word representations used for error monitoring even before articulation (internal monitoring). Additional resources relying on speech perception and conflict monitoring are deployed during articulation to detect overt errors (external monitoring). In summary, speech monitoring seems to recruit a network of brain regions serving domain-general purposes, even for abstract levels of processing.
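The internal/external distinction drawn here can be illustrated with a toy comparator sketch: an internal monitor checks the forward model's predicted word against the intended target before articulation, while an external monitor checks the perceived overt output. All names and structures below are illustrative, not the authors' model.

```python
from dataclasses import dataclass

@dataclass
class MonitoringResult:
    internal_error: bool   # mismatch caught before articulation
    external_error: bool   # mismatch caught in the overt output

def monitor(intended: str, predicted: str, produced: str) -> MonitoringResult:
    # Internal monitoring: the forward model's prediction is compared
    # with the intention before any sound is produced.
    internal = predicted != intended
    # External monitoring: the perceived overt output is compared
    # with the intention during and after articulation.
    external = produced != intended
    return MonitoringResult(internal, external)

# A lexical ("word") error: the forward model already signals trouble
# before articulation, analogous to the pre-articulatory cerebellar effect.
print(monitor(intended="cat", predicted="dog", produced="dog"))
```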


2020 · Vol 32 (6) · pp. 1079-1091
Author(s): Stephanie K. Riès, Linda Nadalet, Soren Mickelsen, Megan Mott, Katherine J. Midgley, ...

A domain-general monitoring mechanism is proposed to be involved in overt speech monitoring. This mechanism is reflected in a medial frontal component, the error negativity (Ne), present in both error and correct trials (Ne-like wave) but larger in errors than in correct trials. In overt speech production, this negativity starts to rise before speech onset and is therefore associated with inner speech monitoring. Here, we investigate whether the same monitoring mechanism is involved in sign language production. Twenty deaf signers (American Sign Language [ASL] dominant) and 16 hearing signers (English dominant) participated in a picture–word interference paradigm in ASL. As in previous studies, ASL naming latencies were measured using the keyboard release time. EEG results revealed a medial frontal negativity peaking within 15 msec after keyboard release in the deaf signers. This negativity was larger in errors than in correct trials, as previously observed in spoken language production. No clear negativity was present in the hearing signers. In addition, the slope of the Ne was correlated with ASL proficiency (measured by the ASL Sentence Repetition Task) across signers. Our results indicate that a similar medial frontal mechanism is engaged in pre-output language monitoring in sign and spoken language production. These results suggest that the monitoring mechanism reflected by the Ne/Ne-like wave is independent of output modality (i.e., spoken or signed) and likely monitors prearticulatory representations of language. Differences between groups may be linked to several factors, including differences in language proficiency or more variable latencies from lexical access to motor programming in hearing than in deaf signers.
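As an illustration of the slope measure used here, the following is a minimal sketch assuming response-locked EEG epochs are already available as a NumPy array (trials × samples) for a fronto-central channel. It estimates the Ne slope by a least-squares line fit over a window around keyboard release; the window bounds and sampling rate are illustrative assumptions, not the study's exact parameters.

```python
import numpy as np

def ne_slope(epochs, sfreq, t_start, win=(-0.05, 0.015)):
    """Slope (µV/s) of the trial-averaged waveform in a response-locked window.

    epochs:  array of shape (n_trials, n_samples), locked to keyboard release
    sfreq:   sampling frequency in Hz
    t_start: time (s) of the first sample relative to keyboard release
    win:     (start, end) of the fitting window in seconds
    """
    erp = epochs.mean(axis=0)                         # average over trials
    times = t_start + np.arange(erp.size) / sfreq
    mask = (times >= win[0]) & (times <= win[1])
    slope, _ = np.polyfit(times[mask], erp[mask], 1)  # linear fit
    return slope

# Correlating slopes with proficiency across signers (hypothetical data):
# slopes = [ne_slope(subj, 512, -0.5) for subj in all_subject_epochs]
# r = np.corrcoef(slopes, asl_srt_scores)[0, 1]
```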


2020 · Vol 3
Author(s): Salla-Maaria Laaksonen, Jesse Haapoja, Teemu Kinnunen, Matti Nelimarkka, Reeta Pöyhtäri

2019 · Vol 4 (6) · pp. 1589-1594
Author(s): Yvonne van Zaalen, Isabella Reichel

Purpose: Among the best strategies to address inadequate speech monitoring skills and other parameters of communication in people with cluttering (PWC) is the relatively new but very promising auditory–visual feedback (AVF) training (van Zaalen & Reichel, 2015). This study examines the effects of AVF training on articulatory accuracy, pause duration, frequency, and type of disfluencies of PWC, as well as on the emotional and cognitive aspects that may be present in clients with this communication disorder (Reichel, 2010; van Zaalen & Reichel, 2015).
Methods: In this study, 12 male adolescents and adults (6 with phonological and 6 with syntactic cluttering) were provided with weekly AVF training for 12 weeks, with a 3-month follow-up. Data were gathered at baseline (T0), Week 6 (T1), Week 12 (T2), and after follow-up (T3). Spontaneous speech was recorded and analyzed using the digital audio-recording and speech analysis software Praat (Boersma & Weenink, 2017).
Results: PWC demonstrated significant improvements in articulatory rate measurements and in pause duration following the AVF training. In addition, the PWC in the study reported positive effects on their ability to retell a story and to speak in more complete sentences. PWC felt better about formulating their ideas and were more satisfied with their interactions with the people around them.
Conclusions: AVF training was found to be an effective approach for improving the monitoring skills of PWC, with both quantitative and qualitative benefits in the behavioral, cognitive, emotional, and social domains of communication.
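As a concrete example of one outcome measure, the following is a minimal sketch of pause detection from an intensity contour, using parselmouth (a Python interface to the Praat engine cited above) rather than the Praat GUI used in the study. The silence threshold and the 0.25 s minimum pause length are illustrative assumptions, not the authors' settings.

```python
import numpy as np
import parselmouth

snd = parselmouth.Sound("spontaneous_speech.wav")  # hypothetical recording
intensity = snd.to_intensity()                     # Praat intensity contour
times = intensity.xs()                             # frame times (s)
db = intensity.values[0]                           # intensity in dB

dt = times[1] - times[0]
silent = db < (db.max() - 25)                      # 25 dB below peak = silence

# Group consecutive silent frames into pauses of at least 0.25 s.
pauses, run = [], 0
for s in silent:
    if s:
        run += 1
    elif run:
        if run * dt >= 0.25:
            pauses.append(run * dt)
        run = 0
if run * dt >= 0.25:                               # flush a trailing pause
    pauses.append(run * dt)

print(f"{len(pauses)} pauses, total pause time {sum(pauses):.2f} s")
```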


2019
Author(s): Mathieu Bourguignon, Nicola Molinaro, Mikel Lizarazu, Samu Taulu, Veikko Jousmäki, ...

To gain novel insights into how the human brain processes self-produced auditory information during reading aloud, we investigated the coupling between neuromagnetic activity and the temporal envelope of the heard speech sounds (i.e., speech brain tracking) in a group of adults who (1) read a text aloud, (2) listened to a recording of their own speech (i.e., playback), and (3) listened to another speech recording. Coherence analyses revealed that, during reading aloud, the reader's brain tracked the slow temporal fluctuations of the speech output. Specifically, auditory cortices tracked phrasal structure (<1 Hz), but to a lesser extent than during the two speech listening conditions. Also, the tracking of syllable structure (4–8 Hz) occurred at parietal opercula during reading aloud and at auditory cortices during listening. Directionality analyses based on renormalized partial directed coherence revealed that speech brain tracking at <1 Hz and 4–8 Hz is dominated by speech-to-brain directional coupling during both reading aloud and listening, meaning that speech brain tracking mainly entails auditory feedback processing. Nevertheless, brain-to-speech directional coupling at 4–8 Hz was enhanced during reading aloud compared with listening, likely reflecting speech monitoring before production. Altogether, these data bring novel insights into how auditory verbal information is tracked by the human brain during perception and self-generation of connected speech.

Highlights:
- The brain tracks phrasal and syllabic rhythmicity of self-produced (read) speech.
- Tracking of phrasal structures is attenuated during reading compared with listening.
- Speech rhythmicity mainly drives brain activity during reading and listening.
- Brain activity drives syllabic rhythmicity more during reading than listening.
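To illustrate the core measure, the following is a minimal sketch of speech–brain coherence, assuming a single neural time series and the simultaneous audio have already been aligned and resampled to a common rate. The envelope extraction (Hilbert magnitude) and the frequency bands follow the abstract's description; the file names, rates, and window length are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert, coherence

fs = 1000                                   # common sampling rate (Hz), assumed
brain = np.load("meg_auditory_source.npy")  # hypothetical aligned MEG signal
audio = np.load("speech_audio.npy")         # hypothetical audio at the same fs

envelope = np.abs(hilbert(audio))           # temporal envelope of heard speech

# 2 s windows give 0.5 Hz frequency resolution, enough to resolve <1 Hz.
f, coh = coherence(brain, envelope, fs=fs, nperseg=fs * 2)

phrasal = coh[f < 1].mean()                 # <1 Hz: phrasal structure
syllabic = coh[(f >= 4) & (f <= 8)].mean()  # 4-8 Hz: syllabic structure
print(f"coherence <1 Hz: {phrasal:.3f}, 4-8 Hz: {syllabic:.3f}")
```

Note that ordinary coherence, as sketched here, is symmetric; the directionality claims in the abstract require the renormalized partial directed coherence analysis, which is not reproduced in this sketch.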

