speech feedback
Recently Published Documents

TOTAL DOCUMENTS: 63 (five years: 10)
H-INDEX: 15 (five years: 1)

Sensors ◽ 2022 ◽ Vol 22 (1) ◽ pp. 361
Author(s): Shah Khusro, Babar Shah, Inayat Khan, Sumayya Rahman

Feedback is one of the significant factors in the mental mapping of an environment: it communicates spatial information that allows blind people to perceive their surroundings. Assistive smartphone technologies deliver feedback for different activities through several mediums, including voice, sonification and vibration, and researchers have proposed various solutions for conveying feedback messages to blind people using these mediums. Voice and sonification feedback convey information effectively, but they are not usable in noisy environments and may occupy the auditory sense on which blind users depend most; speech feedback can also compromise a blind user’s privacy. Vibration feedback can serve as an effective alternative to these mediums. This paper proposes a real-time feedback system, designed specifically for blind people, that conveys information through vibration patterns. The proposed solution was evaluated in an empirical study that collected data from 24 blind people through a mixed-mode survey using a questionnaire. Results show average recognition accuracies for the 10 different vibration patterns of 90%, 82%, 75%, 87%, 65%, and 70%.
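
As a rough illustration of the pattern-based approach described above, the sketch below encodes a few feedback messages as on/off vibration timings, in the style of a pattern-based vibration API such as Android's. The message names and durations are illustrative assumptions, not the patterns evaluated in the paper.

```python
# A minimal sketch (not the authors' implementation) of encoding
# feedback messages as distinct on/off vibration patterns. All message
# names and timings below are illustrative assumptions.
import time

# Each pattern is a list of (state, duration_ms) pairs:
# "on" = motor vibrating, "off" = pause between pulses.
PATTERNS = {
    "obstacle_ahead": [("on", 400), ("off", 100), ("on", 400)],
    "turn_left":      [("on", 150), ("off", 100), ("on", 150), ("off", 100), ("on", 150)],
    "turn_right":     [("on", 600)],
    "destination":    [("on", 100), ("off", 50)] * 4,
}

def play(pattern):
    """Simulate a vibration pattern by sleeping through each segment."""
    for state, duration_ms in pattern:
        print(f"{state:>3} for {duration_ms} ms")
        time.sleep(duration_ms / 1000.0)

if __name__ == "__main__":
    play(PATTERNS["obstacle_ahead"])
```

On a real device the same (state, duration) list would be handed to the platform's vibration service; the point of the design is that each message maps to a temporally distinct pattern the user can learn to recognise.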


PLoS ONE ◽ 2021 ◽ Vol 16 (10) ◽ pp. e0258747
Author(s): Abigail R. Bradshaw, Carolyn McGettigan

Joint speech behaviours, in which speakers produce speech in unison, are found in a variety of everyday settings and have clinical relevance as a temporary fluency-enhancing technique for people who stutter. It is currently unknown whether such synchronisation of speech timing between two speakers is also accompanied by alignment in their vocal characteristics, for example in acoustic measures such as pitch. The current study investigated this by testing whether convergence in voice fundamental frequency (F0) between speakers could be demonstrated during synchronous speech. Sixty participants across two online experiments were audio recorded whilst reading a series of sentences, first on their own and then in synchrony with another speaker (the accompanist), in a number of between-subject conditions. Experiment 1 demonstrated significant convergence of participants’ F0 towards a pre-recorded accompanist voice, in the form of both upward (high-F0 accompanist condition) and downward (low- and extra-low-F0 accompanist conditions) changes in F0. Experiment 2 demonstrated that such convergence was not seen during a visual synchronous speech condition, in which participants spoke in synchrony with silent video recordings of the accompanist. An audiovisual condition in which participants could both see and hear the accompanist in pre-recorded videos did not produce greater convergence in F0 than synchronisation with the pre-recorded voice alone. These findings suggest the need for models of speech motor control to incorporate interactions between self- and other-speech feedback during speech production, and suggest a novel hypothesis for the mechanisms underlying the fluency-enhancing effects of synchronous speech in people who stutter.
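
To make the convergence measure concrete, here is a minimal sketch of how a shift in median F0 toward an accompanist could be quantified from recordings. The file names and the pYIN-based pipeline are assumptions for illustration, not the study's own analysis code.

```python
# A minimal sketch, assuming solo and synchronous readings are saved as
# WAV files, of quantifying F0 convergence toward an accompanist.
import librosa
import numpy as np

def median_f0(path, fmin=65.0, fmax=400.0):
    """Median F0 (Hz) over voiced frames, estimated with pYIN."""
    y, sr = librosa.load(path, sr=None)
    f0, voiced_flag, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    return float(np.nanmedian(f0[voiced_flag]))

solo = median_f0("participant_solo.wav")          # placeholder file names
sync = median_f0("participant_sync.wav")
accompanist = median_f0("accompanist.wav")

# Positive values mean the participant's F0 moved toward the accompanist
# during synchronous speech, relative to the solo baseline.
convergence = abs(solo - accompanist) - abs(sync - accompanist)
print(f"F0 shift toward accompanist: {convergence:.1f} Hz")
```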


2021
Author(s): Abigail Bradshaw, Carolyn McGettigan

Synchronised speech behaviours such as choral speech (speaking in unison) are found in a variety of everyday settings and have clinical relevance as a temporary fluency-enhancing technique for people who stutter. It is currently unknown whether such synchronisation of speech timing between two speakers is also accompanied by alignment in their vocal characteristics, for example in acoustic measures such as pitch. The current study investigated this by testing whether convergence in voice fundamental frequency (F0) between speakers could be demonstrated during choral speech. Sixty participants across three online experiments were audio recorded whilst reading a series of sentences, first on their own and then in synchrony with another speaker (the accompanist), in a number of between-subject conditions. Experiment 1 demonstrated significant convergence of participants’ F0 towards a pre-recorded accompanist voice, in the form of both upward (high-F0 accompanist condition) and downward (low-F0 accompanist condition) changes in F0; however, upward convergence was greater than downward convergence. Experiment 2 found that downward convergent changes in F0 could not be increased by using an accompanist voice with an even lower F0. Experiment 3 demonstrated that such convergence was not seen during a visual choral speech condition, in which participants spoke in synchrony with silent video recordings of the accompanist. Further, convergence in F0 was enhanced in a condition where participants could both see and hear the accompanist in pre-recorded videos, compared with synchronisation with the pre-recorded voice alone. These findings suggest the need for models of speech motor control to incorporate interactions between self- and other-speech feedback during speech production, and suggest a novel hypothesis for the mechanisms underlying the fluency-enhancing effects of choral speech in people who stutter.


NeuroImage ◽ 2020 ◽ Vol 223 ◽ pp. 117319
Author(s): Vincent van de Ven, Lourens Waldorp, Ingrid Christoffels

2020
Author(s): Sophie Meekings, Kyle Jasmin, Cesar Lima, Sophie Scott

This study tested the idea that stuttering is caused by over-reliance on auditory feedback. The theory is motivated by the observation that many fluency-inducing situations, such as synchronised speech and masked speech, alter or obscure the talker’s feedback. Typical speakers show ‘speaking-induced suppression’ of neural activation in superior temporal gyrus (STG) during self-produced vocalisation, compared to listening to recorded speech. If people who stutter over-attend to auditory feedback, they may lack this suppression response. In a 1.5T fMRI scanner, people who stutter spoke in synchrony with an experimenter, in synchrony with a recording, on their own, in noise, listened to the experimenter speaking, and read silently. Behavioural testing outside the scanner demonstrated that synchronising with another talker resulted in a marked increase in fluency regardless of baseline stuttering severity. In the scanner, participants stuttered most when they spoke alone, and least when they synchronised with a live talker. There was no reduction in STG activity in the Speak Alone condition, when participants stuttered most. There was also strong activity in STG in response to the two synchronised speech conditions, when participants stuttered least, suggesting either that stuttering does not result from over-reliance on feedback, or that the STG activation seen here does not reflect speech feedback monitoring. We discuss this result with reference to neural responses seen in the typical population.
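
As a sketch of the suppression logic at issue, the snippet below computes a speaking-induced suppression index (listening response minus speaking response) from hypothetical STG condition estimates. The beta values are illustrative stand-ins, not the study's data; the over-reliance account predicts the index would be near zero or negative in people who stutter.

```python
# A minimal sketch of speaking-induced suppression: suppression is
# present when STG responds less to self-produced speech than to
# listening. Beta values below are illustrative assumptions, not data.

# Hypothetical mean STG beta per condition (arbitrary units).
betas = {
    "listen":        1.20,
    "speak_alone":   1.15,
    "sync_live":     1.30,
    "sync_recorded": 1.25,
}

def suppression(speak_beta, listen_beta):
    """Listening response minus speaking response.
    Positive values indicate the typical suppression effect."""
    return listen_beta - speak_beta

for cond in ("speak_alone", "sync_live", "sync_recorded"):
    print(f"{cond}: suppression = {suppression(betas[cond], betas['listen']):+.2f}")
```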


2020
Author(s): Vincent van de Ven, Lourens Waldorp, Ingrid Christoffels

There is increasing evidence that the hippocampus is involved in language production and verbal communication, although little is known about its possible role. According to one view, the hippocampus contributes semantic memory to spoken language. Alternatively, the hippocampus is involved in processing the (mis)match between the expected sensory consequences of speaking and the perceived speech feedback. In the current study, we re-analysed functional magnetic resonance imaging (fMRI) data from two overt picture-naming studies to test whether the hippocampus is involved in speech production and, if so, whether the results can distinguish between a “pure memory” and an “expectation” account of hippocampal involvement. In both studies, participants overtly named pictures during scanning while hearing their own speech feedback either unimpeded or impaired by a superimposed noise mask. Results showed decreased hippocampal activity when speech feedback was impaired, compared to when it was unimpeded. Further, we found increased functional coupling between auditory cortex and hippocampus during unimpeded speech feedback, compared to impaired feedback. Finally, we found significant functional coupling between a hippocampal/supplementary motor area (SMA) interaction term and auditory cortex, anterior cingulate cortex and cerebellum during overt picture naming, but not during listening to one’s own pre-recorded voice. These findings indicate that the hippocampus plays a role in speech production that accords with an “expectation” view of hippocampal functioning.
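
To illustrate the interaction-term analysis described above, the sketch below builds a hippocampus-by-SMA interaction regressor as the product of two centred seed timecourses and tests whether it predicts an auditory-cortex timecourse. All timecourses are simulated; the original preprocessing and model details are not reproduced here.

```python
# A minimal sketch of a seed-by-seed (physiophysiological) interaction
# analysis. All timecourses are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                  # number of scans (assumed)
hippocampus = rng.standard_normal(n)     # seed 1 ROI timecourse
sma = rng.standard_normal(n)             # seed 2 ROI timecourse

# Interaction term: elementwise product of the two centred seeds.
interaction = (hippocampus - hippocampus.mean()) * (sma - sma.mean())

# Does the interaction predict auditory cortex over and above the
# two seeds themselves? (Simulated target with a known coupling of 0.5.)
auditory = 0.5 * interaction + rng.standard_normal(n)
X = np.column_stack([np.ones(n), hippocampus, sma, interaction])
beta, *_ = np.linalg.lstsq(X, auditory, rcond=None)
print(f"interaction (coupling) beta: {beta[3]:.2f}")
```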


2019
Author(s): Ding-lan Tang, Daniel R. Lametti, Kate E. Watkins

Speaking is one of the most complicated motor behaviours, involving a large number of articulatory muscles that can move independently to command precise changes in speech acoustics. Here, we used real-time manipulations of speech feedback to test whether the acoustics of speech production (e.g. the formants) reflect independently controlled articulatory movements or combinations of movements. During repetitive productions of “head, bed, dead”, either the first formant (F1) or the second formant (F2) of vowels was shifted and fed back to participants. We then examined whether changes in production in response to these alterations occurred for only the perturbed formant or for both formants. In Experiment 1, participants who received increased F1 feedback significantly decreased their F1 productions in compensation, but also significantly increased the frequency of their F2 productions. The combined F1-F2 change moved the utterances closer to a known pattern of speech production (i.e. the vowel category of “hid, bid, did”). In Experiment 2, we further showed that a downward shift in the frequency of F2 feedback also induced significant compensatory changes, in opposite directions, in both the perturbed (F2) and the unperturbed formant (F1). Taken together, the results demonstrate that a shift in auditory feedback of a single formant drives combined changes in related formants. They suggest that, although formants can be controlled independently, the speech motor system may favour a strategy in which changes in formant production are coupled to maintain speech production within specific regions of the vowel space corresponding to existing speech-sound categories.

New & Noteworthy: Findings from previous studies examining responses to altered auditory feedback are inconsistent with respect to the changes speakers make to their production. Speakers can compensate by specifically altering their production to offset the acoustic error in feedback. Alternatively, they may compensate by changing their speech production more globally to produce a speech sound closer to an existing category in their repertoire. Our study shows support for the latter strategy.
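
As a concrete illustration of the paradigm, the sketch below applies a fixed upward F1 shift to feedback and measures compensation as the change in produced formants relative to baseline. All frequencies and the shift magnitude are illustrative assumptions, not the study's parameters.

```python
# A minimal sketch of the altered-feedback logic: feedback formants are
# shifted by a fixed amount, and compensation is the change in produced
# formants relative to baseline, opposing the shift. Values are
# illustrative assumptions, not the study's data.

baseline = {"F1": 580.0, "F2": 1720.0}   # e.g. a vowel like "head" (Hz)
shift = {"F1": +100.0, "F2": 0.0}        # perturbation applied to feedback

def heard(produced):
    """What the participant hears: produced formants plus the shift."""
    return {f: produced[f] + shift[f] for f in produced}

# Hypothetical end-of-adaptation production: F1 lowered to oppose the
# upward F1 shift, F2 raised, moving toward the "hid/bid/did" category.
adapted = {"F1": 545.0, "F2": 1790.0}

for f in ("F1", "F2"):
    change = adapted[f] - baseline[f]
    print(f"{f}: produced change = {change:+.0f} Hz "
          f"(feedback shift {shift[f]:+.0f} Hz)")
```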


2019
Author(s): Daniel Robert Lametti, Marcus Quek, Calum Prescott, John-Stuart Brittain, Kate E Watkins

Our understanding of the adaptive processes that shape sensorimotor behaviour is largely derived from studying isolated movements. Studies of visuomotor adaptation, in which participants adapt cursor movements to rotations of the cursor’s screen position, have led to prominent theories of motor control. In response to changes in visual feedback of movements, explicit (cognitive) and implicit (automatic) learning processes adapt movements to counter errors. However, movements rarely occur in isolation. The extent to which explicit and implicit processes drive sensorimotor adaptation when multiple movements occur simultaneously, as in the real world, remains unclear. Here, we address this problem in the context of speech and hand movements. Participants spoke in time with rapid, hand-driven cursor movements. Using real-time auditory alterations of speech feedback and visual rotations of the cursor’s screen position, we induced sensorimotor adaptation in one or both movements simultaneously. Across three experiments (n = 184), we demonstrate that visuomotor adaptation is markedly impaired by simultaneous speech adaptation, and that the impairment is specific to the explicit learning process. In contrast, visuomotor adaptation had no impact on speech adaptation. The results demonstrate that the explicit learning process in visuomotor adaptation is sensitive to movements in other motor domains. They suggest that speech adaptation may lack an explicit learning process.
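
For concreteness, here is a minimal sketch of the visuomotor rotation such paradigms apply: the on-screen cursor is the hand position rotated by a fixed angle about the movement origin, so countering the error requires aiming in the opposite direction. The 30-degree angle is an assumption, not the paper's parameter.

```python
# A minimal sketch of a visuomotor rotation: the cursor shown on screen
# is the hand position rotated about the start location. Angle is an
# illustrative assumption.
import numpy as np

def rotate_cursor(hand_xy, origin_xy, angle_deg=30.0):
    """Return the cursor position: hand rotated about the origin."""
    theta = np.deg2rad(angle_deg)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return np.asarray(origin_xy) + R @ (np.asarray(hand_xy) - np.asarray(origin_xy))

# A straight-ahead reach appears rotated by 30 degrees on screen.
print(rotate_cursor(hand_xy=(0.0, 10.0), origin_xy=(0.0, 0.0)))
```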

