Individual variability in auditory feedback processing: Responses to real-time formant perturbations and their relation to perceptual acuity

2020 ◽  
Vol 148 (6) ◽  
pp. 3709-3721
Author(s):  
Daniel R. Nault ◽  
Kevin G. Munhall

2019 ◽  
Vol 72 (10) ◽  
pp. 2371-2379 ◽  
Author(s):  
Matthias K Franken ◽  
Daniel J Acheson ◽  
James M McQueen ◽  
Peter Hagoort ◽  
Frank Eisner

Previous research on the effect of perturbed auditory feedback in speech production has focused on two types of responses. In the short term, speakers generate compensatory motor commands in response to unexpected perturbations. In the longer term, speakers adapt feedforward motor programmes in response to feedback perturbations, to avoid future errors. The current study investigated the relation between these two types of responses to altered auditory feedback. Specifically, it was hypothesised that the consistency of previous feedback perturbations would influence whether speakers adapt their feedforward motor programmes. In an altered auditory feedback paradigm, formant perturbations were applied either across all trials (the consistent condition) or only to some trials, while the remaining trials were unperturbed (the inconsistent condition). The results showed that speakers’ responses were affected by feedback consistency, with stronger speech changes in the consistent condition than in the inconsistent condition. Current models of speech-motor control can explain this consistency effect. However, the data also suggest that compensation and adaptation are distinct processes, a distinction that not all current models capture.
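
The consistency effect described above can be made concrete with a standard single-state state-space model of trial-by-trial adaptation. The sketch below is illustrative only, not the authors' model; the retention rate, learning rate, trial count, and perturbation size are all assumed values. It shows why a perturbation applied on every trial drives a larger steady-state change in the feedforward state than one applied on only half of the trials.

```python
# Minimal sketch of a single-state state-space adaptation model
# (assumed parameters throughout; not the model fitted in the paper).
import random

A, B = 0.99, 0.1     # retention and learning rate (illustrative values)
N_TRIALS = 200
PERTURBATION = 1.0   # formant shift, arbitrary units

def simulate(p_perturbed: float) -> float:
    """Final feedforward state when a fraction p_perturbed of trials
    carries the perturbation (1.0 = consistent, 0.5 = inconsistent)."""
    x = 0.0
    for _ in range(N_TRIALS):
        shift = PERTURBATION if random.random() < p_perturbed else 0.0
        error = shift - x        # auditory error experienced on this trial
        x = A * x + B * error    # retain previous state, learn from error
    return x

print("consistent:  ", round(simulate(1.0), 2))  # larger steady-state adaptation
print("inconsistent:", round(simulate(0.5), 2))  # weaker adaptation, as observed
```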


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Kamalini G. Ranasinghe ◽  
Hardik Kothare ◽  
Naomi Kort ◽  
Leighton B. Hinkley ◽  
Alexander J. Beagle ◽  
...  

2015 ◽  
Vol 43 ◽  
pp. 216-228 ◽  
Author(s):  
Jérémy Danna ◽  
Maureen Fontaine ◽  
Vietminh Paz-Villagrán ◽  
Charles Gondre ◽  
Etienne Thoret ◽  
...  

2021 ◽  
Vol 12 ◽  
Author(s):  
Angel David Blanco ◽  
Simone Tassani ◽  
Rafael Ramirez

Auditory-guided vocal learning is a mechanism that operates both in humans and in other animal species, making us capable of imitating arbitrary sounds. Auditory memories and auditory feedback interact to guide vocal learning. This may explain why it is easier for humans to imitate the pitch of a human voice than the pitch of a synthesized sound. In this study, we compared the effects of two different feedback modalities on the learning of pitch-matching abilities using a synthesized pure tone in 47 participants with no prior music experience. Participants were divided into three groups: a feedback group (N = 15) receiving real-time visual feedback of their pitch as well as knowledge of results; an equal-timbre group (N = 17) receiving additional auditory feedback of the target note with a timbre similar to the instrument being used (i.e., violin or human voice); and a control group (N = 15) practicing without any feedback or knowledge of results. An additional fourth group of violin experts performed the same task for comparative purposes (N = 15). All groups were subsequently evaluated in a transfer phase. Both experimental groups (i.e., the feedback and equal-timbre groups) improved their intonation abilities with the synthesized sound after receiving feedback. Participants from the equal-timbre group seemed as capable as the feedback group of producing the required pitch with the voice after listening to the human voice, but not with the violin (although they also showed improvement). In addition, only participants receiving real-time visual feedback learned, and retained in the transfer phase, the mapping between the synthesized pitch and its correspondence with the produced vocal or violin pitch. It is suggested that the effect of an objective external reward, together with the experience of explicitly exploring the pitch space with their instrument, helped participants understand how to control their pitch production, strengthening their schemas and favoring retention.
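
The visual feedback manipulated in this study is, at its core, a signed pitch error relative to the target. A minimal sketch, assuming the common convention of expressing that error in cents (the abstract does not specify the feedback computation):

```python
# Minimal sketch (assumed, not the authors' code): the signed pitch error,
# in cents, that a real-time visual feedback display could be driven by.
import math

def cents_deviation(produced_hz: float, target_hz: float) -> float:
    """Signed deviation from the target pitch in cents
    (100 cents = one equal-tempered semitone)."""
    return 1200.0 * math.log2(produced_hz / target_hz)

# Example: matching a 440 Hz synthesized pure tone.
print(cents_deviation(452.0, 440.0))  # ~46.6 cents sharp
print(cents_deviation(427.0, 440.0))  # ~-51.9 cents flat
```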


2020 ◽  
Vol 63 (8) ◽  
pp. 2522-2534 ◽  
Author(s):  
Kwang S. Kim ◽  
Hantao Wang ◽  
Ludo Max

Purpose: Various aspects of speech production related to auditory–motor integration and learning have been examined through auditory feedback perturbation paradigms in which participants' acoustic speech output is experimentally altered and played back via earphones/headphones “in real time.” Scientific rigor requires high precision in determining and reporting the involved hardware and software latencies. Many reports in the literature, however, claim latencies smaller than the minimum achievable for the given experimental setup. Here, we focus specifically on this methodological issue associated with implementing real-time auditory feedback perturbations, and we offer concrete suggestions for increased reproducibility in this particular line of work. Method: Hardware and software latencies as well as total feedback loop latency were measured for formant perturbation studies with the Audapter software. Measurements were conducted for various audio interfaces, desktop and laptop computers, and audio drivers. An approach for lowering Audapter's software latency through nondefault parameter specification was also tested. Results: Oft-overlooked hardware-specific latencies were not negligible for some of the tested audio interfaces (adding as much as 15 ms). Total feedback loop latencies (including both hardware and software latency) were also generally larger than claimed in the literature. Nondefault parameter values can improve Audapter's own processing latency without a negative impact on formant tracking. Conclusions: Audio interface selection and software parameter optimization substantially affect total feedback loop latency. Thus, the actual total latency (hardware plus software) needs to be correctly measured and described in all published reports. Future speech research with “real-time” auditory feedback perturbations should increase scientific rigor by minimizing this latency.
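
The measurement the authors call for, estimating total feedback loop latency rather than trusting nominal values, can be approximated with a generic loopback test. The sketch below is an assumption-level illustration in Python (Audapter itself runs under MATLAB): inject a known signal into the audio input, record what arrives at the headphone output, and read the latency off the peak of the cross-correlation.

```python
# Minimal sketch (assumed): estimate total feedback loop latency from a
# loopback recording. input_sig is the injected test signal; output_sig is
# what was recorded at the headphone output, same length and sampling rate.
import numpy as np

FS = 48_000  # sampling rate in Hz (assumed)

def loop_latency_ms(input_sig: np.ndarray, output_sig: np.ndarray,
                    fs: int = FS) -> float:
    """Latency as the lag (ms) maximizing the cross-correlation of the
    recorded output with the injected input."""
    corr = np.correlate(output_sig, input_sig, mode="full")
    lag = np.argmax(corr) - (len(input_sig) - 1)  # delay in samples
    return 1000.0 * lag / fs

# Self-contained check with a synthetic 20 ms delay:
click = np.zeros(FS // 10)
click[100] = 1.0
delayed = np.roll(click, int(0.020 * FS))
print(loop_latency_ms(click, delayed))  # ~20.0 ms
```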


2021 ◽  
Vol 14 ◽  
Author(s):  
Bruno Direito ◽  
Manuel Ramos ◽  
João Pereira ◽  
Alexandre Sayal ◽  
Teresa Sousa ◽  
...  

Introduction: The potential therapeutic efficacy of real-time fMRI neurofeedback has received increasing attention for a variety of psychological and neurological disorders and as a tool to probe cognition. Despite its growing popularity, the success rate varies significantly, and the underlying neural mechanisms are still a matter of debate. The question of whether an individually tailored framework positively influences neurofeedback success remains largely unexplored. Methods: To address this question, participants were trained to modulate the activity of a target brain region, the visual motion area hMT+/V5, based on the performance of three imagery tasks of increasing complexity: imagery of a static dot, and imagery of a moving dot with two and with four opposite directions. Participants received auditory feedback in the form of vocalizations with either negative, neutral, or positive valence. The modulation thresholds were defined for each participant according to the maximum BOLD signal change of their target region during the localizer run. Results: We found that 4 out of 10 participants were able to modulate brain activity in this region of interest during neurofeedback training. This rate of success (40%) is consistent with the neurofeedback literature. Whole-brain analysis revealed the recruitment of specific cortical regions involved in cognitive control, reward monitoring, and feedback processing during neurofeedback training. Individually tailored feedback thresholds did not correlate with the level of success. We found region-dependent neuromodulation profiles associated with task complexity and feedback valence. Discussion: The findings support the strategic role of task complexity and feedback valence in the modulation of the network nodes involved in monitoring and feedback control, key variables in the optimization of neurofeedback frameworks. Given the elaborate design, the small sample tested here (N = 10) limits external validity in comparison to our previous studies; future work will address this limitation. Ultimately, our results contribute to the discussion of individually tailored solutions and justify further investigation into volitional control over brain activity.
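
The feedback scheme described in the Methods, a participant-specific threshold derived from the maximum localizer-run BOLD signal change and feedback delivered as negative/neutral/positive vocalizations, can be sketched as follows. The percent-signal-change formula is standard; the specific threshold fractions (1/3 and 2/3 of the localizer maximum) and the example numbers are assumptions for illustration, not the authors' values.

```python
# Minimal sketch (assumed, not the authors' pipeline): map the ROI's percent
# BOLD signal change onto a feedback valence class using a participant-
# specific threshold from a localizer run.
def percent_signal_change(roi_mean: float, baseline_mean: float) -> float:
    """Percent BOLD signal change of the ROI relative to baseline."""
    return 100.0 * (roi_mean - baseline_mean) / baseline_mean

def feedback_valence(psc: float, max_psc_localizer: float) -> str:
    """Classify the current signal change as negative/neutral/positive;
    the 1/3 and 2/3 cutoffs are illustrative assumptions."""
    if psc >= (2.0 / 3.0) * max_psc_localizer:
        return "positive"
    if psc >= (1.0 / 3.0) * max_psc_localizer:
        return "neutral"
    return "negative"

# Example: the localizer run peaked at 2.1% signal change in hMT+/V5.
psc = percent_signal_change(roi_mean=1030.0, baseline_mean=1012.0)  # ~1.78%
print(psc, feedback_valence(psc, max_psc_localizer=2.1))            # positive
```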


Author(s):  
Abdelkader Belkacem ◽  
Natsue Yoshimura ◽  
Duk Shin ◽  
Hiroyuki Kambara ◽  
Yasuharu Koike

2020 ◽  
Author(s):  
Robin Karlin ◽  
Benjamin Parrell ◽  
Chris Naber

Research using real-time altered auditory feedback has demonstrated a key role for auditory feedback both in online feedback control and in updating feedforward control for future utterances. Much of this research has examined control in the spectral domain and has found that speakers compensate for perturbations to vowel formants, intensity, and fricative center of gravity. The aim of the current study is to examine adaptation in response to temporal perturbation, using real-time perturbation of ongoing speech. Word-initial consonant targets (voice onset time, VOT, for /k, g/ and fricative duration for /s, z/) were lengthened, and the following stressed vowel (/æ/) was shortened. Overall, speakers did not adapt to lengthened consonants, but did lengthen vowels by nearly 100% of the perturbation magnitude in response to shortening. Vowel lengthening showed continued aftereffects during a washout phase in which the perturbation was abruptly removed. Although speakers did not actively adapt consonant durations, the adaptation in vowel duration leads the consonant to take up a smaller proportion of the syllable overall, aligning with previous research suggesting that speakers attend to proportional rather than absolute durations. These results indicate that speakers actively monitor duration and update upcoming speech plans accordingly.
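
The proportional-duration account in the final sentences is easy to see numerically. The sketch below uses assumed durations, not values from the paper: the feedback perturbation lengthens the heard consonant and shortens the heard vowel, and lengthening the produced vowel by the perturbation magnitude moves the consonant's proportion of the syllable back toward baseline even though the consonant itself is never shortened.

```python
# Minimal sketch with assumed heard (feedback) durations in ms;
# not data from the study.
def consonant_proportion(consonant_ms: float, vowel_ms: float) -> float:
    """Consonant duration as a proportion of the consonant+vowel interval."""
    return consonant_ms / (consonant_ms + vowel_ms)

PERT = 30.0  # perturbation magnitude (assumed)

baseline  = consonant_proportion(80.0, 160.0)                # C 80, V 160      -> 0.33
perturbed = consonant_proportion(80.0 + PERT, 160.0 - PERT)  # no adaptation    -> 0.46
adapted   = consonant_proportion(80.0 + PERT, 160.0)         # vowel lengthened -> 0.41

print(baseline, perturbed, adapted)  # adaptation moves the proportion back toward baseline
```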

