The effect of real-time auditory feedback on learning new characters

2015 · Vol 43 · pp. 216-228
Author(s): Jérémy Danna, Maureen Fontaine, Vietminh Paz-Villagrán, Charles Gondre, Etienne Thoret, ...
2021 · Vol 12
Author(s): Angel David Blanco, Simone Tassani, Rafael Ramirez

Auditory-guided vocal learning is a mechanism that operates in both humans and other animal species, enabling us to imitate arbitrary sounds. Auditory memories and auditory feedback interact to guide vocal learning, which may explain why it is easier for humans to imitate the pitch of a human voice than the pitch of a synthesized sound. In this study, we compared the effects of two different feedback modalities on learning pitch-matching abilities with a synthesized pure tone in 47 participants with no prior music experience. Participants were divided into three groups: a feedback group (N = 15) receiving real-time visual feedback of their pitch as well as knowledge of results; an equal-timbre group (N = 17) receiving additional auditory feedback of the target note with a timbre similar to the instrument being used (i.e., violin or human voice); and a control group (N = 15) practicing without any feedback or knowledge of results. An additional fourth group of violin experts performed the same task for comparative purposes (N = 15). All groups were subsequently evaluated in a transfer phase. Both experimental groups (i.e., the feedback and equal-timbre groups) improved their intonation with the synthesized sound after receiving feedback. Participants from the equal-timbre group seemed as capable as the feedback group of producing the required pitch with the voice after listening to the human voice, but not with the violin (although they also showed improvement). In addition, only participants receiving real-time visual feedback learned, and retained in the transfer phase, the mapping between the synthesized pitch and the produced vocal or violin pitch. It is suggested that the combination of an objective external reward and the experience of explicitly exploring the pitch space with their instrument helped participants understand how to control their pitch production, strengthening their schemas and favoring retention.
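Intonation accuracy in such pitch-matching tasks is conventionally scored in cents, the logarithmic unit used throughout this literature (100 cents = one equal-tempered semitone). As a minimal sketch of how a produced pitch might be scored against the synthesized target, assuming the standard cents formula (the function name and example frequencies are illustrative, not taken from the study):

```python
import math

def cents_deviation(produced_hz: float, target_hz: float) -> float:
    """Signed deviation of the produced pitch from the target, in cents
    (100 cents = one equal-tempered semitone)."""
    return 1200.0 * math.log2(produced_hz / target_hz)

# Hypothetical example: a 440 Hz target sung at 452 Hz
# comes out about +46.6 cents sharp (just under half a semitone).
print(f"{cents_deviation(452.0, 440.0):+.1f} cents")  # +46.6 cents
```

A real-time visual display of the kind used in the feedback group could simply update this value, or a marker on a pitch axis, on every analysis frame.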


2020 · Vol 63 (8) · pp. 2522-2534
Author(s): Kwang S. Kim, Hantao Wang, Ludo Max

Purpose: Various aspects of speech production related to auditory–motor integration and learning have been examined through auditory feedback perturbation paradigms in which participants' acoustic speech output is experimentally altered and played back via earphones/headphones “in real time.” Scientific rigor requires high precision in determining and reporting the involved hardware and software latencies. Many reports in the literature, however, are not consistent with the minimum achievable latency for a given experimental setup. Here, we focus specifically on this methodological issue associated with implementing real-time auditory feedback perturbations, and we offer concrete suggestions for increased reproducibility in this particular line of work.

Method: Hardware and software latencies as well as total feedback loop latency were measured for formant perturbation studies with the Audapter software. Measurements were conducted for various audio interfaces, desktop and laptop computers, and audio drivers. An approach for lowering Audapter's software latency through nondefault parameter specification was also tested.

Results: Oft-overlooked hardware-specific latencies were not negligible for some of the tested audio interfaces (adding up to 15 ms). Total feedback loop latencies (including both hardware and software latency) were also generally larger than claimed in the literature. Nondefault parameter values can improve Audapter's own processing latency without negative impact on formant tracking.

Conclusions: Audio interface selection and software parameter optimization substantially affect total feedback loop latency. Thus, the actual total latency (hardware plus software) needs to be correctly measured and described in all published reports. Future speech research with “real-time” auditory feedback perturbations should increase scientific rigor by minimizing this latency.
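The paper's central recommendation, measuring the actual feedback loop latency rather than trusting nominal driver values, can be approximated with a simple loopback test: emit an impulse, record it through the same interface, and locate the echo. The sketch below uses the sounddevice and numpy libraries and is a generic illustration of that idea under the assumption of a physical loopback connection; it is not the authors' measurement procedure or an Audapter API.

```python
import numpy as np
import sounddevice as sd

FS = 44100  # sample rate (Hz); assumed, adjust to the interface

def measure_loop_latency_ms(duration_s: float = 0.5) -> float:
    """Estimate round-trip (output -> input) latency by playing a unit
    impulse and finding its echo in the simultaneous recording.
    Requires the interface's output to be looped back to its input."""
    signal = np.zeros(int(FS * duration_s), dtype=np.float32)
    signal[0] = 1.0  # unit impulse at t = 0
    recording = sd.playrec(signal, samplerate=FS, channels=1)
    sd.wait()  # block until playback/recording finishes
    # Since the emitted signal is an impulse, the lag of the echo is
    # simply the index of the recording's peak magnitude.
    lag_samples = int(np.argmax(np.abs(recording[:, 0])))
    return 1000.0 * lag_samples / FS

if __name__ == "__main__":
    print(f"estimated loop latency: {measure_loop_latency_ms():.1f} ms")
```

Repeating such a measurement across interfaces, drivers, and buffer settings would expose exactly the hardware-specific differences the authors report.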


Author(s): Abdelkader Belkacem, Natsue Yoshimura, Duk Shin, Hiroyuki Kambara, Yasuharu Koike

2020
Author(s): Robin Karlin, Benjamin Parrell, Chris Naber

Research using real-time altered auditory feedback has demonstrated a key role for auditory feedback both in online feedback control and in updating feedforward control for future utterances. Much of this research has examined control in the spectral domain and has found that speakers compensate for perturbations to vowel formants, intensity, and fricative center of gravity. The aim of the current study is to examine adaptation in response to temporal perturbation, using real-time perturbation of ongoing speech. Word-initial consonant targets (voice onset time for /k, g/ and fricative duration for /s, z/) were lengthened, and the following stressed vowel (/æ/) was shortened. Overall, speakers did not adapt to lengthened consonants but did lengthen vowels by nearly 100% of the perturbation magnitude in response to shortening. Vowel lengthening showed continued aftereffects during a washout phase in which the perturbation was abruptly removed. Although speakers did not actively adapt consonant durations, the adaptation in vowel duration leads the consonant to take up a smaller proportion of the syllable overall, aligning with previous research suggesting that speakers attend to proportional rather than absolute durations. These results indicate that speakers actively monitor duration and update upcoming speech plans accordingly.
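The proportional-duration account can be made concrete with a little arithmetic: if the speaker leaves the consonant alone but lengthens the vowel, the consonant's share of the consonant-vowel interval shrinks even though its absolute duration is unchanged. The millisecond values below are invented for illustration, not taken from the study.

```python
def consonant_proportion(c_ms: float, v_ms: float) -> float:
    """Consonant duration as a proportion of the consonant + vowel interval."""
    return c_ms / (c_ms + v_ms)

# Hypothetical produced durations: the consonant stays at 80 ms while the
# vowel is lengthened from 160 ms to 200 ms in response to the perturbation.
print(f"{consonant_proportion(80.0, 160.0):.3f}")  # 0.333 before adaptation
print(f"{consonant_proportion(80.0, 200.0):.3f}")  # 0.286 after vowel lengthening
```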


2016 · Vol 140 (4) · pp. 3425-3425
Author(s): Andrew Lucila, Franklin Roque, Michael Morgan, Michael S. Gordon

2019 · Vol 145 (3) · pp. 1914-1914
Author(s): Miriam Oschkinat, Eva Reinisch, Philip Hoole

2011 · Vol 29 (supplement) · pp. 433-449
Author(s): Isabelle Viaud-Delmon, Jane Mason, Karim Haddad, Markus Noisternig, Frédéric Bevilacqua, ...

Over the last four years, we have developed a partnership between dance and neuroscience to study the relationships between body space in dance and the surrounding space, and the link between movement and audition as experienced by the dancer. The opportunity to work with a dancer/choreographer, an expert in movement, gives neuroscientists better access to the significance of the auditory-motor loop and its role in the perception of surrounding space. Given that a dancer has a very strong sense of body ownership, probably through a very accurate dynamic body schema (Walsh et al. 2011), she is an ideal subject for investigating the feeling of controlling one's own body movements and, through them, events in the external environment (Moore et al. 2009; Jola et al., in press).

We conducted several work sessions that brought together a choreographer/dancer, a neuroscientist, a composer, and two researchers in acoustics and audio signal processing. These sessions were held at IRCAM (Institute for Research and Coordination Acoustic/Music, Paris) in a variable-acoustics concert hall equipped with a Wave Field Synthesis (WFS) sound reproduction system and infrared cameras for motion capture. During these sessions, we concentrated on two specific questions: (1) is it possible to extend the body space of the dancer through auditory feedback (Maravita and Iriki 2004)? and (2) can we alter the dancer's perception of space by altering perceptions associated with movements?

We used an interactive setup in which a collection of pre-composed sound events (individual sounds or musical sentences) could be transformed and rendered in real time according to the movements and position of the dancer, which were sensed via markers on her body and detected by a motion-tracking system. The transformations applied to the different sound components through the dancer's movement and position involved not only musical parameters such as intensity and timbre but also the spatial parameters of the sounds: the technology allowed us to control their trajectory in space, their apparent distance, and the reverberation ambiance. We elaborated a catalogue of interaction modes with auditory settings that changed according to the dancer's movements. An interaction mode is defined by a particular mapping of the dancer's position, posture, or gesture to musical and spatial parameters. For instance, a sound event may be triggered if the dancer is within a certain region or performs a predefined gesture; more elaborate modes involved the modulation of musical parameters by the dancer's continuous movements.

The perceptual and cognitive pertinence of this catalogue of interactions was tested throughout the sessions. We observed that the detachable markers could be used to create a perception of extended body space, and that the performer perceived the stage space differently according to the auditory feedback of her actions. The dancer reported that each experience with the technology shed light on her need for greater awareness and exploration of her relationships with space. Real-time interactivity with sound heightened her physical awareness, as though the stage itself took on a role and became another character.
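As a loose illustration of what an "interaction mode" amounts to in software, the sketch below maps a tracked marker's position to a discrete sound trigger and its speed to a continuous intensity parameter. All names, coordinates, and thresholds are invented for the example; this is not a description of IRCAM's actual WFS or motion-capture software.

```python
from dataclasses import dataclass

@dataclass
class MarkerFrame:
    """One motion-capture frame for a single marker (hypothetical units)."""
    x: float      # stage position in metres
    y: float
    speed: float  # marker speed in m/s

def region_trigger(frame: MarkerFrame) -> bool:
    """Discrete mode: fire a pre-composed sound event when the dancer
    enters a defined stage region."""
    return 0.0 <= frame.x <= 2.0 and 0.0 <= frame.y <= 2.0

def intensity_from_motion(frame: MarkerFrame, max_speed: float = 3.0) -> float:
    """Continuous mode: map movement speed to a sound intensity in [0, 1]."""
    return min(frame.speed / max_speed, 1.0)

frame = MarkerFrame(x=1.2, y=0.8, speed=1.5)
if region_trigger(frame):
    print("trigger sound event")
print(f"intensity = {intensity_from_motion(frame):.2f}")  # intensity = 0.50
```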

