Towards Neurocomputational Speech and Sound Processing

Author(s):  
Jean Rouat ◽  
Stéphane Loiselle ◽  
Ramin Pichevar
Neuroreport ◽  
2006 ◽  
Vol 17 (11) ◽  
pp. 1225-1228 ◽  
Author(s):  
Mari Tervaniemi ◽  
Anu Castaneda ◽  
Monja Knoll ◽  
Maria Uther

Author(s):  
Eva Eglāja Kristsone ◽  
Signe Raudive

Keywords: children’s poetry, public engagement, reading aloud, recording of poetry, Veidenbaums

The development of public engagement technologies has provided new ways of encouraging societal participation. Public engagement events developed by various institutions offer ways to combine learning about cultural heritage with individual participation. Poetry readings serve as one of the ways the sound of Latvian literature, and of Latvian classical poetry in particular, can be brought up to date. The authors of this article analyse the first two public engagement actions (“Skandē Veidenbaumu” and “Lasīsim dzejiņas”) of the series “Lasi skaļi” (Read Aloud) launched by the Institute of Literature, Folklore, and Art of the University of Latvia. During these events, participants were given the opportunity to record thematically selected poems in the audio recording booth of the Latvian National Library or, as an alternative, to record a poem on their computer or mobile device and upload it to the action site. The events combined the creation of a recorded corpus of poetry readings with related educational content, and they represent one of the newer educational methods for reaching the general public and some of its subgroups (children, pupils, students, etc.). Through these events, the public was given the opportunity to become acquainted with Latvian cultural heritage while simultaneously creating new cultural artifacts. The participants creatively used different performance approaches, recording the poems in a variety of voices, singing, or even incorporating digital sound-processing programmes. They actively seized the opportunity to create new versions of poems that had already been set to music. The main reasons for rejecting a particular recording were buffoonery or cursing during the recording process, or leaving the recording unfinished. Together, the two events resulted in more than 4,500 audio recordings, which were then stored in the digital archive of the Institute.
The set of recordings could be of interest to researchers in linguistics, sociolinguistics, and computational linguistics, as it provides a unique record of pronunciation during a specific period of time by people of different ages, genders, and nationalities.


eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Jennifer Resnik ◽  
Daniel B Polley

Cortical neurons remap their receptive fields and rescale sensitivity to spared peripheral inputs following sensory nerve damage. To address how these plasticity processes are coordinated over the course of functional recovery, we tracked receptive field reorganization, spontaneous activity, and response gain from individual principal neurons in the adult mouse auditory cortex over a 50-day period surrounding either moderate or massive auditory nerve damage. We related the day-by-day recovery of sound processing to dynamic changes in the strength of intracortical inhibition from parvalbumin-expressing (PV) inhibitory neurons. Whereas the status of brainstem-evoked potentials did not predict the recovery of sensory responses to surviving nerve fibers, homeostatic adjustments in PV-mediated inhibition during the first days following injury could predict the eventual recovery of cortical sound processing weeks later. These findings underscore the potential importance of self-regulated inhibitory dynamics for the restoration of sensory processing in excitatory neurons following peripheral nerve injuries.


2013 ◽  
Vol 10 (1) ◽  
pp. 483-501 ◽  
Author(s):  
Bernd Tessendorf ◽  
Matjaz Debevc ◽  
Peter Derleth ◽  
Manuela Feilner ◽  
Franz Gravenhorst ◽  
...  

Hearing instruments (HIs) have become context-aware devices that analyze the acoustic environment in order to automatically adapt sound processing to the user's current hearing wish. However, in the same acoustic environment an HI user can have different hearing wishes requiring different behaviors from the hearing instrument. In these cases, the audio signal alone contains too little contextual information to determine the user's hearing wish. Additional modalities beyond sound can provide the missing information to improve the adaptation. In this work, we review additional modalities to sound in HIs and present a prototype of a newly developed wireless multimodal hearing system. The platform takes into account additional sensor modalities such as the user's body movement and location. We characterize the system regarding runtime, latency, and reliability of the wireless connection, and point out possibilities arising from the novel approach.
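The core idea of the abstract, that the same acoustic scene can map to different hearing wishes, and that extra modalities disambiguate it, can be illustrated with a small rule-based sketch. This is not the authors' implementation; all names, class labels, and program names below are hypothetical.

```python
# Illustrative sketch: resolving an ambiguous acoustic scene with
# additional context modalities (body movement, location) to pick a
# hearing-instrument program. Labels and rules are hypothetical.

def select_program(acoustic_class: str, motion: str, location: str) -> str:
    """Map a (sound, movement, location) context to an HI program.

    acoustic_class: output of the HI's acoustic scene classifier
    motion:         coarse body-movement state from an inertial sensor
    location:       coarse location label
    """
    # Same acoustic scene, different hearing wish: speech in noise while
    # walking outdoors suggests keeping ambient awareness, whereas the
    # same scene while seated suggests focusing on a conversation partner.
    if acoustic_class == "speech_in_noise":
        if motion == "walking" and location == "street":
            return "omnidirectional"  # preserve awareness of surroundings
        return "directional"          # focus on the talker
    if acoustic_class == "music":
        return "music"
    return "default"

print(select_program("speech_in_noise", "sitting", "office"))  # directional
print(select_program("speech_in_noise", "walking", "street"))  # omnidirectional
```

Even this toy version shows why the audio signal alone is insufficient: the first two calls see an identical acoustic class yet select different programs.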


2012 ◽  
Vol 17 (1) ◽  
pp. 62-72 ◽  
Author(s):  
Owen Vallis ◽  
Dimitri Diakopoulos ◽  
Jordan Hochenbaum ◽  
Ajay Kapur

Historically, network music has explored the practice and theory of interconnectivity, utilising the network itself as a creative instrument. The Machine Orchestra (TMO) has extended this historical idea by developing the custom software suite Signal, and creating a shared, social instrument consisting of musical robotics. Signal is a framework for musical synchronisation and data sharing, designed to support the use of musical robotics in an attempt to more fully address ideas of interconnectivity and embodied performance. Signal, in combination with musical robotics, also facilitates the exploration of interaction contexts, such as at the note level, score level and sound-processing level. In this way, TMO is simultaneously building upon the historical contributions and developing aesthetics of network music.
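The note-level data sharing that Signal supports can be sketched as a simple timestamped event message exchanged among performers and robotic instruments. Signal's actual protocol is not described here, so the message format and function names below are assumptions for illustration only.

```python
# Illustrative sketch of note-level event sharing in a networked
# ensemble. The JSON message format is hypothetical, not Signal's
# actual wire protocol.
import json
import time

def make_note_event(performer: str, pitch: int, velocity: int) -> bytes:
    """Encode a timestamped note event for broadcast to the ensemble."""
    event = {
        "performer": performer,
        "pitch": pitch,           # MIDI note number (0-127)
        "velocity": velocity,     # MIDI velocity (0-127)
        "timestamp": time.time(), # sender clock; receivers would schedule
                                  # playback against a shared clock
    }
    return json.dumps(event).encode("utf-8")

def parse_note_event(raw: bytes) -> dict:
    """Decode a received event, e.g. for a robotic instrument to act on."""
    return json.loads(raw.decode("utf-8"))

msg = make_note_event("laptop-1", pitch=60, velocity=100)
evt = parse_note_event(msg)
print(evt["performer"], evt["pitch"])  # laptop-1 60
```

In a real synchronisation framework the timestamp would be expressed against a clock shared by all nodes, so that robotic instruments with mechanical actuation delay can schedule events consistently.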

