auditory representation
Recently Published Documents


TOTAL DOCUMENTS: 65 (five years: 9)

H-INDEX: 15 (five years: 1)

2021 ◽  
Author(s):  
Annekathrin Weise ◽  
Sabine Grimm ◽  
Johanna M. Rimmele ◽  
Erich Schröger

Numerous studies have revealed that a sound's basic features, such as its frequency and intensity, including their temporal dynamics, are integrated into a unitary representation. That research focused on short, discrete sounds and largely disregarded how our brain processes long-lasting sounds. We review research utilizing the mismatch negativity (MMN) event-related potential and neural oscillatory activity to study representations of long-lasting simple sounds, such as sinusoidal tones, and complex sounds, such as speech. We report evidence for a critical temporal constraint on the formation of adequate representations for sounds lasting more than 350 ms. However, we present research showing that time-variant characteristics (auditory edges) within long-lasting sounds exceeding 350 ms enable the formation of auditory representations. We argue that each edge may open an integration window for a sound representation and that the representations established in adjacent temporal windows of integration can be concatenated into an auditory representation of a long sound.
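The proposed mechanism — each auditory edge opening a new temporal window of integration (TWI), with adjacent windows concatenated into one long-sound representation — can be sketched in a few lines. This is a toy illustration only; the function name, the window logic, and the use of the 350 ms constant here are assumptions for clarity, not the authors' implementation.

```python
TWI_MS = 350  # approximate upper limit for forming a unitary representation

def integration_windows(edge_times_ms, sound_dur_ms):
    """Return (start, end) windows opened at each auditory edge.

    Each window runs from its edge until the next edge or until the
    TWI limit is reached, whichever comes first.  Concatenating the
    windows yields a representation spanning the whole long sound.
    """
    windows = []
    for i, t in enumerate(edge_times_ms):
        nxt = edge_times_ms[i + 1] if i + 1 < len(edge_times_ms) else sound_dur_ms
        windows.append((t, min(t + TWI_MS, nxt)))
    return windows

# A 1-second sound with an onset and two internal edges:
print(integration_windows([0, 400, 700], 1000))
# -> [(0, 350), (400, 700), (700, 1000)]
```

Note how the sound region from 350 to 400 ms falls outside every window in this sketch: without an edge to open a new window, material beyond the 350 ms limit would not be integrated, which is the constraint the review describes.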


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7351
Author(s):  
Dominik Osiński ◽  
Marta Łukowska ◽  
Dag Roar Hjelme ◽  
Michał Wierzchoń

The successful development of a system realizing color sonification would enable auditory representation of the visual environment. The primary beneficiaries of such a system would be people who cannot directly access visual information—the visually impaired community. Despite the plethora of sensory substitution devices, developing systems that provide intuitive color sonification remains a challenge. This paper presents design considerations, development, and the usability audit of a sensory substitution device that converts spatial color information into soundscapes. The implemented wearable system uses a dedicated color space and continuously generates natural, spatialized sounds based on the information acquired from a camera. We developed two head-mounted prototype devices and two graphical user interface (GUI) versions. The first GUI is dedicated to researchers, and the second has been designed to be easily accessible for visually impaired persons. Finally, we ran fundamental usability tests to evaluate the new spatial color sonification algorithm and to compare the two prototypes. Furthermore, we propose recommendations for the development of the next iteration of the system.
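The core idea of color sonification — a deterministic mapping from pixel color to sound parameters — can be illustrated with a minimal sketch. The paper uses its own dedicated color space and spatialized natural sounds; the hue-to-pitch and lightness-to-loudness mapping below is a hypothetical simplification, not the authors' algorithm.

```python
import colorsys

def sonify_rgb(r, g, b):
    """Map one RGB pixel (floats in 0..1) to (frequency_hz, amplitude).

    Hue is spread over a one-octave pitch range so that every hue gets
    a distinct pitch; lightness drives amplitude, so darker pixels
    sound quieter.  Purely illustrative parameter choices.
    """
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    freq = 220.0 * (2.0 ** h)   # hue 0..1 -> 220..440 Hz
    amp = l                     # lightness 0..1 -> silence..full volume
    return freq, amp

print(sonify_rgb(1.0, 0.0, 0.0))  # pure red: hue 0 -> (220.0, 0.5)
```

A real-time system would apply such a mapping per image region and then spatialize each resulting sound source, which is where the head-mounted design and the dedicated color space of the actual device come in.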


2021 ◽  
Vol 12 ◽  
Author(s):  
Si Chen ◽  
Yike Yang ◽  
Ratree Wayland

Purpose: This study investigates whether Cantonese-speaking musicians show stronger categorical perception (CP) than Cantonese-speaking non-musicians when perceiving pitch directions generated from Mandarin tones. It also examines whether musicians are more efficient in processing stimuli and more sensitive to subtle differences caused by vowel quality.

Methods: Cantonese-speaking musicians and non-musicians performed a categorical identification task and a discrimination task on rising and falling continua of fundamental frequency generated from the Mandarin level, rising, and falling tones, on two vowels with nine duration values.

Results: Cantonese-speaking musicians exhibited stronger CP of pitch contours than non-musicians in both the identification and discrimination tasks. Compared with non-musicians, musicians were also more sensitive to changes in stimulus duration and to intrinsic F0 in pitch processing.

Conclusion: CP was strengthened by musical experience; musicians benefited more from increased stimulus duration and were more efficient in pitch processing. Musicians might be able to better use the extra time to form an auditory representation with more acoustic detail. Even with greater efficiency in pitch processing, musicians' ability to detect subtle pitch changes caused by intrinsic F0 was not undermined, which is likely due to their superior ability to process temporal information. These results suggest that musicians may have a considerable advantage in learning the tones of a second language.
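In identification tasks like this one, the strength of categorical perception is typically read off the slope of a logistic function fitted to the response curve over the stimulus continuum, with the 50% crossover taken as the category boundary. The sketch below evaluates such a curve with invented parameter values; it is an illustration of the analysis concept, not the authors' fitted model.

```python
import math

def logistic_id(step, boundary, slope):
    """Probability of (say) a 'rising tone' response at a continuum step."""
    return 1.0 / (1.0 + math.exp(-slope * (step - boundary)))

# A steeper identification slope means a sharper category boundary,
# i.e. stronger categorical perception, as reported for the musician
# group relative to the non-musicians.  Parameter values are made up.
musician = [logistic_id(s, boundary=5.0, slope=2.0) for s in range(1, 10)]
nonmusician = [logistic_id(s, boundary=5.0, slope=0.7) for s in range(1, 10)]

print(round(musician[4], 2), round(nonmusician[4], 2))  # both 0.5 at the boundary
print(round(musician[6], 2), round(nonmusician[6], 2))  # steeper curve is closer to 1
```

Discrimination data complement this: stronger CP predicts better discrimination for stimulus pairs straddling the boundary than for pairs within a category.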


2021 ◽  
Vol 25 ◽  
pp. 233121652110012
Author(s):  
Thomas Biberger ◽  
Henning Schepker ◽  
Florian Denk ◽  
Stephan D. Ewert

Smart headphones or hearables use different types of algorithms such as noise cancelation, feedback suppression, and sound pressure equalization to eliminate undesired sound sources or to achieve acoustical transparency. Such signal processing strategies might alter the spectral composition or interaural differences of the original sound, which might be perceived by listeners as monaural or binaural distortions and thus degrade audio quality. To evaluate the perceptual impact of these distortions, subjective quality ratings can be used, but these are time-consuming and costly. Auditory-inspired instrumental quality measures can be applied with less effort and may also be helpful in identifying whether the distortions impair the auditory representation of monaural or binaural cues. Therefore, the goals of this study were (a) to assess the applicability of various monaural and binaural audio quality models to distortions typically occurring in hearables and (b) to examine the effect of those distortions on the auditory representation of spectral, temporal, and binaural cues. Results showed that the signal processing algorithms considered in this study mainly impaired (monaural) spectral cues. Consequently, monaural audio quality models that capture spectral distortions achieved the best prediction performance. A recent audio quality model that predicts monaural and binaural aspects of quality was revised based on parts of the current data involving binaural audio quality aspects, leading to improved overall performance indicated by a mean Pearson linear correlation of 0.89 between obtained and predicted ratings.
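The evaluation metric cited here, the Pearson linear correlation between obtained (subjective) and predicted (model) ratings, is straightforward to compute. A minimal self-contained version is sketched below; the rating values are invented for illustration and are not the study's data.

```python
import math

def pearson_r(x, y):
    """Pearson linear correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

obtained  = [2.1, 3.4, 4.0, 1.5, 3.9]   # hypothetical subjective quality ratings
predicted = [2.3, 3.1, 4.2, 1.8, 3.5]   # hypothetical model predictions

print(round(pearson_r(obtained, predicted), 2))  # -> 0.96
```

A correlation near 1 means the model ranks and scales the conditions much like listeners do; the study's revised model reached a mean correlation of 0.89 across its binaural test data.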


Author(s):  
Anzhelina Mamykina ◽  
Alla Grinchenko

The article is devoted to the phenomenon of auditory representations: their content, their correspondence to musical content, and their formation, tuning, and implementation in the performance process of the future teacher during piano training. The purpose of the article is to substantiate the pedagogical conditions and develop methods aimed at forming the future music teacher's musical-auditory representations. The goal is realised through relevant tasks using the methods of theoretical research: analysis, synthesis, deduction, systematisation, and pedagogical observation. In the article, musical-auditory representation is considered a professional skill of a musician, formed on the basis of an understanding of semantics (the semantic units of musical language), which facilitates the qualitative reproduction of artistic and figurative content and maximises the efficiency of the musician's own performance in creating artistic-pedagogical and performing interpretations. The skills acquired by the student during the formation of auditory representations are specified, namely: analytical skills (identifying and understanding the symbolism of musical language, its genre, and its style; comparing semantic constructs of different musical directions), along with reflexive, figurative-auditory, and sensorimotor skills. The following pedagogical conditions for the formation of auditory representations are offered: gradual expansion of the musical thesaurus in the course of profession-centred training; stimulation of auditory perception by recognising and understanding the elements of musical language; and direction of musical-perceptual experience toward coordinating auditory representations with the sign-semantic context of the performed works.
In accordance with the defined conditions, a number of methods have been developed: comparative textual analysis; figurative-auditory analysis; perceptual-auditory analysis; the associative music model; semantic identification; artistic reincarnation; polytonation expressiveness; and the tactile-auditory method. Further research involves developing future Arts teachers' auditory representations in classes on accompaniment and ensemble playing, taking into account the specifics of these subjects.


2020 ◽  
Author(s):  
Mattioni Stefania ◽  
Rezk Mohamed ◽  
Ceren Battal ◽  
Jyothirmayi Vadlamudi ◽  
Collignon Olivier

Visual deprivation triggers enhanced dependence on auditory representation. It has been suggested that (auditory) temporal regions sharpen their response to sounds in visually deprived people. In contrast with this view, we show that the coding of sound categories is enhanced in the occipital cortex but, importantly, reduced in the temporal cortex of early and, to a lesser extent, of late blind people. Importantly, the representation of sound categories in occipital and temporal regions is characterized by a similar ‘human-centric’ structure in blind people, supporting the idea that these decreased intramodal and increased crossmodal representations are linked. We suggest that early, and to some extent late, blindness induces network-level reorganization of the neurobiology of sound categories by concomitantly increasing and decreasing the respective computational loads of occipital and temporal regions.


2020 ◽  
Vol 117 (26) ◽  
pp. 15242-15252 ◽  
Author(s):  
Denis Archakov ◽  
Iain DeWitt ◽  
Paweł Kuśmierek ◽  
Michael Ortiz-Rios ◽  
Daniel Cameron ◽  
...  

Human speech production requires the ability to couple motor actions with their auditory consequences. Nonhuman primates might not have speech because they lack this ability. To address this question, we trained macaques to perform an auditory–motor task producing sound sequences via hand presses on a newly designed device (“monkey piano”). Catch trials were interspersed to ascertain the monkeys were listening to the sounds they produced. Functional MRI was then used to map brain activity while the animals listened attentively to the sound sequences they had learned to produce and to two control sequences, which were either completely unfamiliar or familiar through passive exposure only. All sounds activated auditory midbrain and cortex, but listening to the sequences that were learned by self-production additionally activated the putamen and the hand and arm regions of motor cortex. These results indicate that, in principle, monkeys are capable of forming internal models linking sound perception and production in motor regions of the brain, so this ability is not special to speech in humans. However, the coupling of sounds and actions in nonhuman primates (and the availability of an internal model supporting it) seems not to extend to the upper vocal tract, that is, the supralaryngeal articulators, which are key for the production of speech sounds in humans. The origin of speech may have required the evolution of a “command apparatus” similar to the control of the hand, which was crucial for the evolution of tool use.


2019 ◽  
Author(s):  
Norbert Kopčo ◽  
Peter Lokša ◽  
I-fan Lin ◽  
Jennifer Groh ◽  
Barbara Shinn-Cunningham

Visual calibration of auditory space requires re-alignment of representations differing in 1) format (auditory hemispheric channels vs. visual maps) and 2) reference frame (head-centered vs. eye-centered). Here, a ventriloquism paradigm from Kopčo et al. (J Neurosci, 29, 13809-13814) was used to examine these processes in humans and monkeys for ventriloquism induced within one spatial hemifield. Results show that 1) the auditory representation is adapted even by aligned audio-visual stimuli, and 2) the spatial reference frame is primarily head-centered in humans but mixed in monkeys. These results support the view that the ventriloquism aftereffect is driven by multiple spatially non-uniform processes.
PACS numbers: 43.66.Pn, 43.66.Qp, 43.66.Mk


Author(s):  
Denty Marga Sukma ◽  
Joko Nurkamto ◽  
Nur Arifah Drajati

The understanding of knowledge transfer and information delivery has recently broadened due to the development of educational technology. Information delivery is no longer done through verbal messages alone; multiple modes of presentation, such as verbal and auditory representation, can also serve as alternatives for delivering material. Studies featuring multimedia-based presentation have mostly been conducted to determine its effectiveness in the learning process. In contrast, how multimedia-based presentation can serve as a means of interaction seems underexplored. Therefore, the present study attempts to explore the practice of multimedia-based presentation in an academic speaking classroom and to investigate the interactivity that emerged during the presentation process. This study deployed a qualitative case study design in order to gain an in-depth understanding of the use of multimedia-based presentation and the interactivity that emerged in the academic speaking classroom. The study was conducted in the English Education department of a university in Surakarta, where academic speaking is one of the subjects. The presentation documents and the presentation process were analyzed. The results of the analysis show that multimedia-based presentations are designed to visualize the material being conveyed through icons, pictures, and illustrations that can represent information or knowledge in a more concrete way. Moreover, interactivity also emerged through the use of multimedia-based presentation, as it makes it easier for the presenter to gesture, dialogue, and describe. The results imply an opportunity for both teachers and academic speaking presenters to innovate in how they present material by using multimedia-based presentation.
In practice, multimedia-based presentation, along with its interactivity, can clarify the material, grab the audience's attention, and stimulate audience responses.

