simultaneous sounds
Recently Published Documents

TOTAL DOCUMENTS: 17 (last five years: 4)
H-INDEX: 3 (last five years: 0)
2021 ◽ Vol 15 ◽ Author(s): Mark D. Fletcher

Cochlear implants (CIs) have been remarkably successful at restoring hearing in severely-to-profoundly hearing-impaired individuals. However, users often struggle to deconstruct complex auditory scenes with multiple simultaneous sounds, which can result in reduced music enjoyment and impaired speech understanding in background noise. Hearing aid users often have similar issues, though these are typically less acute. Several recent studies have shown that haptic stimulation can enhance CI listening by giving access to sound features that are poorly transmitted through the electrical CI signal. This “electro-haptic stimulation” improves melody recognition and pitch discrimination, as well as speech-in-noise performance and sound localization. The success of this approach suggests it could also enhance auditory perception in hearing-aid users and other hearing-impaired listeners. This review focuses on the use of haptic stimulation to enhance music perception in hearing-impaired listeners. Music is prevalent throughout everyday life: it is critical to media such as film and video games, and it is often central to events such as weddings and funerals. It also represents the biggest challenge for signal processing, as it is typically an extremely complex acoustic signal, containing multiple simultaneous harmonic and inharmonic sounds. Signal-processing approaches developed for enhancing music perception could therefore have significant utility for other key issues faced by hearing-impaired listeners, such as understanding speech in noisy environments. This review first discusses the limits of music perception in hearing-impaired listeners and the limits of the tactile system. It then discusses the evidence on the integration of audio and haptic stimulation in the brain. Next, the features, suitability, and success of current haptic devices for enhancing music perception are reviewed, as well as the signal-processing approaches that could be deployed in future haptic devices. Finally, the cutting-edge technologies that could be exploited for enhancing music perception with haptics are discussed. These include the latest micro-motor and driver technology, low-power wireless technology, machine learning, big data, and cloud computing. New approaches for enhancing music perception in hearing-impaired listeners could substantially improve quality of life. Furthermore, effective haptic techniques for providing complex sound information could offer a non-invasive, affordable means for enhancing listening more broadly in hearing-impaired individuals.
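To make the signal-processing idea concrete, here is a minimal sketch of one plausible audio-to-tactile mapping: low-pass filter the audio, extract its amplitude envelope, and use that envelope to drive a vibrotactile carrier near the skin's peak sensitivity. The scheme and all parameter values are illustrative assumptions, not the specific method described in the review.

```python
# Illustrative audio-to-tactile mapping (assumed scheme, not the review's
# exact method): extract the low-frequency amplitude envelope of a sound
# and use it to modulate a ~250 Hz vibration, near peak tactile sensitivity.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def audio_to_tactile(audio, fs, carrier_hz=250.0, cutoff_hz=500.0):
    # Keep the low-frequency band, where pitch and rhythm cues live.
    sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
    low = sosfilt(sos, audio)
    # Amplitude envelope via the analytic signal.
    env = np.abs(hilbert(low))
    # Drive a single vibrotactile carrier with the envelope.
    t = np.arange(len(audio)) / fs
    return env * np.sin(2 * np.pi * carrier_hz * t)

fs = 16000
t = np.arange(fs) / fs
# One second of a 220 Hz tone with a 4 Hz amplitude modulation.
audio = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
drive = audio_to_tactile(audio, fs)
```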


2021 ◽ Vol 15 (1) ◽ pp. 3-35 ◽ Author(s): Michael Baumgartner

In an interview discussing Prénom: Carmen (1983), Jean-Luc Godard underlines the correlation between the processes of music-making and filmmaking: ‘Making a film is like performing a quartet’. The emphasis on this relationship between two very different modes of artistic expression, on the act of reflecting upon art in general, and on the final artwork represents Godard’s primary concern in this film. To emphasise this self-reflexive stance in Prénom: Carmen, footage of the Quatuor Prat rehearsing Ludwig van Beethoven’s string quartets is intertwined with fictional material narrating a contemporary version of the Carmen myth. With this alternation, Godard conveys that his conception of cinema emerges from observing how performers create music. Music-making is thus as much a hands-on endeavour as filmmaking itself. Since we have only two hands with which to edit the soundtrack and to mix and arrange its sounds, we can consequently hear only two sounds at the same time. With this self-imposed limitation, Godard shapes the soundtrack of Prénom: Carmen with only two simultaneous sounds. Such an overtly self-conscious approach to film sound shifts the focus onto Beethoven’s music, not only as a key artistic device, but also as an alien element within the film’s surprisingly complex soundscape and, more generally, within the contemporary Carmen story.


2020 ◽ Author(s): Shawn M. Willett ◽ Jennifer M. Groh

Abstract: How we distinguish multiple simultaneous stimuli is uncertain, particularly given that such stimuli sometimes recruit largely overlapping populations of neurons. One hypothesis is that tuning curves might change to limit the number of stimuli driving any given neuron when multiple stimuli are present. To test this hypothesis, we recorded the activity of neurons in the inferior colliculus while monkeys localized either one or two simultaneous sounds differing in frequency. Although monkeys easily distinguished simultaneous sounds (∼90% correct performance), the frequency tuning of inferior colliculus neurons on dual-sound trials did not improve in any obvious way. Frequency selectivity was degraded on dual-sound trials compared to single-sound trials: tuning curves broadened, and frequency accounted for less of the variance in firing rate. These tuning curve changes led a maximum-likelihood decoder to perform worse on dual-sound trials than on single-sound trials. These results fail to support the hypothesis that changes in frequency response functions serve to reduce the overlap in the representation of simultaneous sounds. Instead, they suggest that alternative accounts, such as the recently reported alternation in firing rate between the rates corresponding to each of the two stimuli, offer a more promising approach.
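For readers unfamiliar with the decoding step, the following is a toy version of a maximum-likelihood decoder over frequency tuning curves. The Gaussian tuning shapes, Poisson spiking, and all numbers are assumptions for illustration; this is not the authors' analysis code.

```python
# Toy maximum-likelihood decoder over frequency tuning curves
# (illustrative; Gaussian tuning and Poisson spiking are assumptions).
import numpy as np

rng = np.random.default_rng(0)
freqs = np.linspace(0.5, 8.0, 64)            # candidate frequencies (kHz)
centers = rng.uniform(0.5, 8.0, size=40)     # preferred frequency per neuron

def tuning(f):
    """Mean spike count of each neuron for a sound at frequency f."""
    return 2.0 + 20.0 * np.exp(-0.5 * ((f - centers) / 1.0) ** 2)

def decode_ml(counts):
    """Return the frequency maximizing the Poisson log-likelihood.

    The log(k!) term is constant across candidate frequencies, so it
    is omitted from the likelihood.
    """
    log_liks = [np.sum(counts * np.log(tuning(f)) - tuning(f)) for f in freqs]
    return freqs[int(np.argmax(log_liks))]

true_f = 3.0
counts = rng.poisson(tuning(true_f))         # one trial of spike counts
print(decode_ml(counts))                     # estimate near 3.0
```

Broadened tuning curves flatten the likelihood over candidate frequencies, which is why the decoder in the study performed worse on dual-sound trials.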


2020 ◽ Author(s): Michael W. Weiss ◽ Laura Cirelli ◽ Josh McDermott ◽ Sandra E. Trehub

Many scholars consider preferences for consonance, as defined by Western music theorists, to be based primarily on biological factors, while others emphasize experiential factors, notably the nature of musical exposure. Cross-cultural experiments suggest that consonance preferences are shaped by musical experience, implying that preferences should emerge or become stronger over development for individuals in Western cultures. However, little is known about this developmental trajectory. We measured preferences for the consonance of simultaneous sounds and related acoustic properties in children and adults to characterize their developmental course and dependence on musical experience. In Study 1, adults and children 6 to 10 years of age rated their liking of simultaneous tone combinations (dyads) and affective vocalizations. Preferences for consonance increased with age and were predicted by changing preferences for harmonicity (the degree to which a sound's frequencies are multiples of a common fundamental frequency), but not by evaluations of beating (fluctuations in amplitude that occur when frequencies are close but not identical, producing the sensation of acoustic roughness). In Study 2, musically trained adults and 10-year-old children also rated the same stimuli. Age and musical training were associated with enhanced preference for consonance. Both measures of experience were associated with an enhanced preference for harmonicity but were unrelated to evaluations of beating stimuli. The findings are consistent with cross-cultural evidence and with the effects of musicianship in Western adults in linking Western musical experience to preferences for consonance and harmonicity.
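The beating described above follows directly from the arithmetic of superposed sinusoids: two tones at frequencies f1 and f2 sum to a carrier at their mean frequency whose amplitude waxes and wanes |f1 - f2| times per second. A short sketch with illustrative values (not the study's stimuli):

```python
# Two close tones beat at the difference frequency |f1 - f2|
# (illustrative values; not the stimuli used in the study).
import numpy as np
from scipy.signal import hilbert

fs = 44100
t = np.arange(fs) / fs                     # one second of samples
f1, f2 = 440.0, 444.0
mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

envelope = np.abs(hilbert(mix))            # slow amplitude fluctuation
# The envelope rises and falls |f1 - f2| = 4 times per second; at higher
# beat rates listeners hear this fluctuation as acoustic roughness.
```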


Author(s): Robert Hasegawa

Just intonation is a system of tuning musical intervals based on simple ratios between the frequencies of their constituent pitches. For voices and most musical instruments, just intonation minimizes the acoustical interference between simultaneous sounds and leads to the highest degree of blending and consonance. Though its roots are ancient, twentieth-century composers revived just intonation toward new aesthetic ends. The idea of using ratios to quantify interval size originated in ancient Greek music theory: in Pythagorean intonation, all intervals are measured with ratios made solely of multiples of the integers 2 and 3. In response to the growing use of thirds and sixths in the fifteenth century, Renaissance theorists expanded Pythagorean intonation to include multiples of 5, replacing the tense Pythagorean major third, 81/64, with the mellifluous just major third, 5/4; in all ratio-based tunings, simpler ratios produce smoother, more consonant intervals. Musicologists typically reserve the term ‘just intonation’ for this Renaissance system, though it is also used metonymically to refer to all ratio-based tuning systems.
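The gap between the two major thirds is easy to quantify. Interval sizes are conventionally compared in cents, where an interval with frequency ratio r spans 1200 * log2(r) cents; the ratio between the Pythagorean and just thirds, 81/80, is the syntonic comma of roughly 21.5 cents. A quick check:

```python
# Compare the Pythagorean and just major thirds in cents.
from fractions import Fraction
from math import log2

def cents(ratio):
    """Size of an interval with the given frequency ratio, in cents."""
    return 1200 * log2(float(ratio))

pythagorean_third = Fraction(81, 64)   # four stacked 3/2 fifths, down two octaves
just_third = Fraction(5, 4)            # Renaissance 5-limit third

print(cents(pythagorean_third))               # ~407.82 cents
print(cents(just_third))                      # ~386.31 cents
print(cents(pythagorean_third / just_third))  # syntonic comma, ~21.51 cents
```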


2017 ◽ Vol 35 (2) ◽ pp. 144-164 ◽ Author(s): Sven-Amin Lembke ◽ Scott Levine ◽ Stephen McAdams

Achieving a blended timbre between two instruments is a common aim of orchestration. It relates to the auditory fusion of simultaneous sounds and can be linked to several acoustic factors (e.g., temporal synchrony, harmonicity, spectral relationships). Previous research has left unanswered whether and how musicians control these factors during performance to achieve blend. For instance, timbral adjustments could be oriented towards the leading performer. To study such adjustments, pairs consisting of one bassoon player and one horn player participated in a performance experiment that involved several musical and acoustical factors. Performances were evaluated through acoustic measures and behavioral ratings, investigating differences across performer roles as leaders or followers, unison or non-unison intervals, and earlier or later segments of performances. The acoustical influence of the performance room and of impaired communication between players was also investigated. Role assignments affected spectral adjustments in that musicians acting as followers adjusted toward a “darker” timbre (i.e., they lowered the frequency of the main formant or the spectral centroid). Notably, these adjustments occurred together with slight reductions in sound level, although this was more apparent for horn than for bassoon players. Furthermore, coordination seemed more critical in unison performances and also improved over the course of a performance. These findings parallel similar dependencies in how performers coordinate their timing and suggest that performer roles also determine the nature of the adjustments necessary to achieve the common aim of a blended timbre.
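The spectral centroid mentioned above is the amplitude-weighted mean frequency of a signal's spectrum; lowering it is what makes a timbre "darker". A minimal sketch of the generic measure follows (the authors' exact analysis pipeline may differ):

```python
# Spectral centroid: the amplitude-weighted mean frequency of a signal's
# spectrum. Lowering it corresponds to a "darker" timbre.
import numpy as np

def spectral_centroid(signal, fs):
    mags = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return np.sum(freqs * mags) / np.sum(mags)

fs = 44100
t = np.arange(fs) / fs
# A tone whose upper harmonics are attenuated has a lower centroid.
bright = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 9))
dark = sum(np.sin(2 * np.pi * 220 * k * t) / k**2 for k in range(1, 9))
print(spectral_centroid(bright, fs) > spectral_centroid(dark, fs))  # True
```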


2017 ◽ Author(s): Valeria C. Caruso ◽ Jeff T. Mohl ◽ Christopher Glynn ◽ Jungah Lee ◽ Shawn M. Willett ◽ ...

Abstract: How the brain preserves information about multiple simultaneous items is poorly understood. We report that single neurons can represent multiple different stimuli by interleaving different signals across time. We record single units in an auditory region, the inferior colliculus, while monkeys localize one or two simultaneous sounds. During dual-sound trials, we find that some neurons fluctuate between the firing rates observed for each single sound, either on a whole-trial or on a sub-trial timescale. These fluctuations are correlated in pairs of neurons, can be predicted by the state of local field potentials prior to sound onset, and, in one monkey, can predict which sound will be reported first. We find corroborating evidence of fluctuating activity patterns in a separate data set involving responses of inferotemporal cortex neurons to multiple visual stimuli. Alternation between activity patterns corresponding to each of multiple items may therefore be a general strategy for enhancing the brain's processing capacity, potentially linking such disparate phenomena as variable neural firing, neural oscillations, and limits in attentional/memory capacity.
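A toy simulation may help to illustrate the whole-trial form of this fluctuation. If a neuron commits to the rate associated with one sound or the other on each dual-sound trial, its trial-wise spike counts form a mixture of the two single-sound distributions rather than clustering at their average. The model and numbers below are assumptions for illustration, not the paper's statistical analysis:

```python
# Toy model of whole-trial rate switching on dual-sound trials
# (illustration only, not the paper's statistical analysis).
import numpy as np

rng = np.random.default_rng(1)
rate_a, rate_b = 8.0, 30.0          # spikes/trial for sound A alone, B alone
n_trials = 1000

# Each dual-sound trial draws from one single-sound rate or the other.
pick_b = rng.random(n_trials) < 0.5
dual_counts = rng.poisson(np.where(pick_b, rate_b, rate_a))

# Contrast: a neuron that simply averaged its inputs would fire at the mean.
avg_counts = rng.poisson((rate_a + rate_b) / 2, size=n_trials)

# The switching neuron's counts are bimodal (peaks near 8 and 30);
# the averaging neuron's counts cluster unimodally near 19.
print(np.histogram(dual_counts, bins=range(0, 50, 5))[0])
print(np.histogram(avg_counts, bins=range(0, 50, 5))[0])
```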


2014 ◽ Vol 134 ◽ pp. 89-102 ◽ Author(s): Oliver Thomas

Abstract: This article examines four connected aspects of Phemius' performance in Odyssey 1. The first section examines the poet's unusual technique in relating Phemius' music to other, simultaneous sounds in the ‘soundscape’ of Odysseus' hall. The second argues that the suitors' initial dancing develops into a theme of appropriate and inappropriate nimbleness which, in particular, creates significant connections between books 1 and 22. The third section shows that the poet is suggestive but studiedly vague on the politics of Phemius' first song which, in the final section, I interpret as a self-reflexive and open-ended ‘lesson’ in how to read epic.


2013 ◽ Vol 31 (1) ◽ pp. 46-58 ◽ Author(s): Ben Duane

This study uses a corpus of excerpts from eighteenth- and early nineteenth-century string quartets to examine how four acoustic cues (onset and offset synchrony, pitch comodulation, and spectral overlap) help to afford the perception of auditory streams. Two types of streams are considered: textural streams, which house individual string parts or groups of parts that function as single musical units; and music streams, which typically house the music as a whole and distinguish it from other music sounding at the same time. The corpus contained real excerpts from classical string quartets as well as synthesized excerpts in which lines from two different quartets were combined. Both the author and ten survey respondents analyzed the corpus, identifying likely textural streams. Each of the four acoustic cues was modeled computationally to assess its prevalence in the textural and music streams found in the corpus. The results suggested that some cues are more important than others in establishing textural streams, music streams, or both.
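As an illustration of how such cues can be modeled, here is one simple way to quantify onset synchrony: the fraction of note onsets in one part that are matched by an onset in another part within a small tolerance window. This measure and its parameters are assumptions for illustration; the study's actual models may differ.

```python
# Onset synchrony between two parts: fraction of onsets in part 1 that are
# matched by an onset in part 2 within a tolerance window.
# (Illustrative measure; the study's computational models may differ.)
import numpy as np

def onset_synchrony(onsets1, onsets2, tol=0.05):
    onsets2 = np.asarray(onsets2)
    matched = [np.any(np.abs(onsets2 - t) <= tol) for t in onsets1]
    return np.mean(matched)

violin1 = [0.00, 0.50, 1.00, 1.52, 2.00]   # onset times in seconds
violin2 = [0.01, 0.49, 1.02, 1.75, 2.01]
print(onset_synchrony(violin1, violin2))   # 0.8: four of five onsets align
```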

