acoustic instruments
Recently Published Documents


TOTAL DOCUMENTS

97
(FIVE YEARS 22)

H-INDEX

10
(FIVE YEARS 1)

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Maxime Geoffroy ◽  
Tom Langbehn ◽  
Pierre Priou ◽  
Øystein Varpe ◽  
Geir Johnsen ◽  
...  

Abstract: In situ observations of pelagic fish and zooplankton with optical instruments usually rely on external light sources. However, artificial light may attract or repel marine organisms, resulting in biased measurements. It is often assumed that most pelagic organisms do not perceive the red part of the visible spectrum and that red light can therefore be used for underwater optical measurements of biological processes. Using hull-mounted echosounders above an acoustic probe or a baited video camera, each equipped with light sources of different colours (white, blue and red), we demonstrate that pelagic organisms in Arctic and temperate regions strongly avoid artificial light, including visible red light (575–700 nm), from instruments lowered into the water column. The density of organisms decreased by up to 99% when exposed to artificial light, and the distance of avoidance varied from 23 to 94 m from the light source, depending on colour, irradiance level and, possibly, species community. We conclude that observations from optical and acoustic instruments, including baited cameras, using light sources with broad spectral composition in the 400–700 nm range do not capture the real state of the ecosystem and cannot be used alone for reliable abundance estimates or behavioural studies.


2021 ◽  
Author(s):  
Rebecca Jane Scarratt ◽  
Ole Adrian Heggli ◽  
Peter Vuust ◽  
Kira Vibe Jespersen

Sleep problems are increasing in modern society. Throughout history, lullabies have been used to soothe children to sleep, and today, with the increasing accessibility of recorded music, many people report listening to music as a tool to improve sleep. Nevertheless, we know very little about this common human habit. In this study, we elucidate the characteristics of music used for sleep by extracting audio features from a large number of tracks (N = 225,927) drawn from 989 sleep playlists retrieved from the global streaming platform Spotify. We found that, compared to music in general, music used for sleep is softer and slower, and is more often instrumental (i.e. without lyrics) and played on acoustic instruments. Yet sleep music showed substantial variation, clustering into six distinct subgroups. Strikingly, three of these subgroups included popular mainstream tracks that are faster, louder and more energetic than average sleep music. The findings reveal previously unknown aspects of sleep music and highlight the individual variation in the choice of music for facilitating sleep. By using digital traces, we were able to determine the universal and subgroup characteristics of sleep music in a unique, global dataset. This study can inform the clinical use of music and advance our understanding of how music is used to regulate human behaviour in everyday life.
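The subgroup analysis described in this abstract amounts to clustering tracks in an audio-feature space. As a hedged illustration (the feature values and group labels below are synthetic and invented; the study's actual features come from Spotify, and its clustering method may differ), here is a minimal k-means on two-dimensional (tempo, energy) vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic (tempo_bpm, energy) vectors: a soft/slow group and a faster,
# more energetic group (values invented for illustration only).
soft = rng.normal([70.0, 0.2], [5.0, 0.05], size=(50, 2))
energetic = rng.normal([120.0, 0.7], [5.0, 0.05], size=(50, 2))
X = np.vstack([soft, energetic])

def kmeans(X, centers, iters=10):
    """Plain k-means: alternate nearest-centre assignment and centroid update."""
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels, centers

# Seed the two centres with one point from each end of the data.
labels, centers = kmeans(X, X[[0, -1]].copy())
print(len(set(labels[:50])), len(set(labels[50:])))  # → 1 1
```

With two well-separated synthetic groups, each group lands entirely in its own cluster; on real playlist features the six subgroups reported in the study would require choosing k and, in practice, feature scaling.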


Author(s):  
Sören Schulze ◽  
Emily J. King

Abstract: We propose an algorithm for the blind separation of single-channel audio signals. It is based on a parametric model that describes the spectral properties of the sounds of musical instruments independently of pitch. We develop a novel sparse pursuit algorithm that matches the discrete frequency spectra from the recorded signal with the continuous spectra delivered by the model. We first use this algorithm to convert an STFT spectrogram of the recording into a novel form of log-frequency spectrogram whose resolution exceeds that of the mel spectrogram. We then exploit the pitch-invariant properties of that representation to identify the sounds of the instruments via the same sparse pursuit method. As the model parameters that characterize the musical instruments are not known beforehand, we train a dictionary that contains them, using a modified version of Adam. Applying the algorithm to various audio samples, we find that it produces high-quality separation results when the model assumptions are satisfied and the instruments are clearly distinguishable, but combinations of instruments with similar spectral characteristics pose a conceptual difficulty. While a key feature of the model is that it explicitly accounts for inharmonicity, inharmonicity can nevertheless still impede the performance of the sparse pursuit algorithm. In general, owing to its pitch invariance, our method is especially suitable for spectra from acoustic instruments and requires only a minimal number of hyperparameters to be preset. Additionally, we demonstrate that a dictionary constructed for one recording can be applied to a different recording with similar instruments without additional training.
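To make the sparse pursuit idea concrete, here is a toy sketch, not the authors' implementation: the harmonic template shape, partial count and decay factor are invented for illustration. It is a greedy, matching-pursuit-style loop that repeatedly picks the pitch-indexed harmonic atom best correlated with the residual spectrum and subtracts its projection.

```python
import numpy as np

def harmonic_template(f0, n_bins, n_partials=5, decay=0.6):
    """Unit-norm magnitude spectrum of an idealized harmonic sound."""
    spec = np.zeros(n_bins)
    for k in range(1, n_partials + 1):
        b = k * f0
        if b < n_bins:
            spec[b] = decay ** (k - 1)
    return spec / np.linalg.norm(spec)

def sparse_pursuit(mixture, candidates, n_atoms=2):
    """Greedily select the harmonic atoms that best explain the spectrum."""
    residual = mixture.astype(float).copy()
    found = []
    for _ in range(n_atoms):
        scores = {f0: residual @ harmonic_template(f0, len(mixture))
                  for f0 in candidates}
        best = max(scores, key=scores.get)
        atom = harmonic_template(best, len(mixture))
        residual -= (residual @ atom) * atom  # subtract the atom's projection
        found.append(best)
    return sorted(found), residual

n_bins = 512
mix = 1.0 * harmonic_template(20, n_bins) + 0.8 * harmonic_template(31, n_bins)
f0s, res = sparse_pursuit(mix, candidates=range(15, 40))
print(f0s)  # → [20, 31]
```

On this synthetic mixture of two non-overlapping harmonic combs the loop recovers both fundamentals exactly; real spectra require the continuous, inharmonicity-aware model and trained dictionary described in the abstract.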


Author(s):  
Satheeshkumar Jeyaraj ◽  
Balaji Ramakrishnan

Coastal processes are the natural processes that operate along coastal zones, driving morphological change through erosion and deposition. The western coast of India is affected by extreme monsoonal wave activity, which can lead to the loss of beaches and leave dunes vulnerable. As a result, understanding the actual near-shore physics and longshore sediment transport becomes a prerequisite for developing an effective coastal zone management strategy. The aim of this study is to quantify and investigate longshore sediment flux driven by wave action, based on sediment trap experiments (Kraus 1987). The trap method of Kraus (1987) is applied across the surf zone, with wave hydrodynamics and currents measured using acoustic instruments.
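For context, the standard bulk estimator of wave-driven longshore transport is the CERC formula: an immersed-weight transport rate I_l = K·P_l, where P_l is the longshore component of the wave-power flux at breaking, converted to a volumetric rate Q = I_l / ((ρ_s − ρ)·g·(1 − p)). The sketch below uses illustrative breaking-wave values (assumed, not measurements from this study):

```python
import math

# Illustrative breaking-wave inputs (assumed values, not from the study)
rho, rho_s, g = 1025.0, 2650.0, 9.81  # seawater density, quartz density, gravity
H_b = 1.5                             # breaking wave height (m)
theta_b = math.radians(10)            # breaking wave angle to the shoreline
gamma, porosity, K = 0.78, 0.4, 0.39  # breaker index, sand porosity, CERC K (H_s)

E = rho * g * H_b**2 / 8                    # wave energy density (J/m^2)
C_g = math.sqrt(g * H_b / gamma)            # group speed at breaking (shallow water)
P_l = E * C_g * math.sin(theta_b) * math.cos(theta_b)  # longshore power flux (W/m)

I_l = K * P_l                               # immersed-weight transport rate (N/s)
Q = I_l / ((rho_s - rho) * g * (1 - porosity))  # volumetric transport rate (m^3/s)
print(f"Q ≈ {Q * 86400:.0f} m^3/day")
```

In practice the coefficient K must be calibrated; direct trap experiments such as Kraus (1987) provide exactly the kind of measurement against which such bulk formulas are checked.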


2020 ◽  
Vol 27 (4) ◽  
pp. 95-107
Author(s):  
Luan Luiz Gonçalves ◽  
Flávio Luiz Schiavoni

Music has been influenced by digital technology over the last few decades. With the computer and Digital Musical Instruments (DMIs), musical composition could move beyond acoustic instruments, but this demands of musicians and composers a degree of computer programming skill for the development of musical applications. To simplify such development, several tools and musical programming languages have arisen, offering lay-musicians facilities for making music with the computer. This work presents the development of a Visual Programming Language (VPL) for building DMI applications in the Mosaicode programming environment, simplifying sound design and making the creation of digital instruments more accessible to digital artists. We also present the implementation of the libmosaic-sound library, which supported the development of the VPL, for the specific domain of Music Computing and DMI creation.


2020 ◽  
Vol 43 (4) ◽  
pp. 25-40
Author(s):  
Tom Mudd ◽  
Simon Holland ◽  
Paul Mulholland

Nonlinear dynamic processes are fundamental to the behavior of acoustic musical instruments, as is well explored in the case of sound production. Such processes may have profound and under-explored implications for how musicians interact with instruments, however. Although nonlinear dynamic processes are ubiquitous in acoustic instruments, they are present in digital musical tools only if explicitly implemented. Thus, an important resource with potentially major effects on how musicians interact with acoustic instruments is typically absent in the way musicians interact with digital instruments. Twenty-four interviews with free-improvising musicians were conducted to explore the role that nonlinear dynamics play in the participants' musical practices and to understand how such processes can afford distinctive methods of creative exploration. Thematic analysis of the interview data is used to demonstrate the potential for nonlinear dynamic processes to provide repeatable, learnable, controllable, and explorable interactions, and to establish a vocabulary for exploring nonlinear dynamic interactions. Two related approaches to engaging with nonlinear dynamic behaviors are elaborated: edge-like interaction, which involves the creative use of critical thresholds; and deep exploration, which involves exploring the virtually unlimited subtleties of a small control region. The elaboration of these approaches provides an important bridge that connects the concrete descriptions of interaction in musical practices, on the one hand, to the more-abstract mathematical formulation of nonlinear dynamic systems, on the other.
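The "edge-like interaction" described above hinges on critical thresholds in a control parameter. A textbook illustration (the logistic map, not an instrument model from the paper): smoothly increasing a single parameter past the critical value r = 3 switches the long-run behaviour from a fixed point to an oscillation.

```python
def settled_values(r, n_transient=500, n_keep=8, x0=0.5):
    """Iterate the logistic map x -> r*x*(1-x) and return its long-run values."""
    x = x0
    for _ in range(n_transient):  # discard the transient
        x = r * x * (1 - x)
    out = set()
    for _ in range(n_keep):       # collect the settled orbit
        x = r * x * (1 - x)
        out.add(round(x, 6))
    return out

# Below the critical threshold r = 3 the map settles to one fixed point;
# just above it, the same rule oscillates between two values.
print(len(settled_values(2.8)), len(settled_values(3.2)))  # → 1 2
```

The same qualitative picture, where a small motion of a control parameter produces a large behavioural change near a threshold, is what makes nonlinear acoustic instruments explorable at their "edges".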


2020 ◽  
Vol 25 (2) ◽  
pp. 187-197
Author(s):  
Cat Hope

A growing number of musicians are recognising the importance of rethinking notation and its capacity to support contemporary practice. New music is increasingly collaborative and polystylistic, engaging a greater range of sounds from both acoustic and electronic instruments. Contemporary compositional approaches combine composition, improvisation, found sounds, production and multimedia elements, but common practice music notation has not evolved to reflect these developments. While traditional notations remain the most effective way to communicate information about tempered harmony and the subdivision of metre for acoustic instruments, graphic and animated notations may provide an opportunity for the representation and communication of electronic music. If there is a future for notating electronic music, the microtonality, interactivity, non-linear structures, improvisation, aleatoricism and lack of conventional rhythmic structures that characterise it will not be facilitated by common practice notation. This article proposes that graphic and animated notations do have the capacity to serve electronic music, and music that combines electronic and acoustic instruments, as they enable increased input from performers of any musical style and reflect the collaborative practices that are a hallmark of current music practice. The article examines some of the ways digitally rendered graphic and animated notations can represent contemporary electronic music-making and foster collaboration between musicians and composers of different musical genres, integrating electronic and acoustic practices.

