Timbre Comparison in Note Tracking from Onset, Frames and Pitch Estimation

2020 ◽ Vol 8
Author(s): Carlos Hernández Oliván, Ignacio Zay Pinilla, José Ramón Beltrán Blázquez

Note Tracking (NT) is a subtask of Automatic Music Transcription (AMT), which is a critical problem in the field of Music Information Retrieval (MIR). The aim of this work is to compare the performance of two models, one based on onset and frame prediction and the other combining pitch detection with a note tracking algorithm, in order to study how different timbres and instrument families behave in note tracking subtasks.
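
Comparisons of this kind are typically scored with note-level precision, recall, and F-measure. As a hedged illustration only, and not the authors' evaluation code, the sketch below scores a small set of estimated notes against a reference using the mir_eval library; the note values are invented for the example.

```python
# Illustrative only: mir_eval-based note-level scoring, not the authors' evaluation code.
import numpy as np
import mir_eval

# Reference and estimated notes as (onset, offset) intervals in seconds plus pitches in Hz.
ref_intervals = np.array([[0.50, 1.00], [1.00, 1.50], [1.50, 2.25]])
ref_pitches = np.array([440.00, 493.88, 523.25])            # A4, B4, C5
est_intervals = np.array([[0.52, 0.98], [1.03, 1.48], [1.50, 2.10]])
est_pitches = np.array([440.00, 493.88, 554.37])            # last note mis-estimated as C#5

# A note matches if its onset is within 50 ms, its pitch within 50 cents, and (by default)
# its offset within 20% of the reference duration or 50 ms, whichever is larger.
precision, recall, f_measure, overlap = mir_eval.transcription.precision_recall_f1_overlap(
    ref_intervals, ref_pitches, est_intervals, est_pitches)
print(f"P={precision:.3f} R={recall:.3f} F={f_measure:.3f}")
```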

Electronics ◽ 2021 ◽ Vol 10 (7) ◽ pp. 810
Author(s): Carlos Hernandez-Olivan, Ignacio Zay Pinilla, Carlos Hernandez-Lopez, Jose R. Beltran

Automatic music transcription (AMT) is a critical problem in the field of music information retrieval (MIR). When AMT is approached with deep neural networks, the variety of timbres across instruments is an issue that has not yet been studied in depth. The goal of this work is to address AMT by first analyzing how timbre affects monophonic transcription with an approach based on the CREPE neural network, and then improving the results by performing polyphonic transcription across different timbres with a second approach based on the Deep Salience model, which performs polyphonic transcription on the Constant-Q Transform. The results of the first method show that the timbre and envelope of the onsets have a strong impact on the AMT results, and the second method shows that the developed model is less dependent on onset strength than other state-of-the-art models that address AMT for piano sounds, such as Google Magenta's Onsets and Frames (OaF). For non-piano instruments, our polyphonic transcription model outperforms the state of the art; for bass instruments, for example, it achieves an F-score of 0.9516 versus 0.7102. In a final experiment, we also show how adding an onset detector to our model can improve on the results reported in this work.
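
Neither model's code appears in the abstract; as a rough, hedged sketch of the two entry points it names, the fragment below runs monophonic pitch estimation with the CREPE package and computes a Constant-Q Transform of the kind consumed by Deep Salience-style models. The file name and the exact preprocessing (sample rate, hop size, 6 octaves at 60 bins per octave) are assumptions, not the authors' configuration.

```python
# Sketch only: CREPE monophonic pitch tracking and a CQT front end; parameters are illustrative.
import crepe
import librosa
import numpy as np

audio_path = "example.wav"                 # hypothetical input file
y, sr = librosa.load(audio_path, sr=16000, mono=True)

# Monophonic pitch estimation with CREPE (first approach discussed above).
time, frequency, confidence, activation = crepe.predict(y, sr, viterbi=True)

# Constant-Q Transform, the kind of input representation used by Deep Salience-style
# models (second approach); this bin layout is a common choice, not necessarily the authors'.
cqt = librosa.cqt(y, sr=sr, hop_length=512, fmin=librosa.note_to_hz("C1"),
                  n_bins=6 * 60, bins_per_octave=60)
log_cqt = librosa.amplitude_to_db(np.abs(cqt), ref=np.max)
print(time.shape, frequency.shape, log_cqt.shape)
```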


Author(s): Josh Weese

Pitch detection and instrument identification can be achieved with relatively high accuracy for monophonic signals in music; however, accurately classifying polyphonic signals remains an unsolved research problem. Pitch and instrument classification is a subset of music information retrieval (MIR) and automatic music transcription, both of which have numerous research and real-world applications. Several areas of research are covered in this chapter, including the fast Fourier transform, onset detection, convolution, and filtering. Polyphonic signals with many different voices and frequencies can be exceptionally complex. This chapter presents a new model for representing the spectral structure of polyphonic signals: the Uniform MAx Gaussian Envelope (UMAGE). The new spectral envelope closely approximates the distribution of frequency parts in the spectrum while remaining resistant to rapid oscillation, and it generalizes well without losing the representation of the original spectrum.
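
The chapter's UMAGE implementation is not reproduced here. As a hedged sketch of the general idea its name suggests, a pointwise maximum over Gaussians placed at spectral peaks of an FFT magnitude spectrum, the toy code below builds such an envelope. The function name, peak-picking settings, and Gaussian width are illustrative assumptions, not the chapter's method.

```python
# Illustrative approximation of a max-of-Gaussians spectral envelope; not the chapter's UMAGE code.
import numpy as np
from scipy.signal import find_peaks

def max_gaussian_envelope(magnitude, freqs, sigma_hz=50.0, prominence=0.05):
    """Place a Gaussian at each prominent spectral peak and return their pointwise maximum."""
    peaks, _ = find_peaks(magnitude / magnitude.max(), prominence=prominence)
    envelope = np.zeros_like(magnitude)
    for p in peaks:
        gauss = magnitude[p] * np.exp(-0.5 * ((freqs - freqs[p]) / sigma_hz) ** 2)
        envelope = np.maximum(envelope, gauss)   # uniform max over all Gaussians
    return envelope

# Toy polyphonic-like spectrum: two fundamentals with a few harmonics each.
sr, n_fft = 44100, 8192
freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
t = np.arange(n_fft) / sr
signal = sum(np.sin(2 * np.pi * f0 * k * t) / k
             for f0 in (220.0, 330.0) for k in (1, 2, 3))
magnitude = np.abs(np.fft.rfft(signal * np.hanning(n_fft)))
envelope = max_gaussian_envelope(magnitude, freqs)
print(envelope.shape, envelope.max())
```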


Heliyon ◽ 2021 ◽ Vol 7 (2) ◽ pp. e06257
Author(s): Ennio Idrobo-Ávila, Humberto Loaiza-Correa, Rubiel Vargas-Cañas, Flavio Muñoz-Bolaños, Leon van Noorden

2020 ◽ pp. 102986492097216
Author(s): Gaelen Thomas Dickson, Emery Schubert

Background: Music is thought to be beneficial as a sleep aid. However, little research has explicitly investigated the specific characteristics of music that aid sleep, and some researchers assume that music described as generically sedative (slow, with low rhythmic activity) is necessarily conducive to sleep, without directly interrogating this assumption. This study aimed to ascertain the features of music that aid sleep. Method: As part of an online survey, 161 students reported the pieces of music they had used to aid sleep, successfully or unsuccessfully. The participants reported 167 pieces, some more often than others. Nine features of the pieces were analyzed using a combination of music information retrieval methods and aural analysis. Results: Of the pieces reported by participants, 78% were successful in aiding sleep. The features they had in common were that (a) their main frequency register was in the middle range; (b) their tempo was medium; (c) their articulation was legato; (d) they were in the major mode; and (e) lyrics were present. They differed from pieces that were unsuccessful in aiding sleep in that (a) their main frequency register was lower; (b) their articulation was legato; and (c) they excluded high rhythmic activity. Conclusion: Music that aids sleep is not necessarily sedative music as defined in the literature, but some features of sedative music are associated with aiding sleep. In the present study, we identified the specific features of music that were reported to have been successful and unsuccessful in aiding sleep. The identification of these features has important implications for the selection of pieces of music used in research on sleep.
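
As a hedged illustration of the kind of music information retrieval feature extraction such an analysis can involve (tempo, main frequency register, rhythmic activity), the snippet below computes a few descriptors for one piece with the librosa library; the file name, feature choices, and proxies are assumptions, not the study's actual pipeline.

```python
# Illustrative MIR feature extraction for one piece; not the study's analysis code.
import librosa
import numpy as np

audio_path = "piece.mp3"                       # hypothetical track reported by a participant
y, sr = librosa.load(audio_path, mono=True)

tempo, _ = librosa.beat.beat_track(y=y, sr=sr)               # estimated tempo (BPM)
tempo = float(np.atleast_1d(tempo)[0])                       # beat_track may return a 1-element array
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)     # proxy for main frequency register
onset_env = librosa.onset.onset_strength(y=y, sr=sr)         # proxy for rhythmic activity
onsets = librosa.onset.onset_detect(onset_envelope=onset_env, sr=sr)
onset_rate = len(onsets) / librosa.get_duration(y=y, sr=sr)  # onsets per second

print(f"tempo={tempo:.1f} BPM, "
      f"median centroid={np.median(centroid):.0f} Hz, "
      f"onset rate={onset_rate:.2f}/s")
```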

