Automatic Music Transcription
Recently Published Documents


TOTAL DOCUMENTS: 76 (FIVE YEARS: 20)

H-INDEX: 7 (FIVE YEARS: 2)

2021 · Vol 6 (68) · pp. 3391
Author(s): Yu-Te Wu, Yin-Jyun Luo, Tsung-Ping Chen, I-Chieh Wei, Jui-Yang Hsu, ...

Author(s): Carlos de la Fuente, Jose J. Valero-Mas, Francisco J. Castellanos, Jorge Calvo-Zaragoza

Optical Music Recognition (OMR) and Automatic Music Transcription (AMT) are the research fields that aim to obtain a structured digital representation from sheet music images and acoustic recordings, respectively. While these fields have traditionally evolved independently, the fact that both tasks may share the same output representation raises the question of whether they could be combined synergistically to exploit the individual transcription advantages offered by each modality. To evaluate this hypothesis, this paper presents a multimodal framework that combines the predictions from two end-to-end neural OMR and AMT systems using a local alignment approach. We assess several experimental scenarios with monophonic music pieces to evaluate our approach under different conditions of the individual transcription systems. In general, the multimodal framework clearly outperforms the single recognition modalities, attaining a relative improvement close to 40% in the best case. Our initial premise is therefore validated, opening avenues for further research in multimodal OMR-AMT transcription.
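The local-alignment fusion idea can be illustrated with a short sketch. The following Python example is a hypothetical illustration, not the paper's implementation: it aligns the symbol sequences predicted by the OMR and AMT systems with Smith-Waterman-style dynamic programming and merges the aligned pairs; the scoring values and the fall-back-to-OMR tie-break are assumptions made here for demonstration.

```python
# Minimal sketch of OMR/AMT fusion via local alignment.
# NOT the paper's implementation: scores, tie-breaks, and the
# fusion policy below are illustrative assumptions.

def local_align(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment of two symbol sequences."""
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best, best_ij = 0, (0, 0)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            score = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, H[i - 1][j - 1] + score,
                          H[i - 1][j] + gap, H[i][j - 1] + gap)
            if H[i][j] > best:
                best, best_ij = H[i][j], (i, j)
    # Trace back from the best-scoring cell to recover aligned pairs.
    (i, j), pairs = best_ij, []
    while i > 0 and j > 0 and H[i][j] > 0:
        score = match if a[i - 1] == b[j - 1] else mismatch
        if H[i][j] == H[i - 1][j - 1] + score:
            pairs.append((a[i - 1], b[j - 1]))
            i, j = i - 1, j - 1
        elif H[i][j] == H[i - 1][j] + gap:
            pairs.append((a[i - 1], None))
            i -= 1
        else:
            pairs.append((None, b[j - 1]))
            j -= 1
    return pairs[::-1]

def fuse(omr_seq, amt_seq):
    """Keep agreements; fall back to the OMR hypothesis on conflicts."""
    return [o if o is not None else t
            for o, t in local_align(omr_seq, amt_seq)]

print(fuse(list("CEGAB"), list("CFGAB")))  # ['C', 'E', 'G', 'A', 'B']
```

In practice, a combination like this would weight each aligned pair by the per-symbol confidences of the two networks rather than always preferring one modality.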


2021
Author(s): Yuto Ozaki, John McBride, Emmanouil Benetos, Peter Pfordresher, Joren Six, ...

Cross-cultural musical analysis requires a standardized symbolic representation of sounds, such as score notation. However, transcription into notation is usually conducted manually by ear, which is time-consuming and subjective. Our aim is to evaluate the reliability of existing methods for transcribing songs from diverse societies. We had three experts independently transcribe a sample of 32 excerpts of traditional monophonic songs from around the world (half a cappella, half with instrumental accompaniment). Sixteen songs also had pre-existing transcriptions created by three different experts. We compared these human transcriptions against one another and against 10 automatic music transcription algorithms. We found that human transcriptions can be sufficiently reliable (~90% agreement, κ ~ 0.7), but current automated methods are not (<60% agreement, κ < 0.4). No automated method clearly outperformed the others, in contrast to our predictions. These results suggest that improving automated methods for cross-cultural music transcription is critical for diversifying music information retrieval (MIR).
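For context, the agreement statistics quoted above (percent agreement and Cohen's kappa) can be computed as in this minimal sketch, assuming the two transcriptions have already been aligned note by note; the alignment itself is the harder step and is not shown.

```python
# Sketch: percent agreement and Cohen's kappa between two raters'
# note-by-note label sequences (assumed pre-aligned).
from collections import Counter

def agreement_and_kappa(labels_a, labels_b):
    """Return (observed agreement, Cohen's kappa) for aligned sequences."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement expected from each rater's marginal label frequencies.
    ca, cb = Counter(labels_a), Counter(labels_b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

# Two raters labelling the same eight notes, already aligned.
rater_1 = ["C4", "D4", "E4", "E4", "G4", "A4", "C5", "C5"]
rater_2 = ["C4", "D4", "E4", "F4", "G4", "A4", "C5", "B4"]
print(agreement_and_kappa(rater_1, rater_2))  # ~(0.75, 0.71)
```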


Electronics · 2021 · Vol 10 (7) · pp. 810
Author(s): Carlos Hernandez-Olivan, Ignacio Zay Pinilla, Carlos Hernandez-Lopez, Jose R. Beltran

Automatic music transcription (AMT) is a critical problem in the field of music information retrieval (MIR). When AMT is approached with deep neural networks, the variety of timbres across instruments is an issue that has not yet been studied in depth. The goal of this work is to address AMT by first analyzing how timbre affects monophonic transcription, using an approach based on the CREPE neural network, and then to improve the results by performing polyphonic music transcription across different timbres, using a second approach based on the Deep Salience model, which performs polyphonic transcription from the Constant-Q Transform. The results of the first method show that the timbre and envelope of the onsets have a high impact on the AMT results; the second method shows that the developed model is less dependent on the strength of the onsets than other state-of-the-art models that deal with AMT on piano sounds, such as Google Magenta Onsets and Frames (OaF). Our polyphonic transcription model outperforms the state of the art on non-piano instruments; for bass instruments, for example, it achieves an F-score of 0.9516 versus 0.7102. In a final experiment, we also show how adding an onset detector to our model can further improve the results reported in this work.
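As a rough illustration of the two front-ends named in this abstract, the sketch below uses the publicly released crepe package for frame-level monophonic pitch estimation and librosa's Constant-Q Transform as the kind of input representation Deep Salience consumes; the file name, threshold, and CQT parameters are placeholders, not the paper's configuration.

```python
# Sketch only: public crepe + librosa front-ends with placeholder
# parameters; the paper's exact configurations are not given here.
import crepe
import librosa
import numpy as np

audio, sr = librosa.load("song.wav", sr=16000)  # CREPE operates on 16 kHz audio

# Frame-level monophonic pitch with CREPE (f0 in Hz plus a confidence track).
time, frequency, confidence, _ = crepe.predict(audio, sr, viterbi=True)

# Constant-Q Transform: the input representation used by Deep Salience.
cqt = np.abs(librosa.cqt(audio, sr=sr, n_bins=84, bins_per_octave=12))

# Crude monophonic transcription: keep only confidently voiced frames.
voiced = confidence > 0.5
f0_track = list(zip(time[voiced], frequency[voiced]))
```

Note-level F-scores like the 0.9516 reported above are conventionally computed by matching predicted and reference notes on onset time and pitch, e.g. with mir_eval.transcription.precision_recall_f1_overlap.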


2020 · Vol 8
Author(s): Carlos Hernández Oliván, Ignacio Zay Pinilla, José Ramón Beltrán Blázquez

Note Tracking (NT) is a subtask of Automatic Music Transcription (AMT), a critical problem in the field of Music Information Retrieval (MIR). The aim of this work is to compare the performance of two models, one that predicts onsets and frames, and another that combines pitch detection with a note tracking algorithm, in order to study how different timbres and instrument families behave in note tracking subtasks.
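In its simplest form, the note tracking step reduces to grouping consecutive active frames per pitch into note events. The sketch below is a minimal baseline under that assumption, not either of the models compared in this paper; the threshold, minimum duration, and hop time are illustrative values.

```python
# Minimal note-tracking baseline: threshold frame-level activations
# and group consecutive active frames per pitch into note events.
import numpy as np

def track_notes(activations, threshold=0.5, min_frames=3, hop_time=0.01):
    """Return (onset_s, offset_s, pitch) events from a frames-x-pitches array."""
    notes = []
    active = activations >= threshold            # binarised piano roll
    n_frames, n_pitches = active.shape
    for pitch in range(n_pitches):
        onset = None
        for t in range(n_frames):
            if active[t, pitch] and onset is None:
                onset = t                        # note starts
            elif not active[t, pitch] and onset is not None:
                if t - onset >= min_frames:      # drop spurious blips
                    notes.append((onset * hop_time, t * hop_time, pitch))
                onset = None
        if onset is not None and n_frames - onset >= min_frames:
            notes.append((onset * hop_time, n_frames * hop_time, pitch))
    return notes

# Toy example: one pitch active for frames 2-7 (~60 ms at a 10 ms hop).
roll = np.zeros((10, 88))
roll[2:8, 60] = 0.9
print(track_notes(roll))  # [(0.02, 0.08, 60)]
```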

