Temporal convolutional networks for musical audio beat tracking

Author(s):  
Matthew E. P. Davies ◽  
Sebastian Böck

2007 ◽  
Vol 15 (3) ◽  
pp. 1009-1020
Author(s):  
Matthew E. P. Davies ◽  
Mark D. Plumbley

Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1518
Author(s):  
António S. Pinto ◽  
Sebastian Böck ◽  
Jaime S. Cardoso ◽  
Matthew E. P. Davies

The extraction of the beat from musical audio signals represents a foundational task in the field of music information retrieval. While great advances in performance have been achieved due to the use of deep neural networks, significant shortcomings still remain. In particular, performance is generally much lower on musical content that differs from the content contained in the existing annotated datasets used for neural network training, as well as in the presence of challenging musical conditions such as rubato. In this paper, we positioned our approach to beat tracking from a real-world perspective in which an end-user targets very high accuracy on specific music pieces for which the current state of the art is not effective. To this end, we explored the use of targeted fine-tuning of a state-of-the-art deep neural network based on a very limited temporal region of annotated beat locations. We demonstrated the success of our approach via improved performance across existing annotated datasets and a new annotation-correction approach for evaluation. Furthermore, we highlighted the ability of content-specific fine-tuning to learn both what is and what is not the beat in challenging musical conditions.
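The abstract describes the approach only at a high level. As a rough illustration of what fine-tuning a beat-tracking network on a very limited temporal region of annotated beats can look like, the sketch below adapts a small stand-in dilated-convolutional model to a single short excerpt. Everything here is an assumption for illustration: `TinyBeatTCN`, the input shapes, the frame rate, and the training settings are hypothetical and do not reproduce the authors' implementation.

```python
import torch
import torch.nn as nn

class TinyBeatTCN(nn.Module):
    """Stand-in for a pretrained beat-tracking network: a small dilated
    1-D convolutional stack mapping spectrogram frames to a per-frame
    beat activation. Real systems are deeper and pretrained on large
    annotated datasets; this one is random and purely illustrative."""
    def __init__(self, n_bins=81, n_ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_bins, n_ch, kernel_size=3, padding=1), nn.ELU(),
            nn.Conv1d(n_ch, n_ch, kernel_size=3, padding=2, dilation=2), nn.ELU(),
            nn.Conv1d(n_ch, n_ch, kernel_size=3, padding=4, dilation=4), nn.ELU(),
            nn.Conv1d(n_ch, 1, kernel_size=1),
        )

    def forward(self, x):                    # x: (batch, n_bins, n_frames)
        return torch.sigmoid(self.net(x)).squeeze(1)  # (batch, n_frames)

model = TinyBeatTCN()                        # would normally load pretrained weights

# A user-annotated excerpt: ~10 s of spectrogram frames (random stand-in here)
# and a binary per-frame target that is 1 on annotated beat frames.
fps = 100                                    # assumed frames per second
excerpt = torch.randn(1, 81, 10 * fps)
target = torch.zeros(1, 10 * fps)
target[0, ::50] = 1.0                        # pretend beats every 0.5 s (120 BPM)

# Fine-tune with a small learning rate for a handful of steps, so the
# network adapts to this piece without drifting far from its prior.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCELoss()

model.train()
for step in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(excerpt), target)
    loss.backward()
    optimizer.step()
```

In the setting the abstract describes, the starting weights would come from a state-of-the-art model trained on existing annotated datasets, and the adapted model would then be evaluated on the remainder of the piece.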


2007 ◽  
Vol 36 (1) ◽  
pp. 1-16
Author(s):  
M. F. McKinney ◽  
D. Moelants ◽  
M. E. P. Davies ◽  
A. Klapuri