The Influence of the Development of Computer Music Information on Piano Education

Author(s):  
Xiang Zhu
2019
Author(s):
Fábio Gorodscy,
Guilherme Feulo,
Nicolas Figueiredo,
Paulo Vitor Itaboraí,
Roberto Bodo,
...  

The following report presents some of the ongoing projects taking place in the group's laboratory. One of the notable characteristics of this group is its broad research spectrum: the plurality of research areas studied by its members, such as Music Information Retrieval, Signal Processing, and New Interfaces for Musical Expression.


2021
Vol 27 (4)
pp. 139-142
Author(s):
Tiago Fernandes Tavares,
Flávio Luiz Schiavoni

The Brazilian Symposium on Computer Music is a series of events that fosters a rich environment for interdisciplinary discussion. Its 17th edition was held in 2019 in São João Del Rei, MG. This special issue presents five selected papers from the conference's technical program, covering research fields such as sound synthesis, music information retrieval, sound systems, and digital musical instruments.


2021
Vol 11 (13)
pp. 5913
Author(s):
Zhuang He,
Yin Feng

Automatic singing transcription and analysis from polyphonic music records are essential in a number of indexing techniques for computational auditory scenes. To obtain a note-level sequence, we divide the singing transcription task into two subtasks: melody extraction and note transcription. We construct a salience function in terms of harmonic and rhythmic similarity and a measurement of spectral balance. Central to our proposed method is the measurement of melody contours, which are calculated using edge searching based on their continuity properties. We calculate the mean contour salience by separating melody analysis from the adjacent breakpoint connective strength matrix, and we select the final melody contour to determine MIDI notes. This method, which combines audio signal processing with image edge analysis, provides a more interpretable analysis platform for continuous singing signals. Experimental analysis using Music Information Retrieval Evaluation Exchange (MIREX) datasets shows that our technique achieves promising results for both audio melody extraction and polyphonic singing transcription.
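As a rough illustration of this kind of pipeline, the sketch below is not the authors' implementation but a generic harmonic-summation baseline (using librosa and NumPy) that follows the same outline: compute a per-frame pitch salience, track a melody contour under a frequency-continuity constraint, and quantize the selected contour to MIDI note numbers. The function name and all parameter values are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's method): harmonic-summation melody
# tracking with a continuity constraint, quantized to MIDI note numbers.
import numpy as np
import librosa

def melody_to_midi(path, fmin=80.0, fmax=1000.0, n_harmonics=5, max_jump=1.0):
    y, sr = librosa.load(path, sr=22050)
    S = np.abs(librosa.stft(y, n_fft=2048, hop_length=256))
    freqs = librosa.fft_frequencies(sr=sr, n_fft=2048)

    # Candidate f0 grid (10-cent resolution) and harmonic-summation salience.
    f0_grid = fmin * 2.0 ** (np.arange(0, 1200 * np.log2(fmax / fmin), 10) / 1200)
    salience = np.zeros((len(f0_grid), S.shape[1]))
    for h in range(1, n_harmonics + 1):
        bins = np.searchsorted(freqs, f0_grid * h).clip(0, len(freqs) - 1)
        salience += S[bins, :] / h          # weight higher harmonics less

    # Greedy contour tracking: follow the most salient bin, allowing only
    # small semitone jumps between consecutive frames (continuity constraint).
    midi_grid = librosa.hz_to_midi(f0_grid)
    path_bins = [int(np.argmax(salience[:, 0]))]
    for t in range(1, salience.shape[1]):
        prev = midi_grid[path_bins[-1]]
        allowed = np.abs(midi_grid - prev) <= max_jump
        column = np.where(allowed, salience[:, t], -np.inf)
        path_bins.append(int(np.argmax(column)))

    # Quantize the tracked contour to integer MIDI note numbers.
    return np.round(midi_grid[np.array(path_bins)]).astype(int)
```

A real system would add voicing detection and note segmentation on top of this contour; the sketch only shows the salience-and-continuity idea shared with the abstract's outline.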


Heliyon
2021
Vol 7 (2)
pp. e06257
Author(s):
Ennio Idrobo-Ávila,
Humberto Loaiza-Correa,
Rubiel Vargas-Cañas,
Flavio Muñoz-Bolaños,
Leon van Noorden

2020
pp. 102986492097216
Author(s):
Gaelen Thomas Dickson,
Emery Schubert

Background: Music is thought to be beneficial as a sleep aid. However, little research has explicitly investigated the specific characteristics of music that aid sleep, and some researchers assume that music described as generically sedative (slow, with low rhythmic activity) is necessarily conducive to sleep, without directly interrogating this assumption. This study aimed to ascertain the features of music that aid sleep. Method: As part of an online survey, 161 students reported the pieces of music they had used to aid sleep, successfully or unsuccessfully. The participants reported 167 pieces, some more often than others. Nine features of the pieces were analyzed using a combination of music information retrieval methods and aural analysis. Results: Of the pieces reported by participants, 78% were successful in aiding sleep. The features they had in common were that (a) their main frequency register was in the middle range; (b) their tempo was medium; (c) their articulation was legato; (d) they were in the major mode; and (e) lyrics were present. They differed from pieces that were unsuccessful in aiding sleep in that (a) their main frequency register was lower; (b) their articulation was legato; and (c) they excluded high rhythmic activity. Conclusion: Music that aids sleep is not necessarily sedative music, as defined in the literature, but some features of sedative music are associated with aiding sleep. In the present study, we identified the specific features of music that were reported to have been successful and unsuccessful in aiding sleep. The identification of these features has important implications for the selection of pieces of music used in research on sleep.
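For readers who want to reproduce this kind of feature analysis, the sketch below is a hypothetical example (not the study's actual pipeline) of how three of the reported features, namely tempo, main frequency register, and rhythmic activity, could be estimated with librosa; mode, articulation, and lyric presence were assessed aurally in the study and are not computed here.

```python
# Hypothetical sketch (assumed, not the study's analysis code): estimate a few
# of the reported musical features from an audio file with librosa.
import numpy as np
import librosa

def describe_piece(path):
    y, sr = librosa.load(path, sr=22050)

    # Tempo estimate (BPM) from the onset-strength envelope.
    onset_env = librosa.onset.onset_strength(y=y, sr=sr)
    tempo, _ = librosa.beat.beat_track(onset_envelope=onset_env, sr=sr)

    # "Main frequency register": median spectral centroid as a rough proxy.
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    register_hz = float(np.median(centroid))

    # Rhythmic activity: detected onsets per second of audio.
    onsets = librosa.onset.onset_detect(onset_envelope=onset_env, sr=sr)
    onset_rate = len(onsets) / (len(y) / sr)

    return {"tempo_bpm": float(tempo),
            "register_hz": register_hz,
            "onsets_per_sec": onset_rate}
```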


Author(s):  
Dimitrios Rafailidis,
Alexandros Nanopoulos,
Yannis Manolopoulos

In popular music information retrieval systems, users can tag musical objects to express their personal preferences, providing valuable insights into the formation of user groups/communities. In this article, the authors focus on the analysis of social tagging data to reveal coherent groups characterized by their users, tags, and music objects (e.g., songs and artists), which allows discovered groups to be expressed in a multi-aspect way. For each group, this study reveals the most prominent users, tags, and music objects using a generalization of the popular web-ranking concept in the social data domain. Experiments with real data show that each Tag-Aware group corresponds to a specific music topic; additionally, a three-way ranking analysis is performed inside each group. Building Tag-Aware groups is crucial for adding structure to the otherwise unstructured space of tags.
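The abstract does not specify the ranking generalization, so the following is only a minimal sketch of the idea, assuming plain PageRank on the tripartite user-tag-item graph induced by one group's tagging triples; the networkx usage, the function, and the toy data are all illustrative assumptions rather than the paper's method.

```python
# Hypothetical sketch (assumed, not the paper's exact algorithm): rank the most
# prominent users, tags, and music objects inside one discovered group by
# running PageRank on the weighted graph built from its (user, tag, item)
# tagging assignments.
import networkx as nx

def rank_group(triples, k=5):
    """triples: iterable of (user, tag, item) assignments within one group."""
    G = nx.Graph()
    for user, tag, item in triples:
        # Prefix node names so the three node types stay distinguishable.
        u, t, i = f"user:{user}", f"tag:{tag}", f"item:{item}"
        for a, b in ((u, t), (t, i), (u, i)):
            w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
            G.add_edge(a, b, weight=w)

    scores = nx.pagerank(G, weight="weight")

    def top(kind):
        nodes = [n for n in scores if n.startswith(kind)]
        return sorted(nodes, key=scores.get, reverse=True)[:k]

    return {"users": top("user:"), "tags": top("tag:"), "items": top("item:")}

# Example usage with toy data:
print(rank_group([("ana", "jazz", "So What"),
                  ("bob", "jazz", "Blue in Green"),
                  ("ana", "cool jazz", "So What")]))
```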

