Music in All Its Beauty

2021 ◽  
pp. 166-169
Author(s):  
Elvira Brattico ◽  
Vinoo Alluri

This chapter provides a behind-the-scenes account of the birth of a naturalistic approach to the neuroscience of the musical aesthetic experience. The story begins with a lab talk that inspired the translation of the naturalistic paradigm, initially applied to neuroimaging studies of the visual domain, into music research. The circumstantial co-presence of neuroscientists and computational musicologists at the same center did the trick, allowing controlled variables for brain signal processing to be identified through the automatic extraction of acoustic features from real music. This approach is now well accepted by the music neuroscience community, while still awaiting full exploitation by aesthetics research.
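As a rough illustration of the kind of automatic acoustic feature extraction referred to above, the following Python sketch computes frame-wise acoustic features from a real music recording and averages them to the scanner's sampling rate so they can serve as regressors for brain signal analysis. This is a minimal sketch, not the authors' actual pipeline; the librosa library, the placeholder file name, and the 2-second repetition time are all assumptions.

```python
# Minimal sketch (not the authors' pipeline) of extracting acoustic
# features from a real music recording, here with the librosa library.
import librosa
import numpy as np

# Load any full-length music track (path is a placeholder).
y, sr = librosa.load("track.wav", sr=None)

# Frame-wise acoustic features commonly used as controlled variables
# in naturalistic neuroimaging analyses.
rms = librosa.feature.rms(y=y)[0]                            # loudness proxy
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]  # brightness
novelty = librosa.onset.onset_strength(y=y, sr=sr)           # spectral novelty

# Average the features down to one value per fMRI repetition time (TR,
# assumed 2 s here) so they can serve as regressors.
tr_seconds = 2.0
hop = 512  # librosa's default hop length
frames_per_tr = int(tr_seconds * sr / hop)

def to_tr(x, n=frames_per_tr):
    trim = len(x) - len(x) % n
    return x[:trim].reshape(-1, n).mean(axis=1)

regressors = np.column_stack([to_tr(rms), to_tr(centroid), to_tr(novelty)])
print(regressors.shape)  # (number of TRs, 3 features)
```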

2014 ◽  
Vol 606 ◽  
pp. 111-114
Author(s):  
Jan Holub ◽  
Bastien Desbos ◽  
Vítězslav Vacek ◽  
Jiří Kolísko

This paper describes a new method for determining acoustic absorption in situ. For practical reasons, the measurement can be performed on small samples (60 × 60 cm squares). The paper covers all the steps needed to obtain results: device setup, recording, and signal processing.
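The abstract does not give the signal processing details, so the following Python snippet is only a minimal sketch under assumed conditions (the classic impulse-response windowing technique, a placeholder input file, and hand-picked window positions) of one common way such in-situ measurements are evaluated: window the direct and reflected parts of a measured impulse response and compute the absorption coefficient α(f) = 1 − |R(f)|².

```python
# Minimal sketch of in-situ absorption estimation from an impulse response:
# window the direct sound and the surface reflection, then compute
# alpha(f) = 1 - |R(f)|^2. Spherical-spreading correction is omitted.
import numpy as np

fs = 48000                             # assumed sampling rate
ir = np.load("impulse_response.npy")   # placeholder: measured IR over sample

# Assumed sample indices of the direct sound and the reflection; in practice
# these follow from the source/microphone geometry or from peak picking.
t_direct, t_reflect, win_len = 100, 400, 256
window = np.hanning(win_len)

direct = ir[t_direct:t_direct + win_len] * window
reflected = ir[t_reflect:t_reflect + win_len] * window

# Frequency-dependent reflection factor and absorption coefficient.
R = np.fft.rfft(reflected) / np.fft.rfft(direct)
alpha = 1.0 - np.abs(R) ** 2

freqs = np.fft.rfftfreq(win_len, 1.0 / fs)
for f, a in zip(freqs, alpha):
    if 200 <= f <= 2000:
        print(f"{f:7.1f} Hz  alpha = {a:5.2f}")
```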


2022 ◽  
Vol 3 (4) ◽  
pp. 295-307
Author(s):  
Subarna Shakya

Personal computer-based data collection and analysis systems have become more resilient thanks to recent advances in digital signal processing technology. Speaker recognition is a signal processing approach that uses the specific information contained in voice waves to automatically identify the speaker. This study examines systems that can recognize a wide range of emotional states in speech from a single source. Because it offers insight into human brain states, emotion recognition is a hot topic in the development of human-computer interfaces for speech processing; in many such systems it is necessary to recognize the emotional state of the user. This research attempts to discern emotional states such as anger, joy, neutrality, fear, and sadness using classification methods. An acoustic feature that measures unpredictability is used in conjunction with a non-linear signal quantification approach to identify emotions: the unpredictability of each emotional signal is captured in a feature vector constructed from the calculated entropy measurements. Next, the acoustic features of the speech signal are used to train the proposed neural network, whose outputs are passed to a linear discriminant analysis (LDA) stage for further classification. The article also compares the proposed method against modern classifiers such as k-nearest neighbors, support vector machines, and linear discriminant analysis. The great advantage of the proposed algorithm is that it separates negative and positive emotional features and yields good classification results. According to cross-validation results on an available Emotional Speech dataset, a single-source LDA classifier can recognize emotions in speech signals with above 90 percent accuracy across the various emotional states.
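As a minimal sketch of the entropy-feature-plus-LDA pipeline described above (not the article's exact implementation; the frame length, the summary statistics, and the synthetic placeholder corpus are assumptions for illustration), the following Python code extracts frame-wise spectral entropy features and cross-validates an LDA classifier against k-NN and SVM baselines:

```python
# Sketch of emotion classification from entropy-based acoustic features.
import numpy as np
from scipy.stats import entropy
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def entropy_features(signal, frame_len=512):
    """Summarize frame-wise spectral entropy, an 'unpredictability' measure."""
    n_frames = len(signal) // frame_len
    ent = []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        power = np.abs(np.fft.rfft(frame)) ** 2
        ent.append(entropy(power / (power.sum() + 1e-12)))  # Shannon entropy
    ent = np.asarray(ent)
    # Collapse the per-frame entropies into a fixed-length feature vector.
    return np.array([ent.mean(), ent.std(), ent.min(), ent.max()])

# Placeholder corpus: random signals standing in for labeled utterances of
# the five emotional classes; a real study would load a speech dataset.
rng = np.random.default_rng(0)
emotions = ["anger", "joy", "neutral", "fear", "sadness"]
signals = [rng.normal(size=16000) for _ in range(100)]
labels = np.array([emotions[i % 5] for i in range(100)])

X = np.vstack([entropy_features(s) for s in signals])

# Cross-validated comparison of LDA against k-NN and SVM baselines.
for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    scores = cross_val_score(clf, X, labels, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2%}")
```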


2019 ◽  
Author(s):  
Fábio Gorodscy ◽  
Guilherme Feulo ◽  
Nicolas Figueiredo ◽  
Paulo Vitor Itaboraí ◽  
Roberto Bodo ◽  
...  

The following report presents some of the ongoing projects taking place in the group's laboratory. One notable characteristic of this group is its extensive research spectrum: the plurality of research areas studied by its members, such as Music Information Retrieval, Signal Processing, and New Interfaces for Musical Expression.


Author(s):  
Jean-Luc Starck ◽  
Fionn Murtagh ◽  
Jalal Fadili

1996 ◽  
Vol 8 (1) ◽  
pp. 233-247
Author(s):  
S. Mandayam ◽  
L. Udpa ◽  
S. S. Udpa ◽  
W. Lord
