Audio Feature Extraction
Recently Published Documents

TOTAL DOCUMENTS: 42 (FIVE YEARS: 13)
H-INDEX: 5 (FIVE YEARS: 0)

Informatica, 2022, Vol 45 (7)
Author(s): Wala'a Nsaif Jasim, Saba Abdual Wahid Saddam, Esra'a Jasem Harfash

2021, Vol 7 (5), pp. 4799-4809
Author(s): Zhang Jing

Objectives: With the continuous progress of information technology, multimedia teaching formats built on emerging educational technologies carry increasing amounts of audio-visual information; their rich pictures, demonstration images, and integration of sound and image have led to wide use in college and university music teaching. Methods: Multimedia instruction has gradually become the main mode of school music teaching and continues to grow in popularity. The development of the modern music discipline and the reform of school music teaching are therefore analyzed to find a suitable approach for music instruction and to accelerate the development of music education in China. Results: A CAT-based solfeggio and ear-training system built on digital audio technology serves as an example to demonstrate the message mechanism. Audio feature extraction and matching techniques are discussed, and several types of audio feature extraction methods and their characteristics are analyzed. Conclusion: The design of the overall framework and the implementation of the core algorithm of the solfeggio and ear-training learning assistant system are completed. Tests show that the algorithm works well within the system, making it more flexible and scalable, and leaving room for further extension.
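The abstract mentions audio feature extraction methods only in general terms. As an illustration of the kind of frame-level features such systems compute (not the paper's implementation), here is a minimal NumPy sketch of two classic ones, zero-crossing rate and spectral centroid; the frame length and hop size are illustrative defaults:

```python
import numpy as np

def extract_features(signal, sr=22050, frame_len=1024, hop=512):
    """Compute per-frame zero-crossing rate and spectral centroid,
    two classic audio features of the kind the abstract surveys."""
    window = np.hanning(frame_len)          # reduce spectral leakage
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    n_frames = 1 + (len(signal) - frame_len) // hop
    zcr = np.empty(n_frames)
    centroid = np.empty(n_frames)
    for i in range(n_frames):
        frame = signal[i * hop : i * hop + frame_len]
        # Zero-crossing rate: fraction of adjacent samples changing sign.
        zcr[i] = np.mean(np.abs(np.diff(np.sign(frame))) > 0)
        # Spectral centroid: magnitude-weighted mean frequency.
        mag = np.abs(np.fft.rfft(frame * window))
        centroid[i] = np.sum(freqs * mag) / (np.sum(mag) + 1e-12)
    return zcr, centroid

# Sanity check on a pure 440 Hz tone: the centroid should sit near
# 440 Hz and the ZCR near 2 * 440 / sr.
sr = 22050
t = np.arange(sr) / sr
zcr, centroid = extract_features(np.sin(2 * np.pi * 440 * t), sr)
```

Matching would then compare such per-frame feature vectors between a reference melody and the learner's performance.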


Author(s): Jacek Grekow

Abstract: The article presents experiments using recurrent neural networks for emotion detection in musical segments. Trained regression models predict continuous emotion values on the axes of Russell's circumplex model. The process of audio feature extraction and of creating sequential data for training networks with long short-term memory (LSTM) units is presented. Models were implemented using the WekaDeeplearning4j package, and a number of experiments were carried out on data with different feature sets and varying segmentation. The experiments demonstrate the usefulness of dividing the data into sequences and the value of recurrent networks for recognizing emotion in music, whose results even exceeded those of an SVM regressor. The author analyzes the effect of the network structure and the feature set on the regressors recognizing values on the two axes of the emotion model: arousal and valence. Finally, the use of a pretrained model for processing audio features and training a recurrent network on new feature sequences is presented.
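The abstract's "creating sequential data" step, turning a per-frame feature matrix into fixed-length sequences that an LSTM regressor can consume, can be sketched as follows. This is a NumPy illustration, not the WekaDeeplearning4j pipeline, and the sequence length, hop, and per-sequence target (the mean arousal/valence of its frames) are assumptions:

```python
import numpy as np

def make_sequences(features, targets, seq_len=10, hop=5):
    """Pack a (n_frames, n_feats) feature matrix into overlapping
    fixed-length sequences for a recurrent (e.g. LSTM) regressor.
    Each sequence is labeled with the mean target of its frames."""
    X, y = [], []
    for start in range(0, len(features) - seq_len + 1, hop):
        X.append(features[start:start + seq_len])
        y.append(targets[start:start + seq_len].mean(axis=0))
    return np.stack(X), np.stack(y)

# Hypothetical data: 100 frames, 20 features, 2 targets (arousal, valence).
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 20))
av = rng.uniform(-1, 1, size=(100, 2))
X, y = make_sequences(feats, av)
print(X.shape, y.shape)  # (19, 10, 20) (19, 2)
```

The resulting 3-D array (sequences x timesteps x features) is the standard input shape for LSTM layers in most deep-learning toolkits.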


2020
Author(s): Raphael Lenain, Jack Weston, Abhishek Shivkumar, Emil Fristed

Author(s): Jose Alvaro Luna-Gonzalez, Daniel Robles-Camarillo, Mariko Nakano-Miyatake, Humberto Lanz-Mendoza, Hector Perez-Meana

In this paper, mosquito species are classified using wingbeat samples obtained with an optical sensor. Six representative species worldwide, Aedes aegypti, Aedes albopictus, Anopheles arabiensis, Anopheles gambiae, Culex pipiens, and Culex quinquefasciatus, are considered for classification. A total of 60,000 samples are divided equally among these species. In total, 25 audio feature extraction algorithms are applied to extract 39 feature values per sample. Each feature vector is then transformed into a color image in which different pixel values represent the audio features. A fully connected neural network is used for the raw audio features, and a convolutional neural network (CNN) for the image dataset generated from them. The CNN-based classifier reaches 90.75% accuracy, outperforming the 87.18% obtained by the classifier that uses the audio features directly.
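The abstract does not specify how a 39-value feature vector is encoded as a color image. One plausible sketch, using min-max normalization, tiling, and an invented pseudo-color ramp (all assumptions, not the paper's mapping):

```python
import numpy as np

def features_to_image(vec, size=32):
    """Map a 1-D feature vector to a size x size RGB image.
    The normalization, tiling, and color ramp here are illustrative
    guesses; the paper does not describe its exact encoding."""
    v = np.asarray(vec, dtype=float)
    v = (v - v.min()) / (v.max() - v.min() + 1e-12)  # scale to [0, 1]
    # Tile the normalized vector row by row to fill the image.
    gray = np.resize(v, size * size).reshape(size, size)
    # Simple pseudo-color: red ramps up, blue ramps down, green peaks mid.
    r = gray
    g = 1.0 - np.abs(gray - 0.5) * 2.0
    b = 1.0 - gray
    return np.stack([r, g, b], axis=-1)  # shape (size, size, 3)

# A hypothetical 39-value feature vector, as in the paper.
img = features_to_image(np.arange(39.0))
```

Such an image can then be fed to a standard 2-D CNN classifier, which can exploit local spatial patterns that a fully connected network applied to the flat vector cannot.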

