A Probabilistic Model for Music Recommendation Considering Audio Features

Author(s):  
Qing Li ◽  
Sung Hyon Myaeng ◽  
Dong Hai Guan ◽  
Byeong Man Kim
Author(s):  
Madhuri Athavle ◽  

We propose a new approach to playing music automatically based on facial emotion. Most existing approaches involve playing music manually, using wearable computing devices, or classifying music by its audio features; our aim instead is to replace manual sorting and playback. We use a Convolutional Neural Network for emotion detection, and Pygame and Tkinter for music playback and the user interface. The proposed system reduces the computational time needed to obtain results and the overall cost of the designed system, while improving its overall accuracy. The system is tested on the FER2013 dataset. Facial expressions are captured with an inbuilt camera, and feature extraction is performed on the input face images to detect emotions such as happy, angry, sad, surprise, and neutral. A music playlist is then generated automatically from the user's current emotion. The system yields better performance in computational time than the algorithms in the existing literature.
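A minimal sketch of the pipeline this abstract describes, assuming a Keras CNN trained on FER2013, OpenCV for face capture, and Pygame for playback; the model file, label ordering, and playlist paths are illustrative assumptions, not the authors' code:

```python
import cv2
import numpy as np
import pygame
from tensorflow.keras.models import load_model

# Emotion labels as listed in the abstract; the ordering here is an assumption.
EMOTIONS = ["angry", "happy", "neutral", "sad", "surprise"]

# Hypothetical CNN trained on FER2013 (48x48 grayscale inputs).
model = load_model("fer2013_cnn.h5")
face_det = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_emotion(frame):
    """Return the predicted emotion for the largest detected face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_det.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
    roi = roi.astype("float32")[None, :, :, None] / 255.0
    return EMOTIONS[int(np.argmax(model.predict(roi, verbose=0)))]

def play_for_emotion(emotion):
    """Play one track from a per-emotion folder (paths are assumptions)."""
    pygame.mixer.init()
    pygame.mixer.music.load(f"playlists/{emotion}/track01.mp3")
    pygame.mixer.music.play()

cap = cv2.VideoCapture(0)  # inbuilt camera, as in the abstract
ok, frame = cap.read()
cap.release()
if ok and (emotion := detect_emotion(frame)):
    play_for_emotion(emotion)
```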


Electronics ◽  
2021 ◽  
Vol 10 (15) ◽  
pp. 1769
Author(s):  
Shu Wang ◽  
Chonghuan Xu ◽  
Austin Shijun Ding ◽  
Zhongyun Tang

Emotion-aware music recommendation has gained increasing attention in recent years, as music has the ability to regulate human emotions, and exploiting emotional information has the potential to improve recommendation performance. However, conventional studies represented emotion as discrete states and could not predict users' emotional states at time points with no user activity data, let alone account for the influence of social events. In this study, we proposed an emotion-aware music recommendation method using deep neural networks (emoMR). We modeled music emotion using low-level audio features and music metadata, and modeled users' emotional states using an artificial emotion generation model whose endogenous and exogenous factors can express the influence of events on emotions. The two models were trained with a purpose-built deep neural network architecture (emoDNN) to predict, in continuous form, the emotions of tracks and the emotion preferences of users. Based on these models, we proposed a hybrid of content-based and collaborative filtering for generating emotion-aware music recommendations. Experimental results show that emoMR outperforms the baseline algorithms on Precision, Recall, F1, and HitRate. We also tested emoMR's performance on two major events (the death of Yuan Longping and the Coronavirus Disease 2019 (COVID-19) cases in Zhejiang); the results show that emoMR takes advantage of event information and outperforms the other baseline algorithms.
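A minimal sketch of the hybrid scoring idea described above, not the published emoMR implementation: a content-based score from the match between a user's predicted emotion preference and a track's predicted emotion (both continuous vectors, as emoDNN would output), blended with a collaborative-filtering score. The blend weight `alpha` and the toy vectors are assumptions:

```python
import numpy as np

def content_score(user_emotion_pref, music_emotion):
    """Cosine similarity between continuous emotion vectors."""
    u = np.asarray(user_emotion_pref, dtype=float)
    m = np.asarray(music_emotion, dtype=float)
    return float(u @ m / (np.linalg.norm(u) * np.linalg.norm(m) + 1e-9))

def hybrid_score(user_emotion_pref, music_emotion, cf_score, alpha=0.5):
    """Weighted blend of the emotion match and a collaborative-filtering score."""
    return alpha * content_score(user_emotion_pref, music_emotion) \
        + (1 - alpha) * cf_score

# Toy candidates: track -> (predicted emotion vector, CF score for this user).
candidates = {"track_a": ([0.8, 0.3], 0.61), "track_b": ([0.2, 0.9], 0.74)}
user_pref = [0.7, 0.4]  # the user's predicted emotion preference at this time

ranked = sorted(candidates,
                key=lambda t: hybrid_score(user_pref, *candidates[t]),
                reverse=True)
print(ranked)  # tracks ordered by the blended emotion-aware score
```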


2021 ◽  
Author(s):  
Xiaoliang Gong ◽  
Ruiyi Yuan ◽  
Hui Qian ◽  
Yufei Chen ◽  
Anthony G. Cohn

Chinese traditional music has proved effective for emotion regulation over thousands of years. We consider five groups of Chinese traditional music, each shown in the literature to regulate a different emotion (Angry, Depressed, Feverish, Desperate, Sorrowful). For each music group, 54 audio features are extracted using the Librosa library. Five features that show significant differences between the five groups are then selected manually using histogram analysis. Combined with KNN, SVM, and deep forest classifiers, these five manually selected audio features yield better classification performance than traditional feature selection algorithms such as PCA and LDA. We hypothesize that these five significant audio features may be the underlying basis of why such music can effectively regulate emotion. Based on these classification models, a prototype emotion regulation music recommendation interface (TJ-ERMR) was built for use in music therapy. In the future, we will use the classification model to find more music and expand the initial repertoire of our music recommendation system.
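A minimal sketch of the feature-extraction and classification step, with a handful of standard Librosa extractors standing in for the paper's 54 features and its five hand-picked ones; the file names, labels, and chosen features are illustrative, not the authors' exact selection:

```python
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def extract_features(path):
    """Summarize one audio file as a small vector of time-averaged features."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    return np.array([
        librosa.feature.spectral_centroid(y=y, sr=sr).mean(),
        librosa.feature.spectral_rolloff(y=y, sr=sr).mean(),
        librosa.feature.spectral_bandwidth(y=y, sr=sr).mean(),
        librosa.feature.zero_crossing_rate(y).mean(),
        librosa.feature.rms(y=y).mean(),
    ])

# Hypothetical labeled files drawn from the paper's five emotion groups.
train_paths = ["angry_01.wav", "angry_02.wav",
               "sorrowful_01.wav", "sorrowful_02.wav"]
train_labels = ["Angry", "Angry", "Sorrowful", "Sorrowful"]

X = np.stack([extract_features(p) for p in train_paths])
knn = KNeighborsClassifier(n_neighbors=3).fit(X, train_labels)
print(knn.predict(extract_features("new_piece.wav")[None, :]))
```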


2018 ◽  
Vol 18 (1) ◽  
pp. 31-43 ◽  
Author(s):  
Rodrigo Carvalho Borges ◽  
Marcelo Gomes de Queiroz

Recommending music automatically is not simply about finding songs similar to those a user is accustomed to listening to, but also about suggesting potentially interesting pieces that bear no obvious relationship to the user's listening history. This work addresses the problem known as “cold start”, in which new songs with no listening history are added to an existing dataset, and proposes a probabilistic model that infers users' interest in newly added songs from their acoustic content and from implicit listening feedback. Experiments on a dataset of selected Brazilian popular music show that the proposed method compares favorably with alternative statistical models.
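A minimal sketch of one probabilistic reading of this content-based cold-start setting, not the paper's exact model: a per-user logistic regression estimating P(listen | acoustic features) from implicit feedback, then scoring newly added songs that have no listening history. The feature dimensions and synthetic data are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_old, n_new, d = 200, 5, 8                # catalog sizes, feature dim (assumed)
old_feats = rng.normal(size=(n_old, d))    # acoustic features, existing songs
new_feats = rng.normal(size=(n_new, d))    # newly added songs: no history yet
listened = rng.integers(0, 2, size=n_old)  # one user's implicit feedback

# P(listen | acoustic features) for this user, learned from old songs only.
model = LogisticRegression(max_iter=1000).fit(old_feats, listened)
p_new = model.predict_proba(new_feats)[:, 1]
print(np.argsort(-p_new))  # new songs ranked by inferred interest
```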


2021 ◽  
pp. 1-12
Author(s):  
Lige Zhang ◽  
Zhen Tian

Aerobics is full of charm, and music plays an inestimable role in it. Introducing music into aerobics joins the “sound” of musical art to the “shape” of aerobics movements, combining auditory art with visual experience and greatly expanding the scope of the sport. This paper proposes an aerobics music adaptation recommendation algorithm that combines classification with collaborative filtering. First, a collaborative filtering algorithm computes the similarity of users' context information to obtain an initial music recommendation list; next, a classification model is trained with a machine learning algorithm to obtain the user's aerobics music type preference in a specific context; finally, the recommendation list obtained by collaborative filtering is integrated with the music preference obtained by the classification model to provide personalized aerobics music adaptation recommendations for users in specific situations. The recommendation itself is implemented by a deep neural network composed of an independent recurrent neural network (IndRNN) and an attention mechanism. In the data preprocessing stage, the audio of the user's listening history is preprocessed with a scattering transform; the audio features it extracts are combined with the user's profile, and a recommendation list is produced by an independent recurrent neural network with a hybrid attention mechanism. Experimental results show that this method effectively improves the performance of a personalized music recommendation system: compared with the single-algorithm baselines IndRNN and LSTM, recommendation accuracy improves by 7.8% and 20.9%, respectively.
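A minimal sketch of the preprocessing-plus-recommendation pipeline, with stand-ins for the paper's components: Kymatio's Scattering1D plays the role of the scattering-transform preprocessing, and a plain GRU with simple additive attention stands in for the IndRNN-with-attention network, since IndRNN is not in core PyTorch. All shapes and hyperparameters are assumptions:

```python
import torch
import torch.nn as nn
from kymatio.torch import Scattering1D

T = 2 ** 14                            # clip length in samples (assumed)
scattering = Scattering1D(J=6, shape=T, Q=8)

class AttnRecommender(nn.Module):
    def __init__(self, in_ch, hidden=64, n_tracks=100):
        super().__init__()
        self.rnn = nn.GRU(in_ch, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)        # scores each time step
        self.out = nn.Linear(hidden, n_tracks)  # per-track preference scores

    def forward(self, x):                       # x: (batch, time, channels)
        h, _ = self.rnn(x)                      # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention over time steps
        ctx = (w * h).sum(dim=1)                # weighted history summary
        return self.out(ctx)

audio = torch.randn(1, T)            # stand-in for a listening-history clip
feats = scattering(audio)            # (batch, channels, time')
feats = feats.transpose(1, 2)        # to (batch, time', channels) for the GRU
model = AttnRecommender(in_ch=feats.shape[-1])
print(model(feats).shape)            # e.g. torch.Size([1, 100])
```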

