music classification
Recently Published Documents

TOTAL DOCUMENTS: 191 (FIVE YEARS: 58)
H-INDEX: 16 (FIVE YEARS: 4)

Author(s):  
Shivam Sakore

Abstract: In this era of technological advances, text-based music recommendation is much needed, as it can help people relieve stress with soothing music matched to their moods. In this project, we have implemented a chatbot that recommends music based on the tone of the user's text. By analyzing the tone of the text the user enters, we can identify the user's mood; once the mood is identified, the application plays songs on a web page according to the user's choice as well as their current mood. The main goal of our proposed system is to reliably determine a user's mood from their text tone with an application that can be installed on the user's desktop. Human-computer interaction (HCI) plays a crucial role in today's world, and one of its most popular topics is the recognition of emotion from text. As part of this process, the text entered by the user is analyzed to determine the mood; extracting the tone from that text is another important aspect. We have used the IBM Tone Analyzer to check the tone of the user's text and predict the user's mood, and the Last.fm API to recommend songs based on that mood. Keywords: Introduction, Product Architecture, Tone Analyzer, Music Classification Based on Mood, Acoustic Analysis, Experiment, Future/Current Use, Importance, Background, Literature Survey, Methodology, Equations, Planning, Tools and Technology, Conclusion.
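The tone-to-song pipeline described above can be sketched roughly as follows. This is an illustrative outline, not the authors' implementation: the tone-analysis step is stubbed out, the mood-to-tag mapping is an assumption, and only the Last.fm `tag.gettoptracks` endpoint (a documented Last.fm API method) is taken as given.

```python
# Hypothetical sketch of a tone -> mood -> song-recommendation flow.
# The tone label would come from a hosted tone-analysis service; here we
# only show the mapping and the Last.fm request that would follow it.

TONE_TO_TAG = {
    "joy": "happy",
    "sadness": "sad",
    "anger": "metal",
    "fear": "ambient",
}

def recommend_tag(tone: str) -> str:
    """Map a detected tone to a Last.fm tag, with a neutral fallback."""
    return TONE_TO_TAG.get(tone.lower(), "chill")

def lastfm_request_url(tag: str, api_key: str) -> str:
    """Build a tag.gettoptracks request for the chosen tag."""
    return ("http://ws.audioscrobbler.com/2.0/"
            f"?method=tag.gettoptracks&tag={tag}&api_key={api_key}&format=json")
```

The web page would then fetch this URL and present the returned tracks to the user.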


2021
Vol 2021
pp. 1-8
Author(s):  
Fang Zhang

With the advent of the digital music era, digital audio sources have exploded, and music classification (MC) is the basis of managing massive music resources. In this paper, we propose an MC method based on deep learning to improve feature extraction and classifier design for the MIDI (Musical Instrument Digital Interface) MC task. Considering that existing classification technology is limited by shallow structures, which make it difficult for a classifier to learn the temporal and semantic information of music, this paper proposes a deep-learning-based MIDI MC method. In our experiments, the proposed method achieves 90.1% classification accuracy, which is better than an existing classification method based on a BP neural network, verifying its classification effectiveness. However, due to the limited ability and time involved in this interdisciplinary field, the methodology of this paper has certain limitations and still needs further research and improvement.
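A classifier for MIDI data needs fixed features derived from the note events; the abstract does not specify the features used, so the sketch below shows one common, assumed choice: a pitch-class histogram plus mean note duration. The `(pitch, duration)` note representation and function names are illustrative, not the paper's.

```python
# Illustrative MIDI feature extraction (an assumption, not the paper's method).
# Each note is a (midi_pitch, duration_in_beats) pair.

def pitch_class_histogram(notes):
    """12-bin histogram over pitch classes, normalized to sum to 1."""
    hist = [0.0] * 12
    for pitch, _dur in notes:
        hist[pitch % 12] += 1.0
    total = sum(hist)
    return [h / total for h in hist] if total else hist

def midi_features(notes):
    """Pitch-class histogram concatenated with the mean note duration,
    giving a 13-dimensional vector a downstream classifier could consume."""
    durations = [d for _p, d in notes] or [0.0]
    return pitch_class_histogram(notes) + [sum(durations) / len(durations)]
```

A deep model, as proposed in the paper, would instead learn such representations from note sequences directly, but a fixed vector like this is what a shallow baseline (e.g. the BP neural network mentioned above) would typically be fed.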


2021
Author(s):  
Tiancheng Yang
Shah Nazir

Abstract With the development and advancement of information technology, artificial intelligence (AI) and machine learning (ML) are applied in every sector of life. Among these applications, music is one that has gained attention in the last couple of years. The music industry has been revolutionized by AI-based innovative and intelligent techniques, which make it very convenient for composers to produce high-quality music. Artificial Intelligence and Music (AIM) is an emerging field used to generate and manage sounds for different media such as the Internet and games. Sounds in games are very effective and can be made more attractive by applying AI approaches; the quality of a game's sound directly impacts the productivity and experience of the player. With computer-assisted technologies, game designers can create sounds for different scenarios, such as horror and suspense, and use them to convey information to the gamer. Practical, well-produced game audio can also guide visually impaired people through events in the game. For the better creation and composition of music, good knowledge of musicology is essential, and thanks to AIM there are many intelligent, interactive tools available for the efficient and effective learning of music; learners can be provided with a reliable, interactive environment based on artificial intelligence. The current study presents a detailed overview of the literature available in this area of research. It analyzes the literature from various perspectives, providing evidence for researchers to devise novel solutions in the field.


2021
Vol 2021
pp. 1-7
Author(s):  
Kedong Zhang

Music style classification technology can add style tags to music based on its content, which is critical for the efficient organization, retrieval, and recommendation of music resources. Traditional music style classification methods use a wide range of acoustic characteristics; designing these characteristics requires musical knowledge, and the characteristics suited to different classification tasks are not always consistent. The rapid development of neural networks and big data technology has provided a new way to better solve the problem of music style classification. This paper proposes a novel method based on music feature extraction and deep neural networks to address the low accuracy of traditional methods. The algorithm extracts two types of features as classification characteristics for music styles: timbre features and melody features. Because a classification method based only on a convolutional neural network ignores the temporal structure of the audio, we propose a music classification module that combines one-dimensional convolution with a bidirectional recurrent neural network. To better represent music style properties, different attention weights are applied to the network's outputs. Comparison and ablation experiments were conducted on the GTZAN data set. The results outperformed a number of other well-known methods, and the classification performance was competitive.
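The weighting step described above, where per-timestep outputs of the bidirectional recurrent network are combined under learned attention weights, can be sketched in a framework-agnostic way. This is a minimal illustration of attention pooling over a sequence, assuming the raw scores come from some learned scoring layer not shown here; it is not the paper's exact module.

```python
# Minimal sketch of attention pooling over (bi)RNN outputs: softmax the
# per-timestep scores, then take the weighted sum of the timestep vectors
# so informative frames contribute more to the final representation.
import math

def attention_pool(outputs, scores):
    """outputs: list of T feature vectors (equal length); scores: T raw
    attention scores. Returns (pooled_vector, softmax_weights)."""
    m = max(scores)
    exp = [math.exp(s - m) for s in scores]  # numerically stable softmax
    z = sum(exp)
    weights = [e / z for e in exp]
    dim = len(outputs[0])
    pooled = [sum(w * vec[i] for w, vec in zip(weights, outputs))
              for i in range(dim)]
    return pooled, weights
```

With equal scores this reduces to mean pooling; training the scoring layer is what lets the model emphasize style-discriminative frames.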


2021
Vol 2021
pp. 1-10
Author(s):  
Jie Gan

With the advancement of multimedia and digital technologies, music resources are rapidly increasing over the Internet, shifting listeners' habits from local hard drives to online music platforms. This has allowed researchers to use classification technologies for the efficient storage, organization, retrieval, and recommendation of music resources. Traditional music classification methods use many hand-designed acoustic features, which require knowledge of the music field, and the features suited to different classification tasks are often not universal. This paper addresses the problem by proposing a novel recurrent neural network method with a channel attention mechanism for music feature classification. A music classification method based only on a convolutional neural network ignores the timing characteristics of the audio itself; therefore, this paper combines a convolutional structure with a bidirectional recurrent neural network and uses an attention mechanism to assign different attention weights to the recurrent network's output at different times, yielding a better representation of the overall characteristics of the music. The classification accuracy of the model on the GTZAN data set increases to 93.1%, and the AUC on the multilabel data set MagnaTagATune reaches 92.3%, surpassing other comparison methods. An analysis of the labeling of different music labels shows that the method labels most music-genre labels well and also performs well on some labels in the instrument, singing, and emotion categories.
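The abstract names a channel attention mechanism without detailing it; one common form is a squeeze-and-excitation-style gate, sketched below under that assumption. Each channel is summarized by its mean, passed through a learned sigmoid gate, and rescaled; the per-channel `weights` and `biases` stand in for learned parameters and are purely illustrative.

```python
# Hedged sketch of squeeze-and-excitation-style channel attention, one
# plausible form of the "channel attention mechanism" mentioned above.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(channels, weights, biases):
    """channels: list of per-channel value lists. Each channel is squeezed
    to its global mean, gated through a sigmoid, and rescaled by the gate."""
    gated = []
    for ch, w, b in zip(channels, weights, biases):
        descriptor = sum(ch) / len(ch)        # squeeze: global average
        gate = sigmoid(w * descriptor + b)    # excitation: learned gate in (0, 1)
        gated.append([v * gate for v in ch])  # rescale the whole channel
    return gated
```

In the full model this gating would sit between the convolutional feature maps and the recurrent layers, letting informative channels pass through with higher weight.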


Author(s):  
Huy Phan
Huy Le Nguyen
Oliver Y. Chen
Lam Pham
Philipp Koch
...
