Automatic Music Generation

2019 ◽  
Vol 7 (3) ◽  
pp. 80-82
Author(s):  
Lawakesh Patel ◽  
Nidhi Singh ◽  
Rizwan Khan


Author(s):
Abigail Wiafe ◽  
Pasi Fränti

Affective algorithmic composition systems are emotionally intelligent automatic music generation systems that assess the current emotions or mood of a listener and compose affective music to shift the person's mood toward a predetermined one. The fusion of affective algorithmic composition systems and smart spaces has been identified as beneficial. For instance, studies have shown that they can be used for therapeutic purposes. Amid these benefits, research on the related security and ethical issues is lacking. This chapter therefore seeks to provoke discussion on the security and ethical implications of using affective algorithmic composition systems in smart spaces. It presents issues such as impersonation, eavesdropping, data tampering, malicious code, and denial-of-service attacks associated with affective algorithmic composition systems. It also discusses some ethical implications relating to intentions, harm, and possible conflicts that users of such systems may experience.


Author(s):  
Bryan Wang ◽  
Yi-Hsuan Yang

Music creation typically consists of two parts: composing the musical score, and then performing the score with instruments to make sounds. While recent work has made much progress in automatic music generation in the symbolic domain, few attempts have been made to build an AI model that can render realistic music audio from musical scores. Directly synthesizing audio with sound-sample libraries often leads to mechanical and deadpan results, since musical scores do not contain performance-level information such as subtle changes in timing and dynamics. Moreover, while the task may sound like a text-to-speech synthesis problem, there are fundamental differences, since music audio has rich polyphonic sounds. To build such an AI performer, we propose in this paper a deep convolutional model that learns, in an end-to-end manner, the score-to-audio mapping between a symbolic representation of music called the pianoroll and an audio representation of music called the spectrogram. The model consists of two subnets: the ContourNet, which uses a U-Net structure to learn the correspondence between pianorolls and spectrograms and to give an initial result; and the TextureNet, which further uses a multi-band residual network to refine the result by adding the spectral texture of overtones and timbre. We train the model to generate music clips of the violin, cello, and flute with a dataset of moderate size. We also present the results of a user study showing that our model achieves a higher mean opinion score (MOS) in naturalness and emotional expressivity than a WaveNet-based model and two off-the-shelf synthesizers. Our source code is available at https://github.com/bwang514/PerformanceNet
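The full PerformanceNet implementation is available at the repository above; the following is only a minimal, illustrative PyTorch sketch of the two-stage idea described in the abstract (a U-Net-style contour stage followed by a residual texture-refinement stage). All layer sizes and shapes here are assumptions chosen for illustration, not the authors' actual configuration.

```python
# Minimal sketch (not the authors' PerformanceNet): a contour stage maps a
# pianoroll (128 pitches x T frames) to a coarse spectrogram, and a texture
# stage refines it with a residual block. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ContourNet(nn.Module):
    """Tiny U-Net-like encoder-decoder over the time axis."""
    def __init__(self, n_pitches=128, n_bins=512):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(n_pitches, 256, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(256, 512, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose1d(512, 256, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(256, n_bins, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, pianoroll):              # (batch, 128, T)
        return self.dec(self.enc(pianoroll))   # (batch, n_bins, T)

class TextureNet(nn.Module):
    """Residual refinement of the coarse spectrogram (adds spectral detail)."""
    def __init__(self, n_bins=512):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv1d(n_bins, n_bins, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(n_bins, n_bins, kernel_size=3, padding=1),
        )

    def forward(self, coarse_spec):
        return coarse_spec + self.refine(coarse_spec)

if __name__ == "__main__":
    roll = torch.rand(1, 128, 64)               # fake pianoroll clip
    spec = TextureNet()(ContourNet()(roll))     # coarse-then-refined spectrogram
    print(spec.shape)                           # torch.Size([1, 512, 64])
```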


Mathematics ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 387
Author(s):  
Shuyu Li ◽  
Yunsick Sung

Deep learning has made significant progress in the field of automatic music generation. At present, research on music generation via deep learning can be divided into two categories, predictive models and generative models, but both share the same problems that need to be resolved. First, the length of the music must be determined artificially prior to generation. Second, although the convolutional neural network (CNN) is unexpectedly superior to the recurrent neural network (RNN), the CNN still has several disadvantages. This paper proposes a conditional generative adversarial network approach using an inception model (INCO-GAN), which enables the automatic generation of complete, variable-length music. By adding a time-distribution layer that considers sequential data, the CNN captures temporal relationships in a manner similar to an RNN. In addition, the inception model obtains richer features, which improves the quality of the generated music. In the experiments conducted, the music generated by the proposed method was compared with music written by human composers. A cosine similarity of up to 0.987 was achieved between their frequency vectors, indicating that the music generated by the proposed method is very similar to that created by a human composer.
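The abstract reports cosine similarity between frequency vectors of generated and human-composed music as its evaluation metric. A minimal NumPy sketch of that comparison is shown below; the pitch-count histograms are hypothetical placeholders, since the paper's exact feature definition is not given here.

```python
# Sketch of the cosine-similarity check described above, using hypothetical
# note-frequency vectors (e.g., counts of each of the 128 MIDI pitches).
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
human_counts = rng.integers(0, 50, size=128)                 # placeholder histogram
generated_counts = np.clip(human_counts + rng.integers(-3, 4, size=128), 0, None)
print(round(cosine_similarity(human_counts, generated_counts), 3))
```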


2020 ◽  
Author(s):  
Jiyanbo Cao ◽  
Jinan Fiaidhi ◽  
Maolin Qi

This paper reviews the deep learning techniques used in music generation. The research is based on Sageev Oore's LSTM-based recurrent neural network, Performance RNN. We survey the history of automatic music generation and apply a state-of-the-art technique to this task. We describe the process of converting a MIDI file into a representation suitable as input to Performance RNN, as well as the structure of the network.
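As a rough illustration of the MIDI-preprocessing step mentioned above, the sketch below converts a list of notes (pitch, start, end, velocity) into the kind of event stream Performance RNN consumes: NOTE_ON, NOTE_OFF, TIME_SHIFT, and VELOCITY events. The quantization values here are assumptions for illustration, not taken from the paper.

```python
# Sketch: turn (pitch, start, end, velocity) notes into a Performance-RNN-style
# event stream. Time is quantized to 10 ms steps and velocity to 32 bins; both
# choices are illustrative assumptions.
def notes_to_events(notes, time_step=0.01, max_shift_steps=100):
    boundaries = []                                   # (time, kind, pitch, velocity)
    for pitch, start, end, velocity in notes:
        boundaries.append((start, "NOTE_ON", pitch, velocity))
        boundaries.append((end, "NOTE_OFF", pitch, velocity))
    boundaries.sort(key=lambda b: b[0])

    events, current_time, current_vel_bin = [], 0.0, None
    for time, kind, pitch, velocity in boundaries:
        # Emit TIME_SHIFT events (each at most max_shift_steps long) to advance time.
        steps = int(round((time - current_time) / time_step))
        while steps > 0:
            shift = min(steps, max_shift_steps)
            events.append(("TIME_SHIFT", shift))
            steps -= shift
        current_time = time
        if kind == "NOTE_ON":
            vel_bin = velocity * 32 // 128            # bin index 0..31
            if vel_bin != current_vel_bin:
                events.append(("VELOCITY", vel_bin))
                current_vel_bin = vel_bin
        events.append((kind, pitch))
    return events

# Example: C4 then an overlapping E4.
print(notes_to_events([(60, 0.0, 0.5, 80), (64, 0.25, 0.75, 90)]))
```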


Author(s):  
Prof. Amita Suke ◽  
Prof. Khemutai Tighare ◽  
Yogeshwari Kamble

The music lyrics that we generally listen to are human-written, with no machine involvement. Writing lyrics has never been an easy task; many challenges are involved because the lyrics need to be meaningful and, at the same time, in harmony and synchronized with the music being played over them. They are written by experienced artists who have been writing music lyrics for a long time. This project tries to automate music-lyric generation using a computer program and deep learning, so as to produce lyrics, reduce the load on human skill, and generate new lyrics at a far faster rate than humans ever can. This project will generate the music with the assistance of humans and AI.
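The abstract does not name a specific architecture, so the following is only a hypothetical character-level LSTM sketch in PyTorch showing the general shape of such a lyric generator; the corpus, vocabulary, and layer sizes are toy placeholders.

```python
# Hypothetical character-level LSTM lyric generator (the source names no
# specific model). Trained on a toy corpus purely for illustration.
import torch
import torch.nn as nn

corpus = "la la la love shines on and on " * 20
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in corpus])

class LyricLSTM(nn.Module):
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = LyricLSTM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

seq_len = 32
for step in range(200):                          # tiny demo training loop
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i:i + seq_len].unsqueeze(0)         # input characters
    y = data[i + 1:i + seq_len + 1]              # next-character targets
    logits, _ = model(x)
    loss = loss_fn(logits.squeeze(0), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sample new "lyrics" one character at a time.
x, state, out = data[:1].unsqueeze(0), None, []
for _ in range(80):
    logits, state = model(x, state)
    nxt = torch.multinomial(torch.softmax(logits[0, -1], dim=-1), 1)
    out.append(itos[nxt.item()])
    x = nxt.view(1, 1)
print("".join(out))
```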


2021 ◽  
Author(s):  
SeyyedPooya HekmatiAthar ◽  
Mohd Anwar

The artistic nature of music makes it difficult, if not impossible, to extract solid rules from composed pieces and express them mathematically. This has led to a lack of utilization of music expert knowledge in the AI literature on automated music composition. In this study, we employ intervals, the building blocks of music, to represent musical data in a way that is closer to human composers' perspectives. Based on intervals, we developed and trained OrchNet, which translates musical data into and from a numerical vector representation. Another model, CompoNet, was developed and trained to generate music. Using intervals and a novel monitor-and-inject mechanism, we address two main limitations of the literature: lack of orchestration and lack of long-term memory. The music generated by CompoNet is evaluated by a Turing test: whether human judges can tell the difference between music pieces composed by humans and those generated by our system. The Turing-test results were compared using the Mann-Whitney U test, and there was no statistically significant difference between human-composed music and what our system generated.
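As a rough illustration of the interval representation and the statistical comparison described above, the sketch below encodes a melody as intervals (and back) and runs a Mann-Whitney U test on two hypothetical sets of listener ratings; neither the encoding details nor the ratings come from the paper.

```python
# Sketch: interval representation plus the Mann-Whitney U comparison mentioned
# above. Pitches and listener ratings are made up for illustration.
from scipy.stats import mannwhitneyu

def pitches_to_intervals(pitches):
    """Encode a melody as its first pitch plus successive semitone intervals."""
    return pitches[0], [b - a for a, b in zip(pitches, pitches[1:])]

def intervals_to_pitches(start, intervals):
    pitches = [start]
    for step in intervals:
        pitches.append(pitches[-1] + step)
    return pitches

melody = [60, 62, 64, 65, 67]                  # C-major fragment
start, intervals = pitches_to_intervals(melody)
assert intervals_to_pitches(start, intervals) == melody

# Hypothetical Turing-test ratings (e.g., "how human does this sound?", 1-5).
human_scores     = [4, 5, 4, 3, 5, 4, 4, 5]
generated_scores = [4, 4, 5, 3, 4, 5, 4, 4]
stat, p = mannwhitneyu(human_scores, generated_scores)
print(f"U={stat:.1f}, p={p:.3f}")              # large p -> no significant difference
```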

