FaMuRa: Music Generation System using Unconstrained Body Motion

Author(s):  
Yuta Shimomichi

Author(s):  
Prof. Amita Suke ◽  
Prof. Khemutai Tighare ◽  
Yogeshwari Kamble

The music lyrics we generally listen to are written by humans, with no machine involvement. Writing lyrics has never been an easy task: the lyrics must be meaningful and, at the same time, remain in harmony and synchronised with the music played over them. They are typically written by experienced artists who have practised the craft for a long time. This project aims to automate lyrics generation with a computer program and deep learning, producing lyrics that reduce the demand on human skill and can be generated far faster than humans ever could. The system generates the music with the assistance of both human and AI.
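The abstract does not name a specific model; a common deep-learning baseline for lyrics generation is a character-level LSTM language model. The sketch below, written in PyTorch, is an illustration under that assumption; `stoi`/`itos` are hypothetical character-to-index lookup tables built from a lyrics corpus, not artifacts of the project.

```python
# Minimal sketch of a character-level LSTM lyrics generator (an assumption;
# the abstract does not specify an architecture). Requires PyTorch.
import torch
import torch.nn as nn

class LyricsLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

def sample(model, stoi, itos, seed="love ", length=200, temperature=0.8):
    # Generate lyrics one character at a time, feeding each sampled
    # character back in as the next input.
    model.eval()
    idx = torch.tensor([[stoi[c] for c in seed]])
    out = list(seed)
    with torch.no_grad():
        logits, state = model(idx)
        for _ in range(length):
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            nxt = torch.multinomial(probs, 1).item()
            out.append(itos[nxt])
            logits, state = model(torch.tensor([[nxt]]), state)
    return "".join(out)
```

Sampling with a temperature below 1.0 trades novelty for coherence, which matters here because generated lyrics must stay singable over the accompanying music.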


IARJSET ◽  
2019 ◽  
Vol 6 (5) ◽  
pp. 47-54
Author(s):  
Sanidhya Mangal ◽  
Rahul Modak ◽  
Poorva Joshi

Author(s):  
Shuai Chen ◽  
Yoichiro Maeda ◽  
Yasutake Takahashi

In research on interactive music generation, we propose a method in which the computer generates music by recognizing the gestures of a human music conductor. The generated music is tuned in real time by the parameters of a network of chaotic elements, which are determined by the recognized gesture. The conductor's hand motions are detected with a Microsoft Kinect. Music theory is embedded in the algorithm, which makes the generated music richer. Furthermore, we constructed the music generation system and performed experiments on generating music together with human conductors.
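The abstract does not disclose the network's equations. One standard choice of chaotic element is the logistic map with global coupling; the sketch below assumes that setup, with two hypothetical gesture features (hand speed and hand height) standing in for the Kinect-detected motions. It is an illustration of the general technique, not the authors' algorithm.

```python
# Illustrative sketch only: a small network of globally coupled logistic maps
# (one common "chaotic element") driven by assumed gesture features.
import numpy as np

class ChaoticMelodyNet:
    def __init__(self, n_elements=4, coupling=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.x = rng.uniform(0.2, 0.8, n_elements)  # element states in (0, 1)
        self.coupling = coupling

    def step(self, hand_speed, hand_height):
        # Map hand speed into the logistic parameter r in [3.5, 4.0], the
        # regime where the map moves between periodic and chaotic behavior.
        r = 3.5 + 0.5 * np.clip(hand_speed, 0.0, 1.0)
        mean = self.x.mean()
        # Globally coupled logistic maps: x' = (1-c)*r*x*(1-x) + c*mean
        self.x = (1 - self.coupling) * r * self.x * (1 - self.x) \
                 + self.coupling * mean
        # Quantize element states to a C major scale (a stand-in for the
        # embedded music theory); hand height shifts the octave.
        scale = np.array([0, 2, 4, 5, 7, 9, 11])
        degrees = (self.x * len(scale)).astype(int) % len(scale)
        base = 48 + 12 * int(np.clip(hand_height, 0.0, 1.0) * 2)
        return base + scale[degrees]  # MIDI note numbers

net = ChaoticMelodyNet()
for t in range(8):  # fake gesture stream in place of Kinect input
    print(net.step(hand_speed=0.5 + 0.4 * np.sin(t), hand_height=0.5))
```

Driving only the map parameter (not the state) from gestures keeps the music evolving on its own while the conductor steers between periodic and chaotic regimes.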


Author(s):  
Valentijn Borghuis ◽  
Luca Angioloni ◽  
Lorenzo Brusci ◽  
Paolo Frasconi

We demonstrate a pattern-based MIDI music generation system with a generation strategy based on Wasserstein autoencoders and a novel variant of the pianoroll description of patterns that employs separate channels for note velocities and note durations and can be fed into classic DCGAN-style convolutional architectures. We trained the system on two new datasets (in the acid-jazz and high-pop genres) composed by musicians in our team with music generation in mind. Our demonstration shows that moving smoothly in the latent space allows us to generate meaningful sequences of four-bar patterns.
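As an illustration of that pattern description (not the authors' released code), the sketch below builds the two-channel pianoroll the abstract describes, assuming the `pretty_midi` library and a fixed 16-steps-per-bar grid: channel 0 holds normalized onset velocities, channel 1 holds durations measured in grid steps.

```python
# Sketch of a two-channel pianoroll as described in the abstract. Shapes,
# grid resolution, and the pretty_midi usage are assumptions for illustration.
import numpy as np
import pretty_midi

def two_channel_pianoroll(midi_path, bars=4, steps_per_bar=16,
                          pitches=128, bpm=120.0):
    """Return an array of shape (2, pitches, bars*steps_per_bar):
    channel 0 = velocity at note onset, channel 1 = duration in steps."""
    pm = pretty_midi.PrettyMIDI(midi_path)
    step = 60.0 / bpm / (steps_per_bar / 4)  # seconds per grid step
    T = bars * steps_per_bar
    roll = np.zeros((2, pitches, T), dtype=np.float32)
    for inst in pm.instruments:
        for note in inst.notes:
            t = int(round(note.start / step))
            if 0 <= t < T:
                roll[0, note.pitch, t] = note.velocity / 127.0
                roll[1, note.pitch, t] = (note.end - note.start) / step
    return roll
```

A tensor of shape (2, 128, 64) like this can be fed directly to DCGAN-style 2-D convolutions, which is presumably why velocities and durations are kept in separate channels rather than packed into a single value.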

