Re-balancing Variational Autoencoder Loss for Molecule Sequence Generation

Author(s): Chaochao Yan, Sheng Wang, Jinyu Yang, Tingyang Xu, Junzhou Huang
2021
Author(s): Wangli Hao, Meng Han, Shancang Li, Fuzhong Li

Abstract: Conventional motion prediction has achieved promising performance. However, the pose sequences predicted in most prior work are short, and the rhythm of the generated pose sequence has rarely been explored. To pursue high-quality, rhythmic, long-term pose sequence prediction, this paper explores a novel dancing-with-sound task, which is appealing and challenging in the computer vision field. To tackle this problem, a novel model is proposed, which takes the sound as an indicator input and outputs the dancing pose sequence. Specifically, our model is based on the variational autoencoder (VAE) framework, which encodes the continuity and rhythm of the sound information into the hidden space to generate a coherent, diverse, rhythmic, and long-term pose video. Extensive experiments validate the effectiveness of audio cues in the generation of dancing pose sequences. Concurrently, a novel dataset for audiovisual multimodal sequence generation has been released to promote the development of this field.
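To make the described architecture concrete, the sketch below shows one plausible shape for a sound-conditioned sequence VAE: a recurrent audio encoder produces a latent code capturing continuity and rhythm, and a pose decoder conditioned on both the audio frames and that code emits a pose sequence, trained with the usual reconstruction-plus-KL objective. All class names, dimensions, and the GRU-based design are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a sound-conditioned sequence VAE (hypothetical names and
# dimensions; a rough illustration, not the paper's actual model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoundToPoseVAE(nn.Module):
    def __init__(self, audio_dim=128, pose_dim=51, hidden_dim=256, latent_dim=64):
        super().__init__()
        # Encode the per-frame audio features into a summary vector from which
        # the latent "rhythm/continuity" code is sampled.
        self.audio_enc = nn.GRU(audio_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decode poses frame by frame, conditioning every step on the audio
        # feature and the sampled latent code.
        self.decoder = nn.GRU(audio_dim + latent_dim, hidden_dim, batch_first=True)
        self.to_pose = nn.Linear(hidden_dim, pose_dim)

    def forward(self, audio):                        # audio: (B, T, audio_dim)
        _, h = self.audio_enc(audio)                 # h: (1, B, hidden_dim)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        z_seq = z.unsqueeze(1).expand(-1, audio.size(1), -1)      # broadcast latent over time
        out, _ = self.decoder(torch.cat([audio, z_seq], dim=-1))
        return self.to_pose(out), mu, logvar         # poses: (B, T, pose_dim)

def vae_loss(pred_pose, true_pose, mu, logvar, beta=1.0):
    # Standard VAE objective: pose reconstruction plus KL regularization.
    recon = F.mse_loss(pred_pose, true_pose)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```

At generation time, one would feed a new audio clip and sample z from the prior to obtain a diverse, audio-synchronized pose sequence; the single global latent here is only one choice, and a per-frame latent would be an equally reasonable reading of the abstract.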

