Complex-Valued Variational Autoencoder: A Novel Deep Generative Model for Direct Representation of Complex Spectra

Author(s):
Toru Nakashika

2021
Vol 5 (1)
Author(s):
Osman Mamun
Madison Wenzlick
Arun Sathanur
Jeffrey Hawk
Ram Devanathan

The Larson–Miller parameter (LMP) offers an efficient and fast scheme for estimating the creep rupture life of alloy materials in high-temperature applications; however, poor generalizability and dependence on the constant C often result in sub-optimal performance. In this work, we show that parameterizing rupture life directly with a gradient boosting algorithm, without the intermediate LMP parameterization, trains ML models that predict rupture life very accurately across a variety of alloys (Pearson correlation coefficient >0.9 for 9–12% Cr steels and >0.8 for austenitic stainless steels). In addition, Shapley values were used to quantify feature importance, making the model interpretable by identifying the effect of each feature on model performance. Finally, a variational autoencoder-based generative model, conditioned on the experimental dataset, was built to sample from the learnt joint distribution hypothetical candidate alloys that appear in neither the 9–12% Cr ferritic–martensitic alloy dataset nor the austenitic stainless steel dataset.
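To make the pipeline concrete, the following is a minimal sketch of the two supervised steps described above: fitting a gradient-boosted regressor directly to rupture life and attributing predictions with SHAP values. The data are synthetic and the feature names hypothetical; this illustrates the approach, not the authors' code.

```python
# Minimal sketch (synthetic data, hypothetical features): direct rupture-life
# regression with gradient boosting, plus SHAP feature attribution.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
import shap  # pip install shap

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: test temperature (K), stress (MPa), Cr and Mo content (wt%)
X = np.column_stack([
    rng.uniform(800, 1000, n),   # temperature
    rng.uniform(50, 300, n),     # stress
    rng.uniform(9, 12, n),       # Cr
    rng.uniform(0.5, 1.5, n),    # Mo
])
# Synthetic stand-in target: log10 rupture life falls with temperature and stress
y = 12 - 0.008 * X[:, 0] - 0.01 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X_tr, y_tr)
print("Pearson r:", np.corrcoef(model.predict(X_te), y_te)[0, 1])

# SHAP values quantify how much each feature contributes to each prediction
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print("Mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```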


Author(s):
Nan Cao
Xin Yan
Yang Shi
Chaoran Chen

Sketch drawings have played an important role in assisting human communication and creative design since ancient times. This has motivated the development of artificial intelligence (AI) techniques for automatically generating sketches from user input. Sketch-RNN, a sequence-to-sequence variational autoencoder (VAE) model, was developed for this purpose and is regarded as a state-of-the-art technique. However, it suffers from limitations, including low-quality generation results and an inability to support multi-class generation. To address these issues, we introduce AI-Sketcher, a deep generative model for generating high-quality multi-class sketches. Our model improves drawing quality by employing a CNN-based autoencoder to capture the positional information of each stroke at the pixel level. It also introduces an influence layer that guides the generation of each stroke more precisely by referring directly to the training data. To support multi-class sketch generation, we provide a conditional vector that helps differentiate sketches of different classes. The proposed technique was evaluated on two large-scale sketch datasets, and the results demonstrate its power in generating high-quality sketches.
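The class-conditioning mechanism can be illustrated with a generic conditional VAE, sketched below in PyTorch under assumed layer sizes. This is not the AI-Sketcher architecture: the CNN pixel branch and the influence layer are omitted, and only the one-hot conditioning idea is shown.

```python
# Generic conditional-VAE sketch: a one-hot class vector c is concatenated to
# both the encoder input and the decoder latent (hypothetical layer sizes).
import torch
import torch.nn as nn

class ConditionalVAE(nn.Module):
    def __init__(self, x_dim=128, z_dim=16, n_classes=10, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + n_classes, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(
            nn.Linear(z_dim + n_classes, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim)
        )

    def forward(self, x, c):                     # c: one-hot class vector
        h = self.enc(torch.cat([x, c], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(torch.cat([z, c], dim=-1)), mu, logvar

def cvae_loss(x_hat, x, mu, logvar):
    recon = nn.functional.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Generating a sample of class 3: decode random z with that class's one-hot
model = ConditionalVAE()
c = nn.functional.one_hot(torch.tensor([3]), 10).float()
sample = model.dec(torch.cat([torch.randn(1, 16), c], dim=-1))
```

Because the decoder always sees the class vector, a single model can generate sketches of several classes without retraining per class.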


Author(s):
Dou Huang
Xuan Song
Zipei Fan
Renhe Jiang
Ryosuke Shibasaki
...

2019
Vol 31 (9)
pp. 1891-1914
Author(s):
Hirokazu Kameoka
Li Li
Shota Inoue
Shoji Makino

This letter proposes a multichannel source separation technique, the multichannel variational autoencoder (MVAE) method, which uses a conditional VAE (CVAE) to model and estimate the power spectrograms of the sources in a mixture. By training the CVAE using the spectrograms of training examples with source-class labels, we can use the trained decoder distribution as a universal generative model capable of generating spectrograms conditioned on a specified class index. By treating the latent space variables and the class index as the unknown parameters of this generative model, we can develop a convergence-guaranteed algorithm for supervised determined source separation that consists of iteratively estimating the power spectrograms of the underlying sources, as well as the separation matrices. In experimental evaluations, our MVAE produced better separation performance than a baseline method.
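The objective behind such methods can be sketched as follows: under a local Gaussian model, the demixed signals are scored against source variances produced by the trained decoder, and the latent variables and class weights are updated by gradient descent. The decoder below is a hypothetical stand-in for the trained CVAE, and the demixing-matrix update of the actual method (iterative projection) is only noted in a comment.

```python
# Hedged sketch of an MVAE-style objective: local-Gaussian NLL of the demixed
# signals, with source variances from a stand-in decoder (toy sizes throughout).
import torch

F_, T_, M_ = 64, 100, 2   # frequency bins, frames, sources (= mics)

# Stand-in for the trained CVAE decoder: (per-frame latent, class weights) -> power spectrum
decoder = torch.nn.Sequential(torch.nn.Linear(16 + 4, F_), torch.nn.Softplus())

def neg_log_likelihood(X, W, z, c):
    """X: mixture STFT (F, T, M), complex; W: demixing matrices (F, M, M), complex;
    z: per-source latents (M, T, 16); c: per-source class logits (M, 4)."""
    Y = torch.einsum("fnm,ftm->ftn", W, X)             # demixed signals (F, T, M)
    cw = torch.softmax(c, dim=-1)                      # soft class weights
    v = torch.stack(
        [decoder(torch.cat([z[m], cw[m].expand(T_, -1)], dim=-1)).T
         for m in range(M_)], dim=-1)                  # source variances (F, T, M)
    nll = ((Y.abs() ** 2) / v + torch.log(v)).sum()
    return nll - 2 * T_ * torch.log(torch.linalg.det(W).abs()).sum()

X = torch.randn(F_, T_, M_, dtype=torch.cfloat)
W = torch.eye(M_, dtype=torch.cfloat).expand(F_, M_, M_).clone()
z = torch.zeros(M_, T_, 16, requires_grad=True)
c = torch.zeros(M_, 4, requires_grad=True)
loss = neg_log_likelihood(X, W, z, c)
loss.backward()   # gradients w.r.t. z and c drive the CVAE-side updates;
                  # W is updated separately (iterative projection in the paper)
```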


2019
Vol 31 (12)
pp. 2348-2367
Author(s):
Tian Han
Xianglei Xing
Jiawen Wu
Ying Nian Wu

A recent Cell paper (Chang & Tsao, 2017) reports an interesting discovery. For the face stimuli generated by a pretrained active appearance model (AAM), the responses of neurons in the areas of the primate brain that are responsible for face recognition exhibit a strong linear relationship with the shape variables and appearance variables of the AAM that generates the face stimuli. In this letter, we show that this behavior can be replicated by a deep generative model, the generator network, which assumes that the observed signals are generated by latent random variables via a top-down convolutional neural network. Specifically, we learn the generator network from the face images generated by a pretrained AAM model using a variational autoencoder, and we show that the inferred latent variables of the learned generator network have a strong linear relationship with the shape and appearance variables of the AAM model that generates the face images. Unlike the AAM model, which has an explicit shape model where the shape variables generate the control points or landmarks, the generator network has no such shape model and shape variables. Yet it can learn the shape knowledge in the sense that some of the latent variables of the learned generator network capture the shape variations in the face images generated by the AAM.
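The reported linearity can be checked with an ordinary linear regression from the inferred latents to the AAM variables. The sketch below uses synthetic stand-ins for the encoder output rather than a trained model, so the near-perfect fit is by construction; with real inferred latents, the R² score is the quantity of interest.

```python
# Linearity check sketch: regress AAM shape/appearance variables on the
# latents inferred by the learned encoder (synthetic stand-ins here).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, d_latent, d_aam = 1000, 20, 10
aam_vars = rng.normal(size=(n, d_aam))          # AAM shape/appearance variables
# Stand-in for encoder output: latents that are (noisily) linear in the AAM vars
A = rng.normal(size=(d_aam, d_latent))
latents = aam_vars @ A + 0.1 * rng.normal(size=(n, d_latent))

reg = LinearRegression().fit(latents, aam_vars)
print("R^2 of linear fit:", reg.score(latents, aam_vars))  # near 1 => strong linearity
```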


ACS Omega
2020
Vol 5 (30)
pp. 18642-18650
Author(s):
Sunghoon Joo
Min Soo Kim
Jaeho Yang
Jeahyun Park

2018
Vol 8 (1)
pp. 8-12
Author(s):
Yuichiro Motegi
Yuma Hijioka
Makoto Murakami
