Quantum-assisted associative adversarial network: applying quantum annealing in deep learning

2021 ◽  
Vol 3 (1) ◽  
Author(s):  
Max Wilson ◽  
Thomas Vandal ◽  
Tad Hogg ◽  
Eleanor G. Rieffel

Generative models have the capacity to model and generate new examples from a dataset and have an increasingly diverse set of applications driven by commercial and academic interest. In this work, we present an algorithm for learning a latent variable generative model via generative adversarial learning, where the canonical uniform noise input is replaced by samples from a graphical model. This graphical model is learned by a Boltzmann machine, which learns a low-dimensional feature representation of data extracted by the discriminator. A quantum processor can be used to sample from the model to train the Boltzmann machine. This novel hybrid quantum-classical algorithm joins a growing family of algorithms that use a quantum processor sampling subroutine in deep learning, and provides a scalable framework to test the advantages of quantum-assisted learning. For the latent space model, fully connected, symmetric bipartite, and Chimera graph topologies are compared on a reduced stochastically binarized MNIST dataset, for both classical and quantum sampling methods. The quantum-assisted associative adversarial network successfully learns a generative model of the MNIST dataset for all topologies. Evaluated using the Fréchet inception distance and inception score, the quantum and classical versions of the algorithm are found to have equivalent performance for learning an implicit generative model of the MNIST dataset. Classical sampling is used to demonstrate the algorithm on the LSUN bedrooms dataset, indicating scalability to larger and color datasets. Though the quantum processor used here is a quantum annealer, the algorithm is general enough that any quantum processor, such as a gate-model quantum computer, may be substituted as a sampler.
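The core move described above is swapping the generator's uniform noise input for samples from a learned Boltzmann-machine prior. A minimal classical sketch of that sampling step (layer sizes, chain length, and all names here are illustrative assumptions; the quantum annealer is replaced by classical block Gibbs sampling):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_rbm(W, b_vis, b_hid, n_samples, n_gibbs=50, rng=None):
    """Approximate samples from an RBM prior via block Gibbs sampling.
    In the paper's setting, this sampling step is what the quantum
    processor would provide; here it is done classically."""
    if rng is None:
        rng = np.random.default_rng(0)
    v = (rng.random((n_samples, len(b_vis))) < 0.5).astype(float)
    for _ in range(n_gibbs):
        # alternate sampling hidden given visible, then visible given hidden
        h = (rng.random((n_samples, len(b_hid))) < sigmoid(v @ W + b_hid)).astype(float)
        v = (rng.random((n_samples, len(b_vis))) < sigmoid(h @ W.T + b_vis)).astype(float)
    return v

# Hypothetical sizes: a 16-visible / 8-hidden latent prior.
rng = np.random.default_rng(1)
W = 0.1 * rng.standard_normal((16, 8))
z = sample_rbm(W, np.zeros(16), np.zeros(8), n_samples=4)
# z holds binary latent codes that would replace the GAN generator's uniform noise.
```

In the full algorithm, the RBM weights would themselves be trained on low-dimensional features taken from the discriminator, closing the loop between the adversarial training and the latent prior.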

Author(s):  
Masoumeh Zareapoor ◽  
Jie Yang

Image-to-image translation aims to learn a mapping from a source domain to a target domain. However, three main challenges are associated with these problems and need to be dealt with: a lack of paired datasets, multimodality, and diversity. Convolutional neural networks (CNNs), despite their great performance in many computer vision tasks, fail to detect the hierarchy of spatial relationships between different parts of an object and thus do not form the ideal representative model we look for. This article presents a new variation of generative models that aims to remedy this problem. We use a trainable transformer, which explicitly allows the spatial manipulation of data during training. This differentiable module can be augmented into the convolutional layers of the generative model, and it allows the generated distributions to be freely altered for image-to-image translation. To reap the benefits of the proposed module in a generative model, our architecture incorporates a new loss function to facilitate effective end-to-end generative learning for image-to-image translation. The proposed model is evaluated through comprehensive experiments on image synthesis and image-to-image translation, along with comparisons with several state-of-the-art algorithms.
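The trainable transformer described above is in the spirit of a spatial transformer module: an affine matrix warps a normalized sampling grid over the feature map. A minimal sketch of the geometric step only (nearest-neighbour sampling; a trainable version would predict `theta` from the input and use bilinear sampling so gradients flow — all names are illustrative):

```python
import numpy as np

def affine_grid_sample(img, theta):
    """Warp `img` by the 2x3 affine matrix `theta` over a normalized
    [-1, 1] coordinate grid, with nearest-neighbour sampling and
    clamping at the image borders."""
    H, W = img.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])  # homogeneous coords
    x_src, y_src = theta @ coords                                # source locations
    # map from [-1, 1] back to pixel indices, clamped to the image
    ix = np.clip(np.round((x_src + 1) * (W - 1) / 2), 0, W - 1).astype(int)
    iy = np.clip(np.round((y_src + 1) * (H - 1) / 2), 0, H - 1).astype(int)
    return img[iy, ix].reshape(H, W)

identity = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
img = np.arange(16.0).reshape(4, 4)
warped = affine_grid_sample(img, identity)  # identity transform leaves img unchanged
```

The design point is that the warp is parameterized by a small, differentiable `theta`, so the network can learn where to look and how to deform its own feature maps during training.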


2021 ◽  
Vol 118 (16) ◽  
pp. e2020324118
Author(s):  
Biwei Dai ◽  
Uroš Seljak

The goal of generative models is to learn the intricate relations between the data to create new simulated data, but current approaches fail in very high dimensions. When the true data-generating process is based on physical processes, these impose symmetries and constraints, and the generative model can be created by learning an effective description of the underlying physics, which enables scaling of the generative model to very high dimensions. In this work, we propose Lagrangian deep learning (LDL) for this purpose, applying it to learn outputs of cosmological hydrodynamical simulations. The model uses layers of Lagrangian displacements of particles describing the observables to learn the effective physical laws. The displacements are modeled as the gradient of an effective potential, which explicitly satisfies translational and rotational invariance. The total number of learned parameters is only of order 10, and they can be viewed as effective theory parameters. We combine the N-body solver FastPM (fast particle mesh) with LDL and apply it to a wide range of cosmological outputs, from the dark matter to the stellar maps, gas density, and temperature. The computational cost of LDL is nearly four orders of magnitude lower than that of the full hydrodynamical simulations, yet it outperforms them at the same resolution. We achieve this with only of order 10 layers from the initial conditions to the final output, in contrast to typical cosmological simulations with thousands of time steps. This opens up the possibility of analyzing cosmological observations entirely within this framework, without the need for large dark-matter simulations.
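The key structural idea — displacements generated by the gradient of an effective potential over particle separations, which makes each layer translation invariant by construction — can be illustrated with a toy pairwise potential (the actual LDL potential and parameters differ; `alpha` here is a hypothetical stand-in for one learned effective-theory parameter):

```python
import numpy as np

def ldl_layer(pos, alpha, eps=1e-9):
    """One schematic Lagrangian-displacement layer: move each particle
    along the gradient of a toy pairwise potential phi(r) = -alpha/r
    (a gravity-like attraction). Because the potential depends only on
    inter-particle separations, the displacement field is translation
    invariant by construction."""
    disp = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i != j:
                d = pos[j] - pos[i]
                r = np.linalg.norm(d) + eps
                disp[i] += alpha * d / r**3  # -grad of -alpha/r: pull toward j
    return pos + disp
```

Shifting every particle by a constant vector shifts the output by exactly the same constant, which is the symmetry the paper builds in explicitly; rotational invariance follows the same way since only separations enter the potential.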


Algorithms ◽  
2020 ◽  
Vol 13 (12) ◽  
pp. 319
Author(s):  
Wang Xi ◽  
Guillaume Devineau ◽  
Fabien Moutarde ◽  
Jie Yang

Generative models for images, audio, text, and other low-dimension data have achieved great success in recent years. Generating artificial human movements can also be useful for many applications, including improvement of data augmentation methods for human gesture recognition. The objective of this research is to develop a generative model for skeletal human movement, allowing control of the action type of the generated motion while keeping the authenticity of the result and the natural style variability of gesture execution. We propose to use a conditional Deep Convolutional Generative Adversarial Network (DC-GAN) applied to pseudo-images representing skeletal pose sequences using the tree structure skeleton image format. We evaluate our approach on the 3D skeletal data provided in the large NTU_RGB+D public dataset. Our generative model can output qualitatively correct skeletal human movements for any of the 60 action classes. We also quantitatively evaluate the performance of our model by computing Fréchet inception distances, which show strong correlation with human judgement. To the best of our knowledge, our work is the first successful class-conditioned generative model for human skeletal motions based on pseudo-image representation of skeletal pose sequences.
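The Fréchet inception distance used for evaluation fits a Gaussian to feature activations of real and generated samples and measures the Fréchet distance between the two Gaussians. A numpy-only sketch of that distance on arbitrary feature vectors (the real metric extracts features with an Inception network, omitted here):

```python
import numpy as np

def _sqrtm_psd(A):
    """Matrix square root of a symmetric positive-semidefinite matrix
    via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T

def frechet_distance(x, y):
    """Frechet distance between Gaussian fits to two feature sets
    (rows = samples): ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^{1/2})."""
    mu1, mu2 = x.mean(0), y.mean(0)
    c1 = np.cov(x, rowvar=False)
    c2 = np.cov(y, rowvar=False)
    s1 = _sqrtm_psd(c1)
    # Tr((C1 C2)^{1/2}) via the symmetric form (C1^{1/2} C2 C1^{1/2})^{1/2}
    covmean = _sqrtm_psd(s1 @ c2 @ s1)
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(c1 + c2 - 2 * covmean))
```

Identical distributions score near zero; the score grows as the generated-feature statistics drift from the real ones, which is what makes it useful as a proxy for human judgement.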


Mathematics ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 387
Author(s):  
Shuyu Li ◽  
Yunsick Sung

Deep learning has made significant progress in the field of automatic music generation. At present, research on music generation via deep learning can be divided into two categories: predictive models and generative models. However, both categories share problems that need to be resolved. First, the length of the music must be determined artificially prior to generation. Second, although the convolutional neural network (CNN) is unexpectedly superior to the recurrent neural network (RNN), the CNN still has several disadvantages. This paper proposes a conditional generative adversarial network approach using an inception model (INCO-GAN), which enables the automatic generation of complete, variable-length music. By adding a time-distribution layer that considers sequential data, the CNN considers the time relationship in a manner similar to an RNN. In addition, the inception model obtains richer features, which improves the quality of the generated music. In the experiments conducted, the music generated by the proposed method was compared with music by human composers. A high cosine similarity of up to 0.987 was achieved between the frequency vectors, indicating that the music generated by the proposed method is very similar to that created by a human composer.
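The reported evaluation compares frequency vectors of generated and human-composed music by cosine similarity. A toy sketch of that comparison (the exact construction of the paper's frequency vectors is not specified here, so the pitch histogram below is an assumption for illustration):

```python
import numpy as np

def note_frequency_vector(notes, n_pitches=128):
    """Histogram of MIDI pitch occurrences -- a stand-in for the
    'frequency vectors' the evaluation compares; the paper's exact
    construction may differ."""
    return np.bincount(np.asarray(notes), minlength=n_pitches).astype(float)

def cosine_similarity(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

human = note_frequency_vector([60, 62, 64, 60, 67])  # C, D, E, C, G
model = note_frequency_vector([60, 62, 64, 60, 65])  # C, D, E, C, F
sim = cosine_similarity(human, model)  # close to 1 for similar pitch content
```

A similarity near 1 (such as the 0.987 reported) indicates the generated music's pitch-content statistics closely track a human composer's.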



2021 ◽  
Vol 7 ◽  
pp. e577
Author(s):  
Manuel Camargo ◽  
Marlon Dumas ◽  
Oscar González-Rojas

A generative model is a statistical model capable of generating new data instances from previously observed ones. In the context of business processes, a generative model creates new execution traces from a set of historical traces, also known as an event log. Two types of generative business process models have been developed in previous work: data-driven simulation models and deep learning models. Until now, these two approaches have evolved independently, and their relative performance has not been studied. This paper fills this gap by empirically comparing a data-driven simulation approach with multiple deep learning approaches for building generative business process models. The study sheds light on the relative strengths of these two approaches and raises the prospect of developing hybrid approaches that combine these strengths.
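As a concrete toy example of a generative business process model, one can fit first-order transition probabilities from an event log and sample fresh execution traces. This is a crude stand-in for the data-driven simulation models the paper compares (real approaches also model timestamps, resources, and gateways; all names below are illustrative):

```python
import random
from collections import defaultdict

def learn_transitions(log):
    """Fit first-order activity-transition probabilities from an event
    log (a list of traces, each a list of activity names)."""
    counts = defaultdict(lambda: defaultdict(int))
    for trace in log:
        for a, b in zip(["<start>"] + trace, trace + ["<end>"]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def sample_trace(probs, rng, max_len=50):
    """Generate a new execution trace by walking the learned transitions."""
    trace, cur = [], "<start>"
    while cur != "<end>" and len(trace) < max_len:
        nxt = probs[cur]
        cur = rng.choices(list(nxt), weights=list(nxt.values()))[0]
        if cur != "<end>":
            trace.append(cur)
    return trace

log = [["register", "check", "approve"], ["register", "check", "reject"]]
probs = learn_transitions(log)
rng = random.Random(0)
new_trace = sample_trace(probs, rng)  # e.g. register -> check -> approve
```

Deep learning approaches replace the explicit transition table with a learned sequence model, which is exactly the trade-off the paper's comparison probes.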


Geosciences ◽  
2018 ◽  
Vol 8 (11) ◽  
pp. 395 ◽  
Author(s):  
Daniel Buscombe ◽  
Paul Grams

We propose a probabilistic graphical model for discriminative substrate characterization, to support geological and biological habitat mapping in aquatic environments. The model, called a fully connected conditional random field (CRF), is demonstrated using multispectral and monospectral acoustic backscatter from heterogeneous seafloors in Patricia Bay, British Columbia, and Bedford Basin, Nova Scotia. Unlike previously proposed discriminative algorithms, the CRF model considers both the relative backscatter magnitudes of different substrates and their relative proximities. The model therefore combines the statistical flexibility of a machine learning algorithm with an inherently spatial treatment of the substrate. The CRF model predicts substrates such that nearby locations with similar backscattering characteristics are likely to be in the same substrate class. The degree of allowable proximity and backscatter similarity is controlled by parameters that are learned from the data. CRF model results were evaluated against a popular generative model, the Gaussian mixture model (GMM), which does not include spatial dependencies, only covariance between substrate backscattering responses over different frequencies. Both models are used in conjunction with sparse bed observations/samples in a supervised classification. A detailed accuracy assessment, including a leave-one-out cross-validation analysis, was performed using both models. Using multispectral backscatter, the GMM trained on 50% of the bed observations achieved average accuracies of 75% and 89% in Patricia Bay and Bedford Basin, respectively. The same metrics for the CRF model were 78% and 95%. Further, the CRF model achieved a 91% mean cross-validation accuracy across four substrate classes at Patricia Bay, and a 99.5% mean accuracy across three substrate classes at Bedford Basin, which suggests that the CRF model generalizes extremely well to new data.
This analysis also showed that the CRF model was much less sensitive to the specific number and locations of bed observations than the generative model, owing to its ability to incorporate spatial autocorrelation in substrates. The CRF may therefore prove to be a powerful 'spatially aware' alternative to other discriminative classifiers.
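The proximity/backscatter-similarity trade-off in the fully connected CRF can be illustrated with a dense-CRF-style pairwise kernel in the spirit of Krähenbühl and Koltun's formulation (the parameter names and values below are illustrative, not the paper's learned values):

```python
import numpy as np

def pairwise_affinity(p_i, p_j, b_i, b_j, theta_pos, theta_back):
    """Dense-CRF-style pairwise kernel: affinity is high when two
    locations are spatially close (scale theta_pos) AND have similar
    backscatter (scale theta_back), encouraging them into the same
    substrate class. The two scales play the role of the learned
    proximity / backscatter-similarity parameters described above."""
    return float(np.exp(
        -np.sum((p_i - p_j) ** 2) / (2.0 * theta_pos ** 2)
        - np.sum((b_i - b_j) ** 2) / (2.0 * theta_back ** 2)
    ))

# Two nearby points with similar backscatter vs. a distant pair.
near = pairwise_affinity(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                         np.array([0.50]), np.array([0.55]), 10.0, 1.0)
far = pairwise_affinity(np.array([0.0, 0.0]), np.array([100.0, 0.0]),
                        np.array([0.50]), np.array([0.55]), 10.0, 1.0)
```

A GMM scores each location independently from its multifrequency backscatter; the CRF adds this pairwise term, which is what makes it less sensitive to the number and placement of bed observations.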



Data Science ◽  
2021 ◽  
pp. 1-21
Author(s):  
Kushal Veer Singh ◽  
Ajay Kumar Verma ◽  
Lovekesh Vig

Capturing data in the form of networks is becoming an increasingly popular approach for modeling, analyzing and visualizing complex phenomena, to understand the important properties of the underlying complex processes. Access to many large-scale network datasets is restricted due to privacy and security concerns. Also, for several applications (such as functional connectivity networks), generating large-scale real data is expensive. For these reasons, there is a growing need for advanced mathematical and statistical models (also called generative models) that can account for the structure of these large-scale networks, without having to materialize them in the real world. The objective is to provide a comprehensible description of the network properties and to be able to infer previously unobserved properties. Various models have been developed by researchers, which generate synthetic networks that adhere to the structural properties of real networks. However, the selection of the appropriate generative model for a given real-world network remains an important challenge. In this paper, we investigate this problem and provide a novel technique (named TripletFit) for model selection (or network classification) and estimation of structural similarities of complex networks. The goal of network model selection is to select a generative model that is able to generate a structurally similar synthetic network for a given real-world (target) network. We consider six outstanding generative models as the candidate models. The existing model selection methods mostly suffer from sensitivity to network perturbations, dependency on the size of the networks, and low accuracy.
To overcome these limitations, we considered a broad array of network features, with the aim of representing different structural aspects of the network, and employed deep learning techniques such as a deep triplet network architecture and a simple feed-forward network for model selection and estimation of structural similarities of complex networks. Our proposed method outperforms existing methods with respect to accuracy, noise tolerance, and size independence on a number of gold-standard data sets used in previous studies.
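The deep triplet network mentioned above is trained so that a network's embedding lies closer to structurally similar networks (the positive) than to dissimilar ones (the negative). The loss itself is simple; a sketch on ready-made embeddings (feature extraction omitted; all values illustrative):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet objective: penalize the anchor embedding
    unless it is at least `margin` closer to the positive than to the
    negative."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

anchor = np.zeros(3)
good = triplet_loss(anchor, np.zeros(3), np.array([5.0, 0.0, 0.0]))   # well separated
bad = triplet_loss(anchor, np.zeros(3), np.array([0.1, 0.0, 0.0]))    # negative too close
```

Once trained, model selection reduces to finding which candidate generative model's synthetic networks embed nearest to the target network.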


2020 ◽  
Vol 16 (6) ◽  
pp. 3721-3730 ◽  
Author(s):  
Xiaofeng Yuan ◽  
Jiao Zhou ◽  
Biao Huang ◽  
Yalin Wang ◽  
Chunhua Yang ◽  
...  
