A Generative Model for Correlated Graph Signals

Mathematics ◽  
2021 ◽  
Vol 9 (23) ◽  
pp. 3078
Author(s):  
Pavel Loskot

A graph signal is a random vector with a partially known statistical description. The available observations are usually sufficient to determine the marginal distributions of the graph node variables and their pairwise correlations, which represent the graph edges. However, the curse of dimensionality often prevents estimating the full joint distribution of all variables from the available observations. This paper introduces a computationally efficient generative model for sampling from arbitrary but known marginal distributions with defined pairwise correlations. Numerical experiments show that the proposed generative model is generally accurate for correlation coefficients with magnitudes up to about 0.3, whilst larger correlations can be obtained at the cost of distribution approximation accuracy. Generative models of graph signals can also be used to sample from multivariate distributions for which closed-form expressions are unknown or too complex.
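
The abstract does not spell out the construction, but a standard way to sample a vector with prescribed marginals and pairwise correlations is a Gaussian-copula (NORTA-style) transform; the sketch below illustrates that generic approach and should not be read as the paper's exact model:

```python
# Gaussian-copula (NORTA-style) sketch: draw correlated Gaussians, map to
# uniforms with the normal CDF, then to the target marginals with their
# inverse CDFs. Illustrative only, not necessarily the paper's model.
import numpy as np
from scipy import stats

def sample_correlated(marginals, corr, n_samples, seed=None):
    """marginals: list of frozen scipy.stats distributions (one per node).
    corr: target correlation matrix for the latent Gaussian vector."""
    rng = np.random.default_rng(seed)
    d = len(marginals)
    # Correlated standard normals via the Cholesky factor of corr.
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n_samples, d)) @ L.T
    # Probability integral transform: normal CDF, then inverse marginal CDF.
    u = stats.norm.cdf(z)
    return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

# Example: exponential and uniform marginals with moderate correlation.
marginals = [stats.expon(scale=2.0), stats.uniform(0, 1)]
corr = np.array([[1.0, 0.3], [0.3, 1.0]])
x = sample_correlated(marginals, corr, n_samples=10_000, seed=0)
print(np.corrcoef(x, rowvar=False))  # off-diagonal roughly 0.3
```

The Pearson correlation of the transformed variables is generally somewhat smaller in magnitude than the latent Gaussian correlation, which is in the spirit of the accuracy trade-off the abstract reports for larger correlation magnitudes.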

Author(s):  
Masoumeh Zareapoor ◽  
Jie Yang

Image-to-image translation aims to learn a mapping from a source domain to a target domain. Three main challenges are associated with this problem and need to be addressed: the lack of paired datasets, multimodality, and diversity. Convolutional neural networks (CNNs), despite their strong performance on many computer vision tasks, fail to capture the hierarchy of spatial relationships between the different parts of an object and thus do not form the ideal representative model we are looking for. This article presents a new variation of generative models that aims to remedy this problem. We use a trainable transformer that explicitly allows the spatial manipulation of data during training. This differentiable module can be augmented into the convolutional layers of the generative model and allows the generated distributions to be freely altered for image-to-image translation. To reap the benefits of the proposed module within the generative model, our architecture incorporates a new loss function that facilitates effective end-to-end generative learning for image-to-image translation. The proposed model is evaluated through comprehensive experiments on image synthesis and image-to-image translation, along with comparisons with several state-of-the-art algorithms.
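
A minimal PyTorch sketch of a differentiable spatial-manipulation module of the kind described, which can be dropped between convolutional layers of a generator; the layer sizes and the affine parameterization are illustrative assumptions, not the authors' exact architecture:

```python
# Differentiable spatial transformer block that can sit between
# convolutional layers of a generator. Shapes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Localization network regresses a 2x3 affine matrix per image.
        self.loc = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=5, stride=2, padding=2),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(16 * 4 * 4, 6),
        )
        # Initialize to the identity transform so training starts stable.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

stn = SpatialTransformer(channels=64)
y = stn(torch.randn(2, 64, 32, 32))  # same shape out, spatially warped
```

Initializing the localization layer to the identity is the usual design choice for such modules: the generator behaves like a plain CNN at the start of training and learns non-trivial warps gradually.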


2013 ◽  
Vol 4 (2) ◽  
pp. 110-117
Author(s):  
Dennis Collentine ◽  
Holger Johnsson

Current international agreements call for a significant reduction of nitrogen loads to the Baltic Sea. New measures to reduce nitrogen loads from the agricultural sector, and an increased focus on cost efficiency, will be needed to meet reduction targets. For policy design and evaluation it is important to understand the impact of weather on the efficiency of abatement measures. One newly proposed policy is the use of crop permits based on weather-normalized average leaching. This paper describes the use of the Spearman method to determine the efficiency of this policy under annual weather variation. The Spearman correlation coefficients obtained in the study indicate that using average leaching for individual crops on specific soil types to calculate crop permit requirements is an efficient policy. The Spearman method is demonstrated to be a simple, useful tool for evaluating the impact of weather and is recommended for use in further studies.
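
For readers unfamiliar with the method, a minimal sketch of the Spearman rank-correlation computation underlying the policy evaluation, using placeholder leaching values rather than the study's data:

```python
# Spearman rank-correlation check in the spirit described above: compare
# long-run average leaching per crop/soil combination with leaching in an
# individual weather year. The numbers here are placeholders.
from scipy.stats import spearmanr

avg_leaching  = [12.1, 18.4, 9.7, 25.3, 14.0]   # long-run averages
year_leaching = [13.5, 17.2, 10.1, 27.9, 12.8]  # one specific year

rho, p_value = spearmanr(avg_leaching, year_leaching)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# rho near 1 means the ranking of crops by leaching is stable across
# weather years, supporting permits based on weather-normalized averages.
```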


2019 ◽  
Vol 2019 (4) ◽  
pp. 232-249 ◽  
Author(s):  
Benjamin Hilprecht ◽  
Martin Härterich ◽  
Daniel Bernau

Abstract We present two information leakage attacks that outperform previous work on membership inference against generative models. The first attack allows membership inference without assumptions on the type of the generative model. Contrary to previous evaluation metrics for generative models, such as Kernel Density Estimation, it considers only samples of the model that are close to training data records. The second attack specifically targets Variational Autoencoders, achieving high membership inference accuracy. Furthermore, previous work mostly considers membership inference adversaries who perform single-record membership inference. We argue for considering regulatory actors who perform set membership inference to identify the use of specific datasets for training. The attacks are evaluated on two generative model architectures, Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), trained on standard image datasets. Our results show that the two attacks yield success rates superior to previous work on most datasets while making only very mild assumptions. We envision the two attacks, in combination with the membership inference attack type formalization, as especially useful, for example, to enforce data privacy standards and to automatically assess model quality in machine-learning-as-a-service setups. In practice, our work motivates the use of GANs, since they prove less vulnerable to information leakage attacks while producing detailed samples.
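
As a concrete illustration of the first, model-agnostic attack described above, the following sketch scores a candidate record by the fraction of generated samples falling within a small distance of it; the Euclidean metric, the epsilon threshold, and all names are illustrative assumptions, not the authors' exact procedure:

```python
# Model-agnostic membership-inference score in the spirit of the first
# attack: a candidate record is scored by how many generated samples land
# within an epsilon-ball around it. Metric and threshold are assumptions.
import numpy as np

def membership_score(candidate, generated, eps):
    """Fraction of generated samples within eps of the candidate record."""
    d = np.linalg.norm(generated - candidate, axis=1)
    return np.mean(d < eps)

def set_membership_score(candidates, generated, eps):
    """Average per-record score, for the set membership inference setting."""
    return np.mean([membership_score(c, generated, eps) for c in candidates])

rng = np.random.default_rng(0)
samples = rng.normal(size=(10_000, 16))  # stand-in for generator output
record = rng.normal(size=16)             # candidate record
print(membership_score(record, samples, eps=4.0))
# Records with unusually high scores are flagged as likely training members.
```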


2020 ◽  
Vol 34 (10) ◽  
pp. 13869-13870
Author(s):  
Yijing Liu ◽  
Shuyu Lin ◽  
Ronald Clark

Variational autoencoders (VAEs) have been a successful approach to learning meaningful representations of data in an unsupervised manner. However, suboptimal representations are often learned because the approximate inference model fails to match the true posterior of the generative model, i.e. an inconsistency exists between the learnt inference and generative models. In this paper, we introduce a novel consistency loss that directly requires the encoding of the reconstructed data point to match the encoding of the original data, leading to better representations. Through experiments on MNIST and Fashion MNIST, we demonstrate the existence of the inconsistency in VAE learning and that our method can effectively reduce such inconsistency.
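
A minimal sketch of what such a consistency term could look like in a VAE training step, assuming an encoder that returns the posterior mean and log-variance; penalizing the L2 distance between the re-encoded reconstruction and the original posterior mean is an illustrative choice, not necessarily the paper's exact loss:

```python
# VAE loss with an added consistency term that pushes the encoding of the
# reconstruction toward the encoding of the original input. The L2 form
# and the detached target are assumptions for illustration.
import torch
import torch.nn.functional as F

def vae_consistency_loss(encoder, decoder, x, beta=1.0, lam=1.0):
    mu, logvar = encoder(x)
    # Reparameterization trick: sample z from the approximate posterior.
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    x_rec = decoder(z)

    recon = F.mse_loss(x_rec, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

    # Consistency: re-encode the reconstruction and match the original code.
    mu_rec, _ = encoder(x_rec)
    consistency = F.mse_loss(mu_rec, mu.detach(), reduction="sum")

    return recon + beta * kl + lam * consistency
```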


2019 ◽  
Vol 294 ◽  
pp. 01013 ◽  
Author(s):  
Mykola Karnaukh ◽  
Dmitriy Muzylyov ◽  
Natalya Shramenko

The paper addresses a topical scientific and practical problem: expanding the fuel base of transport means by using biodiesel fuel in the form of ethyl esters made from rapeseed, sunflower, and soybean oils. Choosing the optimal blend composition of diesel and biodiesel for the relevant operating conditions helps make transport companies energy-independent of mineral hydrocarbons, reduces the anthropogenic influence on the environment, and improves the environmental safety of transport. The research offers a new technological model for the production of biodiesel, which improves quality, reduces the cost of biodiesel, and reduces its negative impact on the elements of the vehicle fuel system. The reliability of fuel system elements is calculated. Mathematical expressions were obtained to determine the probability of failure-free operation of the fuel system and the probability of failure of its elements during operation on various fuel mixes. An assessment of the economic efficiency of using biodiesel as a fuel for vehicles was made.
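
The abstract does not reproduce the reliability expressions, but under the common assumption of exponentially distributed element lifetimes in a series-connected fuel system, probabilities of failure-free operation take the form sketched below; the failure rates are placeholders, not the paper's measured values:

```python
# Failure-free-operation probabilities under the textbook assumption of
# constant failure rates (exponential lifetimes) and fuel-system elements
# in series. Rates are placeholders, not the paper's measured values.
import math

def element_reliability(rate, t):
    """P(element survives to time t) with constant failure rate."""
    return math.exp(-rate * t)

def system_reliability(rates, t):
    """Series system: every element must survive to time t."""
    p = 1.0
    for rate in rates:
        p *= element_reliability(rate, t)
    return p

rates = [1e-5, 2e-5, 5e-6]  # failures per hour (placeholders)
t = 1000.0                  # operating hours
print(system_reliability(rates, t))      # P(no failure by t)
print(1 - system_reliability(rates, t))  # P(at least one failure by t)
```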


2020 ◽  
Vol 34 (04) ◽  
pp. 3397-3404 ◽  
Author(s):  
Oishik Chatterjee ◽  
Ganesh Ramakrishnan ◽  
Sunita Sarawagi

Scarcity of labeled data is a bottleneck for supervised learning models. A paradigm that has evolved for dealing with this problem is data programming. The existing data programming paradigm allows human supervision to be provided as a set of discrete labeling functions (LFs) that output possibly noisy labels for input instances, together with a generative model for consolidating the weak labels. We enhance and generalize this paradigm by supporting functions that output a continuous score (instead of a hard label) that noisily correlates with the true labels. We show across five applications that continuous LFs are more natural to program and lead to improved recall. We also show that the accuracy of existing generative models is unstable with respect to initialization, training epochs, and learning rates. We give the data programmer control over the training process through intuitive quality guides provided with each LF, and we propose an elegant method of incorporating these guides into the generative model. Our overall method, called CAGE, makes the data programming paradigm more reliable than approaches based on initialization tricks, sign penalties, or soft-accuracy constraints.
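
As an illustration of the continuous-LF idea, the sketch below emits a class together with a continuous similarity score and a programmer-supplied quality guide; the TF-IDF similarity, the seed phrases, and all names are illustrative assumptions, not the CAGE implementation:

```python
# Continuous labeling function in the data-programming style described
# above: instead of a hard label, the LF emits a class plus a continuous
# score that noisily correlates with correctness. The quality guide `q`
# encodes the programmer's prior on this LF's accuracy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

SEED_SPAM = ["win a free prize now", "claim your reward today"]
vectorizer = TfidfVectorizer().fit(SEED_SPAM)
seed_vecs = vectorizer.transform(SEED_SPAM)

def lf_spam_similarity(text, q=0.85):
    """Continuous LF: label SPAM with a similarity score in [0, 1]."""
    score = cosine_similarity(vectorizer.transform([text]), seed_vecs).max()
    return ("SPAM", float(score), q)

print(lf_spam_similarity("free prize! claim now"))
```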


2021 ◽  
Vol 118 (16) ◽  
pp. e2020324118
Author(s):  
Biwei Dai ◽  
Uroš Seljak

The goal of generative models is to learn the intricate relations within the data in order to create new simulated data, but current approaches fail in very high dimensions. When the true data-generating process is based on physical processes, these impose symmetries and constraints, and the generative model can be created by learning an effective description of the underlying physics, which enables scaling to very high dimensions. In this work, we propose Lagrangian deep learning (LDL) for this purpose, applying it to learn the outputs of cosmological hydrodynamical simulations. The model uses layers of Lagrangian displacements of the particles describing the observables to learn the effective physical laws. The displacements are modeled as the gradient of an effective potential, which explicitly satisfies translational and rotational invariance. The total number of learned parameters is only of order 10, and they can be viewed as effective theory parameters. We combine the fast particle mesh (FastPM) N-body solver with LDL and apply it to a wide range of cosmological outputs, from dark matter to stellar maps, gas density, and temperature. The computational cost of LDL is nearly four orders of magnitude lower than that of the full hydrodynamical simulations, yet it outperforms them at the same resolution. We achieve this with only of order 10 layers from the initial conditions to the final output, in contrast to typical cosmological simulations with thousands of time steps. This opens up the possibility of analyzing cosmological observations entirely within this framework, without the need for large dark-matter simulations.
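
The abstract pins down the key structural choice: each layer displaces particles along the gradient of a scalar effective potential. A sketch of that update in generic notation (the step size \(\alpha_l\), the learned function \(f_l\) of the density field \(\delta\), and the smoothing operator \(\hat{O}\) are placeholders suggested by the abstract, not the paper's exact definitions):

\[
\mathbf{x}_i^{(l+1)} \;=\; \mathbf{x}_i^{(l)} \;+\; \alpha_l\, \nabla \phi_l\!\left(\mathbf{x}_i^{(l)}\right),
\qquad
\phi_l \;=\; \hat{O}\, f_l(\delta).
\]

Because each displacement is the gradient of a scalar potential built from the density field alone, translational and rotational invariance hold by construction, and the handful of parameters in \(\alpha_l\) and \(f_l\) play the role of effective theory parameters.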


2018 ◽  
Author(s):  
Jan H. Jensen

This paper presents a comparison of a graph-based genetic algorithm (GB-GA) and machine learning (ML) results for the optimisation of logP values with a constraint on synthetic accessibility, and shows that the GA is as good as or better than the ML approaches for this particular property. The molecules found by the GB-GA bear little resemblance to the molecules used to construct the initial mating pool, indicating that the GB-GA approach can traverse a relatively large distance in chemical space using relatively few (50) generations. The paper also introduces a new non-ML graph-based generative model (GB-GM) that can be parameterized using very small data sets and combined with a Monte Carlo tree search (MCTS) algorithm. The results are comparable to previously published results (Sci. Technol. Adv. Mater. 2017, 18, 972-976) using a recurrent neural network (RNN) generative model, while the GB-GM-based method is orders of magnitude faster. The MCTS results seem more dependent on the composition of the training set than the GA approach for this particular property. Our results suggest that the performance of new ML-based generative models should be compared to more traditional, and often simpler, approaches such as the GA.
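
For readers unfamiliar with the GA skeleton involved, the sketch below shows the generic loop on bit strings so that it runs standalone; in the GB-GA the genome is a molecular graph, crossover and mutation act on graphs, and the fitness is the logP score penalized for synthetic accessibility:

```python
# Generic genetic-algorithm skeleton of the kind the GB-GA uses, shown on
# bit strings for self-containment; in the paper the genome is a molecular
# graph and the fitness is penalized logP.
import random

def fitness(genome):            # stands in for penalized logP
    return sum(genome)

def crossover(a, b):            # single-point crossover
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):  # flip bits with small probability
    return [g ^ 1 if random.random() < rate else g for g in genome]

def run_ga(pool, generations=50):
    population = list(pool)
    n = len(population)
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: n // 2]          # keep the fittest half
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(n - len(parents))]
        population = parents + children
    return max(population, key=fitness)

pool = [[random.randint(0, 1) for _ in range(32)] for _ in range(20)]
print(fitness(run_ga(pool)))
```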


2021 ◽  
Author(s):  
Henning Tiedemann ◽  
Yaniv Morgenstern ◽  
Filipp Schmidt ◽  
Roland W. Fleming

Humans have the striking ability to learn and generalize new visual concepts from just a single exemplar. We suggest that when presented with a novel object, observers identify its significant features and infer a generative model of its shape, allowing them to mentally synthesize plausible variants. To test this, we showed participants abstract 2D shapes ("Exemplars") and asked them to draw new objects ("Variations") belonging to the same class. We show that this procedure created genuine novel categories. In line with our hypothesis, particular features of each Exemplar were preserved in its Variations and there was striking agreement between participants about which shape features were most distinctive. Also, we show that strategies to create Variations were strongly driven by part structure: new objects typically modified individual parts (e.g., limbs) of the Exemplar, often preserving part order, sometimes altering it. Together, our findings suggest that sophisticated internal generative models are key to how humans analyze and generalize from single exemplars.


2018 ◽  
Vol 8 (1) ◽  
pp. 10
Author(s):  
Babajide Saheed Kosemani ◽  
A. Isaac Bamgboye

This study considers the economic analysis of input energy in cassava production. Farms were surveyed to collect data on fuel, natural gas, fertilizer, pesticides, and other chemicals used for cassava production. The study areas were Oyo, Ogun, Osun, and Kwara States of Nigeria. Data on the cost of input resources in all the selected farms during cassava production, from land preparation to transportation to market or home, were obtained using structured questionnaires and oral interviews. Mathematical expressions were developed to evaluate the cost of each of the defined unit operations, and the costs incurred were then determined. The total cost of producing one hectare of cassava was N82,055, and the analysis revealed a profit of N123,745 per hectare. The benefit-cost ratio was 2.50, which is greater than 1.0, indicating that cassava production is economically feasible.
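
A quick arithmetic check of the figures quoted above (amounts in Naira): gross revenue is production cost plus profit, and the benefit-cost ratio is revenue divided by cost:

```python
# Worked check of the benefit-cost ratio (amounts in Naira per hectare).
cost = 82_055
profit = 123_745
revenue = cost + profit   # 205,800
print(revenue / cost)     # ~2.51, consistent with the reported 2.50
```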

