Deep Composition of Tensor-Trains Using Squared Inverse Rosenblatt Transports

Author(s): Tiangang Cui, Sergey Dolgov

Abstract: Characterising intractable high-dimensional random variables is one of the fundamental challenges in stochastic computation. The recent surge of transport maps offers a mathematical foundation and new insights for tackling this challenge by coupling intractable random variables with tractable reference random variables. This paper generalises the functional tensor-train approximation of the inverse Rosenblatt transport recently developed by Dolgov et al. (Stat Comput 30:603–625, 2020) to a wide class of high-dimensional non-negative functions, such as unnormalised probability density functions. First, we extend the inverse Rosenblatt transform to enable the transport to general reference measures other than the uniform measure. We develop an efficient procedure to compute this transport from a squared tensor-train decomposition which preserves the monotonicity. More crucially, we integrate the proposed order-preserving functional tensor-train transport into a nested variable transformation framework inspired by the layered structure of deep neural networks. The resulting deep inverse Rosenblatt transport significantly expands the capability of tensor approximations and transport maps to random variables with complicated nonlinear interactions and concentrated density functions. We demonstrate the efficiency of the proposed approach on a range of applications in statistical learning and uncertainty quantification, including parameter estimation for dynamical systems and inverse problems constrained by partial differential equations.
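As a concrete illustration of the underlying mechanism, the following is a minimal sketch of a (non-tensorised) inverse Rosenblatt transport on a dense 2D grid: uniform reference samples are pushed to samples of an unnormalised density by inverting one conditional CDF per coordinate. The grid construction and all names are illustrative assumptions; the paper's contribution is to replace the dense grid with a squared functional tensor-train so that the map stays monotone and scales to high dimensions.

```python
# Illustrative sketch only: dense-grid Rosenblatt transport in 2D.
import numpy as np

def rosenblatt_2d(unnorm_pdf, grid, n_samples=1000, seed=0):
    """Map uniform reference samples to samples of an unnormalised
    2D density by inverting one conditional CDF per coordinate."""
    rng = np.random.default_rng(seed)
    x = grid
    P = unnorm_pdf(x[:, None], x[None, :])   # density on the tensor grid
    u = rng.uniform(size=(n_samples, 2))     # uniform reference samples

    # Coordinate 1: marginalise out x2, then invert the marginal CDF.
    m1 = P.sum(axis=1)
    c1 = np.cumsum(m1)
    c1 /= c1[-1]
    x1 = np.interp(u[:, 0], c1, x)

    # Coordinate 2: invert the conditional CDF of x2 given each sampled x1.
    out = np.empty((n_samples, 2))
    out[:, 0] = x1
    for k in range(n_samples):
        i = min(np.searchsorted(x, x1[k]), len(x) - 1)
        c2 = np.cumsum(P[i])
        c2 /= c2[-1]
        out[k, 1] = np.interp(u[k, 1], c2, x)
    return out

# Example: samples from a banana-shaped unnormalised density.
banana = lambda x1, x2: np.exp(-0.5 * (x1**2 + 10.0 * (x2 - 0.1 * x1**2)**2))
samples = rosenblatt_2d(banana, np.linspace(-4.0, 4.0, 400))
```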

1987, Vol. 19 (3), pp. 632–651
Author(s): Ushio Sumita, Yasushi Masuda

We consider a class of functions on [0, ∞), denoted by Ω, having Laplace transforms with only negative zeros and poles. Of special interest is the class Ω+ of probability density functions in Ω. Simple and useful necessary and sufficient conditions are given for f ∈ Ω to belong to Ω+. The class Ω+ contains many classes of great importance, such as mixtures of n independent exponential random variables (CMn), sums of n independent exponential random variables (PF∗n), sums of two independent random variables, one in CMr and the other in PF∗1 (CMPFn with n = r + 1), and sums of independent random variables each in some CMn (SCM). Characterization theorems for these classes are given in terms of the zeros and poles of their Laplace transforms. The prevalence of these classes in applied probability models of practical importance is demonstrated. In particular, sufficient conditions are given for complete monotonicity and unimodality of modified renewal densities.
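As an illustration of the pole-zero characterisation (notation assumed for this sketch, not taken from the paper), the two simplest members of Ω+ already show the pattern:

```latex
% Convolution of n independent exponentials, f in PF*_n:
\[
  \hat f(s) = \prod_{j=1}^{n} \frac{\lambda_j}{s + \lambda_j},
  \qquad \lambda_j > 0,
\]
% poles at s = -\lambda_j and no zeros at all.

% Mixture of n exponentials with positive weights, f in CM_n:
\[
  \hat f(s) = \sum_{j=1}^{n} p_j \, \frac{\lambda_j}{s + \lambda_j},
  \qquad p_j > 0, \quad \sum_{j=1}^{n} p_j = 1,
\]
% the same n negative poles, plus n - 1 real negative zeros
% interlacing them, so all zeros and poles are negative.
```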


2020, Vol. 32 (22), pp. 17077–17095
Author(s): Stephanie Earp, Andrew Curtis

Abstract: Travel-time tomography for the velocity structure of a medium is a highly nonlinear and nonunique inverse problem. Monte Carlo methods are becoming increasingly common choices to provide probabilistic solutions to tomographic problems, but those methods are computationally expensive. Neural networks can often be used to solve highly nonlinear problems at a much lower computational cost when multiple inversions are needed from similar data types. We present the first method to perform fully nonlinear, rapid and probabilistic Bayesian inversion of travel-time data for 2D velocity maps using a mixture density network. We compare multiple methods to estimate probability density functions that represent the tomographic solution, using different sets of prior information and different training methodologies. We demonstrate the importance of prior information in such high-dimensional inverse problems due to the curse of dimensionality: unrealistically informative prior probability distributions may result in better estimates of the mean velocity structure; however, the uncertainties represented in the posterior probability density functions then contain less information than is obtained when using a less informative prior. This is illustrated by the emergence of uncertainty loops in posterior standard deviation maps when inverting travel-time data using a less informative prior, which are not observed when using networks trained on prior information that includes (unrealistic) a priori smoothness constraints in the velocity models. We show that after an expensive program of network training, repeated high-dimensional, probabilistic tomography is possible on timescales of the order of a second on a standard desktop computer.
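For readers unfamiliar with the architecture, a minimal mixture density network head and its loss can be sketched as follows, assuming PyTorch; layer sizes and names here are hypothetical, and the paper's networks, which map travel-time data to pixelated 2D velocity maps, are far larger:

```python
# Minimal MDN sketch: the network predicts a diagonal-Gaussian mixture.
import math
import torch
import torch.nn as nn

class MDN(nn.Module):
    def __init__(self, n_in, n_out, n_components=5, n_hidden=64):
        super().__init__()
        self.n_out = n_out
        self.body = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Tanh())
        self.pi = nn.Linear(n_hidden, n_components)          # mixture logits
        self.mu = nn.Linear(n_hidden, n_components * n_out)  # component means
        self.log_sigma = nn.Linear(n_hidden, n_components * n_out)

    def forward(self, x):
        h = self.body(x)
        log_pi = torch.log_softmax(self.pi(h), dim=-1)
        mu = self.mu(h).view(-1, self.pi.out_features, self.n_out)
        sigma = self.log_sigma(h).exp().view_as(mu)
        return log_pi, mu, sigma

def mdn_nll(log_pi, mu, sigma, y):
    """Negative log-likelihood of y under the predicted mixture."""
    y = y.unsqueeze(1)  # (batch, 1, n_out), broadcast over components
    log_comp = -0.5 * (((y - mu) / sigma) ** 2
                       + 2.0 * sigma.log()
                       + math.log(2.0 * math.pi)).sum(dim=-1)
    return -torch.logsumexp(log_pi + log_comp, dim=-1).mean()
```

Training minimises mdn_nll over pairs of simulated travel-time data and velocity maps drawn from the prior, which is exactly where the choice of prior information enters.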


Author(s):  
M. D. Edge

This chapter considers the rules of probability. Probabilities are non-negative, they sum to one, and the probability that either of two mutually exclusive events occurs is the sum of the probabilities of the two events. Two events are said to be independent if the probability that they both occur is the product of the probabilities that each event occurs. Bayes’ theorem is used to update probabilities on the basis of new information, and it is shown that the conditional probabilities P(A|B) and P(B|A) are not the same. Finally, the chapter discusses ways in which distributions of random variables can be described, using probability mass functions for discrete random variables and probability density functions for continuous random variables.
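A small numerical example (with hypothetical figures) makes the asymmetry between P(A|B) and P(B|A) concrete:

```python
# Toy diagnostic-test numbers (assumed, for illustration only).
p_disease = 0.01                # P(A): prior probability of disease
p_pos_given_disease = 0.95      # P(B|A): test sensitivity
p_pos_given_healthy = 0.05      # false-positive rate

# Law of total probability for P(B).
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(f"P(positive | disease) = {p_pos_given_disease:.2f}")
print(f"P(disease | positive) = {p_disease_given_pos:.2f}")  # ~0.16
```

Even with a 95% sensitive test, a positive result here implies only about a 16% chance of disease, because the prior P(A) is small.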


2021, Vol. 1 (1)
Author(s): Ryszard Snopkowski, Marta Sukiennik, Aneta Napieraj

The article presents selected issues in the field of stochastic simulation of production processes. Attention is drawn to the possibility of including, in models of this type, the risk accompanying the implementation of the processes. Probability density functions that can be used to characterize the random variables present in the model are presented. The possibility of making mistakes while creating models of this type is pointed out. Two selected examples of the use of stochastic simulation in the analysis of production processes, illustrated by a mining process, are presented.
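In the spirit of the article, a stochastic simulation of a mining production cycle can be sketched as follows; all distributions and parameters below are assumptions chosen for illustration, not values from the article:

```python
# Hedged Monte Carlo sketch: cycle times drawn from assumed densities.
import numpy as np

rng = np.random.default_rng(42)
n_cycles = 10_000

# One mining cycle = cutting + loading + haulage; each phase duration is a
# random variable with its own assumed distribution (minutes, hypothetical).
cutting = rng.normal(loc=25.0, scale=4.0, size=n_cycles).clip(min=1.0)
loading = rng.gamma(shape=3.0, scale=2.0, size=n_cycles)
haulage = rng.lognormal(mean=2.5, sigma=0.3, size=n_cycles)

cycle = cutting + loading + haulage

# A simple risk measure: probability of exceeding the planned 60 minutes.
print(f"mean cycle time  : {cycle.mean():.1f} min")
print(f"P(cycle > 60 min): {(cycle > 60).mean():.3f}")
```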

