Sequential Decomposition
Recently Published Documents


TOTAL DOCUMENTS: 49 (five years: 10)

H-INDEX: 9 (five years: 0)

2021 · Vol 82 (7)
Author(s): Linard Hoessly

Abstract: We examine chemical reaction networks (CRNs) through their associated continuous-time Markov processes. Studying the dynamics of such networks is in general hard, both analytically and by simulation. In particular, stationary distributions of stochastic reaction networks are known only in some cases. We analyze class properties of the underlying continuous-time Markov chain of CRNs under the operation of join, and give conditions under which the form of the stationary distribution of a CRN can be derived from the stationary distributions of its decomposed parts. The conditions are easy to check in examples and allow recursive application. The resulting theory enables sequential decomposition of the Markov processes and calculation of stationary distributions. Since the class of processes expressible through such networks is large and only a few assumptions are made, the principle also applies to other stochastic models. We give examples of interest from CRN theory to highlight the decomposition.
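As an illustrative aside (not taken from the paper), the simplest instance of such a product-form result is easy to check numerically: the birth-death CRN ∅ → A at rate λ and A → ∅ at rate μ has a Poisson(λ/μ) stationary distribution, and the join of two independent copies has the product of the two Poisson marginals as its stationary law. The sketch below, with hypothetical rate values, simulates one component with a Gillespie algorithm and compares time-averaged occupancies against the Poisson form.

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(0)

# Birth-death CRN: 0 -> A at rate lam; A -> 0 at rate mu per molecule.
# Its stationary distribution is Poisson(lam / mu); for the join of two
# independent copies, the stationary law is the product of the marginals.
lam, mu = 3.0, 1.0

def gillespie_occupancy(t_end):
    """Simulate the chain and return time-weighted state occupancies."""
    t, n, occ = 0.0, 0, {}
    while t < t_end:
        birth, death = lam, mu * n          # reaction propensities
        total = birth + death
        dt = rng.exponential(1.0 / total)
        occ[n] = occ.get(n, 0.0) + dt       # time spent in state n
        t += dt
        n += 1 if rng.random() < birth / total else -1
    return occ

occ = gillespie_occupancy(20_000.0)
total_time = sum(occ.values())
for k in range(8):
    simulated = occ.get(k, 0.0) / total_time
    poisson = exp(-lam / mu) * (lam / mu) ** k / factorial(k)
    print(f"P(A={k}): simulated {simulated:.4f}  vs  Poisson {poisson:.4f}")
```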


Author(s):  
Jimmy Alexander Melo Moreno

This paper examines changes in real hourly wages in Colombia during the recovery phase from March 2009 to March 2014. The starting finding is that the wage distributions at the trough look like leftward translations of the recovery distribution. To shed light on these procyclical translations, the paper performs a sequential decomposition of wage changes into 1) cyclical demand and supply factors, 2) changes in the attributes of workers, 3) changes in, and spillover effects of, the minimum wage, and 4) a residual. As the literature suggests, the paper shows that the procyclicality of real wages is associated mainly with shifts in labor demand and supply factors and with updates of skills, which are usually linked to the secular trend of the wage distribution. As a novelty, the evidence suggests a positive spillover from the monthly minimum wage onto hourly wages, explaining 25% of the divergence between the distributions.
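To make the general idea of a sequential decomposition concrete (a generic illustration, not the paper's estimator, with all names and numbers hypothetical), one can decompose the change in a distributional statistic between two periods by constructing counterfactuals that swap in one factor at a time; each factor's contribution is the increment it produces, and the increments sum to the total change by construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: log wages depend on a skill attribute x and a period-specific
# wage structure (intercept, return to skill). Values are hypothetical.
n = 20_000
x0 = rng.normal(1.0, 0.5, n)          # period-0 skill distribution
x1 = rng.normal(1.2, 0.5, n)          # period-1 skills (composition shifts up)
b0, b1 = (0.3, 0.8), (0.5, 0.9)       # period-specific wage structures

wage = lambda x, b: b[0] + b[1] * x + rng.normal(0.0, 0.1, len(x))

stat = np.mean                        # any distributional statistic works
w00 = stat(wage(x0, b0))              # period-0 wages
w10 = stat(wage(x1, b0))              # counterfactual: new skills, old structure
w11 = stat(wage(x1, b1))              # period-1 wages

composition = w10 - w00               # effect of changing worker attributes
structure   = w11 - w10               # effect of changing the wage structure
print(f"total change     : {w11 - w00:.4f}")
print(f"  composition    : {composition:.4f}")
print(f"  wage structure : {structure:.4f}")
# The sequential increments sum to the total change by construction; note
# that the attribution depends on the order in which factors are swapped in.
```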


2020
Author(s): Deepanshu Vasal, Rajesh K Mishra, Sriram Vishwanath

Proceedings · 2019 · Vol 33 (1) · pp. 18
Author(s): Scott Cameron, Hans Eggers, Steve Kroon

Existing algorithms like nested sampling and annealed importance sampling can produce accurate estimates of the marginal likelihood of a model, but tend to scale poorly to large data sets, because they must recompute the log-likelihood at each iteration by summing over the whole data set. Efficient scaling to large data sets requires algorithms that visit only small subsets (mini-batches) of the data on each iteration. To this end, we estimate the marginal likelihood via a sequential decomposition into a product of predictive distributions p(y_n | y_{<n}). Predictive distributions can be approximated efficiently through Bayesian updating using stochastic gradient Hamiltonian Monte Carlo, which approximates likelihood gradients using mini-batches. Since each data point typically contains little information compared to the whole data set, convergence to each successive posterior requires only a short burn-in phase. This approach can be viewed as a special case of sequential Monte Carlo (SMC) with a single particle, but differs from typical SMC methods in that it uses stochastic gradients. We illustrate how this approach scales favourably to large data sets with some simple models.
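As a sanity-check sketch (assuming a conjugate normal model with known noise variance, so the predictives are analytic rather than SGHMC-approximated as in the paper), the identity log p(y) = Σ_n log p(y_n | y_{<n}) can be verified numerically against the joint marginal likelihood computed directly:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(2)

# Model: theta ~ N(mu0, tau0^2); y_i | theta ~ N(theta, sigma^2), i.i.d.
mu0, tau0, sigma = 0.0, 1.0, 0.5
y = rng.normal(0.3, sigma, size=50)

# Sequential decomposition: log p(y) = sum_n log p(y_n | y_{<n}), where
# each predictive is N(mu_post, tau_post^2 + sigma^2) under the current
# posterior over theta.
log_ml_seq, mu, tau2 = 0.0, mu0, tau0**2
for yn in y:
    log_ml_seq += norm.logpdf(yn, loc=mu, scale=np.sqrt(tau2 + sigma**2))
    prec = 1.0 / tau2 + 1.0 / sigma**2      # conjugate Bayesian update
    mu = (mu / tau2 + yn / sigma**2) / prec
    tau2 = 1.0 / prec

# Direct joint marginal: y ~ N(mu0 * 1, sigma^2 * I + tau0^2 * 11^T),
# since the shared theta induces equal covariance between observations.
n = len(y)
cov = sigma**2 * np.eye(n) + tau0**2 * np.ones((n, n))
log_ml_joint = multivariate_normal.logpdf(y, mean=np.full(n, mu0), cov=cov)

print(log_ml_seq, log_ml_joint)   # the two agree up to floating-point error
```

In the paper's setting the predictives are not analytic, so each p(y_n | y_{<n}) would instead be estimated by averaging the likelihood of y_n over posterior samples drawn with stochastic gradient HMC.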

