Computationally Efficient Estimation of the Spectral Gap of a Markov Chain

Author(s):
Richard Combes
Mikael Touati
2019, Vol 2019, pp. 1-14
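
The listing gives only the title and publication metadata for this article, with no abstract. For orientation, the spectral gap of a finite ergodic Markov chain is one minus the modulus of the second-largest eigenvalue of its transition matrix. The sketch below is a naive plug-in estimate computed from a single simulated trajectory; it is not the computationally efficient estimator proposed by the authors, and all names and numbers in it are illustrative.

```python
import numpy as np

def empirical_transition_matrix(trajectory, n_states):
    """Plug-in estimate of the transition matrix from one observed trajectory."""
    counts = np.zeros((n_states, n_states))
    for s, s_next in zip(trajectory[:-1], trajectory[1:]):
        counts[s, s_next] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # avoid dividing by zero for unvisited states
    return counts / row_sums

def spectral_gap(P):
    """Spectral gap: 1 minus the second-largest eigenvalue modulus of P."""
    moduli = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return 1.0 - moduli[1]

# Example: a two-state chain simulated from a known transition matrix.
rng = np.random.default_rng(0)
P_true = np.array([[0.9, 0.1],
                   [0.2, 0.8]])
traj = [0]
for _ in range(10_000):
    traj.append(rng.choice(2, p=P_true[traj[-1]]))

P_hat = empirical_transition_matrix(traj, n_states=2)
print("estimated spectral gap:", spectral_gap(P_hat))  # true gap is 1 - 0.7 = 0.3
```
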
Author(s):
S. Y. Park
C. Li
S. M. Mendoza Benavides
E. van Heugten
A. M. Staicu

We propose a novel modeling framework to study the effect of covariates of various types on the conditional distribution of the response. The methodology accommodates flexible model structure, allows for joint estimation of the quantiles at all levels, and provides a computationally efficient estimation algorithm. Extensive numerical investigation confirms good performance of the proposed method. The methodology is motivated by and applied to a lactating sow study, where the primary interest is to understand how the dynamic change of minute-by-minute temperature in the farrowing rooms within a day (functional covariate) is associated with low quantiles of feed intake of lactating sows, while accounting for other sow-specific information (vector covariate).
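
The abstract describes the framework only at a high level. As a much simplified stand-in (not the authors' joint estimation procedure), the sketch below summarizes each temperature curve by a single scalar rather than a basis or functional principal component expansion, and fits a separate quantile regression per level with statsmodels; the variable names and simulated data are assumptions for illustration only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic stand-ins: minute-by-minute temperature curves (functional covariate),
# a sow-specific scalar covariate, and feed intake driven by mean heat load.
n_sows, n_minutes = 200, 1440
temp_curves = 20 + 5 * rng.standard_normal((n_sows, 1)) + rng.standard_normal((n_sows, n_minutes))
parity = rng.integers(1, 6, size=n_sows)      # sow-specific vector covariate
heat_load = temp_curves.mean(axis=1)          # crude scalar summary of the daily curve
feed_intake = 8.0 - 0.3 * heat_load + 0.2 * parity + rng.standard_normal(n_sows)

# Design matrix: intercept, functional-covariate summary, scalar covariate.
X = sm.add_constant(np.column_stack([heat_load, parity]))

# Fit one quantile regression per level; the paper estimates all levels jointly,
# which this per-level loop does not replicate.
for tau in (0.05, 0.10, 0.25):
    fit = sm.QuantReg(feed_intake, X).fit(q=tau)
    print(f"tau={tau}: coefficients {np.round(fit.params, 3)}")
```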


Author(s):  
Vassilios Stathopoulos
Mark A. Girolami

Bayesian analysis for Markov jump processes (MJPs) is a non-trivial and challenging problem. Although exact inference is theoretically possible, it is computationally demanding, so its applicability is limited to a small class of problems. In this paper, we describe the application of Riemann manifold Markov chain Monte Carlo (MCMC) methods using an approximation to the likelihood of the MJP that is valid when the system modelled is near its thermodynamic limit. The proposed approach is both statistically and computationally efficient, and the convergence rate and mixing of the chains allow for fast MCMC inference. The methodology is evaluated using numerical simulations on two problems from chemical kinetics and one from systems biology.
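
As a rough illustration of the sampler family referred to here, and not the authors' implementation or their approximate MJP likelihood, the sketch below runs a simplified manifold MALA with a fixed metric G on a toy two-dimensional Gaussian target. In the full Riemann manifold method, G(θ) is position-dependent (for example the expected Fisher information of the approximate likelihood), which adds extra terms to the proposal.

```python
import numpy as np

def log_post(theta):
    """Toy log-posterior standing in for the approximate MJP likelihood."""
    return -0.5 * theta @ np.diag([1.0, 10.0]) @ theta

def grad_log_post(theta):
    return -np.diag([1.0, 10.0]) @ theta

# Fixed metric G (here the true precision), giving a simplified manifold MALA;
# the full method would recompute G(theta) at every state.
G = np.diag([1.0, 10.0])
G_inv = np.linalg.inv(G)
L = np.linalg.cholesky(G_inv)

def proposal_mean(theta, eps):
    return theta + 0.5 * eps**2 * G_inv @ grad_log_post(theta)

def log_q(x_to, x_from, eps):
    """Log density (up to a constant) of N(proposal_mean(x_from), eps^2 * G_inv) at x_to."""
    diff = x_to - proposal_mean(x_from, eps)
    return -0.5 / eps**2 * diff @ G @ diff

rng = np.random.default_rng(2)
eps, theta = 0.8, np.array([3.0, 3.0])
samples, accepted = [], 0
for _ in range(5000):
    prop = proposal_mean(theta, eps) + eps * L @ rng.standard_normal(2)
    log_alpha = (log_post(prop) + log_q(theta, prop, eps)
                 - log_post(theta) - log_q(prop, theta, eps))
    if np.log(rng.random()) < log_alpha:
        theta, accepted = prop, accepted + 1
    samples.append(theta)

print("acceptance rate:", accepted / 5000)
print("posterior mean estimate:", np.mean(samples, axis=0))
```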


Author(s):  
Javad Sovizi
Suren Kumar
Venkat Krovi

Abstract: We present a computationally efficient approach for the intra-operative update of the feedback control policy for a steerable needle in the presence of motion uncertainty. Solving the dynamic programming (DP) equations to obtain the optimal control policy is difficult or intractable for nonlinear problems such as steering a flexible needle in soft tissue. We use the approximating Markov chain method to replace the continuous (and controlled) process with a discrete, locally consistent counterpart. This allows the resulting DP problem to be solved by a linear programming (LP) approach, which significantly reduces the computational demand. A concrete example of two-dimensional (2D) needle steering is considered to investigate the effectiveness of the LP method for both deterministic and stochastic systems. We compare the performance of the LP-based policy with results obtained through a more computationally demanding algorithm, iterative policy-space approximation. Finally, we investigate the reliability of the LP-based policy under motion and parametric uncertainties, as well as the effect of the insertion point/angle on the probability of success.
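
To make the LP approach to the DP equations concrete, the sketch below solves a tiny discounted-cost MDP via the standard linear-programming formulation using scipy. The transition probabilities and costs are toy numbers, not the needle-steering model, and the approximating-chain discretization is assumed to have already produced the finite state and action sets.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny discounted-cost MDP standing in for the locally consistent chain obtained
# by discretizing the needle dynamics (toy numbers for illustration only).
n_states, n_actions, gamma = 4, 2, 0.95
rng = np.random.default_rng(3)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
cost = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

# LP form of the DP equations: maximize sum_s V(s)
# subject to V(s) <= cost(s, a) + gamma * sum_s' P(s'|s, a) V(s') for all (s, a).
A_ub, b_ub = [], []
for s in range(n_states):
    for a in range(n_actions):
        row = -gamma * P[s, a]
        row[s] += 1.0                      # V(s) - gamma * P(s, a, :) @ V
        A_ub.append(row)
        b_ub.append(cost[s, a])

# linprog minimizes, so negate the objective to maximize the sum of values.
res = linprog(c=-np.ones(n_states), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n_states)
V = res.x

# Greedy policy recovered from the optimal value function.
Q = cost + gamma * np.einsum("sap,p->sa", P, V)
policy = Q.argmin(axis=1)
print("optimal values:", np.round(V, 3))
print("greedy policy:", policy)
```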

