Markov Chain
Recently Published Documents


TOTAL DOCUMENTS

8990
(FIVE YEARS 1788)

H-INDEX

122
(FIVE YEARS 12)

2022 ◽  
Vol 41 (1) ◽  
pp. 1-15
Author(s):  
Thomas Bashford-Rogers ◽  
Luís Paulo Santos ◽  
Demetris Marnerides ◽  
Kurt Debattista

This article proposes a Markov Chain Monte Carlo (MCMC) rendering algorithm based on a family of guided transition kernels. The kernels exploit properties of ensembles of light transport paths, which are distributed according to the lighting in the scene, and use this information to make informed decisions for guiding local path sampling. Critically, our approach does not require caching distributions in world space, saving time and memory, yet it is able to make guided sampling decisions based on whole paths. We show how this can be implemented efficiently by organizing the paths in each ensemble and designing transition kernels for MCMC rendering based on a carefully chosen subset of paths from the ensemble. This algorithm is easy to parallelize and leads to reductions in variance when rendering a variety of scenes.
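The guided kernels above build on the standard Metropolis-Hastings transition rule. As a point of reference only (a generic random-walk sampler, not the paper's path-space kernels), a minimal sketch:

```python
import math
import random

def metropolis_hastings(log_target, proposal, x0, n_steps, seed=0):
    """Metropolis-Hastings chain with a symmetric transition kernel."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        y = proposal(x, rng)                     # draw from the kernel
        log_alpha = log_target(y) - log_target(x)
        if log_alpha >= 0 or rng.random() < math.exp(log_alpha):
            x = y                                # accept the move
        samples.append(x)                        # else keep current state
    return samples

# Toy target: standard normal density, known only up to a constant.
samples = metropolis_hastings(
    log_target=lambda x: -0.5 * x * x,
    proposal=lambda x, rng: x + rng.gauss(0.0, 1.0),
    x0=0.0,
    n_steps=20000,
)
mean = sum(samples) / len(samples)
```

The guided kernels in the paper replace the blind Gaussian proposal with proposals informed by an ensemble of light transport paths; the accept/reject skeleton is the same.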


Mathematics ◽  
2022 ◽  
Vol 10 (2) ◽  
pp. 251
Author(s):  
Virginia Giorno ◽  
Amelia G. Nobile

We consider a time-inhomogeneous Markov chain with a finite state space that models a system in which failures and repairs can occur at random time instants. The system starts from any state j, which may be an operating state, the failure state F, or the repair state R. Due to a failure, a transition from an operating state to F occurs, after which a repair is required, so that a transition leads to the state R. Subsequently, there is a restore phase, after which the system restarts from one of the operating states. In particular, we assume that the intensity functions of failures, repairs, and restores are proportional and that the birth-death process that models the system is a time-inhomogeneous Prendiville process.
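The operate-fail-repair-restore cycle can be sketched as a continuous-time Markov chain. The rates below are invented constants, not the paper's time-inhomogeneous Prendiville intensities:

```python
import random

# Hypothetical three-state cycle: operating -> F (failure) -> R (repair),
# then back to operating after the restore. Rates are illustrative only.
RATES = {"op": ("F", 0.5), "F": ("R", 2.0), "R": ("op", 1.0)}

def occupancy_fractions(t_end, seed=1):
    """Simulate the cycle and return the long-run fraction of time per state."""
    rng = random.Random(seed)
    t, state = 0.0, "op"
    occupancy = {"op": 0.0, "F": 0.0, "R": 0.0}
    while t < t_end:
        nxt, rate = RATES[state]
        dwell = rng.expovariate(rate)          # exponential holding time
        occupancy[state] += min(dwell, t_end - t)
        t += dwell
        state = nxt
    return {s: occ / t_end for s, occ in occupancy.items()}

frac = occupancy_fractions(10000.0)
```

With constant rates the time fractions converge to mean holding times normalized over the cycle (here 2 : 0.5 : 1); the time-inhomogeneous case replaces the constant rates with intensity functions of t.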


2022 ◽  
pp. 1-47
Author(s):  
Amarjit Budhiraja ◽  
Nicolas Fraiman ◽  
Adam Waterbury

Abstract We consider a collection of Markov chains that model the evolution of multitype biological populations. The state space of the chains is the positive orthant, and the boundary of the orthant is the absorbing state for the Markov chain and represents the extinction states of different population types. We are interested in the long-term behavior of the Markov chain away from extinction, under a small noise scaling. Under this scaling, the trajectory of the Markov process over any compact interval converges in distribution to the solution of an ordinary differential equation (ODE) evolving in the positive orthant. We study the asymptotic behavior of the quasi-stationary distributions (QSD) in this scaling regime. Our main result shows that, under suitable conditions, the limit points of the QSD are supported on the union of interior attractors of the flow determined by the ODE. We also give lower bounds on expected extinction times which scale exponentially with the system size. Results of this type have been studied by Faure and Schreiber (2014) for the case where the deterministic dynamical system obtained in the scaling limit is given by a discrete-time evolution equation and the dynamics are essentially confined to a compact space (namely, the one-step map is a bounded function). Our results extend these to a setting with an unbounded state space and continuous-time dynamics. The proofs rely on uniform large deviation results for small noise stochastic dynamical systems and methods from the theory of continuous-time dynamical systems. In general, QSD for Markov chains with absorbing states and unbounded state spaces may not exist. We study one basic family of binomial-Poisson models in the positive orthant where one can use Lyapunov function methods to establish existence of QSD and also to argue the tightness of the QSD of the scaled sequence of Markov chains. The results from the first part are then used to characterize the support of limit points of this sequence of QSD.
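The notion of a QSD can be illustrated numerically on a toy chain. The sketch below assumes a hypothetical discrete-time birth-death chain with absorption at 0 (not the paper's multitype population model): the QSD is the dominant left eigenvector of the transition matrix restricted to the surviving states, found here by power iteration with renormalization, i.e. conditioning on survival:

```python
# Toy birth-death chain on {0, 1, ..., N} with absorbing state 0.
# Illustrative parameters; upward drift since birth > death.
N = 10
birth, death = 0.4, 0.3

# Sub-stochastic matrix on the surviving states 1..N (state i at index i-1).
P = [[0.0] * N for _ in range(N)]
for i in range(1, N + 1):
    b = birth if i < N else 0.0
    if i < N:
        P[i - 1][i] = b                 # birth: i -> i+1
    if i > 1:
        P[i - 1][i - 2] = death         # death: i -> i-1
    P[i - 1][i - 1] = 1.0 - b - death   # stay put; from state 1 the death
                                        # mass is absorbed and simply lost

v = [1.0 / N] * N
for _ in range(5000):
    w = [sum(v[k] * P[k][j] for k in range(N)) for j in range(N)]
    s = sum(w)                          # one-step survival probability
    v = [x / s for x in w]              # renormalize: condition on survival
qsd, survival = v, s
```

Because the drift is upward, the QSD concentrates away from the absorbing boundary, mirroring the paper's conclusion that limit points of the QSD are supported on interior attractors.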


2022 ◽  
Vol 9 ◽  
Author(s):  
Hanqing Zhao ◽  
Marija Vucelja

We introduce an efficient nonreversible Markov chain Monte Carlo algorithm to generate self-avoiding walks with a variable endpoint. In two dimensions, the new algorithm slightly outperforms the two-move nonreversible Berretti-Sokal algorithm introduced by H. Hu, X. Chen, and Y. Deng, while for three-dimensional walks it is 3–5 times faster. The new algorithm introduces nonreversible Markov chains that obey global balance and allow for three types of elementary moves on the existing self-avoiding walk: shorten, extend, or alter the conformation without changing the length of the walk.
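For orientation, a reversible Berretti-Sokal-style endpoint sampler (the classic two-move baseline, not the paper's nonreversible three-move algorithm) can be sketched as follows; the fugacity beta and its value are illustrative:

```python
import random

# Reversible endpoint sampler for self-avoiding walks on Z^2, targeting
# weight beta**length. Moves: extend the free endpoint by one step, or
# shorten it by one step, accepted with Metropolis ratios for detailed
# balance (extend proposes one of 4 neighbors; shorten is deterministic).
STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def sample_lengths(beta, n_moves, seed=2):
    rng = random.Random(seed)
    walk = [(0, 0)]                    # walk[0] is the fixed origin
    occupied = {(0, 0)}
    lengths = []
    for _ in range(n_moves):
        if rng.random() < 0.5:         # propose: extend
            dx, dy = rng.choice(STEPS)
            x, y = walk[-1]
            new = (x + dx, y + dy)
            if new not in occupied and rng.random() < min(1.0, 4 * beta):
                walk.append(new)
                occupied.add(new)
        elif len(walk) > 1:            # propose: shorten
            if rng.random() < min(1.0, 1.0 / (4 * beta)):
                occupied.discard(walk.pop())
        lengths.append(len(walk) - 1)
    return lengths

lengths = sample_lengths(beta=0.15, n_moves=50000)
avg_len = sum(lengths) / len(lengths)
```

The nonreversible variants break detailed balance (keeping only global balance) by, e.g., persisting in an extend or shorten direction, which is what yields the reported speedups.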


2022 ◽  
Vol 80 (1) ◽  
Author(s):  
Mustafa Al-Zoughool ◽  
Tamer Oraby ◽  
Harri Vainio ◽  
Janvier Gasana ◽  
Joseph Longenecker ◽  
...  

Abstract Background Kuwait had its first COVID-19 case in late February 2020, and by October 6, 2020 it had recorded 108,268 cases and 632 deaths. Despite implementing some of the strictest control measures, including a three-week complete lockdown, there was no sign of a declining epidemic curve. The objective of the current analyses is to determine, hypothetically, the optimal timing and duration of a full lockdown in Kuwait that would result in controlling new infections and lead to a substantial reduction in case hospitalizations. Methods The analysis was conducted using a stochastic continuous-time Markov chain (CTMC) eight-state model that depicts the transmission and spread of SARS-CoV-2. Transmission of infection occurs between individuals through social contacts at home, in schools, at work, and during other communal activities. Results The model shows that a lockdown starting 10 days before the epidemic peak and lasting 90 days is optimal, but a more realistic duration of 45 days can achieve about a 45% reduction in both new infections and case hospitalizations. Conclusions In view of the forthcoming waves of the COVID-19 pandemic anticipated in Kuwait, a correctly timed and sufficiently long lockdown represents a workable management strategy that combines the most stringent form of social distancing with the ability to significantly reduce transmissions and hospitalizations.
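The CTMC machinery behind such models can be illustrated with a Gillespie-style simulation. The sketch below uses a toy three-compartment SIR chain with invented rates, not the paper's eight-state model or any fitted Kuwaiti parameters:

```python
import random

# Minimal continuous-time Markov chain (Gillespie) simulation of a
# stochastic SIR epidemic. Two events compete: infection (rate beta*S*I/N)
# and recovery (rate gamma*I); exponential waiting times between events.
def gillespie_sir(n, i0, beta, gamma, seed=3):
    rng = random.Random(seed)
    s, i, r = n - i0, i0, 0
    t, peak_i, peak_t = 0.0, i0, 0.0
    while i > 0:
        infect = beta * s * i / n
        recover = gamma * i
        total = infect + recover
        t += rng.expovariate(total)            # time to the next event
        if rng.random() < infect / total:
            s, i = s - 1, i + 1                # infection event
        else:
            i, r = i - 1, r + 1                # recovery event
        if i > peak_i:
            peak_i, peak_t = i, t
    return r, peak_i, peak_t

final_size, peak_i, peak_t = gillespie_sir(n=1000, i0=5, beta=0.4, gamma=0.2)
```

A lockdown intervention of the kind studied in the paper would be modeled by lowering beta over a chosen time window; the timing of that window relative to peak_t is what the optimization in the paper varies.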


2022 ◽  
Author(s):  
Saumik Dana

The critical slip distance in the rate-and-state model of fault friction, used in the study of potential earthquakes, can vary wildly from micrometers to a few meters depending on the length scale of the critically stressed fault. This makes it important to construct an inversion framework that provides good estimates of the critical slip distance based purely on the acceleration observed at the seismogram. The framework is based on Bayesian inference and Markov chain Monte Carlo. The synthetic data are generated by adding noise to the acceleration output of a spring-slider-damper idealization of the rate-and-state model, which serves as the forward model.
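The inversion loop described here (synthetic data from a forward model plus noise, then MCMC over the unknown parameter) can be sketched as follows. The exponential-decay forward model is only a stand-in for the spring-slider rate-and-state solver, and all numbers are invented:

```python
import math
import random

rng = random.Random(4)
TIMES = [0.1 * k for k in range(50)]
D_TRUE, SIGMA = 0.8, 0.05              # "true" parameter and noise level

def forward(d):
    """Placeholder forward model standing in for the spring-slider solver."""
    return [math.exp(-t / d) for t in TIMES]

# Synthetic data: forward model output plus Gaussian noise.
data = [y + rng.gauss(0.0, SIGMA) for y in forward(D_TRUE)]

def log_post(d):
    """Gaussian likelihood with a flat prior on d > 0."""
    if d <= 0:
        return -math.inf
    resid = sum((y - m) ** 2 for y, m in zip(data, forward(d)))
    return -resid / (2 * SIGMA ** 2)

# Random-walk Metropolis over the unknown parameter d.
d, chain = 1.5, []
for _ in range(5000):
    prop = d + rng.gauss(0.0, 0.05)
    la = log_post(prop) - log_post(d)
    if la >= 0 or rng.random() < math.exp(la):
        d = prop
    chain.append(d)

post = chain[1000:]                    # discard burn-in
d_hat = sum(post) / len(post)          # posterior mean estimate
```

The posterior mean recovers the parameter used to generate the synthetic data; the actual framework replaces the scalar decay model with the rate-and-state spring-slider-damper system and the parameter with the critical slip distance.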

