finite state space
Recently Published Documents

TOTAL DOCUMENTS: 162 (FIVE YEARS: 17)
H-INDEX: 19 (FIVE YEARS: 0)

Mathematics, 2022, Vol. 10(2), p. 251
Author(s): Virginia Giorno, Amelia G. Nobile

We consider a time-inhomogeneous Markov chain with a finite state space which models a system in which failures and repairs can occur at random time instants. The system starts from any of its states: an operating state j, the failure state F, or the repair state R. When a failure occurs, the system moves from an operating state to F; a repair is then required, so a further transition leads to the state R. A restore phase follows, after which the system restarts from one of the operating states. In particular, we assume that the intensity functions of failures, repairs and restores are proportional, and that the birth-death process modelling the system is a time-inhomogeneous Prendiville process.
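
A minimal sketch of the kind of system described above, assuming N operating states, a single failure state F and a single repair state R, with all intensities scaled by a common time-dependent function nu(t); the state labels, rates and nu are illustrative assumptions, not the authors' specification.

```python
# Illustrative simulation of a time-inhomogeneous failure/repair chain:
# operating states 0..N-1, failure state "F", repair state "R".
# All intensities are scaled by a common time-dependent factor nu(t),
# an assumption made here only for the sake of the sketch.
import math
import random

N = 3                                   # number of operating states (assumed)

def nu(t):                              # common time modulation (assumed periodic)
    return 1.0 + 0.5 * math.sin(t)

def rates(state, t):
    """Transition rates out of `state` at time t."""
    base = nu(t)
    if state == "F":                    # failed -> under repair
        return {"R": 2.0 * base}
    if state == "R":                    # repaired -> restart from an operating state
        return {j: base / N for j in range(N)}
    out = {"F": 0.3 * base}             # operating -> failure
    if state + 1 < N:                   # birth-type move between operating states
        out[state + 1] = base
    if state - 1 >= 0:                  # death-type move between operating states
        out[state - 1] = base
    return out

def simulate(t_end=20.0, dt=1e-3, state=0):
    """Crude small-time-step simulation of the time-inhomogeneous chain."""
    t, path = 0.0, [(0.0, state)]
    while t < t_end:
        for target, r in rates(state, t).items():
            if random.random() < r * dt:     # P(jump in [t, t+dt)) ~ r*dt
                state = target
                path.append((round(t, 3), state))
                break
        t += dt
    return path

print(simulate()[:10])                  # first few recorded transitions
```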


Author(s): Alexander Aurell, René Carmona, Gökçe Dayanıklı, Mathieu Laurière

Abstract: We consider a game for a continuum of non-identical players evolving on a finite state space. Their heterogeneous interactions are represented by a graphon, which can be viewed as the limit of a dense random graph. A player's transition rates between the states depend on their control and on the strength of interaction with the other players. We develop a rigorous mathematical framework for the game and analyze Nash equilibria. We provide a sufficient condition for a Nash equilibrium and prove existence of solutions to a continuum of fully coupled forward-backward ordinary differential equations characterizing Nash equilibria. Moreover, we propose a numerical approach based on machine learning methods, and we present experimental results on different applications to compartmental models in epidemiology.
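
As a loose illustration of the forward (population) side of such a graphon game, the sketch below evolves the infection probabilities of players labelled by a grid on [0, 1] in a two-state susceptible/infected model; the graphon kernel, the rates and the frozen controls are assumptions for illustration, and the paper's Nash-equilibrium computation via coupled forward-backward ODEs is not reproduced.

```python
# Forward dynamics of a graphon-coupled two-state (S/I) population:
# the graphon W(x, y) weights how strongly player x feels the infected
# fraction at label y.  Controls are frozen; everything is illustrative.
import numpy as np

n, T, dt = 50, 10.0, 0.01               # label grid size, horizon, time step
x = (np.arange(n) + 0.5) / n            # player labels in [0, 1]
X, Y = np.meshgrid(x, x, indexing="ij")
Wmat = np.minimum(1.0, 0.3 / np.sqrt(X * Y))   # assumed graphon kernel, values in (0, 1]

beta, gamma = 0.8, 0.3                  # infection / recovery rates (assumed)
p_I = np.full(n, 0.05)                  # P(player x is infected) at time 0

for _ in range(int(T / dt)):
    pressure = Wmat @ p_I / n           # graphon-weighted infected fraction felt at x
    dp = beta * (1.0 - p_I) * pressure - gamma * p_I
    p_I = np.clip(p_I + dt * dp, 0.0, 1.0)

print("infection probability at labels 0.01, 0.5, 0.99:",
      np.round(p_I[[0, n // 2, n - 1]], 3))
```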


Author(s): Gert de Cooman

Abstract: I present a short and easy introduction to a number of basic definitions and important results from the theory of imprecise Markov chains in discrete time, with a finite state space. The approach is intuitive and graphical.
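
A minimal sketch of the backward recursion that underlies such results: the lower expectation of a gamble f(X_n) is obtained by applying the lower transition operator n times, where that operator minimises the expectation of f over a credal set of transition rows at each state; the two-state credal sets below are toy assumptions, not taken from the paper.

```python
# Lower expectations for an imprecise Markov chain on a finite state space,
# computed by iterating the lower transition operator (T_ f)(x) =
# min over the credal set at x of the expectation of f.
STATES = [0, 1]

# Credal set per state: a few extreme transition rows (probability vectors).
CREDAL = {
    0: [(0.6, 0.4), (0.8, 0.2)],
    1: [(0.3, 0.7), (0.5, 0.5)],
}

def lower_transition(f):
    """Apply the lower transition operator to a gamble f: states -> reals."""
    return {
        x: min(sum(p[y] * f[y] for y in STATES) for p in CREDAL[x])
        for x in STATES
    }

def lower_expectation(f, n, x0):
    """Lower expectation of f(X_n) given X_0 = x0, by backward recursion."""
    g = dict(f)
    for _ in range(n):
        g = lower_transition(g)
    return g[x0]

f = {0: 0.0, 1: 1.0}                 # indicator of state 1
print("lower probability of being in state 1 after 5 steps:",
      round(lower_expectation(f, 5, x0=0), 4))
# The upper expectation follows by conjugacy: -lower_expectation of -f.
```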


2021, Vol. 31(4)
Author(s): Jonas Latz

Abstract: Stochastic gradient descent is an optimisation method that combines classical gradient descent with random subsampling within the target functional. In this work, we introduce the stochastic gradient process as a continuous-time representation of stochastic gradient descent. The stochastic gradient process is a dynamical system coupled with a continuous-time Markov process living on a finite state space. The dynamical system (a gradient flow) represents the gradient descent part; the process on the finite state space represents the random subsampling. Processes of this type are, for instance, used to model clonal populations in fluctuating environments. After introducing it, we study theoretical properties of the stochastic gradient process: we show that it converges weakly to the gradient flow with respect to the full target function as the learning rate approaches zero. We give conditions under which the stochastic gradient process with constant learning rate is exponentially ergodic in the Wasserstein sense. Then we study the case where the learning rate goes to zero sufficiently slowly and the single target functions are strongly convex; in this case, the process converges weakly to the point mass concentrated at the global minimum of the full target function, indicating consistency of the method. We conclude with a discussion of discretisation strategies for the stochastic gradient process and numerical experiments.
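
A minimal sketch of the idea, under assumed quadratic sub-targets: a gradient flow whose target index switches according to a continuous-time Markov process on a finite set. Here the switching rate loosely plays the role of an inverse learning rate, and for fast switching the path tracks the gradient flow of the averaged target; the sub-targets and rates are illustrative assumptions, not the paper's experiments.

```python
# Switched gradient flow: the active sub-target f_i(theta) = 0.5*(theta - a_i)^2
# is chosen by a continuous-time Markov index process; the full target is
# their average, minimised at the mean of the a_i.
import random

A = [-1.0, 0.5, 2.0]                      # assumed sub-target minimisers

def grad(i, theta):
    return theta - A[i]

def stochastic_gradient_process(t_end=50.0, dt=1e-3, rate=5.0, theta=0.0):
    """Euler integration of the switched gradient flow.

    Larger `rate` means faster subsample switching; the path then
    approaches the gradient flow of the averaged target.
    """
    i = random.randrange(len(A))          # active subsample index
    t, next_switch = 0.0, random.expovariate(rate)
    while t < t_end:
        if t >= next_switch:              # Markov switch of the index
            i = random.randrange(len(A))
            next_switch = t + random.expovariate(rate)
        theta -= dt * grad(i, theta)      # gradient flow on the current target
        t += dt
    return theta

print("theta_T:", round(stochastic_gradient_process(), 3),
      " full-target minimiser:", sum(A) / len(A))
```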


Author(s): Krzysztof Bartoszek, Wojciech Bartoszek, Michał Krzemiński

Abstract: We consider a random dynamical system in which the deterministic dynamics are driven by a finite-state-space Markov chain. We provide a comprehensive introduction to the required mathematical apparatus and then focus on the susceptible-infected-recovered epidemiological model with random steering. Through simulations we visualize the behaviour of the system and the effect of the high-frequency limit of the driving Markov chain. We formulate some questions and conjectures of a purely theoretical nature.
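
A minimal sketch of such a randomly steered SIR system, with all parameters assumed for illustration: the transmission rate is switched by a two-state continuous-time Markov chain between a "low" and a "high" environment.

```python
# SIR dynamics with a Markov-chain-driven contact rate (all values assumed).
import random

BETA = {0: 0.1, 1: 0.5}      # transmission rate in each environment
GAMMA = 0.1                  # recovery rate
Q = 2.0                      # switching rate of the driving chain

def simulate(t_end=100.0, dt=0.01):
    s, i, r, env = 0.99, 0.01, 0.0, 0
    t, next_switch = 0.0, random.expovariate(Q)
    while t < t_end:
        if t >= next_switch:                    # environment switch
            env = 1 - env
            next_switch = t + random.expovariate(Q)
        new_inf = BETA[env] * s * i * dt        # Euler step of the SIR ODE
        new_rec = GAMMA * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        t += dt
    return s, i, r

print("final (S, I, R):", tuple(round(v, 3) for v in simulate()))
```

Increasing Q mimics the high-frequency limit of the driving chain, under which the dynamics average over the two environments.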


2021, Vol. 0(0), pp. 0
Author(s): Jin-Won Kim, Amirhossein Taghvaei, Yongxin Chen, Prashant G. Mehta

The purpose of this paper is to describe the feedback particle filter algorithm for problems where there are a large number (M) of non-interacting agents (targets) with a large number (M) of non-agent-specific observations (measurements) that originate from these agents. In its basic form, the problem is characterized by data association uncertainty, whereby the association between the observations and agents must be deduced in addition to the agent state. In this paper, the large-M limit is interpreted as a problem of collective inference. This viewpoint is used to derive the equation for the empirical distribution of the hidden agent states. A feedback particle filter (FPF) algorithm for this problem is presented and illustrated via numerical simulations. Results are presented for the Euclidean and the finite state-space cases, both in continuous-time settings. The classical FPF algorithm is shown to be the special case (with M = 1) of these more general results. The simulations help show that the algorithm well approximates the empirical distribution of the hidden states for large M.
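
For orientation, the sketch below shows only the classical M = 1 special case on a finite state space, i.e. a standard discrete-time hidden-Markov filter; the paper's collective, continuous-time feedback particle filter with data-association uncertainty is not reproduced, and the transition and emission matrices are illustrative assumptions.

```python
# Finite-state filtering in the single-agent (M = 1) case: a standard
# discrete-time HMM forward filter with assumed matrices.
import numpy as np

P = np.array([[0.9, 0.1],       # hidden-state transition matrix
              [0.2, 0.8]])
E = np.array([[0.8, 0.2],       # emission matrix: P(obs | state)
              [0.3, 0.7]])

def filter_step(belief, obs):
    """One predict-update step of the finite-state filter."""
    predicted = belief @ P                  # prediction through the chain
    posterior = predicted * E[:, obs]       # reweight by the observation likelihood
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])
for obs in [0, 0, 1, 1, 1]:                 # a short synthetic observation run
    belief = filter_step(belief, obs)
print("posterior over hidden states:", np.round(belief, 3))
```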


2020, Vol. 2020, pp. 1-10
Author(s): Changwei Nie, Mi Chen, Haiyan Liu, Wenguang Yu

In this paper, a discrete Markov-modulated risk model with delayed claims, random premium income, and a constant dividend barrier is proposed. It is assumed that the random premium income and the individual claims are affected by a Markov chain with a finite state space. The proposed model is an extension of the discrete semi-Markov risk model with random premium income and delayed claims. Explicit expressions for the total expected discounted dividends until ruin are obtained by the method of generating functions and the theory of difference equations. Finally, the effect of the related parameters on the total expected discounted dividends is shown in several numerical examples.
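
A rough Monte Carlo sketch of a model of this flavour, with all distributions and parameters assumed (the paper instead derives explicit expressions via generating functions): a Markov-modulated discrete-time surplus with random unit premiums, delayed by-claims and a constant dividend barrier, from which the expected discounted dividends until ruin are estimated.

```python
# Monte Carlo estimate of expected discounted dividends until ruin in a
# Markov-modulated discrete-time surplus model (all parameters assumed).
import random

P_ENV = [[0.9, 0.1], [0.3, 0.7]]   # environment transition matrix
P_PREMIUM = [0.9, 0.6]             # P(premium of 1 arrives | environment)
P_CLAIM = [0.1, 0.4]               # P(main claim of size 1 | environment)
P_DELAY = 0.5                      # by-claim delayed to the next period
V, B, U0 = 0.98, 5, 3              # discount factor, dividend barrier, initial surplus

def one_path(horizon=300):
    u, env, pending, disc, total = U0, 0, 0, 1.0, 0.0
    for _ in range(horizon):
        env = 0 if random.random() < P_ENV[env][0] else 1
        u += (random.random() < P_PREMIUM[env])          # random premium income
        claims = pending                                  # delayed by-claims due now
        pending = 0
        if random.random() < P_CLAIM[env]:                # main claim of size 1
            claims += 1
            if random.random() < P_DELAY:                 # its by-claim is delayed
                pending += 1
            else:                                         # or settled immediately
                claims += 1
        u -= claims
        if u < 0:                                         # ruin: stop the path
            break
        if u > B:                                         # pay out surplus above the barrier
            total += disc * (u - B)
            u = B
        disc *= V
    return total

n_paths = 5_000
est = sum(one_path() for _ in range(n_paths)) / n_paths
print("estimated expected discounted dividends:", round(est, 3))
```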

