Stopping Probabilities for Patterns in Markov Chains

2014, Vol. 51 (1), pp. 287-292
Author(s): Renato Jacob Gava, Danilo Salotti

Consider a sequence of Markov-dependent trials where each trial produces a letter of a finite alphabet. Given a collection of patterns, we look at this sequence until one of these patterns appears as a run. We show how the method of gambling teams can be employed to compute the probability that a given pattern is the first pattern to occur.
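
The gambling-teams computation itself is not reproduced here. As a rough illustration of the quantity being computed, the following sketch estimates by plain Monte Carlo simulation the probability that each pattern in a collection is the first to appear as a run of Markov-dependent letters; the function name, the example chain, and the competing patterns are all hypothetical, and the simulation is only a numerical check, not the authors' method.

```python
import random

def first_pattern_probs(P, init, patterns, n_sims=50_000, seed=0):
    """Monte Carlo estimate of the probability that each pattern in
    `patterns` is the first one to appear as a run in a sequence of
    Markov-dependent letters.

    P        : dict letter -> dict of transition probabilities
    init     : dict letter -> initial probability
    patterns : strings over the same alphabet (assumed such that no
               pattern is a proper suffix of another)
    """
    rng = random.Random(seed)
    counts = {p: 0 for p in patterns}

    def draw(dist):
        letters, weights = zip(*dist.items())
        return rng.choices(letters, weights=weights)[0]

    for _ in range(n_sims):
        seq = draw(init)                      # letters generated so far
        while True:
            hit = next((p for p in patterns if seq.endswith(p)), None)
            if hit is not None:
                counts[hit] += 1
                break
            seq += draw(P[seq[-1]])           # Markov-dependent next letter
    return {p: c / n_sims for p, c in counts.items()}

# Hypothetical two-letter chain and two competing patterns.
P = {'a': {'a': 0.6, 'b': 0.4}, 'b': {'a': 0.5, 'b': 0.5}}
init = {'a': 0.5, 'b': 0.5}
print(first_pattern_probs(P, init, ['aab', 'aba']))
```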


1988, Vol. 25 (1), pp. 106-119
Author(s): Richard Arratia, Pricilla Morris, Michael S. Waterman

A derivation of a law of large numbers for the highest-scoring matching subsequence is given. Let X_k, Y_k be i.i.d. letters from a finite alphabet S with common distribution q = (q(i))_{i∈S}, and let v = (v(i))_{i∈S} be non-negative real values assigned to the letters of S. Using a scoring system similar to that of the game Scrabble, the score of a word w = i_1 ··· i_m is defined to be V(w) = v(i_1) + ··· + v(i_m). Let V_n denote the value of the highest-scoring matching contiguous subsequence between X_1 X_2 ··· X_n and Y_1 Y_2 ··· Y_n. In this paper, we show that V_n / (K log n) → 1 a.s., where K ≡ K(q, v). The method employed here involves ‘stuttering’ the letters to construct a Markov chain and applying previous results for the length of the longest matching subsequence. An explicit form is given for β ∈ Pr(S), where β(i) denotes the proportion of letter i found in the highest-scoring word. A similar treatment for Markov chains is also included. Implicit in these results is a large-deviation result for the additive functional H ≡ Σ_{n < τ} v(X_n), for a Markov chain stopped at the hitting time τ of some state. We give this large-deviation result explicitly, for Markov chains in discrete time and in continuous time.
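
The constant K(q, v) and the limit theorem are the paper's contribution; the sketch below only computes the finite-n quantity V_n itself, via the standard common-substring dynamic programme with Scrabble-style letter values. The function name, the letter values, and the two example sequences are hypothetical.

```python
def best_matching_score(x, y, v):
    """Score V_n of the highest-scoring contiguous subsequence occurring
    in both x and y, where a word w = i_1 ... i_m scores
    V(w) = v(i_1) + ... + v(i_m).

    x, y : sequences of letters
    v    : dict mapping each letter to its non-negative value
    """
    n, m = len(x), len(y)
    best = 0.0
    prev = [0.0] * (m + 1)   # previous DP row: score of the matching run ending there
    for i in range(1, n + 1):
        cur = [0.0] * (m + 1)
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                cur[j] = prev[j - 1] + v[x[i - 1]]
                best = max(best, cur[j])
        prev = cur
    return best

# Hypothetical Scrabble-like values on a three-letter alphabet.
v = {'a': 1, 'b': 3, 'c': 10}
print(best_matching_score("abcab", "cabca", v))   # best common run 'abca' scores 15
```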


2002, Vol. 39 (2), pp. 271-281
Author(s): Shoou-Ren Hsiau, Jiing-Ru Yang

In a sequence of Markov-dependent trials, the optimal strategy that maximizes the probability of stopping on the last success is considered. Both homogeneous and nonhomogeneous Markov chains are studied. For the homogeneous case, the analysis is divided into two parts, both of which are treated completely. For the nonhomogeneous case, we prove a result that contains the result of Bruss (2000), itself obtained under an independence structure.
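
The Markov-dependent case is the subject of the paper; for orientation only, the independent special case treated by Bruss (2000) is governed by his odds theorem: summing the odds p_k/(1 - p_k) backwards from the last trial, the optimal rule stops on the first success at or after the index where the running sum first reaches 1. A minimal sketch of that independent case follows; the function name and the example success probabilities are hypothetical.

```python
def odds_theorem_rule(p):
    """Bruss (2000) odds theorem for independent trials with success
    probabilities p[0..n-1], each strictly between 0 and 1.

    Returns the 0-based index s such that the optimal strategy stops on
    the first success at or after trial s, and the win probability.
    """
    n = len(p)
    odds_sum, prod_fail = 0.0, 1.0
    # Accumulate odds from the last trial backwards until they reach 1.
    for k in range(n - 1, -1, -1):
        odds_sum += p[k] / (1.0 - p[k])
        prod_fail *= 1.0 - p[k]
        if odds_sum >= 1.0:
            s = k
            break
    else:
        s = 0                     # odds never reach 1: stop on any success
    win_prob = prod_fail * odds_sum
    return s, win_prob

# Hypothetical example: success probabilities 1/2, 1/3, ..., 1/11.
p = [1.0 / (k + 2) for k in range(10)]
print(odds_theorem_rule(p))
```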


2001, Vol. 38 (A), pp. 66-77
Author(s): Albrecht Irle, Joseph Gani

This paper considers the occurrence of patterns in sequences of independent trials from a finite alphabet; Gani and Irle (1999) have described a finite state automaton which identifies exactly those sequences of symbols containing the specific pattern, which may be thought of as the word of interest. Each word generates a particular Markov chain. Motivated by a result of Guibas and Odlyzko (1981) on stochastic monotonicity for the random times at which a particular word is completed for the first time, a new level-crossing ordering is introduced for stochastic processes. A process {Y_n : n = 0, 1, …} is slower in level-crossing than a process {Z_n} if it takes {Y_n} stochastically longer than {Z_n} to exceed any given level. This relation is shown to be useful for the comparison of stochastic automata, and is used to investigate this ordering for Markov chains in discrete time.
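
As a minimal sketch of how a word generates a Markov chain, the code below uses the standard KMP-style construction (not necessarily the specific automaton of Gani and Irle, 1999): the state is the length of the longest suffix of the input read so far that is a prefix of the word, and feeding i.i.d. letters through the transition function yields a transition matrix with an absorbing state marking the first completion of the word. All names and the example are hypothetical.

```python
import numpy as np

def word_automaton(word, alphabet):
    """delta[state, letter]: next state of the automaton that tracks the
    longest suffix of the input matching a prefix of `word`.
    States 0..len(word); state len(word) means the word has occurred."""
    m = len(word)
    delta = {}
    for state in range(m + 1):
        for a in alphabet:
            if state == m:
                delta[state, a] = m          # absorbing: word already seen
            else:
                s = word[:state] + a
                k = len(s)
                # longest suffix of s that is a prefix of word
                while k > 0 and s[-k:] != word[:k]:
                    k -= 1
                delta[state, a] = k
    return delta

def chain_matrix(word, letter_probs):
    """Transition matrix of the Markov chain generated by `word` under
    i.i.d. letters with distribution `letter_probs` (dict letter -> prob)."""
    alphabet = list(letter_probs)
    delta = word_automaton(word, alphabet)
    m = len(word)
    P = np.zeros((m + 1, m + 1))
    for state in range(m + 1):
        for a, pa in letter_probs.items():
            P[state, delta[state, a]] += pa
    return P

# Example: the chain generated by the word 'aba' under uniform letters.
print(chain_matrix('aba', {'a': 0.5, 'b': 0.5}))
```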


2019, Vol. 16 (8), pp. 663-664
Author(s): Jasleen K. Grewal, Martin Krzywinski, Naomi Altman
