Dynamic Information Design: A Simple Problem on Optimal Sequential Information Disclosure

Author(s):  
Farzaneh Farhadi ◽  
Demosthenis Teneketzis

Abstract We study a dynamic information design problem in a finite-horizon setting consisting of two strategic and long-term optimizing agents, namely a principal (he) and a detector (she). The principal observes the evolution of a Markov chain that has two states, one “good” and one “bad” absorbing state, and has to decide how to sequentially disclose information to the detector. The detector’s only information consists of the messages she receives from the principal. The detector’s objective is to detect as accurately as possible the time of the jump from the good to the bad state. The principal’s objective is to delay the detector as much as possible from detecting the jump to the bad state. For this setting, we determine the optimal strategies of the principal and the detector. The detector’s optimal strategy is described by time-varying thresholds on her posterior belief of the good state. We prove that it is optimal for the principal to give no information to the detector before a time threshold, run a mixed strategy to confuse the detector at the threshold time, and reveal the true state afterward. We present an algorithm that determines both the optimal time threshold and the optimal mixed strategy that could be employed by the principal. We show, through numerical experiments, that this optimal sequential mechanism outperforms any other information disclosure strategy presented in the literature. We also show that our results can be extended to the infinite-horizon problem, to the problem where the matrix of transition probabilities of the Markov chain is time-varying, and to the case where the Markov chain has more than two states and one of the states is absorbing.
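To make the detector's side of this setup concrete, here is a minimal illustrative sketch (not the paper's algorithm): when the principal sends no informative messages, the detector's posterior belief of the good state simply decays by the jump probability each period, and she stops once it falls below that period's threshold. The function name, the jump probability `lam`, and the constant thresholds are made-up example values.

```python
# Hypothetical sketch: a detector tracking the posterior belief that a
# two-state Markov chain (good -> absorbing bad) is still in the good state,
# stopping when the belief drops below a time-varying threshold.
# `lam` and `thresholds` are illustrative, not values from the paper.

def run_detector(lam, thresholds, horizon):
    """Return the first period at which the belief in the good state
    falls below that period's threshold (no messages received)."""
    belief = 1.0  # start certain of the good state
    for t in range(horizon):
        belief *= (1.0 - lam)  # absent information, belief decays geometrically
        if belief < thresholds[t]:
            return t
    return horizon

stop = run_detector(lam=0.1, thresholds=[0.5] * 20, horizon=20)
```

With a 10% per-period jump probability and a flat 0.5 threshold, the belief 0.9^(t+1) first drops below 0.5 at period 6, so the detector stops there even if no jump has occurred.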

Author(s):  
R. Jamuna

CpG islands (CGIs) play a vital role in genome analysis as genomic markers. Identification of the CpG pair has contributed not only to the prediction of promoters but also to the understanding of the epigenetic causes of cancer. In the human genome [1], wherever the dinucleotide CG occurs, the C nucleotide (cytosine) undergoes chemical modification. There is a relatively high probability that this modification mutates C into a T. For biologically important reasons, the mutation process is suppressed in short stretches of the genome, such as ‘start’ regions. In these regions [2], CpG dinucleotides are more predominant than elsewhere. Such regions are called CpG islands. DNA methylation is an effective means by which gene expression is silenced. In normal cells, DNA methylation functions to prevent the expression of imprinted and inactive X chromosome genes. In cancerous cells, DNA methylation inactivates tumor-suppressor genes as well as DNA repair genes, and can disrupt cell-cycle regulation. Most current methods for identifying CGIs suffer from various limitations and involve considerable human intervention. This paper gives an easy searching technique with data mining of Markov chains in genes. A Markov chain model has been applied to study the probability of occurrence of the C-G pair in a given gene sequence. Maximum likelihood estimators for the transition probabilities of each model have been developed, and the log-odds ratio computed from them estimates the presence or absence of CpG islands in the given gene, which yields useful facts for cancer detection in the human genome.
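The two-model log-odds idea described above can be sketched as follows; this is the classic textbook formulation, not the paper's exact procedure. Transition probabilities for an island ("+") model and a background ("−") model are estimated by maximum likelihood (normalized counts, with a small pseudocount so unseen transitions keep nonzero probability), and a query sequence is scored by the log-odds ratio of the two models. The toy training strings are made-up examples.

```python
import math

# Illustrative two-Markov-model sketch for CpG island scoring.
# Training data below is toy data, not a real genomic corpus.

def mle_transitions(seqs, alphabet="ACGT", pseudo=1.0):
    """Maximum likelihood transition probabilities from dinucleotide counts,
    smoothed with a pseudocount."""
    counts = {a: {b: pseudo for b in alphabet} for a in alphabet}
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1
    return {a: {b: c / sum(row.values()) for b, c in row.items()}
            for a, row in counts.items()}

def log_odds(seq, plus, minus):
    """Sum of per-transition log-likelihood ratios; positive scores
    favor the CpG-island (+) model."""
    return sum(math.log(plus[a][b] / minus[a][b])
               for a, b in zip(seq, seq[1:]))

plus = mle_transitions(["CGCGCGCG"])   # toy CpG-island training sequence
minus = mle_transitions(["ATATATAT"])  # toy background training sequence
score = log_odds("CGCG", plus, minus)  # positive: looks like an island
```

A sliding window of such scores over a long sequence is one simple way to flag candidate island regions.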


Author(s):  
Benoit Duvocelle ◽  
János Flesch ◽  
Hui Min Shi ◽  
Dries Vermeulen

Abstract We consider a discrete-time dynamic search game in which a number of players compete to find an invisible object that is moving according to a time-varying Markov chain. We examine the subgame perfect equilibria of these games. The main result of the paper is that the set of subgame perfect equilibria is exactly the set of greedy strategy profiles, i.e. those strategy profiles in which the players always choose an action that maximizes their probability of immediately finding the object. We discuss various variations and extensions of the model.
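A single-searcher sketch illustrates what "greedy" means here (the paper's game is multi-player; this is only the belief mechanics). Each period the searcher inspects the cell with the highest current probability of holding the object; on a miss, the belief is conditioned on the miss and then propagated through the Markov transition matrix. The 2-cell transition matrix and prior are made-up example values.

```python
# Illustrative greedy-search sketch for one searcher and a moving object.

def greedy_search(belief, transition, horizon):
    """Return the sequence of cells a greedy searcher inspects,
    assuming every inspection misses."""
    inspected = []
    for _ in range(horizon):
        # greedy choice: maximize the immediate probability of finding it
        cell = max(range(len(belief)), key=lambda i: belief[i])
        inspected.append(cell)
        # condition the belief on the miss at `cell`
        belief = [0.0 if i == cell else b for i, b in enumerate(belief)]
        total = sum(belief)
        if total == 0:
            break
        belief = [b / total for b in belief]
        # the object moves according to the Markov chain
        n = len(belief)
        belief = [sum(belief[i] * transition[i][j] for i in range(n))
                  for j in range(n)]
    return inspected

cells = greedy_search([0.7, 0.3], [[0.9, 0.1], [0.2, 0.8]], horizon=3)
```

Starting from belief (0.7, 0.3), the searcher inspects cell 0, the miss pushes all mass to cell 1, the chain then leaks some mass back, and the greedy choice alternates accordingly.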


Risks ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 37
Author(s):  
Manuel L. Esquível ◽  
Gracinda R. Guerreiro ◽  
Matilde C. Oliveira ◽  
Pedro Corte Real

We consider a non-homogeneous continuous time Markov chain model for Long-Term Care with five states: the autonomous state, three dependent states of light, moderate and severe dependence levels and the death state. For a general approach, we allow for non null intensities for all the returns from higher dependence levels to all lesser dependencies in the multi-state model. Using data from the 2015 Portuguese National Network of Continuous Care database, as the main research contribution of this paper, we propose a method to calibrate transition intensities with the one step transition probabilities estimated from data. This allows us to use non-homogeneous continuous time Markov chains for modeling Long-Term Care. We solve numerically the Kolmogorov forward differential equations in order to obtain continuous time transition probabilities. We assess the quality of the calibration using the Portuguese life expectancies. Based on reasonable monthly costs for each dependence state we compute, by Monte Carlo simulation, trajectories of the Markov chain process and derive relevant information for model validation and premium calculation.
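The numerical step described above, obtaining continuous-time transition probabilities from the Kolmogorov forward equations dP(t)/dt = P(t)Q, can be sketched with a simple explicit Euler integration. The 3-state intensity matrix below (active → dependent → dead, with death absorbing) is a made-up illustration, not the paper's calibrated 5-state model.

```python
# Illustrative sketch: solving the Kolmogorov forward equations
# dP(t)/dt = P(t) Q by explicit Euler steps, pure Python.
# Q is an invented 3-state intensity matrix, not calibrated from data.

def forward_equations(Q, t_end, steps):
    """Euler-integrate P'(t) = P(t) Q from P(0) = I; returns P(t_end)."""
    n = len(Q)
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    h = t_end / steps
    for _ in range(steps):
        PQ = [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
              for i in range(n)]
        P = [[P[i][j] + h * PQ[i][j] for j in range(n)] for i in range(n)]
    return P

Q = [[-0.3, 0.2, 0.1],    # active: can become dependent or die
     [0.05, -0.25, 0.2],  # dependent: small recovery intensity, may die
     [0.0, 0.0, 0.0]]     # death is absorbing: zero row
P = forward_equations(Q, t_end=1.0, steps=1000)
```

Because each row of Q sums to zero, the Euler steps preserve row sums of P, so each row of the result remains a probability distribution (up to floating-point error); a production implementation would use an adaptive ODE solver instead.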


2004 ◽  
Vol 2004 (8) ◽  
pp. 421-429 ◽  
Author(s):  
Souad Assoudou ◽  
Belkheir Essebbar

This note is concerned with Bayesian estimation of the transition probabilities of a binary Markov chain observed from heterogeneous individuals. The model is founded on Jeffreys' prior, which allows the transition probabilities to be correlated. The Bayesian estimator is approximated by means of Markov chain Monte Carlo (MCMC) techniques. The performance of the Bayesian estimates is illustrated by analyzing a small simulated data set.
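For intuition, here is a sketch of a simplified special case, not the paper's model: if the two rows of the transition matrix are given independent Jeffreys priors Beta(1/2, 1/2), the posterior is conjugate and can be sampled directly rather than by MCMC. (The paper's prior lets the probabilities be correlated, which is what forces MCMC there.)

```python
import random

# Sketch of the independent-rows special case: Jeffreys prior Beta(1/2, 1/2)
# on each transition probability of a binary (0/1) Markov chain gives a
# conjugate Beta posterior, sampled directly below.

def posterior_samples(chain, n_samples, seed=0):
    """Draw (p01, p10) posterior samples, where pab = P(next=b | current=a)."""
    rng = random.Random(seed)
    counts = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
    for a, b in zip(chain, chain[1:]):
        counts[(a, b)] += 1
    samples = []
    for _ in range(n_samples):
        p01 = rng.betavariate(counts[(0, 1)] + 0.5, counts[(0, 0)] + 0.5)
        p10 = rng.betavariate(counts[(1, 0)] + 0.5, counts[(1, 1)] + 0.5)
        samples.append((p01, p10))
    return samples

chain = [0, 0, 1, 0, 1, 1, 1, 0, 0, 1]  # toy observed chain
draws = posterior_samples(chain, n_samples=200)
```

Posterior means and credible intervals for p01 and p10 can then be read off the draws directly.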


Author(s):  
Peter L. Chesson

Abstract Random transition probability matrices with stationary independent factors define “white noise” environment processes for Markov chains. Two examples are considered in detail. Such environment processes can be used to construct several Markov chains which are dependent, have the same transition probabilities and are jointly a Markov chain. Transition rates for such processes are evaluated. These results have application to the study of animal movements.


2021 ◽  
Vol 29 (2) ◽  
pp. 102-115
Author(s):  
Hyo-Chan Lee ◽  
Seyoung Park ◽  
Jong Mun Yoon

Abstract This study aims to generalize the following result of McDonald and Siegel (1986) on optimal investment: it is optimal for an investor to invest when project cash flows exceed a certain threshold. This study presents other results that refine or extend this one by integrating timing flexibility and changes in cash flows with time-varying transition probabilities for regime switching. This study emphasizes that optimal thresholds are either overvalued or undervalued in the absence of time-varying transition probabilities. Accordingly, the stochastic nature of transition probabilities has important implications for the optimal timing of investment.


2009 ◽  
Vol 43 (1) ◽  
pp. 81-90 ◽  
Author(s):  
Jean-Luc Guilbault ◽  
Mario Lefebvre

Abstract The so-called gambler’s ruin problem in probability theory is considered for a Markov chain having transition probabilities depending on the current state. This problem leads to a non-homogeneous difference equation with non-constant coefficients for the expected duration of the game. This mathematical expectation is computed explicitly.
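The difference equation behind this abstract can be solved numerically as well as explicitly: with state-dependent win probability p(x), the expected duration satisfies D(x) = 1 + p(x)·D(x+1) + (1 − p(x))·D(x−1) with D(0) = D(N) = 0, a tridiagonal linear system. The sketch below solves it with the Thomas algorithm; the function and the example p are illustrative, not the paper's explicit formula.

```python
# Illustrative sketch: expected game duration in a gambler's-ruin problem
# with state-dependent win probabilities, solved as a tridiagonal system.

def expected_duration(p, N):
    """Solve D(x) = 1 + p(x) D(x+1) + (1 - p(x)) D(x-1), D(0) = D(N) = 0,
    for x = 1..N-1, via the Thomas algorithm. Returns [D(1), ..., D(N-1)]."""
    n = N - 1
    a = [-(1 - p(x)) for x in range(1, N)]  # sub-diagonal coefficients
    b = [1.0] * n                           # main diagonal
    c = [-p(x) for x in range(1, N)]        # super-diagonal coefficients
    d = [1.0] * n                           # right-hand side
    for i in range(1, n):                   # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    D = [0.0] * n                           # back substitution
    D[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        D[i] = (d[i] - c[i] * D[i + 1]) / b[i]
    return D

# Sanity check against the classical fair-coin case, where D(x) = x (N - x).
D = expected_duration(lambda x: 0.5, N=10)
```

For a fair game with N = 10, this recovers the known closed form D(x) = x(N − x), e.g. D(5) = 25.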

