A Hidden Semi-Markov Model with Duration-Dependent State Transition Probabilities for Prognostics

2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Ning Wang ◽  
Shu-dong Sun ◽  
Zhi-qiang Cai ◽  
Shuai Zhang ◽  
Can Saygin

Realistic prognostic tools are essential for effective condition-based maintenance systems. In this paper, a Duration-Dependent Hidden Semi-Markov Model (DD-HSMM) is proposed, which overcomes the shortcomings of traditional Hidden Markov Models (HMM), including the Hidden Semi-Markov Model (HSMM): (1) it allows explicit modeling of state transition probabilities between the states; (2) it relaxes the observation-independence assumption by accommodating a connection between consecutive observations; and (3) it does not follow the unrealistic memoryless assumption of Markov chains and therefore provides a more powerful modeling and analysis capability for real-world problems. To facilitate the computation of the proposed DD-HSMM methodology, a new forward-backward algorithm is developed. The proposed methodology is demonstrated and evaluated through a case study. The experimental results show that the DD-HSMM methodology is effective for equipment health monitoring and management.
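
To make the idea of a duration-dependent forward recursion concrete, here is a minimal sketch in the spirit of the abstract, assuming the transition matrix depends on how long the previous state was occupied. The array names (A_by_dur, dur_pmf) and the segment-likelihood form are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def dd_hsmm_forward(obs_lik, A_by_dur, dur_pmf, pi):
    """Sketch of a duration-dependent HSMM forward pass.
    obs_lik[t, j] : P(o_t | state j)
    A_by_dur[d]   : hypothetical (N, N) transition matrix used after a stay of d+1 steps
    dur_pmf[j, d] : P(state j lasts d+1 steps)
    pi[j]         : initial state distribution"""
    T, N = obs_lik.shape
    Dmax = dur_pmf.shape[1]
    # alpha[t, j, d]: a segment of state j with duration d+1 ends at time t
    alpha = np.zeros((T, N, Dmax))
    for t in range(T):
        for j in range(N):
            for d in range(min(Dmax, t + 1)):
                seg = obs_lik[t - d:t + 1, j].prod()      # segment likelihood
                if t - d == 0:
                    inflow = pi[j]                        # segment opens the sequence
                else:
                    # transitions into j weighted by the previous segment's duration
                    inflow = sum(alpha[t - d - 1, :, dp] @ A_by_dur[dp][:, j]
                                 for dp in range(Dmax))
                alpha[t, j, d] += inflow * dur_pmf[j, d] * seg
    return alpha                                          # P(o_1..T) = alpha[T - 1].sum()
```

The recursion costs O(T N^2 D^2); the duration-indexed alpha is what distinguishes this sketch from a standard HSMM forward pass.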

1965 ◽  
Vol 2 (02) ◽  
pp. 269-285 ◽  
Author(s):  
George H. Weiss ◽  
Marvin Zelen

This paper applies the theory of semi-Markov processes to the construction of a stochastic model for interpreting data obtained from clinical trials. The model characterizes the patient as being in one of a finite number of states at any given time, with an arbitrary probability distribution describing the length of stay in a state. Transitions between states are assumed to be chosen according to a stationary finite Markov chain. Other attempts have been made to develop stochastic models of clinical trials. However, these have all been essentially Markovian with constant transition probabilities, which implies that the distribution of time spent during a visit to a state is exponential (or geometric for discrete Markov chains). Markov models also need to assume that the transitions in the state of a patient depend only on absolute time, whereas the semi-Markov model assumes that transitions depend on time relative to the patient. Thus semi-Markov models are applicable to degenerative diseases (cancer, acute leukemia), while Markov models with time-dependent transition probabilities are applicable to colds and epidemic diseases. In this paper the Laplace transforms are obtained for (i) the probability of being in a state at time t, (ii) the probability distribution of reaching an absorbing state, and (iii) the probability distributions of the first passage times to go from initial states to transient or absorbing states, transient to transient, and transient to absorbing. The model is applied to a clinical study of acute leukemia in which patients have been treated with methotrexate and 6-mercaptopurine. The agreement between the data and the model is very good.
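
For context, the quantities listed in (i)-(iii) are usually written via the Markov renewal equations; the sketch below uses generic notation that may differ from the paper's.

```latex
% Semi-Markov kernel and sojourn-time distribution (generic notation):
\[
  Q_{ij}(t) = p_{ij} F_{ij}(t), \qquad
  H_i(t) = \sum_j Q_{ij}(t).
\]
% The state-occupancy probabilities satisfy the Markov renewal equation
\[
  P_{ij}(t) = \delta_{ij}\,[1 - H_i(t)] + \sum_k \int_0^t P_{kj}(t-s)\, dQ_{ik}(s),
\]
% whose Laplace--Stieltjes transform has the closed matrix form
\[
  \widehat{P}(s) = \bigl[I - \widehat{Q}(s)\bigr]^{-1}\bigl[I - \widehat{H}(s)\bigr],
\]
% with \widehat{H}(s) diagonal; first-passage and absorption distributions
% follow from analogous renewal equations.
```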


Author(s):  
Shirin Kordnoori ◽  
Hamidreza Mostafaei ◽  
Shaghayegh Kordnoori ◽  
Mohammadmohsen Ostadrahimi

Semi-Markov processes can be considered a generalization of both Markov and renewal processes. One of their principal characteristics is that, in contrast to Markov models, they represent systems whose evolution depends not only on the last visited state but also on the time elapsed in that state. Semi-Markov processes replace the exponential distribution of the time intervals with an arbitrary distribution. In this paper, we give a statistical approach to test the semi-Markov hypothesis. Moreover, we describe a Monte Carlo algorithm able to simulate the trajectories of the semi-Markov chain. This simulation method is used to test the semi-Markov model by comparing and analyzing the results against empirical data. We introduce the network traffic database to which the Monte Carlo algorithm is applied. The statistical characteristics of the real data and of the synthetic data from the models are compared. The comparison between the semi-Markov and the Markov models is carried out by computing the autocorrelation functions and the probability density functions of the real and simulated network traffic data. All the comparisons indicate that the Markovian hypothesis is rejected in favor of the more general semi-Markov one. Finally, the interval transition probabilities, which give the future predictions of the network traffic, are presented.
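
A minimal sketch of the kind of trajectory generator the abstract describes: the next state is drawn from the embedded Markov chain and the sojourn time from a state-dependent distribution. The three-state chain and the Weibull sojourn law are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_semi_markov(P, sojourn_sampler, s0, horizon):
    """P[i, j]         : embedded-chain transition probabilities
       sojourn_sampler : callable (i, j) -> random sojourn time in state i before jumping to j
       s0, horizon     : initial state and total simulated time"""
    t, state = 0.0, s0
    path = [(t, state)]
    while t < horizon:
        nxt = rng.choice(len(P), p=P[state])   # embedded Markov chain step
        t += sojourn_sampler(state, nxt)       # arbitrary (non-exponential) sojourn
        state = nxt
        path.append((t, state))
    return path

# Example usage with an assumed 3-state chain and Weibull sojourn times.
P = np.array([[0.0, 0.7, 0.3],
              [0.4, 0.0, 0.6],
              [0.5, 0.5, 0.0]])
weibull = lambda i, j: rng.weibull(1.5) * (1.0 + i)   # illustrative only
trajectory = simulate_semi_markov(P, weibull, s0=0, horizon=100.0)
```

Autocorrelation functions and empirical densities of such synthetic trajectories can then be compared with the real traffic data, as the abstract describes.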


Author(s):  
Mohammed Alam

Background: A decision-analytic model investigating the cost-effectiveness of Erlotinib was submitted to the UK NICE (National Institute for Health and Care Excellence), which was not based on actual health-state transition probabilities, leading to structural uncertainty in the model. The study adopted a Markov state-transition model for investigating the cost-effectiveness of Erlotinib versus Best Supportive Care (BSC) as a maintenance therapy for patients with non-small cell lung cancer (NSCLC). Methods: Unlike the manufacturer submission (MS), the Markov model was governed by transition probabilities, and allowed a negative post-progression survival (PPS) estimate to appear in later cycles. Using published summary survival data, the study employs three fixed- and time-varying approaches to estimate state transition probabilities, which are used in a restructured model. Results: Post-progression probabilities and probabilities of death for Erlotinib differed from those of the fixed-transition approaches. The best-fitting curves are achieved for both PPS and probability of death across the time for which data were available, but the curves start diverging towards the end of this period. The Markov model, which extrapolates the curves forward in time, suggests that the difference between the time-varying and fixed-transition approaches becomes even greater. Our models produce an ICER of £54k-£66k per QALY gained, which is comparable to the ICER presented in the MS (£55k per QALY gained). Conclusions: Results from the restructured Markov models show robust cost-effectiveness results for Erlotinib vs BSC. Although these are comparable to the manufacturer submission, they vary in magnitude, which is crucial for interventions falling near a threshold value. The study will further explore the cost-effectiveness of therapies for NSCLC in Qatar.
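
To illustrate the fixed versus time-varying distinction at stake here, the sketch below derives per-cycle transition probabilities from a parametric survival curve in one common way. The Weibull parameters, cycle length, and 24-month window are placeholders, not values from the manufacturer submission.

```python
import numpy as np

shape, scale = 1.2, 12.0                  # assumed Weibull fit to PPS (months)
cycle = 1.0                               # cycle length in months
S = lambda t: np.exp(-(t / scale) ** shape)

# Fixed transition probability from an average hazard over the observed period:
rate = -np.log(S(24.0)) / 24.0            # constant rate matching 24-month survival
p_fixed = 1.0 - np.exp(-rate * cycle)

# Time-varying probability for the cycle starting at time t:
def p_tv(t):
    return (S(t) - S(t + cycle)) / S(t)   # conditional probability of dying this cycle

print(p_fixed, [round(p_tv(t), 4) for t in range(0, 24, 6)])
```

With an increasing hazard, the time-varying probabilities grow across cycles while the fixed value stays flat, which is the divergence the abstract reports when the curves are extrapolated.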


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5331 ◽  
Author(s):  
Prasertsak Tiawongsombat ◽  
Mun-Ho Jeong ◽  
Alongkorn Pirayawaraporn ◽  
Joong-Jae Lee ◽  
Joo-Seop Yun

Attention capability is an essential component of human–robot interaction. Several robot attention models have been proposed that aim to enable a robot to identify the attentiveness of the humans with which it communicates and to give them its attention accordingly. However, previously proposed models are often susceptible to noisy observations and result in frequent and undesired shifts in the robot's attention. Furthermore, most approaches have difficulty adapting to changes in the number of participants. To address these limitations, a novel attentiveness determination algorithm is proposed for determining the most attentive person, as well as prioritizing people based on attentiveness. The proposed algorithm, which is based on relevance theory, is named the Scalable Hidden Markov Model (Scalable HMM). The Scalable HMM allows effective computation and contributes an adaptation approach for human attentiveness; unlike conventional HMMs, the Scalable HMM has a scalable number of states and observations and online adaptability of the state transition probabilities with respect to changes in the current number of states, i.e., the number of participants in the robot's view. The proposed approach was successfully tested on image sequences (7567 frames) of individuals exhibiting a variety of actions (speaking, walking, turning the head, and entering or leaving the robot's view). In these experiments, the Scalable HMM showed a detection rate of 76% in determining the most attentive person and over 75% in prioritizing people's attention under variation in the number of participants. Compared to recent attention approaches, the Scalable HMM's performance in attention prioritization represents an approximately 20% improvement.
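
The sketch below illustrates the general idea of resizing a transition matrix online when a participant enters or leaves the robot's view. The uniform prior for the new state and the leakage weight are assumptions for illustration; they are not the paper's adaptation rule.

```python
import numpy as np

def add_state(A, new_weight=0.1):
    """Grow an (n, n) row-stochastic matrix to (n+1, n+1) when a participant enters."""
    n = A.shape[0]
    A2 = np.zeros((n + 1, n + 1))
    A2[:n, :n] = A * (1.0 - new_weight)        # leak some mass toward the new state
    A2[:n, n] = new_weight
    A2[n, :] = 1.0 / (n + 1)                   # new state: uniform outgoing transitions
    return A2

def remove_state(A, k):
    """Shrink the matrix when participant k leaves, re-normalizing each row."""
    A2 = np.delete(np.delete(A, k, axis=0), k, axis=1)
    return A2 / A2.sum(axis=1, keepdims=True)

A = np.array([[0.8, 0.2],
              [0.3, 0.7]])
A = add_state(A)        # a third participant enters the robot's view
A = remove_state(A, 0)  # the first participant leaves
```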


2012 ◽  
Vol 605-607 ◽  
pp. 697-702
Author(s):  
Yue Zhao ◽  
Jian Jiao

Common Mode Failure (CMF) analysis is an important method for evaluating the reliability, safety, and risk of complex systems. As system complexity increases, common-mode failure has become an important factor affecting the reliability and security of control systems. Based on a discussion of the concepts of CMF, this paper provides a CMF analysis method using a Markov model, including the modeling and analysis process. A case study is also presented to verify the feasibility of the analysis method.
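
As a minimal sketch of what a Markov-model CMF calculation can look like, the example below uses the beta-factor idea for a duplicated (1-out-of-2) channel: a fraction beta of the failure rate is a common-cause failure that takes out both channels at once. The rates and beta value are illustrative, not from the paper's case study.

```python
import numpy as np
from scipy.linalg import expm

lam, beta = 1e-4, 0.1                 # per-hour failure rate, common-cause fraction
# States: 0 = both channels OK, 1 = one channel failed, 2 = system failed
Q = np.array([[-(2 * (1 - beta) * lam + beta * lam), 2 * (1 - beta) * lam, beta * lam],
              [0.0,                                  -lam,                 lam       ],
              [0.0,                                   0.0,                 0.0       ]])

t = 8760.0                            # one year of operation, in hours
P = expm(Q * t)                       # state-transition probabilities over t
print("P(system failed after 1 year) =", P[0, 2])
```

The common-cause term beta*lam provides the direct path from "both OK" to "system failed", which is exactly the contribution a purely independent-failure model would miss.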


1996 ◽  
Vol 8 (1) ◽  
pp. 178-181 ◽  
Author(s):  
David J. C. MacKay

Several authors have studied the relationship between hidden Markov models and “Boltzmann chains” with a linear or “time-sliced” architecture. Boltzmann chains model sequences of states by defining state-state transition energies instead of probabilities. In this note I demonstrate that under the simple condition that the state sequence has a mandatory end state, the probability distribution assigned by a strictly linear Boltzmann chain is identical to that assigned by a hidden Markov model.
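
A hedged sketch of the correspondence the note establishes; the transition energies A_{ij}, emission energies B_{jk}, and the end-state argument are written in generic notation that may differ from MacKay's.

```latex
% Boltzmann chain (energies, global normalization Z) versus HMM (local probabilities):
\[
  P_{\mathrm{BC}}(s_{1:T}, o_{1:T}) = \frac{1}{Z}
  \exp\!\Bigl(-\sum_{t=2}^{T} A_{s_{t-1} s_t} - \sum_{t=1}^{T} B_{s_t o_t}\Bigr),
  \qquad
  P_{\mathrm{HMM}}(s_{1:T}, o_{1:T}) = \pi_{s_1} \prod_{t=2}^{T} a_{s_{t-1} s_t}
  \prod_{t=1}^{T} b_{s_t}(o_t).
\]
% Setting a_{ij} \propto e^{-A_{ij}} and b_j(k) \propto e^{-B_{jk}} makes the two forms
% proportional; the role of the mandatory end state is to let the global constant Z
% factor into these local normalizations, so the two distributions coincide exactly.
```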


2011 ◽  
Vol 187 ◽  
pp. 667-671
Author(s):  
Wei Chen

A recognition method for pressed protuberant characters based on Hidden Markov Models and a neural network is applied, in which the surface curvature properties and the spatial relations of metal-label characters are analyzed in detail. The shape index of the characters is extracted. A neural network is used to estimate probabilities for the characters based on the surface curvature properties, and the best word choice is then derived from a sequence of state transitions. Tests show that the proposed method can be used to recognize pressed protuberant characters on metal labels.
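
A minimal sketch of the decoding step this kind of hybrid system typically uses: the neural network supplies per-character scores from shape-index features, and a Viterbi pass over an HMM picks the most likely character sequence. The function and array names are placeholders, not the paper's implementation.

```python
import numpy as np

def viterbi(log_emis, log_A, log_pi):
    """log_emis[t, j]: NN log-probability of character j at position t.
       log_A[i, j]   : log transition probability between character states.
       log_pi[j]     : log initial-state probability."""
    T, N = log_emis.shape
    delta = log_pi + log_emis[0]
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A            # scores[i, j]: best path ending i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emis[t]
    path = [int(delta.argmax())]                   # backtrack the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]                              # best character index sequence
```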


2021 ◽  
Vol 41 (4) ◽  
pp. 453-464
Author(s):  
John Graves ◽  
Shawn Garbett ◽  
Zilu Zhou ◽  
Jonathan S. Schildcrout ◽  
Josh Peterson

We discuss tradeoffs and errors associated with approaches to modeling health economic decisions. Through an application in pharmacogenomic (PGx) testing to guide drug selection for individuals with a genetic variant, we assessed model accuracy, optimal decisions, and computation time for an identical decision scenario modeled 4 ways: using 1) coupled-time differential equations (DEQ), 2) a cohort-based discrete-time state transition model (MARKOV), 3) an individual discrete-time state transition microsimulation model (MICROSIM), and 4) discrete event simulation (DES). Relative to DEQ, the net monetary benefit for PGx testing (v. a reference strategy of no testing) based on MARKOV with rate-to-probability conversions using commonly used formulas resulted in different optimal decisions. MARKOV was nearly identical to DEQ when transition probabilities were embedded using a transition intensity matrix. Among stochastic models, DES model outputs converged to DEQ with substantially fewer simulated patients (1 million) v. MICROSIM (1 billion). Overall, properly embedded Markov models provided the most favorable mix of accuracy and runtime but introduced additional complexity for calculating cost and quality-adjusted life year outcomes due to the inclusion of “jumpover” states after proper embedding of transition probabilities. Among stochastic models, DES offered the most favorable mix of accuracy, reliability, and speed.
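
The sketch below contrasts the two conversions discussed above: the commonly used per-rate formula 1 - exp(-r*t) applied cell by cell, versus properly embedding the whole transition intensity (rate) matrix with a matrix exponential. The three-state rates are illustrative only, not the study's PGx model.

```python
import numpy as np
from scipy.linalg import expm

# States: healthy -> sick -> dead (rates per year); rows of the intensity matrix sum to zero.
R = np.array([[-0.15,  0.10, 0.05],
              [ 0.00, -0.30, 0.30],
              [ 0.00,  0.00, 0.00]])
cycle = 1.0

# Naive cell-wise conversion of each off-diagonal rate, diagonal set to the remainder:
P_naive = 1.0 - np.exp(-R * cycle)
np.fill_diagonal(P_naive, 0.0)
np.fill_diagonal(P_naive, 1.0 - P_naive.sum(axis=1))

# Proper embedding: matrix exponential of the intensity matrix over one cycle.
P_embedded = expm(R * cycle)

print(np.round(P_naive, 4))
print(np.round(P_embedded, 4))   # captures healthy -> sick -> dead within a single cycle
```

The difference between the two matrices is the compound-transition mass (e.g., healthy to dead via sick within one cycle) that the cell-wise formula ignores, which is why the embedded Markov model tracks the differential-equation solution so closely.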

