Research on Disability Grading Based on ICF Functional Framework: Empirical Evidence From Zhejiang Province, China

2021 ◽  
Vol 9 ◽  
Author(s):  
Huan Liu

Using an assignment (scoring) method, a total disability score across multiple dimensions is obtained and divided, according to the score, into five functional states: severe disability, partial disability, moderate disability, mild disability, and health; a corresponding probability of death is also constructed. Using tracking survey data from the Chinese Longitudinal Healthy Longevity Survey (CLHLS), a multistate transition probability matrix is built to empirically calculate the multistate disability transfer probabilities, and, with the help of the sixth national census data, the expected duration in each state, life expectancy, and related quantities are estimated. The results show that the 3-year transition probabilities of the initially healthy elderly are the highest, while their mortality rate is the lowest. The disability state transition probabilities measured directly from the data are more accurate than model-based estimates, and the disability scale and life expectancy derived from the multistate transition probability matrix are therefore more reliable.
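As a rough illustration of this kind of multistate analysis, the sketch below estimates a transition probability matrix from hypothetical panel data with the five functional states plus an absorbing death state, then derives expected remaining years per initial state via the fundamental matrix of the absorbing chain. The state coding, wave length, and data are illustrative assumptions, not the CLHLS coding or the paper's actual estimates.

```python
import numpy as np

# Hypothetical state coding (not the paper's actual CLHLS coding):
# 0=health, 1=mild, 2=moderate, 3=partial, 4=severe, 5=death (absorbing)
states = ["health", "mild", "moderate", "partial", "severe", "death"]

# Illustrative panel observations: (state at wave t, state at wave t+3 years)
pairs = [(0, 0), (0, 1), (0, 0), (1, 2), (2, 3), (0, 0), (3, 5),
         (1, 1), (4, 5), (2, 2), (0, 2), (3, 4), (1, 0), (4, 4)]

# Empirical 3-year transition probability matrix (row-normalized counts)
n = len(states)
counts = np.zeros((n, n))
for a, b in pairs:
    counts[a, b] += 1
counts[5, 5] = 1.0                      # death is absorbing
P = counts / counts.sum(axis=1, keepdims=True)

# Expected number of future 3-year periods spent in living states,
# via the fundamental matrix N = (I - Q)^{-1} of the absorbing chain.
Q = P[:5, :5]                            # transitions among living states
N = np.linalg.inv(np.eye(5) - Q)
years_per_period = 3.0
life_expectancy = N.sum(axis=1) * years_per_period
for s, le in zip(states[:5], life_expectancy):
    print(f"initial state {s:8s}: expected remaining years ~ {le:.1f}")
```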

1969 ◽  
Vol 6 (03) ◽  
pp. 478-492 ◽  
Author(s):  
William E. Wilkinson

Consider a discrete time Markov chain $\{Z_n\}$ whose state space is the non-negative integers and whose transition probability matrix $\|P_{ij}\|$ possesses the representation $P_{ij} = \sum_{r=1}^{\infty} p_r \, [s^j]\big(f_r(s)\big)^i$ (with $[s^j]$ denoting the coefficient of $s^j$), where $\{p_r\}$, $r = 1, 2, \ldots$, is a finite or denumerably infinite sequence of non-negative real numbers satisfying $\sum_r p_r = 1$, and $\{f_r(s)\}$, $r = 1, 2, \ldots$, is a corresponding sequence of probability generating functions. It is assumed that $Z_0 = k$, a finite positive integer.
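The following sketch simulates a toy instance of such a chain (a branching process in a randomly chosen environment): at each step an environment r is drawn with probability p_r, and every individual reproduces independently according to the offspring law with generating function f_r. The two offspring distributions and the environment probabilities are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Environment probabilities p_r (must sum to 1) -- illustrative values.
p = [0.6, 0.4]

# Offspring samplers corresponding to pgfs f_r(s); here a Poisson(1.1) law
# and a geometric law on {0, 1, 2, ...} with mean ~0.67 are assumed.
samplers = [
    lambda size: rng.poisson(1.1, size),
    lambda size: rng.geometric(0.6, size) - 1,   # shift to include 0
]

def simulate(k=3, generations=30):
    """Run Z_0 = k through `generations` steps; return the trajectory."""
    z, traj = k, [k]
    for _ in range(generations):
        if z == 0:                       # extinction is absorbing
            traj.append(0)
            continue
        r = rng.choice(len(p), p=p)      # pick the environment for this step
        z = int(samplers[r](z).sum())    # sum of z i.i.d. offspring counts
        traj.append(z)
    return traj

# Crude Monte Carlo estimate of the extinction probability
runs = [simulate()[-1] == 0 for _ in range(2000)]
print("estimated extinction probability:", np.mean(runs))
```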


2021 ◽  
pp. 107754632198920
Author(s):  
Zeinab Fallah ◽  
Mahdi Baradarannia ◽  
Hamed Kharrati ◽  
Farzad Hashemzadeh

This study considers the design of an H∞ sliding mode controller for a singular Markovian jump system described by a discrete-time state-space realization. The system under investigation is subject to both matched and mismatched external disturbances, and the transition probability matrix of the underlying Markov chain is considered to be only partly available. A new sufficient condition is developed in terms of linear matrix inequalities to determine the mode-dependent parameter of the proposed quasi-sliding surface such that the stochastic admissibility of the sliding mode dynamics with a prescribed H∞ performance is guaranteed. Furthermore, the sliding mode controller is designed to ensure that the state trajectories of the system are driven onto the quasi-sliding surface and remain there afterward. Finally, two numerical examples are given to illustrate the effectiveness of the proposed design algorithms.
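To make the quasi-sliding idea concrete, the sketch below simulates Gao's classical discrete-time reaching law for a scalar sliding variable, which drives the trajectory into a band around the surface rather than exactly onto it. The gains, sampling time, and disturbance bound are invented illustrative values and do not reproduce the paper's LMI-based, mode-dependent design.

```python
import numpy as np

# Gao's discrete reaching law: s_{k+1} = (1 - q*T)*s_k - eps*T*sign(s_k) + d_k
# All parameters below are illustrative assumptions.
T, q, eps = 0.01, 5.0, 2.0       # sampling time and reaching-law gains
d_bound = 0.005                  # bound on the matched disturbance term

rng = np.random.default_rng(1)
s = 1.0                          # initial value of the sliding variable
history = []
for k in range(400):
    d = rng.uniform(-d_bound, d_bound)
    s = (1 - q * T) * s - eps * T * np.sign(s) + d
    history.append(s)

# After the reaching phase, s_k stays in a band of width O(eps*T + d_bound)
print("final |s|:", abs(history[-1]))
print("band estimate:", eps * T + d_bound)
```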


Author(s):  
Jin Zhu ◽  
Kai Xia ◽  
Geir E Dullerud

This paper investigates the quadratic optimal control problem for constrained Markov jump linear systems with an incomplete mode transition probability matrix (MTPM). Since the original system mode is not accessible, the observed mode is used for asynchronous controller design, where the mode observation conditional probability matrix (MOCPM), which characterizes the emission from original modes to observed modes, is assumed to be only partially known. An LMI optimization problem is formulated for such constrained hidden Markov jump linear systems with incomplete MTPM and MOCPM. Based on this, a feasible state-feedback controller can be designed by applying the free-connection weighting matrix method. The desired controller, dependent on the observed mode, is an asynchronous one that minimizes an upper bound on the quadratic cost and satisfies the restrictions on system states and control variables. Furthermore, clustering observation, in which the observed modes are recast into several clusters, is explored to reduce computational complexity. Numerical examples are provided to illustrate the validity of the approach.
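A minimal sketch of the asynchronous setting described above: the true mode evolves according to an MTPM, the observed mode is emitted through an MOCPM, and the state-feedback gain is selected by the observed mode rather than the true one. The system matrices and gains are arbitrary illustrative values, not the LMI-designed controller from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative two-mode jump linear system x_{k+1} = A[m] x_k + B[m] u_k
A = [np.array([[1.0, 0.1], [0.0, 1.0]]), np.array([[0.9, 0.2], [0.0, 1.1]])]
B = [np.array([[0.0], [0.1]]), np.array([[0.0], [0.2]])]

MTPM = np.array([[0.9, 0.1],     # mode transition probability matrix
                 [0.2, 0.8]])
MOCPM = np.array([[0.8, 0.2],    # P(observed mode | true mode)
                  [0.3, 0.7]])

# Observed-mode-dependent feedback gains (hand-picked, not LMI-optimized)
K = [np.array([[-2.0, -3.0]]), np.array([[-1.5, -3.5]])]

x, mode = np.array([1.0, 0.0]), 0
cost = 0.0
for k in range(200):
    obs = rng.choice(2, p=MOCPM[mode])      # controller only sees the observed mode
    u = K[obs] @ x                          # asynchronous state feedback
    cost += x @ x + float(u @ u)            # quadratic stage cost
    x = A[mode] @ x + (B[mode] @ u).ravel()
    mode = rng.choice(2, p=MTPM[mode])      # true mode jumps according to MTPM
print("accumulated quadratic cost:", round(cost, 2))
```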


2016 ◽  
Vol 138 (6) ◽  
Author(s):  
Thai Duong ◽  
Duong Nguyen-Huu ◽  
Thinh Nguyen

The Markov decision process (MDP) is a well-known framework for devising optimal decision-making strategies under uncertainty. Typically, the decision maker assumes a stationary environment, characterized by a time-invariant transition probability matrix. In many real-world scenarios, however, this assumption is not justified, so the optimal strategy might not provide the expected performance. In this paper, we study the performance of the classic value iteration algorithm for solving an MDP under nonstationary environments. Specifically, the nonstationary environment is modeled as a sequence of time-variant transition probability matrices governed by an adiabatic evolution inspired by quantum mechanics. We characterize the performance of the value iteration algorithm subject to the rate of change of the underlying environment, measured in terms of the convergence rate to the optimal average reward. We present two examples of queuing systems that make use of our analysis framework.
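For reference, here is a minimal value iteration sketch on a toy two-state, two-action MDP whose transition matrices drift slowly with a parameter t. The drift schedule, rewards, and discount factor are invented for illustration and are not the adiabatic model analyzed in the paper.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P[a] is the transition matrix under action a; R[a] the reward vector."""
    V = np.zeros(P.shape[1])
    while True:
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(P.shape[0])])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Toy 2-state, 2-action MDP; the environment drifts with parameter t in [0, 1].
R = np.array([[1.0, 0.0],     # reward of action 0 in states 0, 1
              [0.0, 2.0]])    # reward of action 1 in states 0, 1

for t in np.linspace(0.0, 1.0, 5):
    P = np.array([
        [[0.9 - 0.3 * t, 0.1 + 0.3 * t], [0.5, 0.5]],   # action 0
        [[0.2, 0.8], [0.4 + 0.2 * t, 0.6 - 0.2 * t]],   # action 1
    ])
    V, policy = value_iteration(P, R)
    print(f"t={t:.2f}  V={np.round(V, 2)}  policy={policy}")
```

Re-solving the drifted MDP at each t shows how the optimal values and policy shift as the transition matrices change, which is the kind of behavior the paper's analysis bounds for the time-varying case.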


Equilibrium ◽  
2015 ◽  
Vol 10 (1) ◽  
pp. 33 ◽  
Author(s):  
Andrzej Cieślik ◽  
Łukasz Goczek

In this paper, we study the evolution of corruption patterns in 27 post-communist countries over the period 1996-2012 using the Control of Corruption Index and a Markov transition probability matrix over corruption categories. This method allows us to generate the long-run distribution of corruption among the post-communist countries. Our empirical findings suggest that corruption in the post-communist countries is a very persistent phenomenon that does not change much over time. Several theoretical explanations for this result are provided.
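A stripped-down version of this kind of mobility analysis: assign each country-year observation to a corruption category, count category-to-category transitions between consecutive years, and read the long-run distribution off the estimated Markov matrix. The category scheme and the tiny data set below are fabricated for illustration only.

```python
import numpy as np

# Illustrative category codes: 0 = low, 1 = medium, 2 = high corruption.
# Each row is one country's (fabricated) category sequence over consecutive years.
panel = [
    [2, 2, 2, 1, 2, 2],
    [1, 1, 2, 2, 2, 2],
    [0, 0, 1, 0, 0, 0],
    [2, 2, 2, 2, 2, 2],
]

K = 3
counts = np.zeros((K, K))
for seq in panel:
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)

# Long-run (stationary) distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi = pi / pi.sum()
print("estimated transition matrix:\n", np.round(P, 2))
print("long-run category distribution:", np.round(pi, 2))
```

Heavy diagonal entries in the estimated matrix would correspond to the persistence finding reported in the paper: countries rarely leave their corruption category.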


2018 ◽  
Vol 10 (06) ◽  
pp. 1850073
Author(s):  
Kardi Teknomo

An ideal flow network is a strongly connected network with flow in which the flows are in steady state and conserved. The matrix of an ideal flow is premagic: the vector of row sums is equal to the transpose of the vector of column sums. The premagic property guarantees flow conservation at every node. The scaling factor, the sum of the node probabilities over all nodes, is equal to the total flow of an ideal flow network, and the same scaling factor can be applied to recreate the identical ideal flow network from the same transition probability matrix. Perturbation analysis of the elements of the stationary node probability vector yields the insight that the limiting (stationary) distribution is also the flow-equilibrium distribution. The process is reversible: the Markov transition probability matrix can be obtained from the invariant state distribution through the linear algebra of the ideal flow matrix. Finally, we show that the recursive transformation [Formula: see text] representing [Formula: see text]-vertex path tracing also preserves the properties of an ideal flow, namely irreducibility and the premagic property.
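A small numerical sketch of the relationship described above, under the assumption that the flow matrix is obtained by weighting each row of an irreducible transition probability matrix by its stationary probability and a total-flow scaling factor. The example matrix and the scaling value are invented for illustration.

```python
import numpy as np

# Illustrative irreducible (strongly connected) transition probability matrix.
S = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.6, 0.4, 0.0]])

# Stationary distribution pi: left eigenvector of S for eigenvalue 1.
vals, vecs = np.linalg.eig(S.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi = pi / pi.sum()

kappa = 100.0                       # assumed total flow (scaling factor)
F = kappa * np.diag(pi) @ S         # candidate flow matrix

# Premagic check: row sums equal column sums, i.e. flow is conserved per node.
print("row sums:   ", np.round(F.sum(axis=1), 3))
print("column sums:", np.round(F.sum(axis=0), 3))

# Reversibility: the transition matrix is recovered by row-normalizing F.
S_back = F / F.sum(axis=1, keepdims=True)
print("recovered S matches:", np.allclose(S, S_back))
```

Because pi is stationary, row i of F sums to kappa*pi_i and column i sums to kappa*pi_i as well, which is exactly the premagic (flow-conservation) property, and normalizing the rows of F reverses the construction back to the Markov matrix.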

