OPTIMAL MAINTENANCE OF AIRFRAME CRACKS

Author(s):  
KODO ITO ◽  
TOSHIO NAKAGAWA

As an airframe has a finite lifetime and must be designed to be lightweight, airframe maintenance is indispensable for operating aircraft without serious trouble. Once an airframe enters operation, it is subjected to stresses, and these stresses cause damage such as cracks. Cracks grow with operating time and cause catastrophic events such as mid-air disintegration when they exceed a critical size. A managerial crack size is therefore prespecified, and Preventive Maintenance (PM) is performed when the inspected crack size exceeds it. In this paper, optimal PM policies for airframe crack failure are discussed. Airframe states are represented as a Markov renewal process, and the one-step transition probabilities are derived. The total expected cost from the start of operation to termination by failure is defined, and the optimal PM policies that minimize it are discussed.
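
The threshold structure of this policy lends itself to a quick numerical illustration. The following is a minimal Monte Carlo sketch of a threshold-type PM rule, not the paper's model: the crack-growth law, critical size, and costs are hypothetical placeholders, and the long-run cost rate is estimated as a renewal-reward ratio.

```python
import random

# Hypothetical parameters for a threshold-type PM policy (illustrative only).
GROWTH_MEAN = 1.5      # mean crack growth per inspection interval (mm)
CRITICAL_SIZE = 10.0   # crack size causing catastrophic failure (mm)
COST_PM, COST_FAILURE = 1.0, 50.0

def cost_rate(pm_threshold, n_cycles=100_000):
    """Estimate the long-run cost per inspection interval (renewal-reward ratio)."""
    cost = time = 0.0
    for _ in range(n_cycles):
        crack = 0.0
        while True:
            crack += random.expovariate(1.0 / GROWTH_MEAN)  # random growth step
            time += 1.0
            if crack >= CRITICAL_SIZE:    # failure occurs before PM can act
                cost += COST_FAILURE
                break
            if crack >= pm_threshold:     # inspection finds crack above threshold
                cost += COST_PM
                break
    return cost / time

# Sweep the managerial crack size to locate the cost-minimizing policy.
for t in (2.0, 4.0, 6.0, 8.0):
    print(f"PM threshold {t:4.1f} mm -> cost rate {cost_rate(t):.4f}")
```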

Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 729
Author(s):  
Miquel Montero

Random walks with invariant loop probabilities comprise a wide family of Markov processes with site-dependent, one-step transition probabilities. The whole family, which includes the simple random walk, emerges from geometric considerations related to the stereographic projection of an underlying geometry into a line. After a general introduction, we focus our attention on the elliptic case: random walks on a circle with built-in reflecting boundaries.
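
As a rough illustration of such site-dependent dynamics, the sketch below simulates a generic one-dimensional walk with reflecting endpoints. The linear ramp in p_right is purely a placeholder: the paper's probabilities come from the stereographic projection and are not reproduced here.

```python
import random

N = 20  # number of sites, labelled 0 .. N-1

def p_right(site):
    """Placeholder site-dependent probability of stepping right."""
    return 0.25 + 0.5 * site / (N - 1)

def step(site):
    if site == 0:          # reflecting boundary on the left
        return 1
    if site == N - 1:      # reflecting boundary on the right
        return N - 2
    return site + 1 if random.random() < p_right(site) else site - 1

# Empirical occupation frequencies after many steps.
counts = [0] * N
site = N // 2
n_steps = 1_000_000
for _ in range(n_steps):
    site = step(site)
    counts[site] += 1
print([round(c / n_steps, 3) for c in counts])
```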


1977 ◽  
Vol 14 (02) ◽  
pp. 298-308 ◽  
Author(s):  
Peter R. Nelson

In a single-shelf library having infinitely many books B1, B2, …, the probability of selecting each book is assumed known. Books are removed one at a time and replaced in position k prior to the next removal. Books are moved either to the right or the left as is necessary to vacate position k. Those arrangements of books where after some finite position all the books are in natural order (book i occupies position i) are considered as states in an infinite Markov chain. When k > 1, we show that the chain can never be positive recurrent. When k = 1, we find the limits of ratios of one-step transition probabilities; and when k = 1 and the chain is transient, we find the Martin exit boundary.
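
The chain is easy to simulate on a finite truncation of the shelf. The sketch below runs the move-to-position-k dynamics with k = 1 (move-to-front); the geometric selection probabilities are an arbitrary illustration, not part of the paper.

```python
import random

N, K = 10, 1                                    # truncated shelf size; replacement position k
weights = [2.0 ** -(i + 1) for i in range(N)]   # selection probability of book i

shelf = list(range(N))                          # shelf[j] = index of the book in position j
for _ in range(100_000):
    book = random.choices(range(N), weights=weights)[0]
    shelf.remove(book)                          # remove the chosen book ...
    shelf.insert(K - 1, book)                   # ... and replace it in position k

print(shelf)   # with k = 1, frequently chosen (low-index) books drift to the front
```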


1988 ◽  
Vol 20 (01) ◽  
pp. 99-111 ◽  
Author(s):  
Nico M. Van Dijk

Consider a perturbation in the one-step transition probabilities and rewards of a discrete-time Markov reward process with an unbounded one-step reward function. A perturbation estimate is derived for the finite horizon and average reward function. Results from [3] are hereby extended to the unbounded case. The analysis is illustrated for one- and two-dimensional queueing processes by an M/M/1-queue and an overflow queueing model with an error bound in the arrival rate.
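
A small numerical experiment conveys the flavor of the question. The sketch below computes the finite-horizon expected reward of a uniformized M/M/1 chain, truncated at N states and carrying the unbounded-type reward r(i) = i, under a nominal and a slightly perturbed arrival rate. The rates, truncation level, and horizon are illustrative choices; no claim is made that this reproduces the bound of the paper.

```python
import numpy as np

N, HORIZON, MU, C = 50, 200, 1.0, 2.0   # truncation, horizon, service rate,
                                        # common uniformization constant (> lam + MU)

def transition_matrix(lam):
    """Uniformized one-step transition matrix of an M/M/1 queue truncated at N."""
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        if i < N:
            P[i, i + 1] = lam / C       # arrival
        if i > 0:
            P[i, i - 1] = MU / C        # departure
        P[i, i] = 1.0 - P[i].sum()      # fictitious self-loop
    return P

def finite_horizon_reward(lam):
    r = np.arange(N + 1, dtype=float)   # reward = queue length (unbounded in i)
    P, v = transition_matrix(lam), np.zeros(N + 1)
    for _ in range(HORIZON):
        v = r + P @ v                   # backward value iteration
    return v[0]                         # start from the empty queue

nominal, perturbed = finite_horizon_reward(0.80), finite_horizon_reward(0.81)
print(f"nominal {nominal:.2f}, perturbed {perturbed:.2f}, gap {abs(nominal - perturbed):.2f}")
```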


1972 ◽  
Vol 4 (2) ◽  
pp. 133-146 ◽  
Author(s):  
G Gilbert

This paper develops two mathematical models of housing turnover in a neighborhood. The first of these draws upon the theory of non-homogeneous Markov processes and includes the effects of present neighborhood composition upon future turnover probabilities. The second model considers the turnover process as a Markov renewal process and therefore allows the inclusion of length of occupancy as a determinant of transition probabilities. Example calculations for both models are included, and procedures for using the models are outlined.
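
The feedback in the first model, where present composition shapes future turnover probabilities, can be caricatured in a few lines. In the toy sketch below, the probability that a vacated unit turns over to group B rises linearly with the current fraction of B occupants; the linear response and all parameters are invented for illustration.

```python
import random

N_UNITS, YEARS = 100, 30
TURNOVER = 0.10                      # annual probability that a unit turns over

state = ["A"] * N_UNITS              # occupancy of each unit, group A or B
for year in range(1, YEARS + 1):
    frac_b = state.count("B") / N_UNITS
    p_to_b = 0.2 + 0.6 * frac_b      # composition feeds the transition probability
    for i in range(N_UNITS):
        if random.random() < TURNOVER:
            state[i] = "B" if random.random() < p_to_b else "A"
    print(f"year {year:2d}: fraction B = {state.count('B') / N_UNITS:.2f}")
```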


Author(s):  
Igor Vitalievich Kotenko ◽  
Igor Borisovich Parashchuk

The object of research is the process of detecting harmful information in social networks and the global network. An approach is proposed for verifying the parameters of a mathematical model of the random process of detecting malicious information when the initial data are unreliable and inaccurately (contradictorily) specified. The approach is based on stochastic state and observation equations built on controlled Markov chains in finite differences. Verification of the key parameters of such a model - the elements of the matrix of one-step transition probabilities - is performed using an extrapolating neural network. This makes it possible to take into account and compensate for the inaccuracy of the original data inherent in random processes of searching for and detecting malicious information, and to increase the accuracy of decision-making in the assessment and categorization of digital network content in order to detect and counter information of this class.
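
The verification step concerns the elements of the matrix of one-step transition probabilities. The paper performs it with an extrapolating neural network; as a drastically simplified stand-in, the sketch below merely projects an inconsistently specified matrix back onto a valid stochastic matrix by clipping and row renormalization.

```python
import numpy as np

def repair_transition_matrix(P_raw):
    """Project noisy or contradictory entries onto a valid stochastic matrix."""
    P = np.clip(np.asarray(P_raw, dtype=float), 0.0, 1.0)  # drop impossible values
    return P / P.sum(axis=1, keepdims=True)                # each row must sum to one

# Contradictory initial data: a negative entry and a row summing past one.
P_noisy = [[0.70, 0.35],
           [-0.05, 1.10]]
print(repair_transition_matrix(P_noisy))
```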


1999 ◽  
Vol 12 (4) ◽  
pp. 371-392
Author(s):  
Bong Dae Choi ◽  
Sung Ho Choi ◽  
Dan Keun Sung ◽  
Tae-Hee Lee ◽  
Kyu-Seog Song

We analyze the transient behavior of a Markovian arrival queue with congestion control based on a pair of thresholds, where the arrival process is a queue-length-dependent Markovian arrival process. We consider the Markov chain embedded at arrival epochs and derive its one-step transition probabilities. From these results, we obtain the mean delay and the loss probability of the nth arriving packet. Before studying this complex model, we first give a transient analysis of an MAP/M/1 queueing system without congestion control at arrival epochs. We apply our results to a Signaling System No. 7 network with congestion control based on thresholds.
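
The control mechanism can be mimicked with a much simpler model. In the sketch below, the MAP is replaced by plain Poisson arrivals whose rate drops once the queue length crosses an upper threshold and recovers only after it falls back below a lower one (hysteresis); all rates, thresholds, and the buffer size are illustrative.

```python
import random

LAM_FULL, LAM_REDUCED, MU = 1.2, 0.3, 1.0   # arrival rates and service rate
T_LOW, T_HIGH, BUFFER = 5, 15, 18           # control thresholds and buffer size

def simulate(n_arrivals=50_000):
    queue, congested = 0, False
    arrivals = losses = queue_sum = 0
    while arrivals < n_arrivals:
        lam = LAM_REDUCED if congested else LAM_FULL
        rate = lam + (MU if queue > 0 else 0.0)
        if random.random() < lam / rate:     # next event is an arrival
            arrivals += 1
            queue_sum += queue
            if queue >= BUFFER:
                losses += 1                  # buffer full: packet is lost
            else:
                queue += 1
                if queue >= T_HIGH:
                    congested = True         # throttle the source
        else:                                # next event is a departure
            queue -= 1
            if queue <= T_LOW:
                congested = False            # restore the full rate
    return losses / arrivals, queue_sum / arrivals

loss, mean_q = simulate()
print(f"loss probability ~ {loss:.4f}, mean queue seen by arrivals ~ {mean_q:.2f}")
```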


1983 ◽  
Vol 20 (1) ◽  
pp. 178-184 ◽  
Author(s):  
Harry Cohn

A Borel–Cantelli-type property in terms of one-step transition probabilities is given for events like {|X_{n+1}| > a + ε, |X_n| ≤ a}, a and ε being two positive numbers. Applications to normed sums of i.i.d. random variables with infinite mean and branching processes in varying environment with or without immigration are derived.
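
A Monte Carlo experiment makes the events concrete. The sketch below tracks X_n = S_n/n for i.i.d. Pareto steps with infinite mean, in the spirit of the normed-sums application, and counts occurrences of the event; the values of a and ε are arbitrary illustrative choices.

```python
import random

a, eps, N, RUNS = 2.0, 0.5, 2_000, 500
count = 0
for _ in range(RUNS):
    s, prev_small = 0.0, False
    for n in range(1, N + 1):
        s += random.paretovariate(1.0)   # Pareto(alpha=1) step: infinite mean
        x = s / n                        # normed sum X_n = S_n / n
        if prev_small and abs(x) > a + eps:
            count += 1                   # event {|X_n| > a + eps, |X_{n-1}| <= a}
        prev_small = abs(x) <= a
print(f"mean number of occurrences per path: {count / RUNS:.3f}")
```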


1969 ◽  
Vol 6 (3) ◽  
pp. 704-707 ◽  
Author(s):  
Thomas L. Vlach ◽  
Ralph L. Disney

The departure process from the GI/G/1 queue is shown to be a semi-Markov process imbedded at departure points with a two-dimensional state space. Transition probabilities for this process are defined and derived from the distributions of the arrival and service processes. The one-step transition probabilities and a stationary distribution are obtained for the imbedded two-dimensional Markov chain.
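
One coordinate of the embedded state, the number of customers left behind at a departure epoch, is straightforward to estimate by simulation. The sketch below embeds a GI/G/1 queue at its departure points; the uniform interarrival and exponential service laws are illustrative stand-ins for general GI and G distributions.

```python
import random

def interarrival():                       # GI: uniform interarrival times, mean 1.0
    return random.uniform(0.5, 1.5)

def service():                            # G: exponential service times, mean 0.8
    return random.expovariate(1.0 / 0.8)

def simulate(n_departures=100_000):
    t_arrive, t_depart = interarrival(), float("inf")
    queue, counts, departures = 0, {}, 0
    while departures < n_departures:
        if t_arrive <= t_depart:          # next event: an arrival
            queue += 1
            if queue == 1:                # server was idle: start a service
                t_depart = t_arrive + service()
            t_arrive += interarrival()
        else:                             # next event: a departure
            queue -= 1
            departures += 1
            counts[queue] = counts.get(queue, 0) + 1   # customers left behind
            t_depart = t_depart + service() if queue > 0 else float("inf")
    return {k: round(v / n_departures, 4) for k, v in sorted(counts.items())}

print(simulate())
```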


1985 ◽  
Vol 22 (02) ◽  
pp. 324-335 ◽  
Author(s):  
J. M. McNamara

This paper discusses a renewal process whose time development between renewals is described by a Markov process. The process may be controlled by choosing the times at which renewal occurs, the objective of the control being to maximise the long-term average rate of reward. Let γ* denote the maximum achievable rate. We consider a specific policy in which a sequence of estimates of γ* is made. This sequence is defined inductively as follows. Initially an (a priori) estimate γ0 is chosen. On making the nth renewal one estimates γ* in terms of γ0, the total rewards obtained in the first n renewal cycles and the total length of these cycles. γn then determines the length of the (n + 1)th cycle. It is shown that γn tends to γ* as n tends to ∞, and that this policy is optimal. The time at which the (n + 1)th renewal is made is determined by solving a stopping problem for the Markov process with continuation cost γn per unit time and stopping reward equal to the renewal reward. Thus, in general, implementation of this policy requires a knowledge of the transition probabilities of the Markov process. An example is presented in which one needs to know essentially nothing about the details of this process or the fine details of the reward structure in order to implement the policy. The example is based on a problem in biology.
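
The inductive estimation scheme is easy to exhibit on a toy instance. In the sketch below, a cycle run for time t earns reward √t and each renewal costs one extra time unit, so the optimal rate is γ* = 0.5, attained by stopping at t = 1; the stopping problem reduces to continuing while the marginal reward rate 1/(2√t) exceeds the current estimate. The estimate used here, total reward so far over total elapsed time, is one simple variant of the update described above.

```python
import math

gamma = 0.3                              # a priori estimate gamma_0
total_reward = total_time = 0.0
for n in range(1, 13):
    # Stopping problem: continue while 1 / (2*sqrt(t)) > gamma_{n-1}.
    t_stop = 1.0 / (4.0 * gamma ** 2)
    total_reward += math.sqrt(t_stop)    # reward earned during this cycle
    total_time += t_stop + 1.0           # cycle length plus unit renewal time
    gamma = total_reward / total_time    # re-estimate gamma* from the history
    print(f"cycle {n:2d}: gamma_{n} = {gamma:.4f}")
# gamma_n climbs towards gamma* = 0.5, mirroring the paper's convergence result.
```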

