Multiple Random Variables and Random Process

2017 ◽  
pp. 95-126
Author(s):  
Shaila Dinkar Apte
2020 ◽  
Vol 175 ◽  
pp. 12006 ◽  
Author(s):  
Ilona Avlasenko ◽  
Lyudmila Avlasenko ◽  
Isa Peshkhoev ◽  
Yuri Podkolzin ◽  
Oksana Savelyeva

This article analyzes a mathematical simulation model of the functioning of a small enterprise. It is assumed that the annual profitability of working capital and the loan rate are normally distributed random variables, and that the amount of borrowed capital does not exceed the enterprise's own working capital. Given the value of the own working capital at the initial time, its dependence on time is constructed as a random process. The parameters of the random variables are estimated from statistical data on the previous activities of the enterprise. Realizations of the random process are obtained by statistical simulation: using statistical tests, realizations of the random function describing the growth of own working capital are constructed, and the probability of bankruptcy is estimated as the relative frequency of cases in which this random function takes a negative value. It is proposed to use the constructed simulation model to estimate the probability of bankruptcy of the studied enterprise over the coming period (a given number of years).
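As an illustration of the kind of statistical simulation described above, the following sketch estimates the bankruptcy probability as the relative frequency of Monte Carlo paths on which own working capital becomes negative. The yearly update rule, the parameter values, and the fixed borrowing fraction are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

def bankruptcy_probability(w0, mu_r, sd_r, mu_c, sd_c, borrow_frac=0.5,
                           years=10, n_paths=100_000):
    """Estimate P(bankruptcy) as the relative frequency of paths on which
    own working capital becomes negative within the given horizon.

    Illustrative update rule (an assumption, not the paper's exact model):
    each year the enterprise earns a random return on its total capital
    (own + borrowed) and pays a random interest rate on the borrowed part.
    """
    w = np.full(n_paths, w0, dtype=float)
    bankrupt = np.zeros(n_paths, dtype=bool)
    for _ in range(years):
        loan = borrow_frac * np.clip(w, 0.0, None)   # borrowed capital <= own capital
        r = rng.normal(mu_r, sd_r, n_paths)          # return on working capital
        c = rng.normal(mu_c, sd_c, n_paths)          # loan interest rate
        w = w + (w + loan) * r - loan * c
        bankrupt |= w < 0
    return bankrupt.mean()

print(bankruptcy_probability(w0=1.0, mu_r=0.15, sd_r=0.20, mu_c=0.10, sd_c=0.02))
```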


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Dezhang Sun ◽  
Xu Wang ◽  
Baitao Sun

Load combinations of earthquakes and heavy trucks are an important aspect of multihazard bridge design. Current load and resistance factor design (LRFD) specifications usually treat extreme hazards separately and provide no probabilistic basis for extreme load combinations. Earthquake load and heavy truck load are random processes with their own characteristics, and the maximum combined load is not the simple superposition of their individual maxima. The traditional Ferry Borges-Castaneda model, which accounts for load duration and occurrence probability, describes well how random processes are converted to random variables and combined, but it imposes strict constraints on the choice of time intervals needed to obtain precise results. Turkstra's rule, in which one load reaches its maximum value over the bridge's service life and is combined with the instantaneous (or mean) value of the other load, appears more rational, but its results are generally unconservative. Therefore, a modified model is presented here that combines the advantages of the Ferry Borges-Castaneda model and Turkstra's rule. The modified model is based on conditional probability; it converts random processes to random variables relatively easily and accounts for the fact that the combined loads do not attain their maxima simultaneously. The combination of earthquake load and heavy truck load is used to illustrate the model. Finally, the results of a numerical simulation are used to verify the feasibility and rationality of the model.
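The following sketch illustrates, on toy pulse processes, why Turkstra-style combination tends to be unconservative compared with the exact maximum of the combined process. The occurrence probabilities, intensities, and simplified combination rule are illustrative assumptions and do not reproduce the modified conditional-probability model of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_combination(n_paths=50_000, n_intervals=100,
                         p_occur=(0.05, 0.30), mean=(1.0, 0.5), sd=(0.4, 0.2)):
    """Toy Ferry Borges-Castaneda-style pulse processes: in each elementary
    time interval, load i occurs with probability p_occur[i] and, if present,
    takes a normal intensity. Returns the mean exact maximum of the combined
    load and the mean Turkstra-rule estimate. All numbers are assumptions.
    """
    loads = []
    for p, m, s in zip(p_occur, mean, sd):
        occur = rng.random((n_paths, n_intervals)) < p
        intensity = rng.normal(m, s, (n_paths, n_intervals))
        loads.append(np.where(occur, intensity, 0.0))
    combined_max = (loads[0] + loads[1]).max(axis=1)

    # Turkstra's rule: maximum of one load over the horizon plus the
    # point-in-time value of the companion load at that instant; take the
    # larger of the two cases.
    rows = np.arange(n_paths)
    idx0 = loads[0].argmax(axis=1)
    idx1 = loads[1].argmax(axis=1)
    case0 = loads[0].max(axis=1) + loads[1][rows, idx0]
    case1 = loads[1].max(axis=1) + loads[0][rows, idx1]
    turkstra = np.maximum(case0, case1)
    return combined_max.mean(), turkstra.mean()

exact, turkstra = simulate_combination()
print(f"mean exact combined maximum: {exact:.3f}, Turkstra estimate: {turkstra:.3f}")
```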


2015 ◽  
Vol 23 (3) ◽  
Author(s):  
Rostyslav E. Yamnenko

Abstract The paper is devoted to the study of sub-Gaussian random variables and stochastic processes. Recall that, along with centered Gaussian random variables, the space Sub(Ω) of sub-Gaussian random variables contains all bounded zero-mean random variables and all zero-mean random variables whose distribution tails decrease no more slowly than the tails of the distribution of a Gaussian random variable. Here we study the square deviation of a sub-Gaussian random process from a constant and derive an upper estimate for the exponential moment of this deviation. The obtained result allows one to estimate the distribution of the deviation of a sub-Gaussian random process from some measurable function in the norm of
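For orientation, one common way to define the sub-Gaussian property via the exponential moment (the paper may use an equivalent formulation) is the following:

```latex
% A random variable \xi belongs to Sub(\Omega) if there exists \tau \ge 0 such that
\[
  \mathbb{E}\,\exp\{\lambda \xi\} \;\le\; \exp\!\left\{\frac{\lambda^{2}\tau^{2}}{2}\right\}
  \quad \text{for all } \lambda \in \mathbb{R},
\]
% and the sub-Gaussian standard \tau(\xi) is the smallest such \tau:
\[
  \tau(\xi) \;=\; \inf\Bigl\{\tau \ge 0 :
  \mathbb{E}\exp\{\lambda\xi\} \le \exp\{\lambda^{2}\tau^{2}/2\}
  \ \ \forall \lambda \in \mathbb{R}\Bigr\}.
\]
```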


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Sandro Heiniger ◽  
Hugues Mercier

Abstract We design, describe and implement a statistical engine to analyze the performance of gymnastics judges with three objectives: (1) provide constructive feedback to judges, executive committees and national federations; (2) assign the best judges to the most important competitions; (3) detect bias and persistent misjudging. Judging a gymnastics routine is a random process, and we model this process using heteroscedastic random variables. The developed marking score scales the difference between the mark of a judge and the true performance level of a gymnast as a function of the intrinsic judging error variability estimated from historical data for each apparatus. This dependence between judging variability and performance quality has never been properly studied. We leverage the intrinsic judging error variability and the marking score to detect outlier marks and study the national bias of judges favoring athletes of the same nationality. We also study ranking scores assessing to what extent judges rate gymnasts in the correct order. Our main observation is that there are significant differences between the best and worst judges, both in terms of accuracy and national bias. The insights from this work have led to recommendations and rule changes at the Fédération Internationale de Gymnastique.
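A rough sketch of how such a marking score could be computed from historical data: estimate the intrinsic judging variability as a function of performance level (here approximated by a control score), then scale each judge's deviation by it. The binning approach, the function names, and the use of a control score as a proxy for the true performance are illustrative assumptions, not the authors' exact estimator.

```python
import numpy as np

def intrinsic_variability(control_scores, judge_marks, n_bins=10):
    """Estimate the intrinsic judging error standard deviation as a function
    of performance level by binning historical (control score, judge mark)
    pairs. The quantile binning is an illustrative assumption; the paper fits
    a smooth variability curve per apparatus.
    """
    control_scores = np.asarray(control_scores, dtype=float)
    judge_marks = np.asarray(judge_marks, dtype=float)
    edges = np.quantile(control_scores, np.linspace(0, 1, n_bins + 1))
    bin_idx = np.clip(np.digitize(control_scores, edges[1:-1]), 0, n_bins - 1)
    sigma = np.array([np.std(judge_marks[bin_idx == b] - control_scores[bin_idx == b])
                      for b in range(n_bins)])
    return edges, sigma

def marking_score(judge_mark, control_score, edges, sigma):
    """Scale a judge's deviation from the control score by the intrinsic
    variability estimated for that performance level."""
    b = np.clip(np.digitize(control_score, edges[1:-1]), 0, len(sigma) - 1)
    return (judge_mark - control_score) / sigma[b]
```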


2017 ◽  
Vol 20 (K2) ◽  
pp. 101-106
Author(s):  
Dam Ton Duong ◽  
Cuong Kien Dang

The paper reviews the analysis of hydrometeorological data (precipitation, flow, water level, etc.) to evaluate and predict extreme events such as floods, droughts, and salinity extremes, with the aim of reducing the impact of climate change on the economy and on human life. The main method for solving the posed problem is the max-domain of attraction of extreme value distributions, together with the Gumbel copula of the random variables related to the hydrometeorological data. The results presented in this paper are reviewed and verified using data supplied by the hydrometeorological stations in the Tan Chau and Chau Doc districts, An Giang province, from 1990 to the present.
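A minimal sketch of the modelling ingredients named above, assuming annual block maxima are available at two stations: fit GEV marginals (max-domain of attraction) and estimate the Gumbel copula parameter from Kendall's tau. The function names and the tau-based estimator are illustrative assumptions; the paper's estimation procedure may differ.

```python
import numpy as np
from scipy import stats

def fit_gumbel_copula_model(annual_max_a, annual_max_b):
    """Fit GEV marginals to annual maxima at two stations and a Gumbel copula
    to their dependence (assumes positive dependence, Kendall's tau >= 0).
    """
    # Marginal GEV fits: max-domain-of-attraction modelling of block maxima.
    gev_a = stats.genextreme.fit(annual_max_a)
    gev_b = stats.genextreme.fit(annual_max_b)

    # Gumbel copula parameter from Kendall's tau: theta = 1 / (1 - tau).
    tau, _ = stats.kendalltau(annual_max_a, annual_max_b)
    theta = 1.0 / (1.0 - tau)

    def joint_non_exceedance(x_a, x_b):
        """P(A <= x_a, B <= x_b) under the fitted marginals and Gumbel copula."""
        u = stats.genextreme.cdf(x_a, *gev_a)
        v = stats.genextreme.cdf(x_b, *gev_b)
        return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

    return gev_a, gev_b, theta, joint_non_exceedance
```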


Author(s):  
Dorin Drignei ◽  
Igor Baseski ◽  
Zissimos P. Mourelatos ◽  
Vijitashwa Pandey

A new metamodeling approach is proposed to characterize the output (response) random process of a dynamic system with random variables, excited by input random processes. The metamodel is then used to estimate the time-dependent reliability efficiently. The input random processes are decomposed using principal components or wavelets, and a few simulations are used to estimate the distributions of the decomposition coefficients. A similar decomposition is performed on the output random process. A Kriging model is then built between the input and output decomposition coefficients and is subsequently used to quantify the output random process corresponding to a realization of the input random variables and random processes. In our approach, the system input is not deterministic but random; we therefore establish a surrogate model between the input and output random processes. The quantified output random process is finally used to estimate the time-dependent reliability or probability of failure using the total probability theorem. The proposed method is illustrated with a corroding beam example.
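A compact sketch of the decomposition-plus-Kriging idea described above, assuming realizations of the input and output processes are stored as rows of a matrix (columns = time steps). The use of PCA and scikit-learn's Gaussian process regressor, as well as all names, are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor

def build_metamodel(input_realizations, output_realizations, n_components=5):
    """Compress input and output process realizations with principal
    components, then learn a Gaussian-process (Kriging) map between the
    coefficient spaces."""
    pca_in = PCA(n_components=n_components).fit(input_realizations)
    pca_out = PCA(n_components=n_components).fit(output_realizations)

    z_in = pca_in.transform(input_realizations)     # input decomposition coefficients
    z_out = pca_out.transform(output_realizations)  # output decomposition coefficients

    gp = GaussianProcessRegressor(normalize_y=True).fit(z_in, z_out)

    def predict_output_process(new_inputs):
        """Reconstruct output realizations for new input realizations."""
        z_new = pca_in.transform(new_inputs)
        return pca_out.inverse_transform(gp.predict(z_new))

    return predict_output_process
```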


1986 ◽  
Vol 23 (04) ◽  
pp. 1013-1018
Author(s):  
B. G. Quinn ◽  
H. L. MacGillivray

Sufficient conditions are presented for the limiting normality of sequences of discrete random variables possessing unimodal distributions. The conditions are applied to obtain normal approximations directly for the hypergeometric distribution and the stationary distribution of a special birth-death process.
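For the hypergeometric case, the resulting normal approximation takes the familiar form below (standard parametrization with population size N, K marked items, and n draws; stated informally here, the paper gives the precise sufficient conditions):

```latex
\[
  \mu = \frac{nK}{N}, \qquad
  \sigma^{2} = n\,\frac{K}{N}\Bigl(1-\frac{K}{N}\Bigr)\frac{N-n}{N-1},
  \qquad
  \frac{X-\mu}{\sigma} \;\xrightarrow{\;d\;}\; \mathcal{N}(0,1)
  \quad \text{as } \sigma^{2} \to \infty .
\]
```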


1985 ◽  
Vol 24 (03) ◽  
pp. 120-130 ◽  
Author(s):  
E. Brunner ◽  
N. Neumann

Summary The mathematical basis of Zelen's suggestion [4] of prerandomizing patients in a clinical trial and then asking them for their consent is investigated. The first problem is to estimate the therapy and selection effects. In the simple prerandomized design (PRD) this is possible without any problems; similar observations have been made by Anbar [1] and McHugh [3]. For the double PRD, however, additional assumptions are needed in order to render the therapy and selection effects estimable. The second problem is to determine the distribution of the test statistics. It has to be taken into consideration that the sample sizes are random variables in the PRDs. This is why the distribution of the statistics can only be determined asymptotically, even under the assumption of normality. The behaviour of the statistics for small samples is investigated by means of simulations, in which the statistics considered in the present paper are compared with the statistics suggested by Ihm [2]. It turns out that the statistics suggested in [2] may lead to anticonservative decisions, whereas the "canonical statistics" suggested by Zelen [4] and considered in the present paper keep the level quite well or may lead to slightly conservative decisions if there are considerable selection effects.
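A toy simulation of a single-consent prerandomized design, illustrating why the post-consent group sizes are random variables and why the comparison by randomized arm estimates a therapy effect diluted by non-consent. All parameter values and the outcome model are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_single_consent_prd(n=200, p_consent=0.7, effect=0.5,
                                selection_shift=0.3, n_trials=5_000):
    """Patients are randomized before consent; those randomized to the
    experimental arm who refuse receive the standard therapy. Group sizes
    after consent are random, so the test statistic is only asymptotically
    normal."""
    stats_as_randomized = np.empty(n_trials)
    for t in range(n_trials):
        arm = rng.integers(0, 2, n)                  # 0 = standard, 1 = experimental offer
        consent = rng.random(n) < p_consent          # only acted upon in arm 1
        # Patients who would consent may differ systematically (selection effect).
        baseline = rng.normal(0.0, 1.0, n) + selection_shift * consent
        treated = (arm == 1) & consent               # actually receive experimental therapy
        outcome = baseline + effect * treated
        # "Canonical" comparison by randomized arm: unbiased for the therapy
        # effect diluted by non-consent.
        diff = outcome[arm == 1].mean() - outcome[arm == 0].mean()
        se = np.sqrt(outcome[arm == 1].var(ddof=1) / (arm == 1).sum()
                     + outcome[arm == 0].var(ddof=1) / (arm == 0).sum())
        stats_as_randomized[t] = diff / se
    return stats_as_randomized

z = simulate_single_consent_prd()
print("mean statistic:", z.mean().round(3), " empirical sd:", z.std(ddof=1).round(3))
```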

