Uncertainty of Integral System Safety in Engineering

Author(s):  
Kalman Ziha

Abstract Probabilistic safety analysis evaluates system reliability and failure probability using statistics and probability theory, but it cannot estimate the system uncertainties that arise from the variability of system state probabilities. The article first reviews how information entropy expresses the probabilistic uncertainty due to the unevenness of the probability distribution of system states. It then argues that the conditional entropies with respect to the system's operational and failure states appropriately describe system redundancy and robustness, respectively. Finally, the article concludes that the joint probabilistic uncertainties of reliability, redundancy, and robustness define the integral system safety. The concept of integral system safety allows more comprehensive definition of favorable system functional properties, configuration evaluation, optimization, and decision making in engineering.
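The entropy-based quantities described above can be sketched in a few lines of stdlib Python. The split into operational and failure states and all the probabilities below are hypothetical illustration inputs, not values from the article:

```python
import math

def entropy(probs):
    """Shannon entropy (natural log) of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def conditional_entropy(probs, subset_prob):
    """Entropy of a group of states, conditioned on being in that group."""
    if subset_prob == 0:
        return 0.0
    return entropy([p / subset_prob for p in probs])

# Hypothetical system with two operational and two failure states
operational = [0.6, 0.3]   # probabilities of operational states
failure = [0.07, 0.03]     # probabilities of failure states

reliability = sum(operational)                              # R = 0.9
redundancy = conditional_entropy(operational, reliability)  # uncertainty within operation
robustness = conditional_entropy(failure, 1 - reliability)  # uncertainty within failure
```

A perfectly even split of operational-state probabilities maximizes the redundancy term, matching the abstract's point that uncertainty reflects the unevenness of the distribution.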

2014
Vol. 17 (03n04)
pp. 1450016
Author(s):  
V. I. YUKALOV ◽  
D. SORNETTE

The idea is advanced that self-organization in complex systems can be treated as decision making (as it is performed by humans) and, vice versa, that decision making is nothing but a kind of self-organization in the decision maker's nervous system. A mathematical formulation is suggested, based on the definition of probabilities of system states, whose particular cases characterize the probabilities of structures, patterns, scenarios, or prospects. In this general framework, it is shown that the mathematical structures of self-organization and of decision making are identical. This makes it clear how self-organization can be seen as an endogenous decision-making process and, reciprocally, how decision making occurs via an endogenous self-organization. The approach is illustrated by phase transitions in large statistical systems, crossovers in small statistical systems, evolutions and revolutions in social and biological systems, structural self-organization in dynamical systems, and by the probabilistic formulation of classical and behavioral decision theories. In all these cases, self-organization is described as the process of evaluating the probabilities of macroscopic states or prospects in the search for a state with the largest probability. The general way of deriving the probability measure for classical systems is the principle of minimal information, that is, conditional entropy maximization under given constraints. Behavioral biases of decision makers can be characterized in the same framework, as analogous to quantum fluctuations in natural systems.
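The entropy-maximization step mentioned above (deriving a probability measure under given constraints) can be illustrated with a minimal stdlib sketch that solves for the Lagrange multiplier by bisection. The discrete values and the mean constraint below are made-up inputs, not an example from the article:

```python
import math

def maxent_distribution(values, mean_constraint, tol=1e-10):
    """Maximum-entropy distribution over discrete `values`, subject to a
    fixed expected value, found by bisection on the Lagrange multiplier.
    This is the textbook exponential-family form, not the article's model."""
    def mean_for(beta):
        weights = [math.exp(-beta * v) for v in values]
        z = sum(weights)
        return sum(w * v for w, v in zip(weights, values)) / z

    lo, hi = -50.0, 50.0          # mean_for is decreasing in beta
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) > mean_constraint:
            lo = mid              # beta too small: mean still too large
        else:
            hi = mid
    beta = (lo + hi) / 2
    weights = [math.exp(-beta * v) for v in values]
    z = sum(weights)
    return [w / z for w in weights]
```

When the constraint equals the unweighted average of the values, the multiplier vanishes and the result is the uniform distribution, as expected for unconstrained entropy maximization.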


Author(s):  
M. Vidyasagar

This chapter provides an introduction to some elementary aspects of information theory, including entropy in its various forms. Entropy refers to the level of uncertainty associated with a random variable (or, more precisely, with the probability distribution of the random variable). When there are two or more random variables, it is worthwhile to study the conditional entropy of one random variable with respect to another. The last concept is relative entropy, also known as the Kullback–Leibler divergence, which measures the “disparity” between two probability distributions. The chapter first considers convex and concave functions before discussing the properties of the entropy function, conditional entropy, the uniqueness of the entropy function, and the Kullback–Leibler divergence.
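The three quantities the chapter covers — entropy, conditional entropy, and the Kullback–Leibler divergence — have compact textbook definitions that can be sketched in stdlib Python (base-2 logarithms, so the units are bits):

```python
import math

def entropy(p):
    """Shannon entropy H(X) in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def conditional_entropy(joint):
    """H(Y|X) from a joint distribution given as a dict {(x, y): p}."""
    px = {}
    for (x, _), p in joint.items():
        px[x] = px.get(x, 0.0) + p
    return -sum(p * math.log2(p / px[x])
                for (x, _), p in joint.items() if p > 0)

def kl_divergence(p, q):
    """D(p || q); assumes q > 0 wherever p > 0."""
    return sum(pi * math.log2(pi / qi)
               for pi, qi in zip(p, q) if pi > 0)
```

For independent variables the conditional entropy H(Y|X) equals H(Y), and the KL divergence of a distribution from itself is zero — two of the basic properties the chapter develops.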


2018
Author(s):  
Daniel Mortlock

Mathematics is the language of quantitative science, and probability and statistics are the extension of classical logic to real-world data analysis and experimental design. The basics of mathematical functions and probability theory are summarized here, providing the tools for statistical modeling and the assessment of experimental results. The focus is on the Bayesian approach to such problems (i.e., Bayesian data analysis); therefore, the basic laws of probability are stated, along with several standard probability distributions (e.g., binomial, Poisson, Gaussian). A number of standard classical tests (e.g., p values, the t-test) are also defined and, to the degree possible, linked to the underlying principles of probability theory. This review contains 5 figures, 1 table, and 15 references.
Keywords: Bayesian data analysis, mathematical models, power analysis, probability, p values, statistical tests, statistics, survey design
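As an illustration of the classical-versus-Bayesian contrast mentioned above, the sketch below computes a one-sided binomial tail probability (a simple p-value) and the posterior mean of a success probability under a Beta prior. The data (7 successes in 10 trials) and the uniform Beta(1, 1) prior are invented for illustration, not drawn from the review:

```python
import math

def binom_pmf(k, n, p):
    """Binomial probability of k successes in n trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def one_sided_p_value(k, n, p0=0.5):
    """Classical tail probability P(X >= k) under H0: p = p0."""
    return sum(binom_pmf(i, n, p0) for i in range(k, n + 1))

def beta_posterior_mean(k, n, a=1, b=1):
    """Posterior mean of p under a Beta(a, b) prior, via conjugacy:
    posterior is Beta(a + k, b + n - k)."""
    return (a + k) / (a + b + n)
```

With 7 successes in 10 trials, the classical tail probability against p = 0.5 is about 0.17 (not significant), while the Bayesian posterior mean under a flat prior is 8/12 ≈ 0.67 — two different summaries of the same data.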


Author(s):  
Nicolae Brînzei ◽  
Jean-François Aubry

In this article, we propose new models and algorithms for the reliability assessment of systems, relying on concepts from graph theory. These developments exploit the order relation on the set of system component states, which is graphically represented by the Hasse diagram. The monotonicity property of the reliability structure function of coherent systems allows us to obtain an ordered graph from the Hasse diagram. This ordered graph represents all the system states, and it can be obtained from knowledge of the system tie-sets alone. First, this model gives a new way to search for a minimal disjoint Boolean polynomial; second, it can yield the system reliability directly, without resorting to an intermediate Boolean polynomial. By browsing the paths from the minimal tie-sets to the maxima of the ordered graph and using a weight associated with each node, we propose a new algorithm that obtains the reliability polynomial directly, by searching for sub-graphs representing eligible monomials. This approach is then extended to non-coherent systems thanks to the introduction of the new concept of terminal tie-sets. The algorithms are applied to case studies of both coherent and non-coherent real systems, and the results, compared with those computed using standard reliability block diagram and fault tree models, validate the proposed approach. Formal definitions of the graphs used and of the developed algorithms are also given, making their software implementation easy and efficient.
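The ordered-graph algorithm itself is not reproduced here; as a point of comparison, the sketch below implements the classical inclusion–exclusion computation of system reliability from minimal tie-sets — the kind of baseline (exponential in the number of tie-sets) that the proposed graph-based approach is designed to improve upon. The example systems are generic, not the article's case studies:

```python
from itertools import combinations

def reliability_from_tie_sets(tie_sets, p):
    """System reliability by inclusion-exclusion over minimal tie-sets.
    tie_sets: list of sets of component ids (each a minimal path set).
    p: dict mapping component id -> component reliability.
    Assumes independent components; classical baseline method."""
    total = 0.0
    for r in range(1, len(tie_sets) + 1):
        for combo in combinations(tie_sets, r):
            union = set().union(*combo)       # components in this union of paths
            term = 1.0
            for c in union:
                term *= p[c]
            total += (-1) ** (r + 1) * term   # alternating inclusion-exclusion sign
    return total
```

For two components of reliability 0.9 each, the parallel system (tie-sets {1} and {2}) gives 0.99 and the series system (single tie-set {1, 2}) gives 0.81, matching the familiar closed forms.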


Author(s):  
Aaron W. Johnson ◽  
Kevin R. Duda ◽  
Thomas B. Sheridan ◽  
Charles M. Oman

Objective: This article describes a closed-loop, integrated human–vehicle model designed to help understand the underlying cognitive processes that influenced changes in subject visual attention, mental workload, and situation awareness across control mode transitions in a simulated human-in-the-loop lunar landing experiment. Background: Control mode transitions from autopilot to manual flight may cause total attentional demands to exceed operator capacity. Attentional resources must be reallocated and reprioritized, which can increase the average uncertainty in the operator’s estimates of low-priority system states. We define this increase in uncertainty as a reduction in situation awareness. Method: We present a model built upon the optimal control model for state estimation, the crossover model for manual control, and the SEEV (salience, effort, expectancy, value) model for visual attention. We modify the SEEV attention executive to direct visual attention based, in part, on the uncertainty in the operator’s estimates of system states. Results: The model was validated using the simulated lunar landing experimental data, demonstrating an average difference in the percentage of attention ≤3.6% for all simulator instruments. The model’s predictions of mental workload and situation awareness, measured by task performance and system state uncertainty, also mimicked the experimental data. Conclusion: Our model supports the hypothesis that visual attention is influenced by the uncertainty in system state estimates. Application: Conceptualizing situation awareness around the metric of system state uncertainty is a valuable way for system designers to understand and predict how reallocations in the operator’s visual attention during control mode transitions can produce reallocations in situation awareness of certain states.
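The uncertainty-weighted attention allocation described above can be caricatured in a few lines. The linear combination, the clipping, and every coefficient below are illustrative assumptions for a SEEV-style score, not the authors' calibrated model:

```python
def seev_weights(instruments):
    """Illustrative SEEV-style attention weights. Each instrument maps to a
    dict of salience, effort, expectancy, value, and a hypothetical
    uncertainty term standing in for the model's state-estimate uncertainty."""
    scores = {}
    for name, f in instruments.items():
        scores[name] = (f["salience"] - f["effort"]
                        + f["expectancy"] * f["value"]
                        + f["uncertainty"])
    scores = {k: max(v, 0.0) for k, v in scores.items()}  # clip negatives
    total = sum(scores.values()) or 1.0
    # normalize to percentage of visual attention
    return {k: 100.0 * v / total for k, v in scores.items()}
```

Raising the uncertainty term for one instrument pulls a larger share of attention toward it, which is the qualitative behavior the validated model exhibits across control mode transitions.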

