Toward a Dedicated Failure Flow Arrestor Function Methodology

Author(s):  
Michael R. S. Slater
Douglas L. Van Bossuyt

Risk analysis in engineering design is of paramount importance when developing complex systems or upgrading existing systems. New generations of complex systems are often expected to carry decreased risk and increased reliability compared with previous designs. For instance, within the American civilian nuclear power industry, the Nuclear Regulatory Commission (NRC) has progressively increased requirements for reliability and driven down the chance of radiological release beyond the plant site boundary. However, many ongoing complex system design efforts analyze risk only after early major architecture decisions have been made. One promising way of bringing risk considerations into the conceptual stages of the complex system design process is functional failure modeling. Function Failure Identification and Propagation (FFIP) and related methods began the push toward assessing risk using the functional modeling taxonomy. This paper advances the Dedicated Failure Flow Arrestor Function (DFFAF) method, which incorporates dedicated Arrestor Functions (AFs) whose purpose is to stop failure flows from propagating along uncoupled failure flow pathways, as defined by the Uncoupled Failure Flow State Reasoner (UFFSR). In doing so, DFFAF adds a new tool to the functional failure modeling toolbox for complex system engineers. This paper introduces DFFAF and illustrates it with a simplified civilian Pressurized Water Reactor (PWR) nuclear power plant case study.
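The abstract does not give implementation details, but the core idea of an arrestor function blocking a propagating failure flow can be sketched as a graph traversal. The sketch below is a minimal illustration under assumed data structures; the function names, the toy PWR topology, and the propagation rule are hypothetical, not the DFFAF algorithm itself.

```python
# Minimal sketch of failure-flow propagation with dedicated arrestor
# functions (AFs). All names are illustrative, not the DFFAF method.
from collections import deque

def propagate_failure(flow_paths, source, arrestors):
    """Return the set of functions reached by a failure flow.

    flow_paths -- dict mapping each function to downstream functions,
                  including hypothesized uncoupled failure-flow pathways
    source     -- function where the failure originates
    arrestors  -- functions designated as AFs; a flow reaches an AF
                  but is arrested there and spreads no further
    """
    reached, frontier = {source}, deque([source])
    while frontier:
        func = frontier.popleft()
        if func in arrestors:          # AF arrests the flow here
            continue
        for nxt in flow_paths.get(func, []):
            if nxt not in reached:
                reached.add(nxt)
                frontier.append(nxt)
    return reached

# Toy example: a pump failure would reach the steam generator through
# the piping unless an AF is placed in between.
paths = {"pump": ["piping"], "piping": ["steam_gen"]}
print(propagate_failure(paths, "pump", arrestors={"piping"}))
```

In this toy model, placing an AF at the piping function keeps a pump failure from reaching the steam generator along the assumed pathway.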

Author(s):  
Caitlin Stack
Douglas L. Van Bossuyt

Current methods of functional failure risk analysis do not facilitate explicit modeling of systems equipped with Prognostics and Health Management (PHM) hardware. As PHM systems continue to grow in application and popularity within major complex systems industries (e.g., aerospace, automotive, civilian nuclear power), incorporating PHM modeling into functional failure modeling methodologies will become increasingly useful, both in the early phases of complex system design and for analysis of existing complex systems. Functional failure modeling methods have been developed in recent years to assess risk in the early phases of complex system design. However, these methods do not yet include an explicit way of analyzing the effects of PHM systems on system failure probabilities. It is common practice within the systems health monitoring industry to design PHM subsystems during the later stages of system design, typically after most major system architecture decisions have been made. This practice encourages omitting PHM effects on the system from consideration during the early stages of design. This paper proposes a new method for analyzing PHM subsystems’ contribution to risk reduction in the early stages of complex system design. The Prognostic Systems Variable Configuration Comparison (PSVCC) eight-step method developed here expands upon existing methods of functional failure modeling by explicitly representing PHM subsystems. A generic pressurized water nuclear reactor primary coolant loop system is presented as a case study to illustrate the proposed method. The proposed method promises more accurate modeling of complex systems equipped with PHM subsystems in the early phases of design.
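As a rough illustration of what a configuration comparison might compute, the sketch below scores system failure probability for several PHM configurations over a toy coolant loop. All probabilities, the series-system model, and the mitigation rule are assumptions made for this sketch; the PSVCC method's eight steps are not reproduced here.

```python
# Illustrative comparison of system failure probability across PHM
# configurations, in the spirit of (not reproducing) the PSVCC method.

def system_failure_prob(component_probs, phm_coverage):
    """Series-system failure probability; a PHM-covered component's
    failure probability is scaled by (1 - detection probability)."""
    p_ok = 1.0
    for name, p_fail in component_probs.items():
        p_eff = p_fail * (1.0 - phm_coverage.get(name, 0.0))
        p_ok *= (1.0 - p_eff)
    return 1.0 - p_ok

# Assumed per-component failure probabilities for a toy coolant loop.
coolant_loop = {"pump": 0.02, "piping": 0.005, "steam_gen": 0.01}
configs = {
    "no_phm": {},
    "pump_only": {"pump": 0.9},
    "full": {"pump": 0.9, "piping": 0.8, "steam_gen": 0.85},
}
for label, coverage in configs.items():
    print(label, round(system_failure_prob(coolant_loop, coverage), 5))
```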


Author(s):  
Guillaume L’Her
Douglas L. Van Bossuyt
Bryan M. O’Halloran

Prognostics and Health Management (PHM) systems are usually only considered and set up late in design, or even during the system’s lifetime, after the major design decisions have been made. However, considering a PHM system’s impact on system failure probabilities can benefit the design early on and subsequently reduce costs. Identifying failure paths in the early phases of engineering design can guide the designer toward a safer, more reliable, and more cost-efficient design. Several functional failure modeling methods developed in recent years allow risk assessment in the early stages of design. However, the risk and reliability functional failure analysis methods developed to date do not explicitly model the PHM equipment used to identify and prevent potential system failures. This paper proposes a framework to optimize prognostic system selection and positioning during the early stages of a complex system design. A Bayesian network incorporating the PHM systems is used to analyze the functional model and failure propagation. The algorithm developed within the proposed framework returns an optimized placement of PHM hardware in the complex system, allowing the designer to evaluate the need for system improvement. A design tool was developed to apply the proposed method automatically. A generic pressurized water nuclear reactor primary coolant loop system is used as a case study illustrating the proposed framework. The results obtained for this case study demonstrate the promise of the method introduced in this paper, notably exhibiting how the proposed framework can support engineering design teams in making better-informed decisions early in the design phase.
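The paper couples a Bayesian network with a placement optimizer; as a loose stand-in, the sketch below exhaustively scores candidate PHM placements under a sensor budget using a simple series-system proxy. The scoring model, detection rate, and component data are invented for this sketch and do not reflect the authors' Bayesian-network analysis.

```python
# Hedged sketch of the placement-optimization idea: score every
# candidate PHM placement up to a sensor budget, return the best.
from itertools import combinations

def failure_score(components, placed, detect=0.9):
    """Proxy for P(system failure): series model where a monitored
    component's failure probability is reduced by the detection rate."""
    p_ok = 1.0
    for name, p in components.items():
        p_eff = p * (1 - detect) if name in placed else p
        p_ok *= 1 - p_eff
    return 1 - p_ok

def best_placement(components, budget):
    candidates = [set(c) for k in range(budget + 1)
                  for c in combinations(components, k)]
    return min(candidates, key=lambda s: failure_score(components, s))

# Assumed component failure probabilities for a toy coolant loop.
loop = {"pump": 0.02, "piping": 0.005, "steam_gen": 0.01, "valve": 0.015}
print(best_placement(loop, budget=2))   # picks the two riskiest components
```

Exhaustive search is only viable for toy systems; the appeal of the paper's framework is automating this tradeoff at realistic scale.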


2016, Vol 3 (1)
Author(s):
Kirsten Sinclair
Daniel Livingstone

Difficulty understanding the large number of interactions involved in complex systems makes engineering them successfully a challenge. Petri Nets are a graphical modelling technique used to describe and thoroughly check proposed designs of complex systems. While the automatic analysis capabilities of Petri Nets are useful, their visual form is less so, particularly for communicating the design they represent. In engineering projects, this can lead to a communication gap between people with different areas of expertise, negatively impacting the accuracy of designs. In contrast, although serious games can represent a variety of real and imaginary objects effectively, their behaviour can only be analysed manually through interactive simulation. This paper examines combining the complementary strengths of Petri Nets and serious games. The novel contribution of this work is a serious game prototype of a complex system design that has been checked thoroughly. Underpinned by Petri Net analysis, the serious game can be used as a high-level interface to communicate and refine the design. Improvement of a complex system design is demonstrated by applying the integration to a proof-of-concept case study.
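For readers unfamiliar with the formalism, a Petri net's execution semantics are simple: a transition fires when all of its input places hold tokens, consuming one token from each input and depositing one in each output. A minimal sketch with assumed names follows; real Petri net tooling, including what the paper relies on, adds arc weights, reachability analysis, and much more.

```python
# Minimal Petri-net sketch: a marking maps places to token counts, and
# a transition is a pair (input places, output places).

def enabled(marking, transition):
    inputs, _ = transition
    return all(marking.get(p, 0) >= 1 for p in inputs)

def fire(marking, transition):
    inputs, outputs = transition
    new = dict(marking)
    for p in inputs:
        new[p] -= 1                     # consume one token per input
    for p in outputs:
        new[p] = new.get(p, 0) + 1      # produce one token per output
    return new

# Two-place handshake: 'request' tokens move to 'served' via 't_serve'.
t_serve = (["request"], ["served"])
marking = {"request": 2, "served": 0}
while enabled(marking, t_serve):
    marking = fire(marking, t_serve)
print(marking)   # {'request': 0, 'served': 2}
```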


Author(s):  
Farzaneh Farhangmehr
Irem Y. Tumer

The design and development cycle for complex systems is full of uncertainty, commonly recognized as the main source of risk in organizations engaged in design and development. One challenge for such organizations is assessing how much risk (cost, schedule, scope) they can take on and still remain competitive. The risk associated with the design of complex systems is fundamentally tied to uncertainty, which may lead to suboptimal performance or failure if unmanaged. By understanding the sources of uncertainty in all stages of complex system design, decision-makers can make more informed choices and identify “hotspots” for reducing risks due to uncertainty, for example by reallocating resources or adding safeguards. Uncertainty in the design of complex systems falls into two major categories of certain uncertainty: knowledge (epistemic) uncertainty and variability (aleatory) uncertainty. The intersection of these two sets is ambiguity uncertainty, and what lies outside both is what we do not know we do not know (uncertain uncertainty). By setting detailed definitions, we can reduce ambiguity uncertainty. Furthermore, knowledge uncertainty can be subdivided into model, ambiguity, and behavioral uncertainty, and variability uncertainty into natural randomness, ambiguity, and behavioral uncertainty; model and behavioral uncertainty can be subdivided further still. Using this classification, this paper proposes the “Capture, Assessment and Communication Tool for Uncertainty Simulation” (CACTUS) for capturing, assessing, and communicating risks due to uncertainty during complex system design. CACTUS uses columns to identify the sources, location, severity, and importance of uncertainty at each stage of design. By applying CACTUS, decision-makers can answer the following questions for each type of uncertainty in the design process: (1) Where does the uncertainty come from (its sources)? (2) In which stages of design does it appear (its location)? (3) What is its severity? (4) What is its importance? The hypothesis of this research is that, by using CACTUS, design organizations can capture, assess, and efficiently and effectively communicate uncertainty throughout their design processes and, as a result, improve their capacity for delivering complex systems that meet cost, schedule, and performance objectives. The fundamental steps of the methodology are illustrated using a concurrent design case study from NASA’s Project Design Center.
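One way to picture a CACTUS entry is as a record with one field per column the abstract names. The sketch below is an assumed encoding, with invented field names, scales, and example rows; it mirrors the four questions only, not the actual tool.

```python
# Sketch of a CACTUS-style record capturing source, location, severity,
# and importance, plus the uncertainty category. Scales are assumptions.
from dataclasses import dataclass

@dataclass
class UncertaintyEntry:
    category: str      # e.g. "epistemic/model", "aleatory/natural"
    source: str        # Q1: where does the uncertainty come from?
    location: str      # Q2: design stage where it appears
    severity: int      # Q3: assumed scale 1 (negligible) .. 5 (critical)
    importance: int    # Q4: assumed scale 1 (low) .. 5 (high)

table = [
    UncertaintyEntry("epistemic/model", "untested thruster model",
                     "concept design", severity=4, importance=5),
    UncertaintyEntry("aleatory/natural", "launch-window weather",
                     "mission planning", severity=3, importance=2),
]
# "Hotspots" in the abstract's sense: high severity and high importance.
hotspots = [e for e in table if e.severity >= 4 and e.importance >= 4]
print([e.source for e in hotspots])
```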


2015, Vol 138 (1)
Author(s):
Jesse Austin-Breneman
Bo Yang Yu
Maria C. Yang

During the early stage design of large-scale engineering systems, design teams are challenged to balance a complex set of considerations. The established structured approaches for optimizing complex system designs offer strategies for achieving optimal solutions, but in practice suboptimal system-level results are often reached due to factors such as satisficing, ill-defined problems, or other project constraints. Twelve subsystem and system-level practitioners at a large aerospace organization were interviewed to understand the ways in which they integrate subsystems in their own work. Responses showed subsystem team members often presented conservative, worst-case scenarios to other subsystems when negotiating a tradeoff as a way of hedging against their own future needs. This practice of biased information passing, referred to informally by the practitioners as adding “margins,” is modeled in this paper with a series of optimization simulations. Three “bias” conditions were tested: no bias, a constant bias, and a bias which decreases with time. Results from the simulations show that biased information passing negatively affects both the number of iterations needed and the Pareto optimality of system-level solutions. Results are also compared to the interview responses and highlight several themes with respect to complex system design practice.
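The effect of margins on convergence can be caricatured with a toy negotiation loop. The budget, concession rule, and margin values below are invented for illustration; the sketch only echoes the paper's three bias conditions (no bias, constant bias, decreasing bias).

```python
# Toy illustration of "margins" (biased information passing) in an
# iterative budget negotiation. The setup is invented for this sketch.

def iterations_to_converge(true_needs, budget, margin, decay=1.0,
                           max_iter=100):
    """Teams report need * (1 + margin); each round, they trim their
    true needs by 5% if the reported total overshoots the budget.
    Returns the number of rounds until the reported total fits."""
    needs = list(true_needs)
    m = margin
    for t in range(1, max_iter + 1):
        reported = sum(n * (1 + m) for n in needs)
        if reported <= budget:
            return t
        needs = [n * 0.95 for n in needs]    # concede a little
        m *= decay                           # decreasing-bias condition
    return max_iter

needs, budget = [40.0, 45.0], 90.0
print("no bias:   ", iterations_to_converge(needs, budget, 0.0))
print("constant:  ", iterations_to_converge(needs, budget, 0.2))
print("decreasing:", iterations_to_converge(needs, budget, 0.2, decay=0.7))
```

Run as written, the constant-bias case needs the most rounds, loosely matching the paper's finding that biased information passing increases iteration counts.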


Author(s):  
Joseph R. Piacenza
Kenneth John Faller
Mir Abbas Bozorgirad
Eduardo Cotilla-Sanchez
Christopher Hoyle
...

Robust design strategies continue to be relevant during concept-stage complex system design to minimize the impact of uncertainty in system performance due to uncontrollable external failure events. Historical system failures such as the 2003 North American blackout and the 2011 Arizona-Southern California outages show that decision making during a cascading failure can significantly contribute to the failure’s magnitude. In this paper, a scalable, model-based design approach is presented to optimize the quantity and location of decision-making agents in a complex system so as to minimize performance loss variability after a cascading failure, regardless of where the fault originated in the system. The result is a computational model that enables designers to explore concept-stage design tradeoffs based on individual risk attitudes (RA) toward system performance and performance variability after a failure. The IEEE RTS-96 power system test case is used to evaluate this method, and the results reveal key topological locations that are vulnerable to cascading failures and should not be associated with critical operations. This work illustrates the importance of considering decision making when evaluating system-level tradeoffs, supporting robust design.
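The optimization question can be caricatured by simulating cascades on a small network and comparing mean loss and loss variance across agent placements. The ring topology, cascade rule, and containment model below are invented for this sketch; the paper's model-based approach on IEEE RTS-96 is far richer.

```python
# Monte-Carlo-style sketch: which nodes should host decision-making
# agents so performance loss is small and varies little across fault
# origins? All modeling choices here are assumptions.
import random
from collections import deque

def cascade_loss(adj, origin, agents):
    """Fraction of nodes lost when a fault spreads from `origin`;
    agent nodes are hit but contain the cascade and do not relay it."""
    lost, frontier = {origin}, deque([origin])
    while frontier:
        n = frontier.popleft()
        if n in agents:
            continue
        for m in adj[n]:
            if m not in lost:
                lost.add(m)
                frontier.append(m)
    return len(lost) / len(adj)

ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}   # toy topology
random.seed(0)
for agents in [set(), {0, 4}, {0, 2, 4, 6}]:
    losses = [cascade_loss(ring, random.randrange(8), agents)
              for _ in range(200)]
    mean = sum(losses) / len(losses)
    var = sum((x - mean) ** 2 for x in losses) / len(losses)
    print(sorted(agents), "mean loss:", round(mean, 3),
          "variance:", round(var, 4))
```

Even this toy model shows the tradeoff the paper formalizes: adding well-placed agents drives down expected loss, and the spread of outcomes depends on where, not just how many, agents are placed.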

