Dynamic Positioning Reliability Index (DP-RI) and Offline Forecasting of DP-RI During Complex Marine Operations

Author(s):  
Charles Fernandez ◽  
Shashi Bhushan Kumar ◽  
Wai Lok Woo ◽  
Rosemary Norman ◽  
Arun Kr. Dev

The Dynamic Positioning (DP) system is a complex system with significant levels of integration between many sub-systems performing diverse control functions. The extent of information managed by each sub-system is enormous, and the complex integration between sub-systems increases the number of possible failure scenarios. A systematic analysis of all failure scenarios is tedious, and handling any such catastrophic situation places an enormous burden on the operator. There have been many accidents in which a failure in a DP system resulted in fatalities and environmental pollution. Therefore, reliability assessment of a DP system is critical for the safe and efficient operation of marine and offshore vessels. Traditionally, the reliability of a DP system is assessed during the design stage by methodologies such as Failure Mode and Effects Analysis (FMEA), Proving Trials, Hardware-in-the-Loop (HIL) testing, Site-Specific Risk Analysis and DP Capability Analysis, and during operation by annual trials to verify functionality. All of these methods are time-consuming, involve considerable human effort and, notably, incorporate no analysis of previous accidents into the reliability assessment. This imposes in-built uncertainty and risk in the DP system during operation. In this paper, a new concept, the Dynamic Positioning Reliability Index (DP-RI), is introduced and a state-of-the-art advisory decision-making tool is proposed. This tool is developed from information from various sources, including Offshore Reliability Data (OREDA), the International Marine Contractors Association (IMCA) accident database, DP vendor equipment failure databases, DP system suppliers' manuals, previous system-level FMEA and HIL testing results, site-specific risk analysis documents, project design specifications and operators' operational experience. Thus, DP-RI addresses the pitfalls of existing reliability assessment methods and will be an efficient tool for reducing the number of DP-related accidents.
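As an illustration of the idea of a composite reliability index, the following sketch aggregates per-sub-system reliability scores into a single figure. The abstract does not publish the DP-RI formula; the sub-system names, scores, and weights below are entirely hypothetical, and a simple weighted average stands in for whatever aggregation the tool actually uses.

```python
# Hypothetical sketch of a composite reliability index; NOT the published
# DP-RI formula. Sub-system scores (0..1) are combined with importance
# weights into a single index in [0, 1].

def dp_reliability_index(subsystem_scores, weights):
    """Weighted aggregate of per-sub-system reliability scores.
    Both arguments are dicts keyed by sub-system name."""
    total_weight = sum(weights.values())
    return sum(score * weights[name]
               for name, score in subsystem_scores.items()) / total_weight

# Illustrative inputs: thruster and power faults weighted as more critical.
scores  = {"thrusters": 0.98, "power": 0.95, "sensors": 0.99, "control": 0.97}
weights = {"thrusters": 3.0,  "power": 3.0,  "sensors": 2.0,  "control": 2.0}
index = dp_reliability_index(scores, weights)
```

A real advisory tool would derive the scores from the failure databases listed above rather than assigning them by hand.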

Author(s):  
Zahra Mohaghegh ◽  
Mohammad Modarres ◽  
Aris Christou

The modeling of dependent failures, specifically Common Cause Failures (CCFs), is one of the most important topics in Probabilistic Risk Analysis (PRA). Currently, CCFs are treated using parametric methods, which are based on historical failure events. Instead of relying on these existing data-driven approaches, this paper proposes physics-based CCF modeling, i.e. the incorporation of underlying physical failure mechanisms into risk models so that the root causes of dependencies can be "explicitly" included. This requires building a theoretical foundation for the integration of Probabilistic Physics-of-Failure (PPOF) models into PRA in a way that depicts the interactions of failure mechanisms and, ultimately, the dependencies between multiple component failures. To achieve this goal, this paper highlights the following methodological steps: (1) modeling the individual failure mechanisms (e.g. fatigue and wear) of two dependent components; (2) applying a mechanistic approach to deterministically model the interactions of their failure mechanisms; (3) utilizing probabilistic sciences (e.g. uncertainty modeling, Bayesian analysis) to make the model of interactions probabilistic; and (4) developing appropriate modeling techniques to link the physics-based CCF models to the system-level PRA. The proposed approach is beneficial for (a) reducing CCF occurrence in currently operating plants and (b) modeling CCFs for plants in the design stage.
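The core of the dependency argument can be shown with a toy Monte Carlo model (this is an illustration of the general principle, not the authors' PPOF formulation; all distributions and rates are assumptions): when two components share a physical root cause, such as a common environmental stress accelerating wear in both, their joint failure probability exceeds the product of the marginals, which is exactly what parametric CCF factors try to capture after the fact.

```python
import random

# Toy physics-based CCF illustration (distributions and rates are assumed,
# not taken from the paper): two components share a random environmental
# stress that accelerates both wear-based failure times, so their failures
# become dependent and P(both fail) > P(A fails) * P(B fails).

random.seed(0)
N = 20_000
MISSION_TIME = 10.0
fa = fb = both = 0
for _ in range(N):
    stress = random.lognormvariate(0.0, 0.5)   # shared root cause
    life_a = random.expovariate(0.1 * stress)  # wear life, stress-accelerated
    life_b = random.expovariate(0.1 * stress)
    failed_a = life_a < MISSION_TIME
    failed_b = life_b < MISSION_TIME
    fa += failed_a
    fb += failed_b
    both += failed_a and failed_b
p_a, p_b, p_both = fa / N, fb / N, both / N
```

Conditioning on the shared stress makes the two lifetimes independent, which is the "explicit" treatment of the common cause that the paper advocates.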


Author(s):  
Peter Leung ◽  
Kosuke Ishii ◽  
Jan Benson ◽  
Jeffrey Abell

This paper describes a method to identify and evaluate the risks associated with task transfer in a globally distributed engineering environment. Enterprises recognize the importance of global worksharing for incorporating diverse customer values into products, but this paradigm also introduces challenges in product development. Industry-wide interviews reveal that workshare risks exist at two levels: system and component. This paper presents a three-step risk analysis to (1) characterize product development work tasks, (2) define Distributed Component Development Risk based on historical rework data, and (3) evaluate workshare scenarios for the task transfer plan. Three interior vehicle components illustrate the steps of the risk analysis, and the findings indicate that most rework arises at the system-level design stage, while the discovery of these errors occurs during validation and manufacturing. As a result, the transfer of these tasks leads to a high likelihood of rework. This method is currently being applied in actual global automotive programs for validation.
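One plausible shape for step (2) is sketched below. The paper's actual risk measure is derived from proprietary rework data and is not given in the abstract, so the scoring rule, stage weights, and task names here are hypothetical: each historical rework event contributes to a task's risk score, weighted by how late in the process the error was discovered, since later discovery is costlier.

```python
# Hypothetical sketch of a Distributed Component Development Risk score
# (the paper's actual measure is not public): rework frequency per task,
# weighted by discovery stage because late discovery is costlier.

DISCOVERY_WEIGHT = {"design": 1.0, "validation": 3.0, "manufacturing": 10.0}

def dcd_risk(rework_events):
    """rework_events: list of (task, discovery_stage) from historical data.
    Returns a dict mapping task -> weighted rework score."""
    scores = {}
    for task, stage in rework_events:
        scores[task] = scores.get(task, 0.0) + DISCOVERY_WEIGHT[stage]
    return scores

# Illustrative history: system-level tasks whose errors surface late
# accumulate much higher scores than component-level tasks caught early.
history = [("system_layout", "validation"),
           ("system_layout", "manufacturing"),
           ("trim_panel", "design")]
risk = dcd_risk(history)
```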


Author(s):  
Eugene Babeshko ◽  
Ievgenii Bakhmach ◽  
Vyacheslav Kharchenko ◽  
Eugene Ruchkov ◽  
Oleksandr Siora

Operating reliability assessment of instrumentation and control systems (I&Cs) is always one of the most important activities, especially in critical domains such as nuclear power plants (NPPs). The intensive use of relatively new technologies such as field programmable gate arrays (FPGAs) in the I&C systems of upgraded and newly built NPPs makes it very topical to develop and validate advanced operating reliability assessment methods that consider technology-specific features. Increased integration densities make the reliability of integrated circuits the most crucial point in modern NPP I&Cs. Moreover, FPGAs differ in some significant ways from other integrated circuits: they are shipped as blanks, and their behavior depends heavily on the design configured into them. Furthermore, an FPGA design can be changed during a planned NPP outage for various reasons. Considering all possible failure modes of an FPGA-based NPP I&C at the design stage is quite a challenging task. Therefore, operating reliability assessment is one of the most suitable ways to perform a comprehensive analysis of FPGA-based NPP I&Cs. This paper summarizes our experience of the operating reliability analysis of FPGA-based NPP I&Cs.
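The basic arithmetic behind an operating (as opposed to design-stage) reliability estimate is a point estimate of the failure rate from accumulated field data. The numbers below are invented for illustration and are not from the paper:

```python
import math

# Operating-reliability sketch with hypothetical field data (not from the
# paper): a point estimate of the failure rate from cumulative operating
# hours, and the implied mission reliability R(t) = exp(-lambda * t).

observed_failures = 3
cumulative_hours = 1.2e6   # total operating hours across installed modules

lam = observed_failures / cumulative_hours   # failures per hour
mission_hours = 8760.0                       # one year of continuous operation
reliability = math.exp(-lam * mission_hours)
```

A full assessment would add confidence bounds on the rate and account for design changes made during outages, which reset the relevant operating history.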


PLoS ONE ◽  
2011 ◽  
Vol 6 (8) ◽  
pp. e23196 ◽  
Author(s):  
Aesun Shin ◽  
Jungnam Joo ◽  
Jeongin Bak ◽  
Hye-Ryung Yang ◽  
Jeongseon Kim ◽  
...  

Author(s):  
Lukman Irshad ◽  
Salman Ahmed ◽  
Onan Demirel ◽  
Irem Y. Tumer

Detection of potential failures and human errors, and their propagation over time, at an early design stage will help prevent system failures and adverse accidents. Hence, there is a need for a failure analysis technique that will assess potential functional/component failures and human errors, and how they propagate to affect the system overall. Prior work has introduced FFIP (Functional Failure Identification and Propagation), which considers both human errors and mechanical failures and their propagation at a system level at early design stages. However, it fails to consider the specific human actions (expected or unexpected) that contributed to the human error. In this paper, we propose a method to expand FFIP to include human action/error propagation during failure analysis so that a designer can address the human errors using human factors engineering principles at early design stages. To explore the capabilities of the proposed method, it is applied to a hold-up tank example, and the results are coupled with Digital Human Modeling to demonstrate how designers can use these tools to make better design decisions before any design commitments are made.
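The propagation idea can be sketched as a graph traversal. This is a minimal illustration, not the authors' FFIP implementation; the hold-up tank nodes and dependencies below are invented for the example. System functions and human actions are nodes, and a failed or erroneous node degrades every downstream function, so a designer can trace how a single human action error reaches the system level.

```python
# Minimal propagation sketch (NOT the authors' FFIP code; the hold-up tank
# model below is hypothetical). Human actions and system functions are nodes;
# a failure in any node propagates to everything downstream of it.

# edges: node -> functions that depend on it
depends_on = {
    "operator_opens_valve": ["fill_tank"],   # a human action node
    "level_sensor": ["control_level"],
    "fill_tank": ["control_level"],
    "control_level": ["maintain_supply"],
}

def propagate(failed):
    """Return the set of all nodes affected by the initially failed nodes."""
    affected = set(failed)
    frontier = list(failed)
    while frontier:
        node = frontier.pop()
        for downstream in depends_on.get(node, []):
            if downstream not in affected:
                affected.add(downstream)
                frontier.append(downstream)
    return affected

impact = propagate({"operator_opens_valve"})   # trace a human action error
```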


2021 ◽  
Vol 11 (18) ◽  
pp. 8379
Author(s):  
Seongmin Kim

Recent innovations in trusted execution environment (TEE) technologies enable the delegation of privacy-preserving computation to the cloud. In particular, Intel SGX, an extension of the x86 instruction set architecture (ISA), accelerates this trend by offering hardware-protected isolation with near-native performance. However, SGX inherently suffers from performance degradation, depending on workload characteristics, due to hardware restrictions and design decisions that primarily concern the security guarantee. System-level optimizations of the SGX runtime and kernel module have been proposed to resolve this, but they cannot effectively reflect the application-specific characteristics that largely determine the performance of legacy SGX applications. This work presents a strategy for application-level optimization that uses asynchronous switchless calls to reduce enclave transitions, one of the dominant overheads of using SGX. Based on a systematic analysis, our methodology examines the performance benefit of each enclave transition wrapper and selectively applies switchless calls without modifying the legacy codebases. The evaluation shows that our optimization strategy successfully improves the end-to-end performance of our showcase application, an SGX-enabled network middlebox.
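The selection logic can be sketched abstractly. This is not the paper's methodology or the Intel SGX SDK API, just a plain-Python illustration of the trade-off: a switchless call replaces a costly enclave transition with a cheap shared-buffer handoff, but a busy worker thread burns CPU, so conversion only pays off for wrappers whose call rate makes the expected saving clear a floor. All costs and call rates below are assumptions.

```python
# Illustrative wrapper-selection sketch (costs and rates are assumptions,
# and this is not the SGX SDK API): convert a transition wrapper to a
# switchless call only when the expected time saving clears a floor that
# accounts for the cost of keeping a worker thread busy.

TRANSITION_COST_US = 8.0    # assumed cost of a synchronous enclave transition
SWITCHLESS_COST_US = 1.5    # assumed cost of an asynchronous switchless call
MIN_SAVING_US_PER_SEC = 50_000.0

def select_switchless(call_profile):
    """call_profile: wrapper name -> measured calls per second.
    Returns the set of wrappers worth converting to switchless calls."""
    per_call_saving = TRANSITION_COST_US - SWITCHLESS_COST_US
    return {name for name, rate in call_profile.items()
            if rate * per_call_saving >= MIN_SAVING_US_PER_SEC}

# Hypothetical profile of a network middlebox: hot packet-path wrappers
# qualify, while an infrequent logging ocall does not.
profile = {"ocall_read": 120_000, "ocall_log": 2_000, "ecall_handle_pkt": 90_000}
chosen = select_switchless(profile)
```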


2004 ◽  
Vol 10 (2) ◽  
pp. 107-112 ◽  
Author(s):  
Romualdas Kliukas ◽  
Antanas Kudzys

The effect of service and proof actions on the probabilistic reliability (serviceability, safety and durability) of building elements (components and members) of existing enclosure and bearing structures is considered. Time-dependent models for the reliability assessment of elements under sustained variable and multicycle actions are presented. Revised reliability indices of existing elements exposed to permanent and variable service actions are discussed. It is recommended that the long-term reliability index of elements be assessed taking into account the effect of latent defects. Truncated probability distributions of the physical-mechanical resistances of elements, and the effect of their latent defects on reliability index assessment, are taken into account. Methodological peculiarities of the durability prediction of elements, and of avoiding unfounded premature repairs or replacements, are analysed. An applied illustration of the presented method, the probabilistic reliability prediction of deteriorating concrete covers, is demonstrated.
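The role of a truncated resistance distribution can be illustrated numerically (this is a generic reliability-index calculation, not the authors' model; all distribution parameters are invented). Proof loading screens out elements weaker than the proof level, truncating the resistance distribution from below, and the reliability index beta is then recovered from the Monte Carlo failure probability:

```python
import random
import statistics

# Generic sketch (parameters hypothetical, not the authors' formulation):
# element resistance R is normal, truncated from below at a proof-load level
# (screening removes the weakest, defect-laden elements); load effect S is
# normal; beta is recovered from the Monte Carlo failure probability.

random.seed(1)
N = 200_000
R_MEAN, R_STD, R_MIN = 300.0, 40.0, 250.0   # resistance, truncated at proof level
S_MEAN, S_STD = 180.0, 30.0                 # load effect

def draw_truncated_resistance():
    while True:                              # simple rejection sampling
        r = random.gauss(R_MEAN, R_STD)
        if r >= R_MIN:
            return r

failures = sum(draw_truncated_resistance() < random.gauss(S_MEAN, S_STD)
               for _ in range(N))
p_f = failures / N
beta = -statistics.NormalDist().inv_cdf(p_f)   # reliability index
```

Without the truncation the same parameters give a noticeably higher failure probability, which is the quantitative argument for crediting proof actions in the assessment.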


Author(s):  
David C. Jensen ◽  
Irem Y. Tumer ◽  
Tolga Kurtoglu

Software-driven hardware configurations account for the majority of modern complex systems. The often costly failures of such systems can be attributed to software-specific, hardware-specific, or software/hardware interaction failures. Understanding the propagation of failures in a complex system is critical because, while a software component may not fail in terms of loss of function, a software operational state can cause an associated hardware failure. The least expensive phase of the product life cycle in which to address failures is the design stage. This creates a need to evaluate how a combined software/hardware system behaves, and how failures propagate, within a design stage analysis framework. Historical approaches to modeling the reliability of these systems have analyzed the software and hardware components separately. As a result, significant work has been done to model and analyze the reliability of either component individually. Research into interface failures between hardware and software has largely been on the software side, modeling the behavior of software operating on failed hardware. This paper proposes the use of high-level system modeling approaches to model failure propagation in a combined software/hardware system. Specifically, this paper presents the use of the Function-Failure Identification and Propagation (FFIP) framework for system-level analysis. This framework is applied to evaluate nonlinear failure propagation within the Reaction Control System Jet Selection of the NASA Space Shuttle, specifically in the redundancy management system. The redundancy management software is a subset of the larger data processing software and is involved in jet selection, warning systems, and pilot control. The software component that monitors for leaks does so by evaluating temperature data from the fuel and oxidizer injectors, and flags a jet as having a failure by leak if the temperature data is out of bounds for three or more cycles.
The end goal is to identify the most likely and highest-cost paths for fault propagation in a complex system as an effective way to enhance its reliability. Through the definition of functional failure propagation modes and path evaluation, a complex system designer can evaluate the effectiveness of system monitors and compare design configurations.
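The leak-monitor rule described above, a jet is flagged when its injector temperature is out of bounds for three or more consecutive cycles, is simple enough to sketch directly. The temperature band is an assumption; the abstract does not give the actual limits:

```python
# Sketch of the leak-monitor rule described in the abstract. The acceptable
# temperature band is an assumed placeholder; only the "three or more
# consecutive out-of-bounds cycles" rule comes from the text.

LOW, HIGH = 30.0, 120.0   # assumed acceptable temperature band
CYCLES_TO_FLAG = 3

def leak_flag(temps):
    """temps: per-cycle injector temperature readings for one jet.
    True if any run of out-of-bounds readings reaches CYCLES_TO_FLAG."""
    run = 0
    for t in temps:
        run = run + 1 if not (LOW <= t <= HIGH) else 0
        if run >= CYCLES_TO_FLAG:
            return True
    return False
```

In an FFIP-style analysis this monitor is itself a software function whose operational state (flagging, or failing to flag) feeds the jet-selection logic, which is how a software state propagates to a hardware-level decision.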


2013 ◽  
Vol 718-720 ◽  
pp. 1268-1273
Author(s):  
Kai Wang ◽  
Xu Ping ◽  
Liu Yan ◽  
Li Geng

The substation is a key component of a regional power supply, and the assessment of substation reliability indices should be carried out before the assessment of regional power station reliability. Taking a regional power station as an example, this paper uses a minimal cut set algorithm based on the state spaces of electrical components to analyse the reliability of the power station's main electrical wiring. This method, verified to provide an effective reliability assessment of main electrical wiring, lays a foundation for the assessment of regional power station reliability.
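The minimal cut set approach can be illustrated with a toy wiring scheme (the cut sets and component unavailabilities below are hypothetical, not from the paper): the system fails when every component in some minimal cut set is failed, and the first-order rare-event approximation sums the cut-set probabilities.

```python
# Illustrative minimal-cut-set calculation for a hypothetical main-wiring
# scheme (component data and cut sets invented, not from the paper). The
# system fails when every element of some minimal cut set has failed.

unavailability = {"busbar": 1e-4, "breaker1": 5e-4, "breaker2": 5e-4,
                  "transformer1": 2e-3, "transformer2": 2e-3}

cut_sets = [
    {"busbar"},                        # single-point failure
    {"transformer1", "transformer2"},  # loss of both transformers
    {"breaker1", "breaker2"},          # loss of both breakers
]

def system_unavailability(cut_sets, q):
    """First-order (rare-event) approximation: sum over minimal cut sets
    of the product of component unavailabilities."""
    total = 0.0
    for cs in cut_sets:
        p = 1.0
        for comp in cs:
            p *= q[comp]
        total += p
    return total

Q = system_unavailability(cut_sets, unavailability)
```

In this toy scheme the single-point busbar cut set dominates, which is the kind of insight the cut-set analysis surfaces for a real wiring layout.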

