Failure Evaluation and Analysis of Mechatronics-Based Production Systems during Design Stage Using Structural Modeling

2016 ◽  
Vol 852 ◽  
pp. 799-805 ◽  
Author(s):  
M.K. Loganathan ◽  
Priyom Goswami ◽  
Bedabrat Bhagawati

A method based on structural modeling is developed for failure evaluation and analysis of mechatronics-based production systems. The majority of elements in production systems are mechatronics-based, comprising electrical, electronic, and mechanical elements, each of which may have different failure types that can be interdependent or interactive. The reliability of the system depends mainly on how well these failures are addressed during the design stage. In general, individual failures are generalized into probable failure modes, and early identification of these helps to reduce their probability. Considering failures together with their interdependences and interactions, however, allows the failures of complicated systems to be evaluated and analyzed efficiently and effectively and increases the inherent system reliability. System structure modeling helps in this regard. A digraph model, in conjunction with the matrix method, is employed for failure evaluation and analysis of a mechatronics-based production system based on its structure.
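As a rough illustration of the digraph-and-matrix idea, the sketch below encodes hypothetical failure interdependencies among a few mechatronic elements as a Boolean adjacency matrix and computes its transitive closure to see how a local failure could propagate. The element names and interactions are invented for illustration and are not taken from the paper.

```python
# Minimal sketch (not the authors' exact formulation): a failure-interaction
# digraph for a hypothetical mechatronic production cell, stored as an
# adjacency matrix, with reachability used to trace failure propagation.
import numpy as np

elements = ["motor", "drive", "plc", "sensor"]        # hypothetical elements
# A[i][j] = True means a failure of element i can induce/aggravate a failure of j
A = np.array([
    [0, 1, 0, 0],   # motor fault stresses the drive
    [1, 0, 1, 0],   # drive fault feeds back to the motor and corrupts PLC I/O
    [0, 0, 0, 1],   # PLC fault yields bad sensor commands
    [0, 1, 0, 0],   # sensor fault misleads the drive loop
], dtype=bool)

# Transitive closure (Boolean Warshall): R[i][j] is True if a failure at i can reach j
R = A.copy()
for k in range(len(elements)):
    R = R | (R[:, [k]] & R[[k], :])

for i, src in enumerate(elements):
    reached = [elements[j] for j in range(len(elements)) if R[i, j]]
    print(f"failure of {src} can propagate to: {reached}")
```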

Author(s):  
Xiaoping Du

Uncertainty commonly exists in engineering applications, especially in the design process. Quantifying and managing uncertainty is often a core consideration during the design stage. Because of its importance in engineering practice, uncertainty is gradually being introduced and taught in a number of engineering courses. Uncertainty topics, however, are still limited, and teaching materials on uncertainty are currently lacking. This paper focuses on possible topics that could be introduced in various engineering courses, particularly in design courses. The topics cover the following aspects: identifying and acting on potential failure modes, accounting for system reliability in the early design stage, quantifying the effect of uncertainty, and mitigating the effect of uncertainty in later design stages. The paper also introduces the basics of related design methodologies, such as reliability-based design, robust design, and design for six sigma, so that interested educators can develop familiarity with uncertainty. Finally, it reports the implementation and experience of uncertainty education at the Missouri University of Science and Technology.
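One of the listed topics, quantifying the effect of uncertainty, lends itself to a compact classroom-style exercise. The sketch below estimates a probability of failure for a simple capacity-minus-demand limit state by Monte Carlo simulation; the distributions and numbers are made up for illustration and are not taken from the paper.

```python
# Classroom-style sketch: quantifying the effect of uncertainty on a simple
# limit state g = capacity - demand via Monte Carlo simulation.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

capacity = rng.normal(loc=50.0, scale=5.0, size=n)   # e.g., member strength
demand = rng.normal(loc=35.0, scale=6.0, size=n)     # e.g., applied load effect

g = capacity - demand                 # g < 0 indicates failure
pf = np.mean(g < 0.0)                 # estimated probability of failure
beta = (50.0 - 35.0) / np.sqrt(5.0**2 + 6.0**2)      # exact reliability index for this case

print(f"Monte Carlo Pf ~ {pf:.4e}, analytical beta = {beta:.2f}")
```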


Author(s):  
Zhengwei Hu ◽  
Zhangli Hu ◽  
Xiaoping Du

Support vector machine (SVM) methods are widely used for classification and regression analysis. In many engineering applications only one class of data is available, and one-class SVM methods are then employed. In reliability applications, the one-class data may be failure data, since the data are recorded during reliability experiments in which only failures occur. Unlike the problems handled by existing one-class SVM methods, the SVM model in this work includes a bias constraint, which comes from the probability of failure estimated from the failure data. In this study, a new one-class SVM regression method is proposed to accommodate the bias constraint. The one class of failure data is maximally separated from a hypersphere whose radius is determined by the known probability of failure. The proposed SVM method generates regression models that directly link the states of failure modes with design variables, which makes it possible to obtain the joint probability density of all the component states of an engineering system and results in a more accurate prediction of system reliability during the design stage. Three examples are given to demonstrate the effectiveness of the new one-class SVM method.
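For context, the sketch below fits a standard one-class SVM to synthetic "failure" samples with scikit-learn. It shows only the conventional starting point that this work extends; it does not implement the bias constraint tied to the known probability of failure, which is the paper's contribution, and the data are invented.

```python
# Baseline sketch: a standard one-class SVM fitted to synthetic failure data.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
# Hypothetical failure data: design-variable settings recorded when failures occurred
X_fail = rng.normal(loc=[2.0, -1.0], scale=0.3, size=(200, 2))

model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_fail)

# Score new candidate designs: positive decision values fall inside the learned
# failure region, negative values fall outside it.
X_new = np.array([[2.1, -0.9], [0.0, 0.0]])
print(model.decision_function(X_new))
```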


Author(s):  
Jinghong Liang ◽  
Zissimos P. Mourelatos ◽  
Efstratios Nikolaidis

An efficient single-loop approach for series system reliability-based design optimization (RBDO) problems is presented in this paper. The approach enables the optimizer to apportion the system reliability among the failure modes in an optimal way by increasing the reliability of those failure modes whose reliability can be increased at low cost. Furthermore, it identifies the critical failure modes that contribute the most to the overall system reliability. A previously reported methodology uses a sequential optimization and reliability approach together with a linear extrapolation to determine the coordinates of the most probable points of the failure modes as the design changes. As a result, that approach can be slow and may not converge if the location of the most probable failure point changes significantly. This paper proposes an alternative system RBDO approach that overcomes these difficulties by using a single-loop formulation in which the searches for the optimum design and for the most probable failure points proceed simultaneously. An easy-to-implement active set strategy is used, and the maximum allowable failure probabilities of the failure modes are treated as design variables. The efficiency and robustness of the method are demonstrated on three design examples involving a beam, an internal combustion engine, and a vehicle side impact. The results are compared with deterministic optimization and the conventional component RBDO formulation.
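To convey the single-loop idea in miniature, the sketch below replaces the inner reliability analysis of a single hypothetical limit state with an approximate most probable point built from the gradient at the current design, so the design search and the reliability check advance together. It is a generic illustration under simplifying assumptions (normal variables, one constraint), not the paper's series-system formulation.

```python
# Generic single-loop RBDO sketch: the reliability loop is replaced by evaluating
# the limit state at a gradient-based approximate MPP. Problem data are hypothetical.
import numpy as np
from scipy.optimize import minimize

sigma = np.array([0.3, 0.3])      # std devs of X1, X2 (X_i ~ Normal(d_i, sigma_i))
beta_t = 3.0                      # target reliability index for the limit state

def g(x):                         # hypothetical limit state, g >= 0 is safe
    return x[0] ** 2 * x[1] / 20.0 - 1.0

def grad_g(x):
    return np.array([2.0 * x[0] * x[1] / 20.0, x[0] ** 2 / 20.0])

def shifted_constraint(d):
    # Approximate MPP: shift the means against the (scaled) gradient direction
    s = sigma * grad_g(d)
    alpha = s / np.linalg.norm(s)
    x_mpp = d - beta_t * sigma * alpha
    return g(x_mpp)               # require g at the approximate MPP to stay >= 0

cost = lambda d: d[0] + d[1]      # simple additive cost in the design means
res = minimize(cost, x0=np.array([5.0, 5.0]),
               constraints=[{"type": "ineq", "fun": shifted_constraint}],
               bounds=[(0.1, 10.0), (0.1, 10.0)])
print(res.x, shifted_constraint(res.x))
```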


2021 ◽  
pp. 875529302199483
Author(s):  
Eyitayo A Opabola ◽  
Kenneth J Elwood

Existing reinforced concrete (RC) columns with short splices in older-type frame structures are prone to either a shear or a bond mechanism. Experimental results have shown that the force–displacement response of columns exhibiting these failure modes is different from that of flexure-critical columns and typically has lower deformation capacity. This article presents a failure mode-based approach for the seismic assessment of RC columns with short splices. In this approach, the probable failure mode of the component is evaluated first; then, based on the failure mode, the force–displacement response of the component is predicted. Recommendations are proposed for evaluating the probable failure mode, elastic rotation, drift at lateral failure, and drift at axial failure for columns with short splices experiencing shear, flexure, or bond failures.
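In the spirit of the approach's first step, the sketch below screens a column for its probable failure mode by comparing a few capacity and demand terms. The inputs and decision rules are placeholders chosen for illustration only, not the article's recommended criteria.

```python
# Illustrative failure-mode screen. The capacity terms and decision rules are
# placeholders, NOT the article's recommendations.
def probable_failure_mode(V_flex, V_n, f_splice, f_y):
    """Classify the likely governing mechanism of a column with a short splice.

    V_flex   -- shear demand at flexural strength (hypothetical input)
    V_n      -- nominal shear strength
    f_splice -- steel stress the short splice can develop
    f_y      -- steel yield stress
    """
    if f_splice < f_y:
        return "bond (splice) failure before flexural yielding"
    if V_flex > V_n:
        return "shear failure"
    return "flexure-governed response"

print(probable_failure_mode(V_flex=310.0, V_n=280.0, f_splice=480.0, f_y=420.0))
```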


Author(s):  
Eugene Babeshko ◽  
Ievgenii Bakhmach ◽  
Vyacheslav Kharchenko ◽  
Eugene Ruchkov ◽  
Oleksandr Siora

Operating reliability assessment of instrumentation and control systems (I&Cs) is always one of the most important activities, especially in critical domains such as nuclear power plants (NPPs). The intensive use of relatively new technologies such as field programmable gate arrays (FPGAs) in I&C, which appear in upgrades and in newly built NPPs, makes the task of developing and validating advanced operating reliability assessment methods that account for specific technology features very topical. Increased integration densities make the reliability of integrated circuits the most crucial point in modern NPP I&Cs. Moreover, FPGAs differ in some significant ways from other integrated circuits: they are shipped as blanks and are highly dependent on the design configured into them. Furthermore, an FPGA design can be changed during a planned NPP outage for various reasons. Considering all possible failure modes of an FPGA-based NPP I&C at the design stage is therefore quite a challenging task, and operating reliability assessment is one of the most suitable ways to perform a comprehensive analysis of FPGA-based NPP I&Cs. This paper summarizes our experience with operating reliability analysis of FPGA-based NPP I&Cs.
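As a generic illustration of what an operating reliability assessment starts from (not the authors' method), the sketch below turns observed field data into a constant failure rate estimate with an upper confidence bound, assuming an exponential time-to-failure model. The failure count and operating hours are invented.

```python
# Generic operating-reliability sketch: point estimate and upper confidence
# bound for a constant failure rate from (made-up) field data.
from scipy.stats import chi2

failures = 2                      # observed failures across the installed base
device_hours = 4.0e6              # cumulative operating hours

lam_hat = failures / device_hours                 # point estimate [1/h]
# One-sided 90% upper bound for time-censored data: chi-square with 2*(failures+1) dof
lam_u90 = chi2.ppf(0.90, 2 * (failures + 1)) / (2.0 * device_hours)

print(f"lambda_hat = {lam_hat:.2e} per hour, 90% upper bound = {lam_u90:.2e} per hour")
```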


Author(s):  
Michael Devin ◽  
Bryony DuPont ◽  
Spencer Hallowell ◽  
Sanjay Arwade

Commercial floating offshore wind projects are expected to emerge in the United States by the end of this decade. Currently, however, high costs for the technology limit its commercial viability, and a lack of data regarding system reliability heightens project risk. This work presents an optimization algorithm to examine the trade-offs between cost and reliability for a floating offshore wind array that uses shared anchoring. Combining a multivariable genetic algorithm with elements of Bayesian optimization, the algorithm selectively increases anchor strengths to minimize the added costs of failure for a large floating wind farm in the Gulf of Maine under survival load conditions. The algorithm uses an evaluation function that computes the probability of mooring system failure and then calculates the expected maintenance costs of a failure via a Monte Carlo method. A cost sensitivity analysis is also performed to compare results for a range of maintenance cost profiles. The results indicate that virtually all of the farm's anchors are strengthened in the minimum cost solution. Anchor strength is increased by 5-35% depending on farm location, with the anchors nearest the export cable strengthened the most. The optimal solutions maintain a failure probability of 1.25%, demonstrating the trade-off point between cost and reliability. System reliability was found to be particularly sensitive to changes in turbine costs and downtime, suggesting that further research into floating offshore wind turbine failure modes under extreme loading conditions could be particularly impactful in reducing project uncertainty.
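A stripped-down version of the evaluation step might look like the sketch below, which estimates the probability that a survival-condition anchor load exceeds capacity for a given strength factor and converts it into an expected cost by Monte Carlo sampling. All load, capacity, and cost figures are hypothetical and not taken from the paper.

```python
# Conceptual sketch of a cost-reliability evaluation function (all numbers hypothetical).
import numpy as np

rng = np.random.default_rng(42)

def expected_failure_cost(strength_factor, n_samples=200_000):
    base_capacity = 12.0e6                        # N, nominal anchor capacity
    capacity = strength_factor * base_capacity * rng.lognormal(0.0, 0.10, n_samples)
    load = rng.lognormal(np.log(8.0e6), 0.25, n_samples)   # survival-load demand
    p_fail = np.mean(load > capacity)
    repair_cost = 2.5e6                           # USD per mooring/anchor failure event
    anchor_cost = 0.4e6 * strength_factor         # crude cost growth with anchor strength
    return anchor_cost + p_fail * repair_cost, p_fail

for sf in (1.0, 1.15, 1.35):
    total, pf = expected_failure_cost(sf)
    print(f"strength factor {sf:.2f}: Pf = {pf:.3%}, expected cost = ${total/1e6:.2f}M")
```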


2019 ◽  
Vol 37 (2) ◽  
pp. 189-206
Author(s):  
Yingsai Cao ◽  
Sifeng Liu ◽  
Zhigeng Fang

Purpose – The purpose of this paper is to propose new importance measures for degrading components based on the Shapley value, which can provide answers about how important players are to the whole cooperative game and what payoff each player can reasonably expect.
Design/methodology/approach – The proposed importance measure characterizes how a specific degrading component contributes to the degradation of system reliability by using the Shapley value. Degradation models are also introduced to assess the reliability of degrading components. The reliability of a system consisting of independent degrading components is obtained by using structure functions, while the reliability of a system comprising correlated degrading components is evaluated with a multivariate distribution.
Findings – The ranking of degrading components according to the newly developed importance measure depends on the degradation parameters of the components, the system structure, and the parameters characterizing the association of components.
Originality/value – Reliability degradation of engineering systems and equipment is often attributed to the degradation of a particular component, or set of components, characterized by degrading features. This paper therefore proposes new importance measures for degrading components based on the Shapley value to reflect the responsibility of each degrading component for the deterioration of system reliability. The results also give timely feedback on the expected contribution of each degrading component to system reliability degradation.
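To make the Shapley-value idea concrete, the sketch below computes component importances for a small series-parallel system in which a coalition's "worth" is the system reliability when only that coalition's components work. The structure function and reliabilities are invented, and the sketch does not include the paper's degradation models or the correlated-component case.

```python
# Generic Shapley-value importance sketch for a hypothetical series-parallel system.
from itertools import combinations
from math import factorial

components = ["c1", "c2", "c3"]
reliab = {"c1": 0.95, "c2": 0.85, "c3": 0.80}   # hypothetical survival probabilities

def v(S):
    """Coalition worth: reliability of 'c1 AND (c2 OR c3)' with components
    outside S forced to a failed state."""
    r = {c: (reliab[c] if c in S else 0.0) for c in components}
    return r["c1"] * (1.0 - (1.0 - r["c2"]) * (1.0 - r["c3"]))

n = len(components)
shapley = {}
for i in components:
    others = [c for c in components if c != i]
    phi = 0.0
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            phi += w * (v(set(S) | {i}) - v(set(S)))
    shapley[i] = phi

print(shapley)          # larger value => larger share of system reliability
```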


Author(s):  
Grant McSorley ◽  
Greg Huet ◽  
Stephen J. Culley ◽  
Clement Fortin

Because manufacturers bear increasing responsibility for the total lifecycle costs associated with their products, they are investing more and more effort in reducing these costs. One way this can be achieved is by eliminating possible in-service issues at the design stage, which can be supported by feeding product in-use information obtained during the testing, prototyping, and in-service lifecycle stages back to the earlier stages of the development process. To facilitate this feedback to design, the idea of complementary product structures is introduced. The relationships between these structures provide a link between product information across the various lifecycle stages. The similarities between the product structure and the FMEA structure are also examined. Because the FMEA organizes its information on a component basis, it is suggested that it provides a suitable means of organizing product in-use information so that it can be associated with the product structure. Based on these ideas, a full framework for the feedback and reuse of product in-use information is described.
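One way to picture the linkage between the product structure, the FMEA, and product in-use records is the data-structure sketch below. The field names and example records are hypothetical and are intended only to illustrate the kind of association the framework relies on.

```python
# Illustrative data structure: a product-structure node linked to its FMEA
# entries and in-use records, so service feedback can be routed back to design.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FMEAEntry:
    failure_mode: str
    effect: str
    rpn: int                       # risk priority number

@dataclass
class InUseRecord:
    source: str                    # e.g., "field service report", "test rig"
    description: str

@dataclass
class ComponentNode:
    name: str
    children: List["ComponentNode"] = field(default_factory=list)
    fmea: List[FMEAEntry] = field(default_factory=list)
    in_use: List[InUseRecord] = field(default_factory=list)

pump = ComponentNode("pump assembly")
seal = ComponentNode("shaft seal")
pump.children.append(seal)
seal.fmea.append(FMEAEntry("seal leakage", "loss of pressure", rpn=180))
seal.in_use.append(InUseRecord("field service report", "leak after 1,200 h"))

# Feedback to design: walk the structure and flag components whose in-service
# records correspond to high-RPN failure modes.
def flag(node):
    if node.in_use and any(e.rpn >= 120 for e in node.fmea):
        print(f"review at design stage: {node.name}")
    for child in node.children:
        flag(child)

flag(pump)
```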


Author(s):  
Anusha Krishna Murthy ◽  
Saikath Bhattacharya ◽  
Lance Fiondella

Most reliability models assume that components and systems experience a single failure mode. Several kinds of systems, such as hardware, are however prone to more than one mode of failure. Past two-failure mode research derives equations that maximize reliability or minimize cost by identifying the optimal number of components. However, many if not all of these equations are derived from models that make the simplifying assumption that components fail in a statistically independent manner. In this paper, models to assess the impact of correlation on two-failure mode system reliability and cost are developed, and corresponding expressions for reliability-optimal and cost-optimal designs are derived. Our illustrations demonstrate that, despite correlation, the approach identifies reliability- and cost-optimal designs.
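For reference, the sketch below reproduces the classical independent-components baseline that this paper relaxes: for a series arrangement in which each component can fail open or short, system reliability has an interior optimum in the number of components. The per-component probabilities are hypothetical.

```python
# Independent two-failure mode baseline: series system of n components, each
# failing open with probability q_o or short with probability q_s.
# System fails open if any component is open, and fails short only if all are
# short, so R(n) = (1 - q_o)**n - q_s**n, which peaks at an interior n.
q_o, q_s = 0.05, 0.10          # hypothetical per-component failure probabilities

def series_reliability(n):
    return (1.0 - q_o) ** n - q_s ** n

best_n = max(range(1, 11), key=series_reliability)
for n in range(1, 6):
    print(f"n = {n}: R = {series_reliability(n):.4f}")
print(f"reliability-optimal n (independent case) = {best_n}")
```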

