Volume 9: Transportation Systems; Safety Engineering, Risk Analysis and Reliability Methods; Applied Stochastic Optimization, Uncertainty and Probability
Published by ASMEDC
ISBN: 9780791854952
Total documents: 93
H-index: 4

Author(s): Juan C. Ramirez, Suzanne A. Smyth, Russell A. Ogle

A boiling liquid expanding vapor explosion (BLEVE) occurs when a pressure vessel containing a superheated liquid undergoes a catastrophic failure, resulting in the violent vaporization of the liquid. The exposure of a pressure vessel to a fire is a classic scenario that can result in a BLEVE. The thermomechanical exergy of a pressure vessel's contents provides, by definition, an upper bound on the work that can be performed by the system during the explosion. By fixing the values of ambient pressure and temperature (i.e., the dead state), exergy can be interpreted as another thermodynamic property. This rigorous and unambiguous definition makes it well suited to estimating the maximum energy of an explosion. The numerical value of exergy depends on the definition of the dead state. In this paper we examine the effect of different definitions of the dead state on the computed explosion energy. We consider two applications of this method: the contribution of the vapor headspace to the explosive energy as a function of the fractional liquid fill of the vessel, and the effect of the vessel burst pressure.
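For reference, a standard textbook form of the closed-system thermomechanical exergy, the quantity used here to bound the explosion work, relative to a dead state at temperature T_0 and pressure P_0, is sketched below; the paper's exact formulation may differ.

```latex
% Closed-system thermomechanical exergy relative to the dead state (T_0, P_0).
% Standard textbook form; shifting T_0 or P_0 shifts the value of X, which is
% the dead-state sensitivity the paper investigates.
X = (U - U_0) + P_0 \, (V - V_0) - T_0 \, (S - S_0)
```

Here U, V, and S are the internal energy, volume, and entropy of the vessel contents, and the subscript 0 denotes the same properties evaluated at the dead state.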


Author(s): John Martens, Mark Fecke, Justin Bishop, Russ Ogle

Functional testing is an essential part of developing trust in safety-critical control systems. A typical control-system life cycle begins with a functional specification, which defines the system functionality. An important step in the design-to-commissioning process is the on-line functional testing that typically precedes release for operation. Functional testing is usually the last step in verifying operation and validating the design of the control system against the functional description. It can often be the last chance to catch costly mistakes that would otherwise surface as a system performing in unexpected ways. Many aspects of functional testing need careful consideration, including identifying the hazards the system is to guard against, developing tests to validate the control system's response to those hazards, and performing the functional tests. This paper includes several case studies highlighting incidents where functional testing caught flaws in the control system that could have led to catastrophic failures. Additional case studies, in which functional tests were not completed and catastrophic failures did occur, are also discussed, and the lack of functional testing in those cases is examined. A simple methodology for selecting control loops that may benefit from functional testing is presented and useful guidance documents are identified.
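As a purely illustrative sketch of such a selection methodology (not the one presented in the paper), the snippet below ranks control loops by a simple severity-times-likelihood risk score and flags the highest-scoring loops for functional testing; the loop names, scales, and threshold are hypothetical placeholders.

```python
# Hypothetical risk-based screen for selecting control loops to functionally
# test; scores, names, and threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ControlLoop:
    name: str
    severity: int    # consequence of failure, 1 (minor) .. 5 (catastrophic)
    likelihood: int  # demand/failure likelihood, 1 (rare) .. 5 (frequent)

    @property
    def risk(self) -> int:
        return self.severity * self.likelihood

loops = [
    ControlLoop("high-pressure trip", severity=5, likelihood=3),
    ControlLoop("tank level alarm", severity=3, likelihood=2),
    ControlLoop("cooling-water flow interlock", severity=4, likelihood=4),
]

TEST_THRESHOLD = 8  # loops at or above this risk score get functional tests
for loop in sorted(loops, key=lambda l: l.risk, reverse=True):
    flag = "TEST" if loop.risk >= TEST_THRESHOLD else "review"
    print(f"{loop.name:35s} risk={loop.risk:2d} -> {flag}")
```

In practice the scores would come from a hazard analysis (e.g., a HAZOP or LOPA study) rather than hard-coded values.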


Author(s): K. Pugazhendhi, A. K. Dhingra

In recent years, quasi-Monte Carlo (QMC) techniques have been gaining popularity for reliability evaluation because of their increased accuracy over traditional Monte Carlo simulation. A QMC technique such as a low-discrepancy sequence (LDS) combined with importance sampling has previously been shown to be more accurate and robust for the evaluation of structural reliability. However, one of the challenges in using importance sampling techniques to evaluate structural reliability is identifying the optimum sampling density. In this article, a novel technique based on a combination of the cross-entropy method and low-discrepancy sampling is used for the evaluation of structural reliability. The proposed technique does not require a priori knowledge of the most probable point of failure (MPP), and succeeds in adaptively identifying the optimum sampling density for the structural reliability evaluation. Several benchmark examples verify that the proposed method is as accurate as the quasi-Monte Carlo technique using a low-discrepancy sequence, with the added advantage of accomplishing this without knowledge of the MPP.
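A minimal sketch of how the cross-entropy method and a low-discrepancy sequence can be combined for failure-probability estimation is given below. It uses a scrambled Sobol sequence (via scipy.stats.qmc) and a linear limit state with a known exact answer for checking; the staging, parameters, and limit state are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation) of adaptive importance
# sampling for structural reliability combining the cross-entropy (CE) method
# with a scrambled Sobol low-discrepancy sequence.
import numpy as np
from scipy.stats import norm, qmc

def g(x):
    # Linear limit state in standard normal space; failure when g(x) <= 0.
    # The exact failure probability is Phi(-beta) with beta = 3.
    beta = 3.0
    return beta - x.sum(axis=1) / np.sqrt(x.shape[1])

dim, m, rho = 2, 11, 0.1                 # 2**11 = 2048 points per CE stage
mu, sigma = np.zeros(dim), np.ones(dim)  # current Gaussian importance density

for stage in range(20):
    u = qmc.Sobol(d=dim, scramble=True, seed=stage).random_base2(m)
    x = mu + sigma * norm.ppf(u)         # quasi-random draws from N(mu, sigma^2)
    gx = g(x)
    gamma = max(np.quantile(gx, rho), 0.0)   # adaptive intermediate level
    elite = gx <= gamma
    # Likelihood ratio of the nominal N(0, I) to the importance density.
    w = np.exp(norm.logpdf(x).sum(axis=1) - norm.logpdf(x, mu, sigma).sum(axis=1))
    we = w[elite]
    mu = (we[:, None] * x[elite]).sum(axis=0) / we.sum()
    sigma = np.sqrt((we[:, None] * (x[elite] - mu) ** 2).sum(axis=0) / we.sum())
    if gamma == 0.0:                     # density has reached the failure set
        pf = np.mean(w * (gx <= 0))
        print(f"stage {stage}: P_f ~ {pf:.3e} (exact {norm.cdf(-3.0):.3e})")
        break
```

The Gaussian importance density is re-centered toward the failure region at each stage, so no MPP needs to be located beforehand, which is the property the abstract highlights.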


Author(s): Mark W. Arndt, Stephen M. Arndt, Donald Stevens

A study of numerous published rollover tests was conducted by reexamining the original works, analyzing their data, and compiling their results in one place. Instances were identified where the originally reported trip speeds were in error and required revision, because the analysis technique employed extrapolation rather than integration and lacked correction for the offset errors that develop when the Global Positioning System (GPS) antenna is placed away from the vehicle's center of gravity (CG). An analysis demonstrating the revised results was performed. In total, 81 dolly rollover crash tests, 24 naturally occurring rollover crash tests, and 102 reconstructed rollovers were identified. Of the 24 naturally occurring tests, 18 were steer-induced rollover tests. Distributions of the rollover drag factors are presented. The range of drag factors for all examined dolly rollovers was 0.38 g to 0.50 g with the upper and lower 15 percent statistically trimmed. The average drag factor for dolly rollovers was 0.44 g (standard deviation = 0.064) with a reported minimum of 0.31 g and a reported maximum of 0.61 g. After revision, the range of drag factors for the set of naturally occurring rollovers was 0.39 g to 0.50 g with the upper and lower 15 percent statistically trimmed. The average drag factor for naturally occurring rollovers was 0.44 g (standard deviation = 0.063) with a reported minimum of 0.33 g and a reported maximum of 0.57 g. These results provide a more probable range of the drag factor for use in accident reconstruction than the often-repeated assertion that rollover drag factors range between 0.4 g and 0.65 g.
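To make the role of the drag factor concrete, the sketch below shows the standard constant-deceleration relation used in reconstruction (trip speed from roll distance) together with a scalar simplification of the antenna-offset correction the study describes; the roll distance and numbers are hypothetical, and the true offset correction is a vector relation.

```python
# Back-of-the-envelope use of a rollover drag factor (illustrative; not the
# paper's analysis). Distances and offsets are hypothetical.
import math

G = 9.81  # m/s^2

def trip_speed(drag_factor: float, roll_distance_m: float) -> float:
    """Speed at trip assuming constant deceleration f*g over the roll distance."""
    return math.sqrt(2.0 * drag_factor * G * roll_distance_m)

def cg_speed(antenna_speed: float, roll_rate: float, offset_m: float) -> float:
    """Scalar simplification of removing the rotational component omega*r
    that an antenna mounted a distance r from the CG adds to GPS speed."""
    return antenna_speed - roll_rate * offset_m

d = 30.0  # m, hypothetical measured roll distance
for f in (0.38, 0.44, 0.50):  # trimmed range and mean reported in this study
    v = trip_speed(f, d)
    print(f"f = {f:.2f}: trip speed ~ {v:.1f} m/s ({v * 3.6:.0f} km/h)")
```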


Author(s): Kerry D. Parrott, Pat J. Mattes, Douglas R. Stahl

This paper proposes that the advanced Failure Modes and Effects Analysis (FMEA) techniques and methodology currently used by the automotive industry for product and process design can be reversed and used as an effective failure/root-cause analysis tool. The paper reviews FMEA methodologies, explains the newest advanced FMEA methodologies now used in the automotive industry, and then explains how this methodology can be effectively reversed and used as a failure analysis and fire cause determination tool, referred to as a "reverse FMEA" (rFMEA). The application of these techniques to vehicle fire cause determination is addressed. The methodology is particularly suited to situations where multiple potential fire causes are contained within an established area of origin. NFPA 921, Guide for Fire and Explosion Investigations [1], and NFPA 1033, Standard for Professional Qualifications for Fire Investigator [2], often referenced by the fire investigation community, prescribe a systematic approach utilizing the scientific method for fire origin and cause determinations. The rFMEA methodology is proposed as a fire investigation tool that assists in that process. The reverse FMEA methodology is then applied to a hypothetical, illustrative case study to demonstrate its application.
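As a loose illustration of the rFMEA idea (with hypothetical causes and scales, not drawn from the paper), the sketch below scores candidate fire causes within an established area of origin with an RPN-style product and ranks them, inverting the usual design-FMEA direction from postulated failure modes to an observed failure.

```python
# Hypothetical reverse-FMEA-style ranking of candidate fire causes within an
# area of origin. Causes, scales, and ratings are illustrative assumptions.
candidates = {
    # cause: (consistency with physical evidence 1-10,
    #         known failure history for this component 1-10,
    #         difficulty of ruling out 1-10)
    "battery cable chafe":   (8, 7, 9),
    "fuel line leak":        (5, 6, 4),
    "aftermarket accessory": (7, 4, 6),
}

def score(ratings: tuple[int, int, int]) -> int:
    consistency, history, rule_out = ratings
    return consistency * history * rule_out  # RPN-like product

ranked = sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True)
for cause, ratings in ranked:
    print(f"{cause:25s} score = {score(ratings):4d}")
```

The ranking only prioritizes hypotheses for further testing; under NFPA 921's scientific method, each candidate cause must still be independently tested against the evidence.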


Author(s): Stephen A. Batzer, John S. Morse

“But, who cares, it’s done, end of story, [we] will probably be fine and we’ll get a good cement job.” This oft-repeated email quotation passed from one BP engineer to another on April 16, 2010, just four days before the Macondo well blew out in the Gulf of Mexico. Although these two men survived, 11 others did not. The blowout also brought ecological poisoning and vast financial loss. The quote, part of a discussion about centralizers for the well (BP ended up with just six instead of the planned 16 or 21), seems to epitomize the attitude behind a series of decisions made about the well’s design. The product of those decisions was complete loss, and worse. Yet the parties did not seem to be aware of the importance of their individual decisions, or of their consequences, as they were making them. This disaster, like many others, seemed in retrospect to unfold in slow motion, and the players involved did not perceive the sheer cliff before them until they had transgressed its edge. This paper examines decision-making processes in the Deepwater Horizon blowout and in a series of other disasters, both high- and low-profile events. All of these preventable events stemmed from decision-making failures: disregarding existing information, failing to soberly extrapolate “what if?” when existing information contained uncertainty, failing to obtain vital missing information, failing to question decisions, particularly those made by people considered authoritative, and a cavalier attitude toward rules on the assumption that probably nothing will happen anyway. “Who cares?”


Author(s): Stephen A. Batzer, John S. Morse, Dong Y. Don Lee

The enduring issues regarding codes and standards for consumer products and corporate behavior are discussed in this paper. It has been frequently asserted that the adherence of a product to a recognized government or private standard ensures that the product has a minimum level of safety, and that said product is therefore presumably non-defective. The agencies that promulgate these codes and standards are ostensibly impartial and informed, and have the public’s best interests in mind. This conviction is undoubtedly true in some instances, but is also unquestionably false in others. The issues regarding codes and standards and their impact upon products and the trusting public include, but are not limited to, asymmetric information, cost concerns, ethics, foreseeable misuses, non-alignment of interests, and technological advancements after the standards were adopted. In short, adherence to the letter, rather than the spirit, of individual codes and standards is a manifestation of the principal-agent conflict, in which the agent, acting on behalf of the principal, has a different set of incentives than does the principal. This conflict and the underlying issues listed above are discussed. Case studies of numerous products with possible, known, and unforeseen adverse impacts upon public health and safety are used as illustrations of products that were within the letter of the code or standard, but manifestly defective.


Author(s): Vesna Jaksic, Vikram Pakrashi, Alan O’Connor

Damage detection and structural health monitoring (SHM) of bridges using bridge-vehicle interaction has attracted considerable interest in recent times. A significant body of work exists on bridge-vehicle interaction models and on damage models. Surface roughness on bridges is typically used to add detail to such models, and analyses exist relating surface roughness to the dynamic amplification of the bridge or vehicle response, or to ride quality. This paper presents the potential of using surface roughness for damage detection of bridge structures through bridge-vehicle interaction. The concept is introduced by considering a single-point observation of the interaction of an Euler-Bernoulli beam, containing a breathing crack, traversed by a point load. The breathing crack is treated as a nonlinear system with bilinear stiffness characteristics related to the opening and closing of the crack. A uniform degradation of the flexural rigidity of an Euler-Bernoulli beam traversed by a point load is also considered. The surface roughness of the beam, essentially a spatial realization of some spectral definition, is treated as broadband white noise in this paper. The mean-removed residuals of the beam response are analyzed to estimate the extent of damage. Both uniform-velocity and uniform-acceleration passages of the traversing load are investigated for appropriateness of use. The detection and calibration of damage are investigated through cumulant-based statistical parameters computed on stochastic, normalized responses of the damaged beam due to passages of the load. Possibilities of damage detection and calibration under benchmarked and non-benchmarked cases are discussed. Practicalities of implementing this concept are also considered.
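The sketch below illustrates the statistical core of this approach in the simplest terms: mean-removed, normalized residuals from repeated passages are summarized with higher-order statistics (skewness and excess kurtosis, directly related to the third and fourth cumulants). The synthetic "responses" are stand-ins for a bridge-vehicle interaction simulation, and the clipped-component surrogate for the bilinear crack is an assumption for illustration only.

```python
# Illustrative cumulant-based damage indicator on synthetic residuals; not the
# authors' simulation. A breathing crack's bilinear stiffness is mimicked by
# adding an asymmetric (clipped) component to otherwise Gaussian residuals.
import numpy as np
from scipy.stats import kurtosis, skew

rng = np.random.default_rng(0)
n_pass, n_samp = 50, 2048

# Healthy beam: residuals are roughly Gaussian broadband noise.
healthy = rng.normal(0.0, 1.0, (n_pass, n_samp))
# "Damaged" beam: asymmetric component mimicking opening/closing behavior.
damaged = healthy + 0.3 * np.clip(healthy, 0.0, None) ** 2

for label, resp in (("healthy", healthy), ("damaged", damaged)):
    resid = resp - resp.mean(axis=1, keepdims=True)  # mean-removed residuals
    resid /= resid.std(axis=1, keepdims=True)        # normalized per passage
    print(f"{label}: skewness = {skew(resid, axis=None):+.3f}, "
          f"excess kurtosis = {kurtosis(resid, axis=None):+.3f}")
```

Because a breathing crack responds differently while opening and closing, its signature tends to appear in the higher-order cumulants even when second-order statistics barely change, which is what makes cumulant-based parameters attractive here.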


Author(s): M. Mongiardini, J. D. Reid

Numerical simulations allow engineers in roadside safety to investigate the safety of retrofit designs while minimizing or, in some cases, avoiding the high costs of full-scale experimental tests. This paper describes a numerical investigation undertaken to assess the performance of a roadside safety barrier when relocated behind the break point of a 3H:1V slope on a Mechanically Stabilized Earth (MSE) system. Safely relocating the barrier into the slope would allow the installation width of the MSE system to be reduced by an equivalent amount, thus decreasing the overall construction costs. The dynamics of a pickup truck impacting the relocated barrier and the deformation of the system were simulated in detail using the explicit nonlinear dynamic finite element code LS-DYNA. The model was initially calibrated and subsequently validated against results from a previous full-scale crash test with the barrier placed at the slope break point. After a sensitivity analysis of the role of suspension failure and tire deflation in vehicle stability, the performance of the system relocated into the slope was assessed. Two configurations were considered, differing in the height of the rail with respect to the road surface and in the corresponding post embedment in the soil. Conclusions and recommendations were drawn from the results of the numerical analysis.


Author(s): X. Jin, P. Woytowitz, T. Tan

The reliability performance of semiconductor manufacturing equipment (SME) is very important to both equipment manufacturers and their customers. However, the response variables are random in nature and can change significantly due to many factors. In order to track equipment reliability performance with a stated confidence, this paper proposes an efficient methodology for calculating the number of samples needed to measure the reliability performance of SME tools. The paper presents a frequentist statistical methodology for calculating the number of sampled tools required to evaluate SME reliability field performance at given confidence levels and margins of error. An example case is investigated to demonstrate the method. We demonstrate that the multi-week accumulated average reliability metric computed over a fleet of tools does not equal the average of the individual tools' multi-week accumulated average metrics. We also show how the number of required sampled tools increases as reliability performance improves, and quantify the larger number of sampled tools required when a tighter margin of error or a higher confidence level is needed.
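A minimal sketch of the kind of sample-size calculation the abstract describes is given below, using the standard normal-theory formula n = (z·σ/E)²; the metric's standard deviation and the confidence/margin pairs are hypothetical, and the paper's exact derivation may differ.

```python
# Normal-theory sample-size sketch: how many tools must be sampled to estimate
# a mean reliability metric within a margin of error E at a given confidence
# level. Numbers are hypothetical; the paper's derivation may differ.
import math
from scipy.stats import norm

def required_tools(sigma: float, margin: float, confidence: float) -> int:
    """n = ceil((z * sigma / E)**2), with z the two-sided critical value."""
    z = norm.ppf(0.5 + confidence / 2.0)
    return math.ceil((z * sigma / margin) ** 2)

sigma = 12.0  # hypothetical std. dev. of the weekly reliability metric
for confidence, margin in [(0.90, 5.0), (0.95, 5.0), (0.95, 2.5)]:
    n = required_tools(sigma, margin, confidence)
    print(f"confidence {confidence:.0%}, margin ±{margin}: n = {n}")
```

Halving the margin of error quadruples the required sample, and raising the confidence level raises z, matching the qualitative behavior the abstract reports for tighter margins and higher confidence levels.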

