A Screening Methodology for the Identification of Critical Units in Major-Hazard Facilities Under Seismic Loading

2021 ◽  
Vol 7 ◽  
Author(s):  
Daniele Corritore ◽  
Fabrizio Paolacci ◽  
Stefano Caprinozzi

The complexity of the process industry and the consequences that Na-Tech events can produce in terms of damage to equipment, release of dangerous substances (flammable, toxic, or explosive), and environmental impact have prompted the scientific community to focus on the development of efficient methodologies for Quantitative Seismic Risk Analysis (QsRA) of process plants. Several analytical and numerical methods have been proposed and validated through representative case studies. Nevertheless, the complexity of the matter limits their applicability, especially when rapid identification is required of the critical components of a plant, i.e., those that may induce hazardous material release and thus severe consequences for the environment and the community. Accordingly, in this paper a screening methodology is proposed for rapid identification of the most critical components of a major-hazard plant under seismic loading. It is based on a closed-form assessment of the probability of damage for all components, derived using analytical representations of the seismic hazard curve and of the fragility functions of the equipment involved. For simplicity, fragility curves currently available in the literature, or derived using low-fidelity models, can be used, whereas the parameters of the seismic hazard curve are estimated from the regional seismicity. The representative damage states (DS) for each equipment typology are selected based on specific damage state/loss of containment (DS/LOC) matrices, which are used to identify the most probable LOC events. The risk is then assessed based on the potential consequences of a LOC event, using a classical consequence analysis of the type typically adopted in risk analysis of hazardous plants. For this purpose, specific probability classes are used.
Finally, by associating the Probability Class Index (PI) with the Consequence Index (CI), a Global Risk Index (GRI) is derived, which expresses the severity of the scenario. This allows a ranking of the most hazardous components of a process plant to be built using a proper risk matrix. The applicability of the method is shown through a representative case study.
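The closed-form damage-probability assessment this screening relies on can be illustrated with the classic Cornell-type result: for a power-law hazard curve and a lognormal fragility, the mean annual frequency of damage has a closed form that can be checked against direct numerical convolution. The sketch below uses illustrative parameter values (k0, k, median, dispersion), not values from the paper:

```python
import math

def hazard(s, k0=1e-4, k=2.5):
    """Mean annual frequency of exceeding intensity s: H(s) = k0 * s**(-k)."""
    return k0 * s ** (-k)

def fragility(s, eta=0.6, beta=0.4):
    """Lognormal fragility: P(reaching the damage state | IM = s)."""
    return 0.5 * (1.0 + math.erf(math.log(s / eta) / (beta * math.sqrt(2.0))))

def damage_rate_closed_form(k0=1e-4, k=2.5, eta=0.6, beta=0.4):
    """Cornell-type closed form: lambda_DS = H(eta) * exp(0.5 * k**2 * beta**2)."""
    return hazard(eta, k0, k) * math.exp(0.5 * (k * beta) ** 2)

def damage_rate_numerical(k0=1e-4, k=2.5, eta=0.6, beta=0.4, n=20000):
    """Direct convolution: lambda_DS = integral of P(DS | s) * |dH/ds| ds."""
    grid = [10.0 ** (-3.0 + 6.0 * i / n) for i in range(n + 1)]  # log-spaced IM grid
    total = 0.0
    for lo, hi in zip(grid[:-1], grid[1:]):
        mid = math.sqrt(lo * hi)  # log-midpoint of the slice
        total += fragility(mid, eta, beta) * (hazard(lo, k0, k) - hazard(hi, k0, k))
    return total
```

With these defaults the two estimates agree to well under one percent, which is what makes the closed form attractive for rapid screening of many components at once.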

Author(s):  
Fatemeh Jalayer ◽  
Hossein Ebrahimian ◽  
Andrea Miano

The Italian code requires spectrum compatibility with the mean spectrum for a suite of accelerograms selected for time-history analysis. Although these requirements define minimum acceptability criteria, code-based non-linear dynamic analysis is likely to be done with a limited number of records. Performance-based safety-checking provides a formal basis for addressing the record-to-record variability and the epistemic uncertainties due to the limited number of records and to the estimation of the seismic hazard curve. “Cloud Analysis” is a non-linear time-history analysis procedure that employs the structural response to unscaled ground-motion records and can be directly implemented in performance-based safety-checking. This paper interprets the code-based provisions from a performance-based viewpoint and applies further restrictions to spectrum-compatible record selection in order to implement Cloud Analysis. It is shown that, through multiplication by a closed-form coefficient, the code-based safety ratio can be transformed into a simplified performance-based safety ratio. It is also shown that, as a proof of concept, if the partial safety factors in the code are set to unity, this coefficient is on average slightly larger than unity. The paper provides the basis for propagating the epistemic uncertainties, due both to the limited sample size and to the seismic hazard curve, to the performance-based safety ratio in both a rigorous and a simplified manner. If epistemic uncertainties are considered, the average code-based safety checking can end up being unconservative with respect to performance-based procedures when the number of records is small. However, it is shown that performance-based safety checking is possible with no extra structural analyses.
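At its core, Cloud Analysis is an ordinary least-squares regression of log structural response on log intensity measure over unscaled records. A minimal sketch, with synthetic (IM, EDP) pairs standing in for actual analysis results:

```python
import math

def cloud_analysis(ims, edps):
    """Fit ln(EDP) = ln_a + b * ln(IM) to cloud (unscaled-record) responses.
    Returns (ln_a, b, beta), where beta is the residual standard deviation."""
    x = [math.log(im) for im in ims]
    y = [math.log(d) for d in edps]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    ln_a = my - b * mx
    resid = [yi - (ln_a + b * xi) for xi, yi in zip(x, y)]
    beta = math.sqrt(sum(r * r for r in resid) / (n - 2))  # unbiased-ish dispersion
    return ln_a, b, beta

# Hypothetical cloud data: IM levels and peak responses from unscaled records.
ims = [0.1, 0.2, 0.4, 0.8]
edps = [0.1 * 1.2, 0.2 / 1.2, 0.4 * 1.2, 0.8 / 1.2]
ln_a, b, beta = cloud_analysis(ims, edps)
```

The fitted median demand model and dispersion are exactly the quantities the paper's closed-form coefficient is built from; the record-to-record variability shows up as `beta`.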


2011 ◽  
Vol 38 (3) ◽  
pp. 293-304 ◽  
Author(s):  
Elena Nuta ◽  
Constantin Christopoulos ◽  
Jeffrey A. Packer

The seismic response of tubular steel wind turbine towers is of significant concern as they are increasingly being installed in seismic areas and design codes do not clearly address this aspect of design. The seismic hazard is hence assessed for the Canadian seismic environment using implicit finite element analysis and incremental dynamic analysis of a 1.65 MW wind turbine tower. Its behaviour under seismic excitation is evaluated, damage states are defined, and a framework is developed for determining the probability of damage of the tower at varying seismic hazard levels. Results of the implementation of this framework in two Canadian locations are presented herein, where the risk was found to be low for the seismic hazard level prescribed for buildings. However, the design of wind turbine towers is subject to change, and the design spectrum is highly uncertain. Thus, a methodology is outlined to thoroughly investigate the probability of reaching predetermined damage states under any seismic loading conditions for future considerations.
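The probability-of-damage framework described here can be illustrated with the standard lognormal fragility fit applied to incremental dynamic analysis output: collect the IM level at which each record first drives the tower to a given damage state, fit a lognormal to those levels, and evaluate the fitted curve at any hazard level of interest. A minimal sketch with made-up IM values:

```python
import math

def fit_fragility(im_at_ds):
    """Fit a lognormal fragility to the IM levels at which each IDA record
    first reaches the damage state (moments of ln IM)."""
    logs = [math.log(v) for v in im_at_ds]
    n = len(logs)
    mu = sum(logs) / n
    beta = math.sqrt(sum((l - mu) ** 2 for l in logs) / (n - 1))
    return math.exp(mu), beta  # (median IM, dispersion)

def p_damage(im, median, beta):
    """P(reaching the damage state | IM = im) for the fitted lognormal."""
    z = math.log(im / median) / (beta * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))
```

Evaluating `p_damage` at the IM corresponding to a code-prescribed hazard level (e.g. the building-design spectrum ordinate at a site) gives the damage probability the framework reports for that location.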


2020 ◽  
Author(s):  
Giorgio Andrea Alleanza ◽  
Filomena de Silva ◽  
Anna d'Onofrio ◽  
Francesco Gargiulo ◽  
Francesco Silvestri

Semi-empirical procedures for evaluating liquefaction potential (e.g. Seed & Idriss, 1971) require the estimation of the cyclic resistance ratio (CRR) and the cyclic stress ratio (CSR). The former can be obtained using empirical relationships based on in situ tests (e.g. CPT, SPT); the latter can be expressed as a function of the maximum horizontal acceleration at the ground surface (a_max), the total and effective vertical stresses at the depth of interest (σ_v0, σ′_v0), and a magnitude-dependent stress reduction coefficient (r_d) that accounts for the deformability of the soil column (Idriss & Boulanger, 2004). All these methods were developed with reference to a moment magnitude (M_w) of 7.5 and therefore require a magnitude scaling factor (MSF) to make them suitable for different magnitude values. Usually, MSF and r_d are computed with reference to the mean or modal value of M_w taken from a disaggregation analysis, while a_max is obtained from a seismic hazard curve that includes the contributions of various combinations of magnitudes and distances (Kramer & Mayfield, 2005). Thus, there may be an inconsistency between the magnitude values used to evaluate MSF and r_d and those underlying a_max. To overcome this problem, Idriss (1985) suggests introducing the MSF directly into the probabilistic hazard analysis of the seismic acceleration. In this contribution, an alternative method is proposed that modifies the acceleration seismic hazard curve conventionally adopted by codes of practice on the basis of the disaggregation analysis, so that (i) the contributions of the different magnitudes and the associated MSF and r_d values are accounted for, and (ii) the computational effort is reduced, since a CSR hazard curve is obtained directly.
This alternative method is used to carry out a simplified liquefaction assessment of a sand deposit located in the municipality of Casamicciola Terme (Naples, Italy), where the results of SPT tests are available from recent seismic microzonation studies. The CSR computed using the proposed procedure is lower than that obtained with the classical method suggested by Idriss & Boulanger (2004). This can be explained by the fact that the proposed method takes into account all the magnitudes that contribute to the seismic hazard, instead of only the mean or modal value from the disaggregation analysis. Such an accurate prediction of the seismic demand may provide a basis for more reliable seismic microzonation maps for liquefaction and for a less conservative design of liquefaction risk mitigation measures.

References

Idriss, I.M. (1985). Evaluation of seismic risk in engineering practice. Proc. 11th Int. Conf. on Soil Mech. and Found. Engrg, 1, 255-320.
Idriss, I.M., Boulanger, R.W. (2004). Semi-empirical procedures for evaluating liquefaction potential during earthquakes. Proceedings of the 11th ICSDEE & 3rd ICEGE (Doolin et al., Eds.), Berkeley, CA, USA, 1, 32-56.
Kramer, S.L., Mayfield, R.T. (2005). Performance-based liquefaction hazard evaluation. Proceedings of the Geo-Frontiers Congress, January 24-26, Austin, Texas, USA.
Seed, H.B., Idriss, I.M. (1971). Simplified procedure for evaluating soil liquefaction potential. J. Soil Mech. Found. Div., 97, 1249-1273.
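The simplified CSR evaluation referred to above can be sketched as follows, using the Seed & Idriss (1971) form of CSR together with the Idriss (1999) r_d expression and the Idriss & Boulanger (2004) MSF; the numbers passed in any call are of course site-specific and purely illustrative here:

```python
import math

G = 9.81  # m/s^2

def r_d(z, m):
    """Stress reduction coefficient r_d(z, M) per Idriss (1999), z in metres."""
    alpha = -1.012 - 1.126 * math.sin(z / 11.73 + 5.133)
    beta = 0.106 + 0.118 * math.sin(z / 11.28 + 5.142)
    return math.exp(alpha + beta * m)

def msf(m):
    """Magnitude scaling factor per Idriss & Boulanger (2004), capped at 1.8."""
    return min(1.8, 6.9 * math.exp(-m / 4.0) - 0.058)

def csr_m75(a_max, sigma_v, sigma_v_eff, z, m):
    """Cyclic stress ratio adjusted to M = 7.5:
    CSR = 0.65 * (a_max / g) * (sigma_v / sigma_v') * r_d / MSF.
    a_max in m/s^2, stresses in consistent units, z in metres."""
    return 0.65 * (a_max / G) * (sigma_v / sigma_v_eff) * r_d(z, m) / msf(m)
```

The magnitude inconsistency the paper addresses is visible in the signature: `m` enters both `r_d` and `msf`, while `a_max` comes from a hazard curve that already mixes all magnitudes.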


2010 ◽  
Vol 163-167 ◽  
pp. 3443-3447
Author(s):  
Yu Hong Ma ◽  
Gui Feng Zhao ◽  
Jie Cui ◽  
Ping Tan

At present, the seismic strengthening design reference period for existing buildings in China is usually taken as 50 years, which is sometimes uneconomic and unreasonable. In this paper, a principle for determining the seismic strengthening design reference period of existing buildings of different importance is presented, and corresponding seismic strengthening design levels are put forward. After the shape factor of the intensity probability distribution function is used to represent the seismic hazard characteristics of different areas, seismic hazard curve formulas for the design acceleration A_max and the earthquake influence coefficient α_max are deduced from the seismic hazard curve of intensity. The seismic strengthening design ground-motion parameters for existing buildings of different importance are then studied in detail by means of the hazard curve formula of the seismic ground-motion parameter, based on seismic hazard characteristic zones. Finally, the method and its calculation steps are illustrated by a worked example. The results show that using the same design parameters for existing buildings with different design reference periods is unreasonable across different seismic hazard characteristic zones, and that the proposed method is more scientific than the code method.


2012 ◽  
Vol 22 (2) ◽  
pp. 95-103
Author(s):  
Ante Bukša ◽  
Ivica Šegulja ◽  
Vinko Tomas

By adjusting the maintenance approach for the significant components of a ship's engines and equipment, using operational data from the ship machinery's daily reports, higher operability and navigation safety can be achieved. The proposed maintenance adjustment model consists of an operational data analysis and a risk analysis. The risk analysis comprises the definition of an upper and a lower risk criterion, as well as the definition of a risk index. If the risk index exceeds the lower risk criterion, the component is significant; otherwise it is not significant and its risk index is acceptable. For each significant component with a risk index found to be “unacceptable” or “undesirable”, an efficient maintenance policy needs to be adopted. The assessment of the proposed model is based on original operating data of a power engine over a 13-year period. The results of the engine failure examinations reveal that the exhaust valve is the most vulnerable component, with the highest rate of failure. For this reason, the proposed model of adjusting the maintenance approach has been tested on the exhaust valve sample. It is suggested that efforts to achieve higher ship operability and navigation safety should go in the direction of periodic adjustments of the maintenance approach, i.e. choosing an efficient maintenance policy by reducing the risk indices of the significant engine components. KEY WORDS: maintenance adjustment approach, risk analysis, risk index, lower risk criterion, upper risk criterion, significant components, ship navigation
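The significance test described above, comparing a component's risk index against the lower and upper risk criteria, can be sketched as follows (the class labels are illustrative; the paper's own terminology may differ):

```python
def classify(risk_index, lower, upper):
    """Classify a component by its risk index against the two criteria:
    below the lower criterion  -> not significant, risk acceptable;
    between the two criteria   -> significant, risk undesirable;
    above the upper criterion  -> significant, risk unacceptable."""
    if risk_index < lower:
        return "not significant (acceptable risk)"
    if risk_index <= upper:
        return "significant (undesirable risk)"
    return "significant (unacceptable risk)"
```

Components falling in the last two classes are the ones for which the model prescribes adopting a more efficient maintenance policy, as was done for the exhaust valve.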


Author(s):  
Fabrizio Paolacci ◽  
Daniele Corritore ◽  
Antonio C. Caputo ◽  
Oreste S. Bursi ◽  
Bledar Kalemi

The damage states of a storage tank subjected to seismic loading can induce loss of containment (LOC), with possible consequences (fire, explosion, etc.) for both the surrounding units and people. This aspect is particularly crucial for the Quantitative Risk Analysis (QRA) of industrial plants subjected to earthquakes. Classical QRA methodologies are based on standard LOC conditions whose frequency of occurrence is mainly related to technological accidents rather than natural events, and they are thus inapplicable here. The necessity of establishing new procedures for evaluating the frequency of occurrence of LOC events in storage tanks subjected to an earthquake is therefore evident. Consequently, in this work a simple procedure founded on a probabilistic linear-regression-based model is proposed, which uses the simplified numerical models typically adopted for the seismic response of above-ground storage tanks. Based on a set of predetermined LOC events (e.g. damage in the pipes, damage in the nozzles, etc.), whose probabilistic relationship with the local response (stress level, etc.) derives from experimental tests, the probabilistic relationship of selected response parameters with the seismic intensity measure (IM) is established. As a result, for each LOC event, the cloud analysis method is used to derive the related fragility curve.
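The final step, turning a regression-based demand model and a test-based capacity into a LOC fragility curve, is a standard lognormal demand-capacity comparison. A sketch under assumed lognormal demand (from a cloud regression ln D = ln_a + b·ln IM with dispersion beta_d) and lognormal capacity (median and dispersion from component tests); all parameter names are illustrative:

```python
import math

def loc_fragility(im, ln_a, b, beta_d, cap_median, beta_c):
    """P(LOC | IM = im): probability that lognormal demand (median from the
    cloud regression, dispersion beta_d) exceeds lognormal capacity
    (median cap_median, dispersion beta_c). Dispersions combine in quadrature."""
    ln_d = ln_a + b * math.log(im)               # median demand in log space
    beta = math.sqrt(beta_d ** 2 + beta_c ** 2)  # total dispersion
    z = (ln_d - math.log(cap_median)) / (beta * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))
```

Evaluating this over a range of IM values gives the fragility curve for one LOC event; repeating it per event (pipes, nozzles, shell) yields the set of curves the QRA then combines with the hazard.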


Geosciences ◽  
2018 ◽  
Vol 8 (8) ◽  
pp. 285 ◽  
Author(s):  
Claudia Aristizábal ◽  
Pierre-Yves Bard ◽  
Céline Beauval ◽  
Juan Gómez

The integration of site effects into Probabilistic Seismic Hazard Assessment (PSHA) is still an open issue within the seismic hazard community. Several approaches have been proposed, varying from deterministic to fully probabilistic, through hybrid (probabilistic-deterministic) approaches. The present study compares the hazard curves obtained for a thick, soft, non-linear site with two different fully probabilistic, site-specific seismic hazard methods: (1) the analytical approximation of the full convolution method (AM) proposed by Bazzurro and Cornell (2004a,b), and (2) what we call the Full Probabilistic Stochastic Method (SM). The AM computes the site-specific hazard curve on soil, HC(Sas(f)), by convolving, for each oscillator frequency, the bedrock hazard curve, HC(Sar(f)), with a simplified representation of the probability distribution of the amplification function, AF(f), at the considered site. The SM hazard curve is built from stochastic time histories on soil or rock corresponding to a representative, sufficiently long synthetic catalog of seismic events. This comparison is performed for the example case of the Euroseistest site near Thessaloniki (Greece). For this purpose, we generate a long synthetic earthquake catalog, calculate synthetic time histories on rock with the stochastic point-source approach, and then scale them using an ad hoc frequency-dependent correction factor to fit the specific rock target hazard. We then propagate the rock stochastic time histories from depth to the surface using two different one-dimensional (1D) numerical site response analyses, employing both an equivalent-linear (EL) and a non-linear (NL) code to account for code-to-code variability. Lastly, we compute the probability distribution of the non-linear site amplification function, AF(f), for both site response analyses, and derive the site-specific hazard curve with both the AM and SM methods, to account for method-to-method variability.
The code-to-code variability (EL and NL) is found to be significant, providing a much larger contribution to the uncertainty in hazard estimates than the method-to-method variability: AM and SM results are found to be comparable whenever they are simultaneously applicable. However, the AM method is also shown to exhibit severe limitations in the case of strong non-linearity, leading to ground-motion “saturation”, so that the SM method is finally to be preferred despite its much higher computational price. Finally, we encourage the use of ground-motion simulations to integrate site effects into PSHA, since models with different levels of complexity can be included (e.g., point source, extended source, 1D, two-dimensional (2D), and three-dimensional (3D) site response analysis, kappa effect, hard rock …), and the corresponding variability of the site response can be quantified.
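The AM convolution step can be sketched numerically: the soil hazard at level z accumulates, over slices of the rock hazard curve, the probability that the amplification function exceeds z divided by the rock motion. A minimal version with a lognormal AF and a power-law rock hazard (all values illustrative), which can be checked against the Bazzurro-Cornell-type closed form for that special case, lambda_soil(z) = lambda_rock(z / AF_median) * exp(k^2 * beta^2 / 2):

```python
import math

def p_af_exceeds(ratio, af_median, af_beta):
    """P(AF > ratio) for a lognormal amplification function."""
    if ratio <= 0:
        return 1.0
    z = math.log(ratio / af_median) / (af_beta * math.sqrt(2.0))
    return 0.5 * (1.0 - math.erf(z))

def soil_hazard(z, rock_ims, rock_haz, af_median, af_beta):
    """Convolution of the rock hazard curve with the AF distribution:
    lambda_soil(z) = sum over rock IM slices of P(AF > z / s_r) * |d lambda_rock|.
    rock_ims must be increasing, rock_haz the matching exceedance frequencies."""
    total = 0.0
    for i in range(len(rock_ims) - 1):
        s_mid = math.sqrt(rock_ims[i] * rock_ims[i + 1])  # log-midpoint
        d_lam = rock_haz[i] - rock_haz[i + 1]             # hazard over the slice
        total += p_af_exceeds(z / s_mid, af_median, af_beta) * d_lam
    return total
```

In the paper's setting the AF distribution is not a fixed lognormal but is estimated frequency by frequency from the EL and NL site-response runs, and its dependence on the rock motion level is what the full AM formulation captures and strong non-linearity eventually breaks.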


2020 ◽  
Vol 6 (41) ◽  
pp. eabc6572
Author(s):  
Owen B. Becette ◽  
Guanghui Zong ◽  
Bin Chen ◽  
Kehinde M. Taiwo ◽  
David A. Case ◽  
...  

RNAs form critical components of biological processes implicated in human diseases, making them attractive for small-molecule therapeutics. Expanding the sites accessible to nuclear magnetic resonance (NMR) spectroscopy will provide atomic-level insights into RNA interactions. Here, we present an efficient strategy to introduce 19F-13C spin pairs into RNA by using a 5-fluorouridine-5′-triphosphate and T7 RNA polymerase–based in vitro transcription. Incorporating the 19F-13C label in two model RNAs produces linewidths that are twice as sharp as the commonly used 1H-13C spin pair. Furthermore, the high sensitivity of the 19F nucleus allows for clear delineation of helical and nonhelical regions as well as GU wobble and Watson-Crick base pairs. Last, the 19F-13C label enables rapid identification of a small-molecule binding pocket within human hepatitis B virus encapsidation signal epsilon (hHBV ε) RNA. We anticipate that the methods described herein will expand the size limitations of RNA NMR and aid with RNA-drug discovery efforts.

