The dynamic response of coseismic liquefaction-induced ruptures associated with the 2019 Mw 5.8 Mirpur, Pakistan, earthquake using HVSR measurements

2021, Vol 40 (8), pp. 590-600
Author(s): Muhammad Younis Khan, Syed Ali Turab, Liaqat Ali, Muhammad Tahir Shah, S. M. Talha Qadri, ...

The Mirpur area of Pakistan was severely damaged by extensive coseismic liquefaction following the Mw 5.8 earthquake of 24 September 2019. Villages within 6 km of the epicenter were adversely affected by extensive liquefaction-induced surface and shallow subsurface deformation. The earthquake affected all types of buildings and key infrastructure (e.g., the Upper Jhelum Canal and the main Jhelum–Jatlan road). Field observations and associated effects are presented, including horizontal-to-vertical spectral ratio (HVSR) data sets acquired at three sites to evaluate the site response characteristics of the liquefaction-affected soil profiles. The results show that rupture events strongly influenced the spectral features (amplitude and frequency) and the site-specific 1D shear-wave velocity profiles at sites S1 and S2. The dynamic behavior of HVSRs across ruptures at sites S1 and S2 corresponds to varying levels of seismic amplification, demonstrating an effect of seismically induced liquefaction ruptures on site response that has not been reported previously in the literature. The consistent HVSR pattern of well-established high-frequency peaks at site S3, adjacent to partially damaged to completely collapsed buildings of different types, further indicates susceptibility to liquefaction hazard. These results agree with the surface liquefaction signatures observed in the field, with inverted electrical resistivity tomography models revealing liquefied sand plugs, clay lenses, and associated fractures, and with soil radon concentrations that increase as the distance to the ruptures decreases. Additionally, the successful application of HVSR as a cost-effective and rapid tool attests to the potential of the proposed approach to furnish complementary information for better assessment of liquefaction hazards in the developing world, where financial constraints are a major issue. This can support seismic hazard analysis and mitigation in the Mirpur area and may have applications in other seismically active regions of the world.
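For readers unfamiliar with the measurement, the sketch below shows how a single-station HVSR curve of the kind acquired at sites S1–S3 is typically computed. It is a minimal illustration using Welch spectral estimates and a geometric-mean horizontal combination; it is not the authors' processing chain, and the window length is an assumption.

```python
# Minimal HVSR sketch (assumed workflow; the study does not publish code).
# Computes the horizontal-to-vertical spectral ratio of a three-component
# ambient-noise recording from Welch power-spectral-density estimates.
import numpy as np
from scipy.signal import welch

def hvsr(north, east, vertical, fs, nperseg=4096):
    """Return frequencies and the H/V ratio for one three-component record."""
    f, pnn = welch(north, fs=fs, nperseg=nperseg)
    _, pee = welch(east, fs=fs, nperseg=nperseg)
    _, pvv = welch(vertical, fs=fs, nperseg=nperseg)
    # Combine the two horizontal amplitude spectra with a geometric mean
    # (one common convention among several in the HVSR literature).
    h = np.sqrt(np.sqrt(pnn * pee))   # (Pnn * Pee)^(1/4): horizontal amplitude
    v = np.sqrt(pvv)                  # vertical amplitude spectrum
    return f, h / v

# Usage: the frequency of the dominant peak is the fundamental site frequency.
# f, ratio = hvsr(n_trace, e_trace, z_trace, fs=100.0)
# f0 = f[np.argmax(ratio)]
```

In practice the record is split into many windows and the H/V curves are averaged, but the peak-picking logic is the same as in this single-estimate sketch.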

2020, Vol 222 (3), pp. 2053-2067
Author(s): Giovanni Lanzano, Chiara Felicetta, Francesca Pacor, Daniele Spallarossa, Paola Traversa

SUMMARY To evaluate the site response using both empirical approaches (e.g. standard spectral ratios, ground-motion models (GMMs), generalized inversion techniques, etc.) and numerical 1-D/2-D analyses, a definition of the reference motion is needed, that is, the ground motion recorded at stations unaffected by topographic, stratigraphic, or basin site effects. The main objective of this work is to define a robust strategy to identify the seismic stations that can be considered reference rock sites, using six proxies for the site response: three proxies are related to the analysis of geophysical and seismological data (the repeatable site term from the residual analysis, the resonance frequencies from horizontal-to-vertical spectral ratios on noise or earthquake signals, and the average shear-wave velocity in the first 30 m); the remaining ones concern geomorphological and installation features (outcropping rocks or stiff soils, flat topography, and absence of interaction with structures). We introduce a weighting scheme to take into account the availability and quality of the site information, as well as the fulfillment of the criterion associated with each proxy. We also introduce a hierarchical index, to take into account the relevance of the proposed proxies in the description of the site effects, and an acceptance threshold for reference rock site identification. The procedure is applied to a very large data set of accelerometric and velocimetric waveforms recorded in Central Italy in the period 2008–2018, comprising more than 30 000 waveforms from 450 earthquakes in the magnitude range 3.2–6.5, recorded by more than 450 stations. A total of 36 out of 133 candidate stations are identified as reference sites: the majority of them are installed on rock with flat topography, but this condition alone is not sufficient to guarantee the absence of amplification, especially at high frequencies. Seismological analyses are necessary to exclude stations affected by resonances. We test the impact of using these sites by calibrating a GMM. The results show that for reference rock sites the median predictions are reduced by up to about 45 per cent at short periods in comparison to generic rock motions.
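As a schematic illustration of how such a weighted, hierarchical multi-proxy selection might be coded, the sketch below scores candidate stations and applies an acceptance threshold. The proxy weights, quality factors, and threshold value are placeholders for illustration, not the values calibrated in this study.

```python
# Illustrative multi-proxy scoring for reference rock site selection.
# Weights (the "hierarchical index"), quality terms, and the acceptance
# threshold are assumed values, not those of the paper.
PROXIES = {                      # higher weight = more relevant proxy
    "site_term_near_zero": 3,    # repeatable site term from residual analysis
    "hv_no_resonance": 3,        # no clear HVSR resonance peak
    "vs30_rock": 2,              # Vs30 above a rock/stiff-soil limit
    "outcrop_rock_or_stiff": 1,  # geological and installation criteria
    "flat_topography": 1,
    "no_structure_interaction": 1,
}

def reference_site_score(checks, quality):
    """checks: proxy -> criterion fulfilled (bool);
    quality: proxy -> data availability/quality weight in [0, 1]."""
    num = sum(PROXIES[p] * quality.get(p, 0.0) * checks.get(p, False)
              for p in PROXIES)
    den = sum(PROXIES[p] * quality.get(p, 0.0) for p in PROXIES)
    return num / den if den else 0.0

# A station is accepted as a reference rock site when its score exceeds
# an acceptance threshold, e.g. reference_site_score(...) >= 0.75.
```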


2020, pp. 875529302097097
Author(s): Adrian Rodriguez-Marek, Julian J Bommer, Robert R Youngs, Maria J Crespo, Peter J Stafford, ...

The incorporation of local amplification factors (AFs) determined through site response analyses has become standard practice in site-specific probabilistic seismic hazard analysis (PSHA). Another indispensable feature of the current state of practice in site-specific PSHA is the identification and quantification of all epistemic uncertainties that influence the final hazard estimates. Consequently, logic trees are constructed not only for seismic source characteristics and ground-motion models (GMMs) but also for the site AFs, the latter generally characterized by branches for alternative shear-wave velocity (VS) profiles. However, in the same way that branch weights on alternative GMMs can give rise to unintentionally narrow distributions of predicted ground-motion amplitudes, the distribution of AFs obtained from a small number of weighted VS profiles will often be quite narrow at some oscillator frequencies. We propose an alternative approach to capturing epistemic uncertainty in site response in order to avoid such unintentionally constricted distributions of AFs using more complete logic trees for site response analyses. Nodes are included for all the factors that influence the calculated AFs, which may include shallow VS profiles, deeper VS profiles, depth of impedance contrasts, low-strain soil damping, and choice of modulus reduction and damping curves. Site response analyses are then executed for all branch combinations to generate a large number of frequency-dependent AFs. Finally, these are re-sampled as a discrete distribution with enough branches to capture the underlying distribution of AFs. While this approach improves the representation of epistemic uncertainty in the dynamic site response characteristics, modeling uncertainty in the AFs is not automatically captured in this way, for which reason it is also proposed that a minimum level of epistemic uncertainty should be imposed on the final distribution.
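A minimal sketch of this workflow is given below: every combination of logic-tree branches is propagated through a site-response run, and the resulting frequency-dependent AFs are re-sampled into a small discrete distribution. The node names, the dummy site-response function, and the percentile choice are illustrative assumptions, not the paper's implementation.

```python
# Schematic logic-tree enumeration for site amplification factors (AFs).
# All node names and values are invented; run_site_response is a stand-in
# for a real 1D ground-response engine.
import itertools
import numpy as np

logic_tree = {
    "shallow_vs":  ["profileA", "profileB", "profileC"],
    "deep_vs":     ["deep1", "deep2"],
    "impedance_z": ["shallow_contrast", "deep_contrast"],
    "damping":     ["low", "high"],
    "mrd_curves":  ["curve_set1", "curve_set2"],
}

def run_site_response(combo):
    """Placeholder returning AF(f) on a fixed 20-point frequency grid;
    substitute an actual site-response analysis here."""
    rng = np.random.default_rng(abs(hash(combo)) % 2**32)
    return 1.0 + 0.5 * rng.random(20)   # dummy amplification factors

# One AF curve per branch combination (3*2*2*2*2 = 48 runs here).
combos = list(itertools.product(*logic_tree.values()))
afs = np.array([run_site_response(c) for c in combos])

# Re-sample the empirical AF distribution at each frequency into a small
# set of discrete branches (here 5 percentiles, an assumed choice).
branch_afs = np.percentile(afs, [5, 25, 50, 75, 95], axis=0)
```

The re-sampled `branch_afs` rows would then carry weights in the hazard integration, with a floor on their spread to enforce the proposed minimum epistemic uncertainty.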


2016, Vol 48 (2), pp. 130-142
Author(s): Titi Anggono, Syuhada Syuhada, Nugroho Dwi Hananto, Lina Handayani, ...


2020
Author(s): Mohamad Mahdi Hallal, Brady R. Cox

Many recent studies have shown that we are generally unable to accurately replicate recorded ground motions at most borehole array sites using available subsurface geotechnical information and one-dimensional (1D) ground response analyses (GRAs). When 1D GRAs fail to accurately predict recorded site response, the site is often considered too complex to be effectively modeled as 1D. While 3D numerical GRAs are possible and believed to be more accurate, there is rarely a 3D subsurface model available for these analyses. The lack of affordable and reliable site characterization methods to quantify spatial variability in subsurface conditions, particularly regarding the shear wave velocity (Vs) measurements needed for GRAs, has pushed researchers to adopt stochastic approaches, such as Vs randomization and spatially correlated random fields. However, these stochastically generated models require generic, assumed input parameters, introducing significant uncertainties into the site response predictions. This paper describes a new geostatistical approach that can be used for building pseudo-3D Vs models as a means to rationally account for spatial variability in GRAs, increase model accuracy, and reduce uncertainty. Importantly, it requires only a single measured Vs profile and a number of simple, cost-effective horizontal-to-vertical spectral ratio (H/V) noise measurements. Using Gaussian geostatistical regression, irregularly sampled estimates of fundamental site frequency from H/V measurements (f0,H/V) are used to generate a uniform grid of f0,H/V across the site, with accompanying Vs profiles that have been scaled to match each f0,H/V value, thereby producing a pseudo-3D Vs model. This approach is demonstrated at the Treasure Island and Delaney Park Downhole Array sites (TIDA and DPDA, respectively). While the pseudo-3D Vs models can be used to incorporate spatial variability into 1D, 2D, or 3D GRAs, their implementation in 1D GRAs at TIDA and DPDA is discussed in a companion paper.
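The sketch below illustrates one way the geostatistical step could look in code, assuming Gaussian process regression (via scikit-learn) for interpolating f0,H/V and a simple quarter-wavelength-style scaling of the single measured Vs profile. The kernel, grid, and all numerical values are invented for illustration and are not the paper's settings.

```python
# Hedged sketch of a pseudo-3D Vs model: GP-regress sparse f0 measurements
# onto a uniform grid, then scale one measured Vs profile to match each node.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Irregularly located H/V measurements: (x, y) in metres, f0 in Hz (invented).
xy = np.array([[0, 0], [40, 10], [15, 55], [70, 60], [90, 20]], float)
f0 = np.array([1.10, 1.25, 0.95, 1.40, 1.30])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=30.0)
                              + WhiteKernel(noise_level=1e-3),
                              normalize_y=True)
gp.fit(xy, f0)

# Uniform grid of f0 estimates across the site (GP also returns uncertainty).
gx, gy = np.meshgrid(np.linspace(0, 100, 21), np.linspace(0, 100, 21))
grid = np.column_stack([gx.ravel(), gy.ravel()])
f0_grid, f0_std = gp.predict(grid, return_std=True)

# Scale the single measured Vs profile so each grid node matches its f0.
# For a 1D layer over bedrock f0 ~ Vs / 4H, so multiplying Vs by
# f0_target / f0_measured shifts the profile's resonance accordingly.
vs_measured = np.array([180.0, 240.0, 420.0])   # layer velocities, m/s
f0_measured = 1.15                              # f0 at the measured profile
vs_grid = vs_measured[None, :] * (f0_grid / f0_measured)[:, None]
```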


2012, Vol 82 (3), pp. 216-222
Author(s): Venkatesh Iyengar, Ibrahim Elmadfa

The food safety security (FSS) concept is perceived as an early warning system for minimizing food safety (FS) breaches, and it functions in conjunction with existing FS measures. Essentially, the function of FS and FSS measures can be visualized in two parts: (i) the FS preventive measures as actions taken at the stem level, and (ii) the FSS interventions as actions taken at the root level, to enhance the impact of the implemented safety steps. In practice, along with FS, FSS also draws its support from (i) legislative directives and regulatory measures for enforcing verifiable, timely, and effective compliance; (ii) measurement systems in place for sustained quality assurance; and (iii) shared responsibility to ensure cohesion among all the stakeholders, namely policy makers, regulators, food producers, processors and distributors, and consumers. However, the functional framework of FSS differs from that of FS by way of: (i) retooling the vulnerable segments of the preventive features of existing FS measures; (ii) fine-tuning response systems to efficiently preempt FS breaches; (iii) building a long-term nutrient and toxicant surveillance network based on validated measurement systems functioning in real time; (iv) focusing on crisp, clear, and correct communication that resonates among all the stakeholders; and (v) developing interdisciplinary human resources to meet ever-increasing FS challenges. Important determinants of FSS include: (i) strengthening international dialogue for refining regulatory reforms and addressing emerging risks; (ii) developing innovative and strategic action points for intervention [in addition to Hazard Analysis and Critical Control Points (HACCP) procedures]; and (iii) introducing additional science-based tools such as metrology-based measurement systems.


TAPPI Journal, 2018, Vol 17 (09), pp. 519-532
Author(s): Mark Crisp, Richard Riehle

Polyaminopolyamide-epichlorohydrin (PAE) resins are the predominant commercial products used to manufacture wet-strengthened paper products for grades requiring wet-strength permanence. Since their development in the late 1950s, the first generation (G1) resins have proven to be one of the most cost-effective technologies available to provide wet strength to paper. Throughout the past three decades, regulatory directives and sustainability initiatives from various organizations have driven the development of cleaner and safer PAE resins and paper products. Early efforts in this area focused on improving worker safety and reducing the impact of PAE resins on the environment. These efforts led to the development of resins containing significantly reduced levels of 1,3-dichloro-2-propanol (1,3-DCP) and 3-monochloropropane-1,2-diol (3-MCPD), potentially carcinogenic byproducts formed during the manufacturing process of PAE resins. As the levels of these byproducts decreased, the environmental, health, and safety (EH&S) profile of PAE resins and paper products improved. Recent initiatives from major retailers are focusing on product ingredient transparency and quality, thus encouraging the development of safer product formulations while maintaining performance. PAE resin research over the past 20 years has been directed toward regulatory requirements to improve consumer safety and minimize exposure to potentially carcinogenic materials found in various paper products. Among the best-known regulatory requirements are the recommendations of the German Federal Institute for Risk Assessment (BfR), which define the levels of 1,3-DCP and 3-MCPD that can be extracted by water from various food contact grades of paper. These criteria led to the development of third generation (G3) products that contain very low levels of 1,3-DCP (typically <10 parts per million in the as-received/delivered resin). This paper outlines the PAE resin chemical contributors to adsorbable organic halogens and 3-MCPD in paper and provides recommendations for the use of each PAE resin product generation (G1, G1.5, G2, G2.5, and G3).


Author(s): Tochukwu Moses, David Heesom, David Oloke, Martin Crouch

The UK construction industry, through its Government Construction Strategy, has recently been mandated to implement Level 2 Building Information Modelling (BIM) on public sector projects. This move, along with other initiatives, is key to driving a requirement for 25% cost reduction (establishing the most cost-effective means) on public sector projects. Other key deliverables within the strategy include reduction in overall project time, early contractor involvement, improved sustainability, and enhanced product quality. Collaboration and integrated project delivery are central to the Level 2 implementation strategy, yet the key protocols or standards relating to cost within BIM processes are not well defined. As offsite construction becomes more prolific within the UK construction sector, this construction approach, coupled with BIM (particularly the 5D automated quantification process) and early contractor involvement, provides significant opportunities for the sector to meet government targets. Early contractor involvement is supported by both the industry and successive governments as a credible means to avoid and manage project risks, encourage innovation and added value, make cost and project time predictable, and improve outcomes. The contractor is seen as an expert in construction, and it could be counterintuitive to exclude such valuable expertise from the pre-construction phase, especially given the BIM intent of "build it twice," once virtually and once physically. In particular, when offsite construction is used, the contractor's construction expertise should be leveraged for the virtual build in BIM-designed projects to ensure a fully streamlined process. Building in a layer of automated costing through 5D BIM will bring about a more robust method of quantification and can help to deliver the 25% reduction in overall project cost. Using a literature review and a case study, this paper examines the benefits of early contractor involvement (ECI) and the impact of 5D BIM on the offsite construction process.


2011, Vol 14 (2)
Author(s): Thomas G Koch

Current estimates of obesity costs ignore the impact of future weight loss and gain, and may either over- or underestimate the economic consequences of weight loss. In light of this, I construct static and dynamic measures of the medical costs associated with body mass index (BMI), to be balanced against the cost of one-time interventions. This study finds that ignoring the implications of weight loss and gain over time overstates the medical-cost savings of such interventions by an order of magnitude. When the relationship between spending and age is allowed to vary, weight-loss attempts appear to be cost-effective starting and ending with middle age. Some interventions recently proven to decrease weight may also be cost-effective.
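To make the static-versus-dynamic distinction concrete, the toy calculation below (all numbers invented, not the paper's estimates or model) compares discounted medical-cost savings when BMI is held fixed forever against savings when yearly transitions between weight states are allowed; permitting regain shrinks the estimated savings substantially.

```python
# Toy illustration of static vs dynamic cost measures. Costs, transition
# probabilities, discount rate, and horizon are all invented.
import numpy as np

costs = np.array([4000.0, 6500.0])   # annual medical cost: [normal, obese]
P = np.array([[0.85, 0.15],          # yearly BMI-state transitions:
              [0.25, 0.75]])         # weight regain/relapse is allowed
beta, years = 0.97, 30               # discount factor and horizon

def expected_cost(start_state, transitions):
    state = np.eye(2)[start_state]   # one-hot initial distribution
    return sum(beta**t * state @ np.linalg.matrix_power(transitions, t) @ costs
               for t in range(years))

# Static measure: each person stays in their initial state (identity matrix).
static_saving = expected_cost(1, np.eye(2)) - expected_cost(0, np.eye(2))
# Dynamic measure: both groups drift toward the same long-run distribution.
dynamic_saving = expected_cost(1, P) - expected_cost(0, P)
# With these numbers static_saving is roughly an order of magnitude larger
# than dynamic_saving, echoing the overstatement described in the abstract.
```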


2018, Vol 32 (2), pp. 103-119
Author(s): Colleen M. Boland, Chris E. Hogan, Marilyn F. Johnson

SYNOPSIS Mandatory existence disclosure rules require an organization to disclose a policy's existence, but not its content. We examine policy adoption frequencies in the year immediately after the IRS required mandatory existence disclosure by nonprofits of various governance policies. We also examine adoption frequencies in the year of the subsequent change from mandatory existence disclosure to a disclose-and-explain regime that required supplemental disclosures about the content and implementation of conflict of interest policies. Our results suggest that in areas where there is unclear regulatory authority, mandatory existence disclosure is an effective and low-cost regulatory device for encouraging the adoption of policies desired by regulators, provided those policies are cost-effective for regulated firms to implement. In addition, we find that disclose-and-explain regulatory regimes provide stronger incentives for policy adoption than do mandatory existence disclosure regimes and also discourage "check the box" behavior. Future research should examine the impact of mandatory existence disclosure rules in the year that the regulation is implemented. Data Availability: Data are available from sources cited in the text.

