Comparison of recent probabilistic seismic hazard maps for southern California

1998 ◽  
Vol 88 (3) ◽  
pp. 855-861
Author(s):  
Mark W. Stirling ◽  
Steven G. Wesnousky

Abstract Probabilistic seismic hazard (PSH) maps for southern California produced from the models of Ward (1994), the Working Group on California Earthquake Probabilities (1995), and the U.S. Geological Survey and California Division of Mines and Geology (Frankel et al., 1996; Petersen et al., 1996) show the peak ground accelerations predicted by each model at 10% probability of exceedance in 50 years, and the probability that 0.2 g will be exceeded in 30 years, for “rock” site conditions. Differences among the maps range up to 0.4 g and 50%, respectively. We examine the locations and magnitudes of the differences as a basis to define the issues and avenues of research that may lead to more confident estimates of PSH in the future. Our analysis shows that three major factors contribute to the observed differences between the maps: the maximum magnitude assigned to a given fault, the proportion of predicted earthquakes that are distributed off the major faults, and the use of geodetic strain data to predict earthquake rates.

Author(s):  
V L Stevens ◽  
J-P Avouac

Summary The increasing density of geodetic measurements makes it possible to map surface strain rate in many zones of active tectonics with unprecedented spatial resolution. Here we show that the strain rate tensor calculated from GPS in the India-Asia collision zone represents well the strain released in earthquakes. This means that geodetic data in the India-Asia collision zone can be extrapolated back in time to estimate strain buildup on active faults, or the kinematics of continental deformation. We infer that the geodetic strain rates can be assumed stationary through time on the timescale needed to build up the elastic strain released by larger earthquakes, and that they can be used to estimate the probability of triggering earthquakes. We show that the background seismicity rate correlates with the geodetic strain rate. A good fit is obtained assuming a linear relationship ($\dot{N} = \lambda \cdot \dot{\epsilon}$, where $\dot{N}$ is the density of the rate of Mw ≥ 4 earthquakes, $\dot{\epsilon}$ is the strain rate, and $\lambda = 2.5 \pm 0.1 \times 10^{-3}$ m$^{-2}$), as would be expected from a standard Coulomb failure model. However, the fit is significantly better for a non-linear relationship ($\dot{N} = \gamma_1 \cdot \dot{\epsilon}^{\gamma_2}$ with $\gamma_1 = 2.5 \pm 0.6$ m$^{-2}$ and $\gamma_2 = 1.42 \pm 0.15$). The b-value of the Gutenberg-Richter law, which characterizes the magnitude-frequency distribution, is found to be insensitive to the strain rate. In the case of a linear correlation between seismicity and strain rate, the maximum magnitude earthquake, derived from the moment conservation principle, is expected to be independent of the strain rate. By contrast, the non-linear case implies that the maximum magnitude earthquake would be larger in zones of low strain rate. We show that within areas of constant strain rate, earthquakes above Mw 4 follow a Poisson distribution in time and are uniformly distributed in space. These findings provide a framework to estimate the probability of occurrence and magnitude of earthquakes as a function of the geodetic strain rate. We describe how the seismicity models derived from this approach can be used as an input for probabilistic seismic hazard analysis. This method is easy to update automatically and can be applied in a consistent manner to any continental zone of active tectonics with sufficient geodetic coverage.
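As an illustration of the two fitted scaling relations quoted above, the following minimal Python sketch evaluates the linear and non-linear seismicity-rate models for a given geodetic strain rate. The coefficients are the central values from the summary; the unit conventions, strain-rate range, and function names are illustrative assumptions, not part of the published analysis.

```python
import numpy as np

# Central parameter values quoted in the summary (uncertainties omitted).
LAMBDA = 2.5e-3                 # m^-2, coefficient of the linear fit
GAMMA_1, GAMMA_2 = 2.5, 1.42    # m^-2 and exponent of the non-linear fit

def rate_density_linear(strain_rate):
    """Density of the rate of Mw >= 4 earthquakes: N_dot = lambda * eps_dot."""
    return LAMBDA * strain_rate

def rate_density_nonlinear(strain_rate):
    """Non-linear alternative: N_dot = gamma_1 * eps_dot ** gamma_2."""
    return GAMMA_1 * strain_rate ** GAMMA_2

# Compare the two models over an assumed range of strain rates (per year).
for eps in np.logspace(-9, -7, 5):
    print(f"eps_dot = {eps:.1e}: linear = {rate_density_linear(eps):.3e}, "
          f"non-linear = {rate_density_nonlinear(eps):.3e}")
```

Because the exponent γ2 exceeds 1, the non-linear model predicts proportionally fewer Mw ≥ 4 events in low-strain-rate zones than the linear model, which is what drives the different implications for maximum magnitude noted in the summary.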


2009 ◽  
Vol 99 (2A) ◽  
pp. 585-610 ◽  
Author(s):  
A. Akinci ◽  
F. Galadini ◽  
D. Pantosti ◽  
M. Petersen ◽  
L. Malagnini ◽  
...  

2021 ◽  
Author(s):  
Molly Gallahue ◽  
Leah Salditch ◽  
Madeleine Lucas ◽  
James Neely ◽  
Susan Hough ◽  
...  

Probabilistic seismic hazard assessments, which forecast levels of earthquake shaking that should be exceeded with only a certain probability over a given period of time, are important for earthquake hazard mitigation. They rely on assumptions about when and where earthquakes will occur, their size, and the resulting shaking as a function of distance as described by ground-motion models (GMMs) that cover broad geologic regions. Seismic hazard maps are used to develop building codes.

To explore the robustness of the maps' shaking forecasts, we consider how well the maps hindcast past shaking. We have compiled the California Historical Intensity Mapping Project (CHIMP) dataset of the maximum observed seismic intensity of shaking from the largest Californian earthquakes over the past 162 years. Previous comparisons between the maps for a constant VS30 (shear-wave velocity in the top 30 m of soil) of 760 m/s and CHIMP, based on several metrics, suggested that current maps overpredict shaking.

The differences between the VS30 at the CHIMP sites and the reference value of 760 m/s could amplify or deamplify the ground motions relative to the mapped values. We evaluate whether the VS30 at the CHIMP sites could cause a possible bias in the models. By comparison with the intensity data in CHIMP, we find that using site-specific VS30 does not improve map performance, because the site corrections cause only minor differences from the original 2018 USGS hazard maps at the short periods (high frequencies) relevant to peak ground acceleration and hence MMI. The minimal differences reflect the fact that the nonlinear deamplification due to increased soil damping largely offsets the linear amplification due to low VS30. The net effects will be larger for longer periods relevant to tall buildings, where net amplification occurs.

Possible reasons for this discrepancy include limitations of the dataset, a bias in the hazard models, an over-estimation of the aleatory variability of the ground motion, or that seismicity throughout the historical period has been lower than the long-term average, perhaps by chance due to the variability of earthquake recurrence. Resolving this discrepancy, which is also observed in Italy and Japan, could improve the performance of seismic hazard maps and thus earthquake safety for California and, by extension, worldwide. We also explore whether new nonergodic GMMs, with reduced aleatory variability, perform better than presently used ergodic GMMs when compared to historical data.
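The offsetting of linear amplification by nonlinear soil damping described above can be illustrated with a generic short-period site-response term of the kind used in modern GMMs: a linear ln(VS30/Vref) scaling plus a nonlinear term that grows with the rock-level PGA. The sketch below uses made-up coefficients for illustration only; it is not the site term of the 2018 USGS maps or of any published GMM.

```python
import numpy as np

VREF = 760.0   # m/s, reference rock condition used by the hazard maps
C = -0.6       # assumed linear Vs30 scaling coefficient
F3 = 0.1       # g, assumed transition acceleration for the nonlinear term

def nonlinear_slope(vs30):
    """Assumed Vs30 dependence of the nonlinear coefficient (softer soil -> stronger damping)."""
    return -0.25 * np.clip((VREF - vs30) / VREF, 0.0, 1.0)

def site_amplification(vs30, pga_rock):
    """Natural-log amplification of short-period motion relative to Vs30 = 760 m/s."""
    linear = C * np.log(vs30 / VREF)                       # amplification as Vs30 drops
    nonlinear = nonlinear_slope(vs30) * np.log((pga_rock + F3) / F3)  # damping as shaking grows
    return linear + nonlinear

# At a soft site under strong rock shaking, the nonlinear deamplification
# cancels much of the linear amplification, as described in the abstract.
print(np.exp(site_amplification(vs30=300.0, pga_rock=0.4)))
```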


Author(s):  
K L Johnson ◽  
M Pagani ◽  
R H Styron

Summary The southern Pacific Islands region is highly seismically active, and includes earthquakes from four major subduction systems, seafloor fracture zones and transform faults, and other sources of crustal seismicity. Since 1900, the area has experienced >350 earthquakes of M > 7.0, including 11 of M ≥ 8.0. Given the elevated threat of earthquakes, several probabilistic seismic hazard analyses have been published for this region or for subregions within it; however, those that are publicly accessible do not provide complete coverage of the region using homogeneous methodologies. Here, we present a probabilistic seismic hazard model for the southern Pacific Islands that comprehensively covers the Solomon Islands in the northwest to the Tonga islands in the southeast. The seismic source model accounts for active shallow crustal seismicity with seafloor faults and gridded smoothed seismicity, subduction interfaces using faults with geometries defined based on geophysical datasets and models, and intraslab seismicity modelled by a set of ruptures that occupy the slab volume. Each source type is assigned occurrence rates based on sub-catalogues classified to each respective tectonic context. Subduction interface and crustal fault occurrence rates also incorporate a tectonic component based on their respective characteristic earthquakes. We demonstrate the use of non-standard magnitude-frequency distributions to reproduce the observed occurrence rates. For subduction interface sources, we use several versions of the source model to account for epistemic uncertainty in factors impacting the maximum magnitude earthquake permissible by each source, varying the interface lower depth and segmentation as well as the magnitude scaling relationship used to compute the maximum magnitude earthquake and subsequently its occurrence rate. The ground motion characterisation uses a logic tree that weights three ground motion prediction equations for each tectonic region. We compute hazard maps for 10% and 2% probability of exceedance in 50 years on rock sites, discussing the regional distribution of peak ground acceleration and spectral acceleration at a period of 1.0 s, homing in on the hazard curves and uniform hazard spectra of several capital or populous cities and drawing comparisons to other recent hazard models. The results reveal that the most hazardous landmasses are the island chains closest to subduction trenches, as well as localised areas with high rates of seismicity occurring in active shallow crust. We use seismic hazard disaggregation to demonstrate that at selected cities located above subduction zones, the PGA with 10% probability of exceedance in 50 years is controlled by Mw > 7.0 subduction interface and intraslab earthquakes, while at cities far from subduction zones, Mw < 6.5 crustal earthquakes contribute most. The model is used for southern Pacific Islands coverage in the Global Earthquake Model Global Hazard Mosaic.
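For reference, the two probability levels quoted above map to annual exceedance rates and return periods through the usual Poisson assumption. The short sketch below shows that conversion; it is a generic illustration, not code from the published model.

```python
import math

def rate_from_poe(poe, investigation_time=50.0):
    """Annual exceedance rate implied by a probability of exceedance over a window,
    assuming Poissonian occurrence: poe = 1 - exp(-rate * T)."""
    return -math.log(1.0 - poe) / investigation_time

def return_period(poe, investigation_time=50.0):
    """Mean return period (years) corresponding to the given probability of exceedance."""
    return 1.0 / rate_from_poe(poe, investigation_time)

for poe in (0.10, 0.02):
    print(f"{poe:.0%} in 50 yr -> return period ~ {return_period(poe):.0f} yr")
# 10% in 50 yr corresponds to ~475 yr; 2% in 50 yr corresponds to ~2475 yr.
```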


2017 ◽  
Vol 17 (11) ◽  
pp. 2017-2039 ◽  
Author(s):  
Alessandro Valentini ◽  
Francesco Visini ◽  
Bruno Pace

Abstract. Italy is one of the most seismically active countries in Europe. Moderate to strong earthquakes, with magnitudes of up to ∼ 7, have been historically recorded for many active faults. Currently, probabilistic seismic hazard assessments in Italy are mainly based on area source models, in which seismicity is modelled using a number of seismotectonic zones and the occurrence of earthquakes is assumed uniform. However, in the past decade, efforts have increasingly been directed towards using fault sources in seismic hazard models to obtain more detailed and potentially more realistic patterns of ground motion. In our model, we used two categories of earthquake sources. The first involves active faults, with geological slip rates used to quantify the seismic activity rate. We produced an inventory of all fault sources with details of their geometric, kinematic, and energetic properties. The associated parameters were used to compute the total seismic moment rate of each fault. We evaluated the magnitude–frequency distribution (MFD) of each fault source using two models: a characteristic Gaussian model centred at the maximum magnitude and a truncated Gutenberg–Richter model. The second earthquake source category involves grid-point seismicity, in which a fixed-radius smoothing approach and a historical catalogue were used to evaluate seismic activity. Under the assumption that deformation is concentrated along faults, we combined the MFD derived from the geometry and slip rates of active faults with the MFD from the spatially smoothed earthquake sources and assumed that the smoothed seismic activity in the vicinity of an active fault gradually decreases by a fault-size-driven factor. Additionally, we computed horizontal peak ground acceleration (PGA) maps for return periods of 475 and 2475 years. Although the ranges and gross spatial distributions of the expected accelerations obtained here are comparable to those obtained through methods involving seismic catalogues and classical zonation models, the spatial pattern of the hazard maps obtained with our model is far more detailed. Our model is characterized by areas that are more hazardous and that correspond to mapped active faults, while previous models yield expected accelerations that are almost uniformly distributed across large regions. In addition, we conducted sensitivity tests to determine the impact on the hazard results of the earthquake rates derived from the two MFD models for faults and to determine the relative contributions of faults versus distributed seismic activity. We believe that our model represents advancements in terms of the input data (quantity and quality) and methodology used in the field of fault-based regional seismic hazard modelling in Italy.
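The moment-rate budgeting described above (geological slip rate → fault moment rate → magnitude–frequency distribution) can be sketched as follows. The rigidity, the discretised moment balancing of the truncated Gutenberg–Richter distribution, and all numerical values are simplifying assumptions for illustration, not the parameters or implementation of the published model.

```python
import numpy as np

MU = 3.0e10  # Pa, assumed crustal rigidity

def fault_moment_rate(area_km2, slip_rate_mm_yr):
    """Total seismic moment rate (N*m/yr): M0_dot = mu * A * s_dot."""
    return MU * (area_km2 * 1e6) * (slip_rate_mm_yr * 1e-3)

def truncated_gr_rates(moment_rate, b, m_min, m_max, dm=0.1):
    """Annual occurrence rates per magnitude bin for a truncated Gutenberg-Richter MFD,
    scaled so that the summed moment release balances the fault moment rate.
    Uses M0 = 10**(1.5*M + 9.05) (Hanks & Kanamori)."""
    mags = np.arange(m_min, m_max + dm / 2, dm)
    rel = 10.0 ** (-b * mags)                 # relative G-R rates, truncated at m_max
    moments = 10.0 ** (1.5 * mags + 9.05)     # moment per event in each bin
    scale = moment_rate / np.sum(rel * moments)  # moment-balancing factor
    return mags, scale * rel

# Example with assumed fault properties.
m0_dot = fault_moment_rate(area_km2=600.0, slip_rate_mm_yr=1.0)
mags, rates = truncated_gr_rates(m0_dot, b=1.0, m_min=5.5, m_max=7.0)
print(f"Total rate of M >= 5.5 events: {rates.sum():.2e} per yr "
      f"(~{1.0 / rates.sum():.0f} yr recurrence)")
```

A characteristic Gaussian MFD centred at the maximum magnitude would be scaled against the same moment rate, which is why the two MFD choices yield different rates for the same fault and motivate the sensitivity tests mentioned in the abstract.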


1991 ◽  
Vol 4 (1) ◽  
pp. 1-6
Author(s):  
Aristoteles Vergara Muñoz

2013 ◽  
Vol 8 (5) ◽  
pp. 861-868 ◽  
Author(s):  
Nobuoto Nojima ◽  
Satoshi Fujikawa ◽  
Yutaka Ishikawa ◽  
Toshihiko Okumura ◽  
...  

With the aim of better understanding and more effective utilization of probabilistic seismic hazard maps in Japan, an exposure analysis has been carried out by combining hazard maps with population distribution maps. Approximately 80% of the population of Japan is exposed to a relatively high seismic hazard, i.e., a 3% probability of exceeding JMA seismic intensity 6 lower within 30 years. In highly populated areas, specifically in major metropolitan areas, the seismic hazard tends to be relatively high because of the site amplification effects of Holocene deposits. In implementing earthquake disaster mitigation measures, it is important to consider the overlapping effect of seismic hazard and demographic distributions.
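The core of the exposure analysis, overlaying a hazard grid on a co-registered population grid and summing the population above a hazard threshold, can be sketched as below. The grids and values are toy data for illustration; they are not the JMA hazard maps or census data used in the study.

```python
import numpy as np

def exposed_population(hazard_poe, population, threshold=0.03):
    """Population living where the 30-yr probability of exceeding the intensity
    threshold is at least `threshold` (e.g. 3% for JMA intensity 6 lower).
    `hazard_poe` and `population` are co-registered grids."""
    mask = hazard_poe >= threshold
    return population[mask].sum(), population.sum()

# Toy co-registered grids (illustrative values only).
hazard = np.array([[0.05, 0.01],
                   [0.10, 0.02]])
pop = np.array([[4.0e6, 1.0e6],
                [6.0e6, 2.0e6]])
exposed, total = exposed_population(hazard, pop)
print(f"{exposed / total:.0%} of the population is exposed above the threshold")
```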

