Exploring probabilistic seismic risk assessment accounting for seismicity clustering and damage accumulation: Part II. Risk analysis

2020, pp. 875529302093881
Author(s): Athanasios N Papadopoulos, Paolo Bazzurro

In current practice, most earthquake risk models adopt a “declustered” view of seismicity; that is, they disregard foreshocks, aftershocks, and triggered earthquakes and model seismicity as a series of independent mainshock events whose occurrence (typically) conforms to a Poisson process. This practice is certainly disputable, but it has been justified by the false notion that earthquakes of smaller magnitude than the mainshock cannot add to the damage the mainshock has already caused. A companion paper makes use of the epidemic-type aftershock sequence (ETAS) model fitted to Central Italy seismicity data to describe the full earthquake occurrence process, including “dependent” earthquakes. Herein, loss estimates for the region of Umbria in Central Italy are derived using stochastic event catalogs generated by means of the ETAS model and damage-dependent fragility functions that track damage accumulation. The results are then compared with estimates obtained with a conventional Poisson-based model. The potential gains of a model capable of capturing the spatiotemporal clustering of seismicity are illustrated, along with a discussion of the practical details and challenges such considerations entail.
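
The damage-accumulation mechanism described above can be illustrated with a minimal sketch: each event in a stochastic catalog shakes a building whose fragility depends on the damage it already carries. The damage states, median capacities, and dispersion below are illustrative assumptions, not the fragility model used in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

BETA = 0.6       # lognormal fragility dispersion (assumed)
MAX_STATE = 4    # damage states: 0 = intact ... 4 = collapse
# Hypothetical median IM capacities (g) for reaching each worse state,
# conditional on the current state; medians drop as damage accumulates,
# so an already damaged building is more fragile.
MEDIANS = {
    0: [0.15, 0.30, 0.55, 0.90],
    1: [0.25, 0.50, 0.85],
    2: [0.45, 0.80],
    3: [0.70],
}

def next_state(state, im):
    """Sample the post-event damage state given the current state and the
    intensity measure `im` (in g) experienced at the site."""
    if state >= MAX_STATE:
        return state  # collapse is absorbing
    p_exc = norm.cdf(np.log(im / np.asarray(MEDIANS[state])) / BETA)
    u = rng.random()
    return state + int(np.sum(u < p_exc))  # nested exceedance probabilities

# One stochastic catalog realization: (event label, IM in g), in time order.
catalog = [("mainshock", 0.35), ("aftershock-1", 0.22), ("aftershock-2", 0.28)]
state = 0
for name, im in catalog:
    state = next_state(state, im)
    print(f"after {name}: damage state {state}")
```

Run over many ETAS catalog realizations, the expected loss embeds the contribution of damage accumulated across clustered events, which a declustered Poisson model cannot capture.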

2020, pp. 875529302095733
Author(s): Athanasios N Papadopoulos, Paolo Bazzurro, Warner Marzocchi

Probabilistic seismic hazard analysis (PSHA), as a tool to assess the probability that ground motion of a given intensity or larger is experienced at a given site within a given time span, has historically formed the basis of both building design codes in earthquake-prone regions and seismic risk models. PSHA traditionally refers solely to mainshock events and typically employs a homogeneous Poisson process to model their occurrence. Nevertheless, recent disasters, such as the 2010–2011 Christchurch sequence or the 2016 Central Italy earthquakes, to name a few, have highlighted the potential pitfalls of neglecting the occurrence of foreshocks, aftershocks, and other triggered events, and have pinpointed the need to revisit current practice. Herein, we employ the epidemic-type aftershock sequence (ETAS) model to describe seismicity in Central Italy, investigate the model’s capability to reproduce salient features of observed seismicity, and compare ETAS-derived one-year hazard estimates with those obtained with a standard mainshock-only Poisson-based hazard model. A companion paper uses the hazard models derived herein to compare and contrast loss estimates for the residential exposure of Umbria in Central Italy.
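
Under the homogeneous Poisson assumption that the abstract contrasts with ETAS, the hazard computation reduces to a closed form for the probability of exceedance; a minimal illustration:

```python
import numpy as np

def poisson_exceedance_prob(annual_rate, t_years):
    """P(at least one exceedance in t years) under a homogeneous Poisson
    process with annual exceedance rate `annual_rate`: 1 - exp(-rate * t)."""
    return 1.0 - np.exp(-annual_rate * t_years)

# A ground-motion level exceeded on average once every 475 years has
# roughly 10% probability of exceedance in 50 years:
print(poisson_exceedance_prob(1 / 475, 50))  # ~0.10
```

An ETAS-based hazard model abandons this closed form: exceedance probabilities must instead be estimated by simulating many synthetic catalogs, because the rate itself depends on the evolving sequence.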


Author(s): Christian Grimm, Martin Käser, Sebastian Hainzl, Marco Pagani, Helmut Küchenhoff

ABSTRACT Earthquake sequences add a substantial hazard beyond the solely declustered perspective of common probabilistic seismic hazard analysis. A particularly strong driver of both social and economic losses is the occurrence of so-called earthquake doublets (more generally, multiplets), that is, sequences of two (or more) comparatively large events in spatial and temporal proximity. Without differentiating between foreshocks and aftershocks, we hypothesize three main influencing factors of doublet occurrence: (1) the number of direct and secondary aftershocks triggered by an earthquake; (2) the occurrence of independent clusters and seismic background events in the same time–space window; and (3) the magnitude size distribution of triggered events (in contrast to independent events). We tested synthetic catalogs simulated by a standard epidemic-type aftershock sequence (ETAS) model for both Japan and southern California. Our findings show that the common ETAS approach significantly underestimates doublet frequencies compared with observations in historical catalogs. In addition, the simulated catalogs show smoother spatiotemporal clustering than their observed counterparts. Focusing on the impact on direct aftershock productivity and total cluster sizes, we propose two modifications of the ETAS spatial kernel to improve doublet rate predictions: (a) a restriction of the spatial function to a maximum distance of 2.5 estimated rupture lengths and (b) an anisotropic function with contour lines constructed by a box with two semicircular ends around the estimated rupture segment. These modifications shift the triggering potential from weaker to stronger events and consequently improve doublet rate predictions for larger events, although they still underestimate historic doublet occurrence rates. Moreover, the restricted spatial functions better fulfill the empirical Båth’s law for the maximum aftershock magnitude. The tested clustering properties of strong events are not sufficiently captured by the global, catalog-scale measures typically used for model selection, such as log-likelihood values, which would favor the conventional, unrestricted models.
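
Modification (a) can be sketched as a hard truncation of a standard isotropic power-law ETAS kernel at 2.5 rupture lengths; the kernel form, parameter values, and the Wells–Coppersmith-style magnitude–length scaling below are illustrative assumptions rather than the paper’s fitted model.

```python
import numpy as np

def rupture_length_km(mag):
    """Subsurface rupture length from a Wells & Coppersmith (1994)-type
    scaling relation (all slip types); treat coefficients as assumptions."""
    return 10 ** (-2.44 + 0.59 * mag)

def truncated_spatial_kernel(r_km, mag, d0=0.5, gamma=0.5, q=1.5, m_ref=4.0,
                             trunc_factor=2.5):
    """Unnormalized isotropic power-law ETAS kernel, set to zero beyond
    trunc_factor * rupture length (modification (a) above)."""
    d = d0 * 10 ** (gamma * (mag - m_ref))  # magnitude-dependent length scale
    f = (r_km ** 2 + d ** 2) ** (-q)        # power-law spatial decay
    r_max = trunc_factor * rupture_length_km(mag)
    return np.where(r_km <= r_max, f, 0.0)
```

Because the truncated kernel must be renormalized to integrate to one, triggering weight that the unrestricted kernel spreads to large distances is pulled back near the rupture, which is what shifts triggering potential toward stronger (longer-rupture) events.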


2015, Vol 57 (6)
Author(s): Maura Murru, Jiancang Zhuang, Rodolfo Console, Giuseppe Falcone

In this paper, we compare the forecasting performance of several statistical models that are used to describe the occurrence process of earthquakes, in forecasting the short-term earthquake probabilities during the L’Aquila earthquake sequence in central Italy in 2009. These models include the Proximity to Past Earthquakes (PPE) model and two versions of the Epidemic Type Aftershock Sequence (ETAS) model. We used the information gains corresponding to the Poisson and binomial scores to evaluate the performance of these models. It is shown that both ETAS models work better than the PPE model. However, in comparing the two types of ETAS models, the one with the same fixed exponent coefficient α = 2.3 for both the productivity function and the scaling factor in the spatial response function (ETAS I) performs better in forecasting the active aftershock sequence than the model with different exponent coefficients (ETAS II) when the Poisson score is adopted. ETAS II performs better when a lower magnitude threshold of 2.0 and the binomial score are used. The reason is found to be that the catalog does not have an event of similar magnitude to the L’Aquila mainshock (Mw 6.3) in the training period (April 16, 2005 to March 15, 2009); the α-value is therefore underestimated, and the forecast seismicity is underestimated when the productivity function is extrapolated to high magnitudes. We also investigate the effect of the inclusion of small events in forecasting larger events. These results suggest that the training catalog used for estimating the model parameters should include earthquakes of magnitudes similar to the mainshock when forecasting seismicity during an aftershock sequence.
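
The Poisson-score comparison used above can be sketched as a binned log-likelihood contest: each model forecasts an expected count per space–time–magnitude bin, and the information gain per earthquake is the mean log-likelihood difference. A minimal version, assuming strictly positive forecast rates:

```python
import numpy as np
from scipy.special import gammaln

def poisson_log_likelihood(rates, counts):
    """Joint Poisson log-likelihood of observed bin counts given forecast
    rates (expected counts per bin); rates must be strictly positive."""
    rates = np.asarray(rates, dtype=float)
    counts = np.asarray(counts, dtype=float)
    return float(np.sum(-rates + counts * np.log(rates) - gammaln(counts + 1)))

def information_gain_per_event(rates_a, rates_b, counts):
    """Average log-likelihood gain per earthquake of model A over model B,
    in the spirit of CSEP-style forecast comparisons."""
    n_events = np.sum(counts)
    return (poisson_log_likelihood(rates_a, counts)
            - poisson_log_likelihood(rates_b, counts)) / n_events
```

A positive value means model A assigns, on average, higher probability to where and when the earthquakes actually occurred.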


2020, Vol 91 (3), pp. 1567-1578
Author(s): Kevin R. Milner, Edward H. Field, William H. Savran, Morgan T. Page, Thomas H. Jordan

Abstract The first Uniform California Earthquake Rupture Forecast, Version 3–epidemic-type aftershock sequence (UCERF3-ETAS) aftershock simulations were running on a high-performance computing cluster within 33 min of the 4 July 2019 M 6.4 Searles Valley earthquake. UCERF3-ETAS, an extension of the third Uniform California Earthquake Rupture Forecast (UCERF3), is the first comprehensive, fault-based, epidemic-type aftershock sequence (ETAS) model. It produces ensembles of synthetic aftershock sequences both on and off explicitly modeled UCERF3 faults to answer a key question repeatedly asked during the Ridgecrest sequence: What are the chances that the earthquake that just occurred will turn out to be the foreshock of an even bigger event? As the sequence unfolded—including one such larger event, the 5 July 2019 M 7.1 Ridgecrest earthquake almost 34 hr later—we updated the model with observed aftershocks, finite-rupture estimates, sequence-specific parameters, and alternative UCERF3-ETAS variants. Although configuring and running UCERF3-ETAS at the time of the earthquake was not fully automated, considerable effort had been focused in 2018 on improving model documentation and ease of use with a public GitHub repository, command line tools, and flexible configuration files. These efforts allowed us to quickly respond and efficiently configure new simulations as the sequence evolved. Here, we discuss lessons learned during the Ridgecrest sequence, including sensitivities of fault triggering probabilities to poorly constrained finite-rupture estimates and model assumptions, as well as implications for UCERF3-ETAS operationalization.
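
The key operational question quoted above reduces, in an ensemble forecast, to a counting exercise over simulated sequences. A minimal sketch, not the actual UCERF3-ETAS code, assuming each synthetic catalog lists (time in days, magnitude) pairs relative to the triggering shock:

```python
def prob_larger_event(simulated_catalogs, trigger_mag, horizon_days):
    """Fraction of synthetic aftershock sequences that contain an event
    larger than the triggering shock within the forecast horizon."""
    hits = sum(
        any(t <= horizon_days and m > trigger_mag for t, m in catalog)
        for catalog in simulated_catalogs
    )
    return hits / len(simulated_catalogs)

# e.g. three toy catalogs for an M 6.4 trigger and a 7-day horizon:
catalogs = [
    [(0.1, 4.2), (1.4, 7.1)],   # contains a larger event -> counts as a hit
    [(0.5, 5.0), (3.0, 4.8)],
    [(2.2, 6.0)],
]
print(prob_larger_event(catalogs, trigger_mag=6.4, horizon_days=7.0))  # 1/3
```

In practice the ensemble holds many thousands of catalogs, and the estimate is updated as observed aftershocks, finite-rupture estimates, and sequence-specific parameters are fed back into the simulations.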


2017, Vol 33 (4), pp. 1279-1299
Author(s): Edward Field, Keith Porter, Kevin Milner

We present a prototype operational loss model based on UCERF3-ETAS, which is the third Uniform California Earthquake Rupture Forecast with an Epidemic Type Aftershock Sequence (ETAS) component. As such, UCERF3-ETAS represents the first earthquake forecast to relax fault segmentation assumptions and to include multi-fault ruptures, elastic rebound, and spatiotemporal clustering, all of which seem important for generating realistic and useful aftershock statistics. UCERF3-ETAS is nevertheless an approximation of the system, so its usefulness will vary, and its potential value needs to be ascertained in the context of each application. We examine this question with respect to statewide loss estimates, exemplifying how risk can be elevated by orders of magnitude due to triggered events following various scenario earthquakes. Two important considerations are the probability gains, relative to loss likelihoods in the absence of mainshocks, and the rapid decay of those gains with time. Significant uncertainties and model limitations remain, so we hope this paper will inspire similar analyses with respect to other risk metrics, to help ascertain whether operationalization of UCERF3-ETAS would be worth the considerable resources required.
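
The two considerations named above, probability gain and its rapid decay, can be illustrated with a toy calculation in which triggered rates follow a modified Omori decay; the parameter values are illustrative assumptions, not UCERF3-ETAS outputs.

```python
import numpy as np
from scipy.integrate import quad

def omori_rate(t_days, K=50.0, c=0.05, p=1.1):
    """Modified Omori aftershock rate (events/day); illustrative parameters."""
    return K / (t_days + c) ** p

def probability_gain(t_start, window_days, background_rate_per_day):
    """Ratio of expected event counts (triggered + background vs. background
    alone) in a window beginning t_start days after the scenario mainshock."""
    n_triggered, _ = quad(omori_rate, t_start, t_start + window_days)
    n_background = background_rate_per_day * window_days
    return (n_triggered + n_background) / n_background

for t in (0.0, 7.0, 30.0, 365.0):
    gain = probability_gain(t, window_days=7.0, background_rate_per_day=0.01)
    print(f"one-week gain starting day {t:6.1f}: {gain:10.1f}")
```

The gain is enormous in the first week and collapses toward one within months, which is why the window of elevated loss likelihood after a scenario event is short-lived.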


Author(s): G Petrillo, E Lippiello

Summary The Epidemic Type Aftershock Sequence (ETAS) model provides a good description of the post-seismic spatio-temporal clustering of seismicity and is also able to capture some features of the increase of seismic activity caused by foreshocks. Recent results, however, have shown that the number of foreshocks observed in instrumental catalogs is significantly larger than the one predicted by the ETAS model. Here we show that it is possible to keep an epidemic description of post-seismic activity and, at the same time, to incorporate pre-seismic temporal clustering related to foreshocks. Also taking into account the short-term incompleteness of instrumental catalogs, we present a model which achieves a very good description of southern California seismicity on both the aftershock and the foreshock side. Our results indicate that the existence of a preparatory phase anticipating mainshocks represents the most plausible explanation for the occurrence of foreshocks.
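
The short-term incompleteness correction mentioned above is often handled with a time-dependent completeness magnitude of the form given by Helmstetter et al. (2006); the constants below are the ones commonly quoted for California and should be treated as assumptions elsewhere.

```python
import numpy as np

def completeness_magnitude(t_days, main_mag, long_term_mc=2.0):
    """Time-dependent completeness magnitude after a mainshock of magnitude
    `main_mag`, following Mc(t) = M - 4.5 - 0.75 * log10(t in days),
    floored at the long-term catalog completeness level (assumed here)."""
    mc = main_mag - 4.5 - 0.75 * np.log10(t_days)
    return np.maximum(mc, long_term_mc)

# Directly after an M 6.3 event, the network misses small aftershocks:
for t in (0.01, 0.1, 1.0, 10.0):
    print(f"t = {t:5.2f} d: Mc = {completeness_magnitude(t, 6.3):.2f}")
```

Events below Mc(t) must be discounted when fitting or testing the model, otherwise the early aftershock deficit masquerades as a physical signal.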


Entropy, 2019, Vol 21 (2), pp. 173
Author(s): Eugenio Lippiello, Cataldo Godano, Lucilla de Arcangelis

An increase of seismic activity is often observed before large earthquakes. Events responsible for this increase are usually named foreshocks, and their occurrence probably represents the most reliable precursory pattern. Many statistical features of foreshocks can be interpreted in terms of the standard mainshock-to-aftershock triggering process and are recovered in the Epidemic Type Aftershock Sequence (ETAS) model. Here we present a statistical study of instrumental seismic catalogs from four different geographic regions. We focus on some common features of foreshocks in the four catalogs which cannot be reproduced by the ETAS model. In particular, we find in instrumental catalogs a significantly larger number of foreshocks than predicted by the ETAS model. We show that this foreshock excess cannot be attributed to catalog incompleteness. We therefore propose a generalized formulation of the ETAS model, the ETAFS model, which explicitly includes foreshock occurrence. Statistical features of aftershocks and foreshocks in the ETAFS model are in very good agreement with instrumental results.
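
The foreshock-excess test implied above rests on a simple count: tally events preceding each mainshock in the observed catalog, then compare with the same count averaged over ETAS-simulated catalogs. A deliberately simplified, time-only sketch (real analyses also impose a spatial window):

```python
import numpy as np

def count_foreshocks(times_days, mags, main_mag_min=5.0, window_days=30.0):
    """Count events that precede each 'mainshock' within `window_days`.
    A mainshock is here an event with M >= main_mag_min that is not
    preceded by a larger or equal event inside its own window."""
    times_days = np.asarray(times_days)
    mags = np.asarray(mags)
    n_foreshocks = 0
    for i in np.where(mags >= main_mag_min)[0]:
        before = (times_days >= times_days[i] - window_days) & \
                 (times_days < times_days[i])
        if np.any(mags[before] >= mags[i]):
            continue  # a larger earlier event: treat event i as dependent
        n_foreshocks += int(np.sum(before))
    return n_foreshocks
```

Applying the same counter to many synthetic ETAS catalogs yields the model’s predicted foreshock number; the claim here is that the observed count sits far above that prediction.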


2018, Vol 66 (6), pp. 1359-1373
Author(s): Nader Davoudi, Hamid Reza Tavakoli, Mehdi Zare, Abdollah Jalilian

2019, Vol 109 (6), pp. 2356-2366
Author(s): Ganyu Teng, Jack W. Baker

Abstract This study is an evaluation of the suitability of several declustering methods for induced seismicity and of their impacts on hazard analysis of the Oklahoma–Kansas region. We considered the methods proposed by Gardner and Knopoff (1974), Reasenberg (1985), and Zaliapin and Ben‐Zion (2013), and the stochastic declustering method (Zhuang et al., 2002) based on the epidemic‐type aftershock sequence (ETAS) model (Ogata, 1988, 1998). The results show that the choice of declustering method has a significant impact on the declustered catalog and the resulting hazard analysis of the Oklahoma–Kansas region. The Gardner and Knopoff method, which is currently implemented in the U.S. Geological Survey one‐year seismic‐hazard forecast for the central and eastern United States, has unexpected features when used for this induced seismicity catalog. It removes 80% of earthquakes and fails to reflect the changes in background rates that have occurred in the past few years. This results in a slight increase in the hazard level from 2016 to 2017, despite a decrease in seismic activity in 2017. The Gardner and Knopoff method also frequently identifies aftershocks with much stronger shaking intensities than their associated mainshocks. These features are mostly due to the window method implemented in the Gardner and Knopoff approach. Compared with the Gardner and Knopoff method, the other three methods are able to capture the changing hazard level in the region. However, the ETAS model potentially overestimates the foreshock effect and generates negligible probabilities of large earthquakes being mainshocks. The Reasenberg and the Zaliapin and Ben‐Zion methods perform similarly in catalog declustering and hazard analysis, and compared with the ETAS method they are easier to implement and faster at generating the declustered catalog. The results from this study suggest that both the Reasenberg and the Zaliapin and Ben‐Zion declustering methods are suitable for declustering and hazard analysis for induced seismicity in the Oklahoma–Kansas region.
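
The window method blamed above for the Gardner and Knopoff artifacts is easy to state: every event of magnitude M defines a space–time window, and any smaller event inside it is discarded as an aftershock. A sketch using commonly quoted fits to the original Gardner and Knopoff (1974) window tables:

```python
import numpy as np

def gardner_knopoff_windows(mag):
    """Space (km) and time (days) window sizes for an event of magnitude
    `mag`, from widely used fits to the Gardner & Knopoff (1974) tables."""
    dist_km = 10 ** (0.1238 * mag + 0.983)
    time_days = np.where(
        mag >= 6.5,
        10 ** (0.032 * mag + 2.7389),
        10 ** (0.5409 * mag - 0.547),
    )
    return dist_km, float(time_days)

# An M 5.8 event removes everything smaller within ~50 km and ~390 days:
print(gardner_knopoff_windows(5.8))
```

Because the windows depend only on magnitude, an aftershock can produce stronger shaking at a site than its more distant mainshock yet still be removed, which is precisely the artifact noted in the abstract.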

