Epidemic Type Aftershock Sequence
Recently Published Documents

TOTAL DOCUMENTS: 61 (five years: 29)
H-INDEX: 10 (five years: 3)

2021
Author(s): Christian Grimm, Sebastian Hainzl, Martin Käser, Helmut Küchenhoff

Abstract Strong earthquakes cause aftershock sequences that are clustered in time according to a power-law decay and in space along the extended rupture, producing a typically elongated pattern of aftershock locations. A widely used approach to modeling seismic clustering is the Epidemic Type Aftershock Sequence (ETAS) model, which exhibits three major biases. First, the conventional ETAS approach assumes isotropic spatial triggering, which conflicts with observations and geophysical arguments for strong earthquakes. Second, the spatial kernel has unlimited extent, allowing smaller events to exert a disproportionate triggering potential over an unrealistically large area. Third, the ETAS model assumes complete event records and neglects the inevitable short-term aftershock incompleteness caused by overlapping coda waves. These three effects can substantially bias parameter estimation and, in particular, lead to underestimated cluster sizes. In this article, we combine the approach of Grimm (2021), which introduced a generalized anisotropic and locally restricted spatial kernel, with the ETAS-Incomplete (ETASI) time model of Hainzl (2021) to define an ETASI space-time model with a flexible spatial kernel that resolves the above-mentioned shortcomings. We apply different model versions to a triad of forecasting experiments for the 2019 Ridgecrest sequence and evaluate the prediction quality with respect to cluster size, largest aftershock magnitude, and spatial distribution. The new model enables more realistic simulations of ongoing aftershock activity, e.g., better predictions of the probability and location of a strong, damaging aftershock, which might be beneficial for short-term risk assessment and disaster response.
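For readers unfamiliar with the conventional model criticized above, the triggering rate of a single parent event under the standard isotropic ETAS kernel can be sketched as follows. All parameter values here are illustrative assumptions, not values fitted in this study:

```python
import math

# Sketch of the conventional ETAS triggering-rate density of one parent event
# (all parameter values are illustrative assumptions, not fitted estimates).
def etas_trigger_rate(t, r, m, K=0.05, alpha=1.0, c=0.01, p=1.1,
                      d=0.5, q=1.5, m_ref=3.0):
    """Rate density at time t (days) and epicentral distance r (km)
    triggered by a parent of magnitude m, using the standard isotropic
    power-law space-time kernel."""
    productivity = K * math.exp(alpha * (m - m_ref))                 # aftershock productivity
    omori = (p - 1) * c**(p - 1) / (t + c)**p                        # normalized Omori-Utsu decay
    spatial = (q - 1) / (math.pi * d**2) * (1 + r**2 / d**2)**(-q)   # isotropic spatial kernel
    return productivity * omori * spatial

# The isotropic kernel assigns the same rate to every azimuth at distance r,
# and decays but never vanishes with distance; these are exactly the two
# properties the anisotropic, locally restricted kernel replaces.
rate_near = etas_trigger_rate(t=1.0, r=1.0, m=6.0)
rate_far = etas_trigger_rate(t=1.0, r=50.0, m=6.0)
```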


Author(s): Edward H. Field, Kevin R. Milner, Nicolas Luco

ABSTRACT We use the Third Uniform California Earthquake Rupture Forecast (UCERF3) epidemic-type aftershock sequence (ETAS) model (UCERF3-ETAS) to evaluate the effects of declustering and Poisson assumptions on seismic hazard estimates. Although declustering is necessary to infer the long-term spatial distribution of earthquake rates, the question is whether it is also necessary to honor the Poisson assumption in classic probabilistic seismic hazard assessment. We address this question with 500,000 yr, M ≥ 2.5 synthetic catalogs, for which UCERF3-ETAS exhibits realistic spatiotemporal clustering effects (e.g., aftershocks). We find that Gardner and Knopoff (1974) declustering, used in the U.S. Geological Survey seismic hazard models, lowers 2%-in-50 yr and risk-targeted ground-motion hazard metrics by about 4% on average (compared with the full time-dependent [TD] model), with the reduction reaching 5% for 40%-in-50 yr ground motions. Keeping all earthquakes and treating them as a Poisson process increases these same hazard metrics by about 3%–12% on average, owing to the removal of relatively quiet time periods in the full TD model. In the interest of model simplification, bias minimization, and consideration of the probabilities of multiple exceedances, we agree with others (Marzocchi and Taroni, 2014) that we are better off keeping aftershocks and treating them as a Poisson process rather than removing them from hazard consideration via declustering. Honoring the true time dependence, however, will likely be important for other hazard and risk metrics, and this study further exemplifies how this can now be evaluated more extensively.
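The hazard levels quoted above ("2% in 50 yr") rest on the standard Poisson relation between annual exceedance rate and exceedance probability over an exposure time; a minimal sketch of that arithmetic:

```python
import math

# Under the Poisson assumption of classic PSHA, the probability of at least
# one exceedance of a ground-motion level during an exposure time T is
#   P = 1 - exp(-lambda * T),
# so the "2% in 50 yr" hazard level corresponds to an annual exceedance
# rate of lambda = -ln(1 - 0.02) / 50, i.e. a return period near 2475 years.
def poisson_exceedance_prob(annual_rate, years):
    return 1.0 - math.exp(-annual_rate * years)

def annual_rate_for(prob, years):
    return -math.log(1.0 - prob) / years

rate_2in50 = annual_rate_for(0.02, 50.0)   # annual exceedance rate
return_period = 1.0 / rate_2in50           # corresponding return period in years
```

Declustering lowers the effective annual rate and therefore the ground motion at a fixed probability level, which is the ~4% reduction the study quantifies.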


Author(s): Sebastian Hainzl

ABSTRACT The epidemic-type aftershock sequence (ETAS) model is a powerful statistical model to explain and forecast the spatiotemporal evolution of seismicity. However, its parameter estimation can be strongly biased by catalog deficiencies, particularly short-term incompleteness related to missing events during phases of high seismic activity. Recent studies have shown that these short-term fluctuations of the completeness magnitude can be explained by the blindness of detection algorithms after earthquakes, which prevents the detection of subsequent smaller-magnitude events. Based on this assumption, I derive direct relations between the true and detectable seismicity rates and magnitude distributions. These relations involve only one additional parameter, the so-called blind time Tb, and lead to a closed-form maximum-likelihood formulation that estimates the ETAS parameters while directly accounting for varying completeness. Tests using synthetic simulations show that the true parameters can be recovered from incomplete catalogs. Finally, I apply the new model to California's most prominent mainshock–aftershock sequences of recent decades. The results show that the model yields superior fits, with Tb decreasing over time, indicating improved detection algorithms. The estimated parameters differ significantly from those of the standard approach, indicating higher b-values and larger triggering potentials than previously thought.
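One simple way to picture the role of the blind time Tb is a non-paralyzable dead-time detector, a textbook analogy and an assumption of this sketch, not the closed-form relation derived in the abstract above:

```python
# Illustrative sketch only: a non-paralyzable detector with dead ("blind")
# time Tb (days) records a true event rate r at the reduced rate
#   r_det = r / (1 + Tb * r),
# which saturates at 1/Tb during intense aftershock bursts. This mimics the
# short-term incompleteness the ETASI model corrects for; the actual
# maximum-likelihood formulation in the paper is more involved.
def detected_rate(true_rate, blind_time):
    return true_rate / (1.0 + blind_time * true_rate)

quiet = detected_rate(10.0, 0.001)       # quiet period: catalog nearly complete
burst = detected_rate(10000.0, 0.001)    # aftershock burst: strongly saturated
```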


Author(s): Christian Grimm, Martin Käser, Sebastian Hainzl, Marco Pagani, Helmut Küchenhoff

ABSTRACT Earthquake sequences add a substantial hazard beyond the solely declustered perspective of common probabilistic seismic hazard analysis. A particularly strong driver of both social and economic losses are so-called earthquake doublets (more generally multiplets), that is, sequences of two (or more) comparatively large events in spatial and temporal proximity. Without differentiating between foreshocks and aftershocks, we hypothesize three main influencing factors of doublet occurrence: (1) the number of direct and secondary aftershocks triggered by an earthquake; (2) the occurrence of independent clusters and seismic background events in the same time–space window; and (3) the magnitude size distribution of triggered events (in contrast to independent events). We tested synthetic catalogs simulated by a standard epidemic-type aftershock sequence (ETAS) model for both Japan and southern California. Our findings show that the common ETAS approach significantly underestimates doublet frequencies compared with observations in historical catalogs. In addition, the simulated catalogs show smoother spatiotemporal clustering than their observed counterparts. Focusing on the impact on direct aftershock productivity and total cluster sizes, we propose two modifications of the ETAS spatial kernel to improve doublet rate predictions: (a) a restriction of the spatial function to a maximum distance of 2.5 estimated rupture lengths and (b) an anisotropic function with contour lines constructed by a box with two semicircular ends around the estimated rupture segment. These modifications shift the triggering potential from weaker to stronger events and consequently improve doublet rate predictions for larger events, although they still underestimate historical doublet occurrence rates. Moreover, the restricted spatial functions better fulfill the empirical Båth's law for the maximum aftershock magnitude. The tested clustering properties of strong events are not sufficiently captured by typically used global catalog-scale measures, such as log-likelihood values, which would favor the conventional, unrestricted models.
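The kernel geometry described above can be sketched concretely: contour lines of constant distance to a rupture segment form exactly a box with two semicircular ends, and the restriction cuts triggering off beyond 2.5 rupture lengths. Coordinates and the 10 km rupture below are hypothetical examples:

```python
import math

# Sketch of the anisotropic, restricted kernel geometry. Level sets of the
# point-to-segment distance are "stadium" shapes: a box with two
# semicircular ends around the rupture segment.
def dist_to_segment(px, py, ax, ay, bx, by):
    """Shortest distance from point (px, py) to the segment (a, b), in km."""
    abx, aby = bx - ax, by - ay
    apx, apy = px - ax, py - ay
    denom = abx * abx + aby * aby
    t = 0.0 if denom == 0 else max(0.0, min(1.0, (apx * abx + apy * aby) / denom))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

def in_trigger_zone(px, py, ax, ay, bx, by, rupture_length, max_lengths=2.5):
    """Restricted kernel: triggering weight only within 2.5 rupture lengths."""
    return dist_to_segment(px, py, ax, ay, bx, by) <= max_lengths * rupture_length

# Hypothetical 10 km rupture along the x-axis: points farther than 25 km
# from the segment receive no triggering weight under the restricted kernel.
inside = in_trigger_zone(5.0, 20.0, 0.0, 0.0, 10.0, 0.0, rupture_length=10.0)
outside = in_trigger_zone(5.0, 30.0, 0.0, 0.0, 10.0, 0.0, rupture_length=10.0)
```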


Author(s): Simone Mancini, Maximilian Jonas Werner, Margarita Segou, Brian Baptie

Abstract The development of robust forecasts of human-induced seismicity is highly desirable to mitigate the effects of disturbing or damaging earthquakes. We assess the performance of a well-established statistical model, the epidemic-type aftershock sequence (ETAS) model, with a catalog of ∼93,000 microearthquakes observed at the Preston New Road (PNR, United Kingdom) unconventional shale gas site during and after hydraulic fracturing of the PNR-1z and PNR-2 wells. Because ETAS was developed for tectonic seismicity with slower loading rates, we also generate three modified ETAS models with background rates proportional to injection rates to account for seismicity driven by pressurized fluids. We find that (1) the standard ETAS model captures the low seismicity between and after injections but is outperformed by the modified models during high-seismicity periods, and (2) the injection-rate-driven ETAS model improves substantially when the forecast is calibrated on sleeve-specific pumping data. Finally, we forecast the PNR-2 seismicity out of sample using the average response to injection observed at PNR-1z, achieving better predictive skill than the in-sample standard ETAS model. The insights from this study contribute toward producing informative seismicity forecasts for real-time decision making and risk mitigation during unconventional shale gas development.
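The core modification tested above, a background rate tied to the pumping schedule, can be sketched in a few lines; the proportionality constant and injection rates here are hypothetical placeholders, not calibrated PNR values:

```python
# Minimal sketch (hypothetical numbers) of an injection-rate-driven ETAS
# background rate: the constant background rate mu is replaced by
#   mu(t) = c_f * V_dot(t),
# so forecast seismicity rises and falls with the injection schedule.
def background_rate(injection_rate_m3_per_day, c_f=0.02):
    """Induced background seismicity (events/day); c_f is a site-specific
    response factor (assumed value, events per m^3)."""
    return c_f * injection_rate_m3_per_day

pumping = background_rate(500.0)   # during a hydraulic-fracturing stage
shut_in = background_rate(0.0)     # between injection phases
```

Calibrating c_f on sleeve-specific pumping data is what the abstract reports as the largest improvement.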


Author(s): Gordon J. Ross

ABSTRACT The epidemic-type aftershock sequence (ETAS) model is widely used in seismic forecasting. However, most studies of ETAS use point estimates for the model parameters, ignoring the inherent uncertainty that arises from estimating them from historical earthquake catalogs and resulting in misleadingly optimistic forecasts. In contrast, Bayesian statistics allows parameter uncertainty to be explicitly represented and propagated into the forecast distribution. Despite its growing popularity in seismology, the application of Bayesian statistics to the ETAS model has been limited by the complex nature of the resulting posterior distribution, which makes it infeasible to apply to catalogs containing more than a few hundred earthquakes. To overcome this, we develop a new framework for estimating the ETAS model in a fully Bayesian manner that can be efficiently scaled up to large catalogs containing thousands of earthquakes. We also provide easy-to-use software that implements our method.
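The contrast between a point estimate and a posterior can be illustrated with a toy example: sampling a single Omori-law exponent with a random-walk Metropolis sampler. The synthetic event times, fixed productivity, and flat prior on p > 1 are all assumptions of this sketch; the actual framework samples the full ETAS posterior:

```python
import math
import random

# Toy Bayesian illustration: sample the posterior of one ETAS-style
# parameter (the Omori exponent p) instead of reporting a single
# maximum-likelihood point estimate. Data and prior are synthetic.
random.seed(1)
times = [0.05, 0.1, 0.3, 0.7, 1.5, 3.0, 8.0]  # days after a mainshock (synthetic)
c, T = 0.01, 10.0                             # fixed Omori offset and window length

def log_likelihood(p, k=5.0):
    """Inhomogeneous-Poisson log-likelihood for the rate k*(t+c)**-p on [0, T]."""
    if p <= 1.0:                              # flat prior restricted to p > 1
        return -math.inf
    integral = k * (c**(1 - p) - (T + c)**(1 - p)) / (p - 1)
    return sum(math.log(k) - p * math.log(t + c) for t in times) - integral

p, samples = 1.2, []
for _ in range(5000):
    proposal = p + random.gauss(0.0, 0.05)    # random-walk Metropolis step
    if math.log(random.random()) < log_likelihood(proposal) - log_likelihood(p):
        p = proposal
    samples.append(p)

posterior_mean = sum(samples) / len(samples)  # summarizes the full posterior
```

Feeding the whole sample set, rather than only the mean, into forecast simulations is what propagates parameter uncertainty into the forecast distribution.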


Author(s): Hideo Aochi, Julie Maury, Thomas Le Guenan

Abstract The seismicity evolution in Oklahoma between 2010 and 2018 is analyzed systematically using an epidemic-type aftershock sequence model. To retrieve the nonstationary seismicity component, we systematically use a moving window of 200 events, each within a radius of 20 km at grid points spaced every 0.2°. Fifty-three areas in total are selected for our analysis. The evolution of the background seismicity rate μ is successfully retrieved toward its peak at the end of 2014 and during 2015, whereas the triggering parameter K is stable, slightly decreasing when the seismicity is activated. Consequently, the ratio of μ to the observed seismicity rate is not stationary. The acceleration of μ can be fit with an exponential equation relating μ to the normalized injected volume. After the peak, the attenuation phase can be fit with an exponential equation with time since peak as the independent variable. As a result, the evolution of induced seismicity can be followed statistically after it begins. The turning points, such as activation of the seismicity and timing of the peak, are difficult to identify solely from this statistical analysis and require a subsequent mechanical interpretation.
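The two empirical fits described above can be sketched as simple exponential relations; the coefficients below are placeholders for illustration, not the values estimated for Oklahoma:

```python
import math

# Sketch of the two fits described in the abstract (placeholder coefficients).
# Activation phase: background rate mu grows exponentially with the
# normalized injected volume. Attenuation phase: mu decays exponentially
# with time since the peak.
def mu_activation(v_norm, a=0.1, b=3.0):
    """Background rate (events/day) vs. normalized injected volume v_norm."""
    return a * math.exp(b * v_norm)

def mu_decay(days_since_peak, mu_peak=2.0, tau=400.0):
    """Background rate during the attenuation phase after the peak."""
    return mu_peak * math.exp(-days_since_peak / tau)

rising = mu_activation(1.0)    # near the late-2014/2015 peak
fading = mu_decay(800.0)       # two relaxation times after the peak
```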


2021
Author(s): Ester Manganiello, Marcus Herrmann, Warner Marzocchi

The ability to forecast large earthquakes on short time scales is strongly limited by our understanding of the earthquake nucleation process. Foreshocks represent promising seismic signals that may improve earthquake forecasting as they precede many large earthquakes. However, foreshocks can currently only be identified as such after a large earthquake occurred. This inability is because it remains unclear whether foreshocks represent a different physical process than general seismicity (i.e., mainshocks and aftershocks). Several studies compared foreshock occurrence in real and synthetic catalogs, as simulated with a well-established earthquake triggering/forecasting model called Epidemic-Type Aftershock Sequence (ETAS) that does not discriminate between foreshocks, mainshocks, and aftershocks. Some of these studies show that the spatial distribution of foreshocks encodes information about the subsequent mainshock magnitude and that foreshock activity is significantly higher than predicted by the ETAS model. These findings attribute a unique underlying physical process to foreshocks, making them potentially useful for forecasting large earthquakes. We reinvestigate these scientific questions using high-quality earthquake catalogs and carefully study the influence of subjective parameter choices and catalog artifacts on the results. For instance, we use data from different regions, account for the short-term catalog incompleteness and its spatial variability, and explore different criteria for sequence selection and foreshock definition.


2021
Author(s): Shubham Sharma, Shyam Nandan, Sebastian Hainzl

Currently, the Epidemic Type Aftershock Sequence (ETAS) model is the state of the art for forecasting aftershocks. However, its under-performance in forecasting the spatial distribution of aftershocks following large earthquakes motivates alternative approaches to modelling the spatial ETAS kernel. Here we develop a hybrid physics- and statistics-based forecasting model. The model uses stress changes, calculated from inverted slip models of large earthquakes, as the basis of the spatial kernel in the ETAS model in order to obtain more reliable estimates of the spatiotemporal distribution of aftershocks. We evaluate six alternative stress-based ETAS kernels and rank their performance against the base ETAS model. In all cases, an expectation-maximization (EM) algorithm is used to estimate the ETAS parameters. The approach has been tested on synthetic data to check that known parameters can be inverted successfully. We apply the proposed method to forecast aftershocks of the mainshocks in the SRCMOD database, which includes 192 mainshocks with magnitudes between 4.1 and 9.2 that occurred from 1906 to 2020. The probabilistic earthquake forecasts generated by the hybrid model have been tested using established CSEP test metrics and procedures. We show that the additional stress information used to estimate the spatial probability distribution leads to more reliable spatiotemporal ETAS forecasts of aftershocks compared with the base ETAS model.
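The E-step of the expectation-maximization fit mentioned above can be sketched compactly: for each event, the probability that it was triggered by a given earlier event (versus being background) is that parent's share of the total rate at the event. The rate values below are toy numbers; the hybrid model would compute them from its stress-based spatial kernel:

```python
# Sketch of the EM E-step for ETAS: branching probabilities for one event j.
# Each candidate parent i contributes a rate at event j; the probability
# that i triggered j is its fraction of the total rate (background plus
# all triggering contributions). Toy rate values, not model output.
def branching_probabilities(background_rate, trigger_rates):
    """trigger_rates: rate contribution of each candidate parent at event j."""
    total = background_rate + sum(trigger_rates)
    p_background = background_rate / total
    p_parents = [r / total for r in trigger_rates]
    return p_background, p_parents

p_bg, p_parents = branching_probabilities(0.1, [0.5, 0.3, 0.1])
```

The M-step then re-estimates the ETAS parameters using these probabilistic event-parent assignments, and the two steps iterate to convergence.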


2021
Author(s): Christian Grimm, Martin Käser, Sebastian Hainzl, Marco Pagani, Helmut Küchenhoff

Earthquake sequences add significant hazard beyond the solely declustered perspective of common probabilistic seismic hazard analysis (PSHA). A particularly strong driver of both social and economic losses are so-called earthquake doublets (more generally multiplets), i.e. sequences of two (or more) comparatively large events in spatial and temporal proximity. Without differentiating between foreshocks and aftershocks, we hypothesize three main drivers of doublet occurrence: (1) the number of direct aftershocks triggered by an earthquake; (2) the underlying, independent background seismicity in the same time-space window; and (3) the magnitude size distribution of triggered events (in contrast to independent events). We tested synthetic catalogs simulated by a common, isotropic epidemic type aftershock sequence (ETAS) model for both Japan and Southern California. Our findings show that the standard ETAS approach dramatically underestimates doublet frequencies compared to observations in historical catalogs. Among other effects, the simulations partially smooth out pronounced peaks of temporal and spatial event clustering. Focusing on the impact on direct aftershock productivity, we propose two modifications of the ETAS spatial kernel to improve doublet rate predictions: (a) a restriction of the spatial function to a maximum distance of 2.5 estimated rupture lengths; and (b) an anisotropic function with contour lines constructed by a box with two semicircular ends around the estimated rupture line. The restriction of the spatial extent shifts triggering potential from weaker to stronger events and consequently improves doublet rate predictions for larger events. However, this improvement comes at the cost of a weaker overall model fit according to AIC. The anisotropic models improve the overall model fit but have only a minor impact on doublet occurrence rate predictions.

