earthquake predictability
Recently Published Documents


TOTAL DOCUMENTS

41
(FIVE YEARS 21)

H-INDEX

10
(FIVE YEARS 1)

MAUSAM ◽  
2021 ◽  
Vol 50 (1) ◽  
pp. 99-104
Author(s):  
H. N. SRIVASTAVA ◽  
K. C. SINHA RAY

Based on about 75,000 earthquakes in the California region detected through the Parkfield network during the years 1969-1987, the occurrence of chaos was examined by two different approaches, namely, the strange attractor dimension and the Lyapunov exponent. The strange attractor dimension was found to be 6.3 in this region, suggesting at least 7 parameters for earthquake predictability. A small positive Lyapunov exponent of 0.045 provided further evidence for deterministic chaos in the region, which showed strong dependence on the initial conditions. Implications of chaotic dynamics for characteristic Parkfield earthquakes have been discussed. The strange attractor dimension in the region could be representative of the transform type of plate boundary and is lower than that reported for the continent-collision type of plate boundary near the Hindukush northwest Himalayan region.
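As a rough illustration of the first approach, the correlation (strange attractor) dimension can be estimated with a Grassberger-Procaccia-type calculation on a delay-embedded series; the sketch below uses a synthetic inter-event-time series and illustrative parameters, not the authors' Parkfield data or code.

```python
# Grassberger-Procaccia correlation-dimension sketch on a delay-embedded
# scalar series (illustrative inputs and parameters, not the Parkfield data).
import numpy as np
from scipy.spatial.distance import pdist

def delay_embed(x, dim, tau):
    """Time-delay embedding of a 1-D series into `dim` dimensions with lag `tau`."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def correlation_dimension(x, dim, tau, radii):
    """Slope of log C(r) vs log r, where C(r) is the fraction of embedded point
    pairs closer than r (the correlation integral)."""
    emb = delay_embed(np.asarray(x, dtype=float), dim, tau)
    dists = pdist(emb)                       # all pairwise distances
    c = np.array([(dists < r).mean() for r in radii])
    mask = c > 0                             # avoid log(0) at the smallest radii
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(c[mask]), 1)
    return slope

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    series = rng.exponential(scale=1.0, size=2000)   # stand-in for inter-event times
    radii = np.logspace(-1, 1, 15)
    print(correlation_dimension(series, dim=7, tau=1, radii=radii))
```

In practice the slope is re-estimated while the embedding dimension is increased; saturation of the estimate (at about 6.3 according to the abstract) indicates the minimum number of governing variables, hence the inference of at least seven parameters.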


MAUSAM ◽  
2021 ◽  
Vol 58 (4) ◽  
pp. 543-550
Author(s):  
H. N. SRIVASTAVA ◽  
S. N. BHATTACHARYA ◽  
D. T. RAO ◽  
S. SRIVASTAVA

Valsad district in south Gujarat, near the western coast of peninsular India, has experienced earthquake swarms since early February 1986. Seismic monitoring through a network of micro-earthquake seismographs showed well-concentrated seismic activity over an area of 7 × 10 km² with focal depths extending from 1 to 15 km. A total of 21,830 earthquakes were recorded from March 1986 to June 1988. The daily frequency of earthquakes for this period was used to examine deterministic chaos through evaluation of the strange attractor dimension and the Lyapunov exponent. The low dimension of 2.1 for the strange attractor and the positive value of the largest Lyapunov exponent suggest chaotic dynamics in the Valsad earthquake swarms, with at least 3 parameters for earthquake predictability. The results indicate differences in the characteristics of deterministic chaos in intraplate and interplate regions of India.
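For the second diagnostic, the largest Lyapunov exponent, a Rosenstein-style divergence estimate applied to a daily-frequency series can be sketched as follows; the embedding parameters, helper names and synthetic Poisson input are illustrative assumptions rather than the authors' procedure.

```python
# Rosenstein-style estimate of the largest Lyapunov exponent from a daily
# earthquake-frequency series (illustrative parameters and synthetic input).
import numpy as np
from scipy.spatial.distance import cdist

def delay_embed(x, dim, tau):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def largest_lyapunov(x, dim=3, tau=1, min_sep=10, horizon=30):
    """Slope of the mean log-divergence of initially close trajectory pairs;
    a positive value indicates exponential divergence, i.e. chaos."""
    emb = delay_embed(np.asarray(x, dtype=float), dim, tau)
    n = len(emb)
    d = cdist(emb, emb)
    for i in range(n):                       # exclude temporally close neighbours
        d[i, max(0, i - min_sep): i + min_sep + 1] = np.inf
    nn = d.argmin(axis=1)                    # nearest neighbour of each point
    divergence = []
    for k in range(1, horizon):
        sep = np.array([np.linalg.norm(emb[i + k] - emb[nn[i] + k])
                        for i in range(n - horizon) if nn[i] < n - horizon])
        sep = sep[sep > 0]
        divergence.append(np.log(sep).mean())
    slope, _ = np.polyfit(np.arange(1, horizon), divergence, 1)
    return slope                             # per time step (here: per day)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    daily_counts = rng.poisson(lam=25, size=800)     # stand-in for daily counts
    print(largest_lyapunov(daily_counts))
```

A clearly positive slope of the average log-divergence curve is what supports the claim of sensitive dependence on initial conditions.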


2021 ◽  
Vol 9 ◽  
Author(s):  
Giuseppe Falcone ◽  
Ilaria Spassiani ◽  
Yosef Ashkenazy ◽  
Avi Shapira ◽  
Rami Hofstetter ◽  
...  

Operational Earthquake Forecasting (OEF) aims to deliver timely and reliable forecasts that may help to mitigate seismic risk during earthquake sequences. In this paper, we build the first OEF system for the State of Israel and evaluate its reliability. This first version of the OEF system is composed of one forecasting model, based on a stochastic clustering Epidemic Type Earthquake Sequence (ETES) model. For every day of the forecasting time period, January 1, 2016 - November 15, 2020, the OEF-Israel system produces a weekly forecast for target earthquakes with local magnitudes greater than 4.0 and 5.5 in the entire State of Israel. Specifically, it provides space-time-dependent seismic maps of the weekly probabilities, obtained using a fixed set of model parameters estimated with the maximum likelihood technique over a learning period of about 32 years (1983-2015). Following the guidance proposed by the Collaboratory for the Study of Earthquake Predictability (CSEP), we also perform the N- and S-statistical tests to verify the reliability of the forecasts. Results show that the OEF system forecasts a number of events comparable to the observed one and captures the spatial distribution of the real catalog quite well, with the exception of two target events that occurred in low-seismicity regions.
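For readers unfamiliar with the consistency tests mentioned above, a minimal sketch of a CSEP-style N-test under the usual Poisson assumption is shown below; the example numbers are placeholders, not OEF-Israel results.

```python
# CSEP-style N-test (number test) under the usual Poisson assumption: the
# observed event count is compared with the total forecast rate summed over
# all space-magnitude bins. Numbers are made up for illustration.
from scipy.stats import poisson

def n_test(n_obs: int, n_forecast: float, alpha: float = 0.05):
    """Quantile scores delta1 = P(N >= n_obs) and delta2 = P(N <= n_obs).
    The forecast is consistent with the data if both exceed alpha/2."""
    delta1 = 1.0 - poisson.cdf(n_obs - 1, n_forecast)
    delta2 = poisson.cdf(n_obs, n_forecast)
    passed = (delta1 > alpha / 2.0) and (delta2 > alpha / 2.0)
    return delta1, delta2, passed

# e.g. a weekly forecast of 3.2 expected target events against 5 observed events
print(n_test(n_obs=5, n_forecast=3.2))
```

The S-test follows the same logic but compares the observed epicentre locations with the forecast's normalized spatial rate distribution.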


2021 ◽  
Author(s):  
Yavor Kamer ◽  
Shyam Nandan ◽  
Stefan Hiemer ◽  
Guy Ouillon ◽  
Didier Sornette

Nature is scary. You can be sitting at your home and the next thing you know you are trapped under the rubble of your own house or sucked into a sinkhole. For millions of years we have been the figurines of this precarious scene, and we have found our own ways of dealing with the anxiety. It is natural that we create and consume prophecies, conspiracies and false predictions. Information technologies amplify not only our rational but also our irrational deeds. Social media algorithms, tuned to maximize attention, make sure that misinformation spreads much faster than its counterpart.

What can we do to minimize the adverse effects of misinformation, especially in the case of earthquakes? One option could be to designate one authoritative institute, set up a big surveillance network and cancel or ban every source of misinformation before it spreads. This might have worked a few centuries ago, but not in this day and age. Instead we propose a more inclusive option: embrace all voices and channel them into an actual, prospective earthquake prediction platform (Kamer et al. 2020). The platform is powered by a global state-of-the-art statistical earthquake forecasting model that provides near real-time earthquake occurrence probabilities anywhere on the globe (Nandan et al. 2020). Using this model as a benchmark in statistical metrics specifically tailored to the prediction problem, we are able to distill all these voices and quantify the essence of predictive skill. This approach has several advantages. Rather than trying to silence or denounce, we listen to and evaluate each claim and report the predictive skill of the source. We engage the public and allow them to take part in a scientific experiment that will increase their risk awareness. We effectively demonstrate that anybody with an internet-connected device can make an earthquake prediction, but that it is not so trivial to achieve skillful predictive performance.

Here we shall present initial results from our global earthquake prediction experiment that we have been conducting on www.richterx.com for the past two years, yielding more than 10,000 predictions. These results will hopefully demystify the act of predicting an earthquake in the eyes of the public, so that the next time someone forwards a prediction message it arouses more scrutiny than panic or distaste.

Nandan, S., Kamer, Y., Ouillon, G., Hiemer, S., Sornette, D. (2020). Global models for short-term earthquake forecasting and predictive skill assessment. European Physical Journal ST. doi: 10.1140/epjst/e2020-000259-3
Kamer, Y., Nandan, S., Ouillon, G., Hiemer, S., Sornette, D. (2020). Democratizing earthquake predictability research: introducing the RichterX platform. European Physical Journal ST. doi: 10.1140/epjst/e2020-000260-2


2021 ◽  
Author(s):  
Jose A. Bayona ◽  
William Savran ◽  
Maximilian Werner ◽  
David A. Rhoades

Developing testable seismicity models is essential for robust seismic hazard assessments and for quantifying the predictive skills of posited hypotheses about seismogenesis. On this premise, the Regional Earthquake Likelihood Models (RELM) group designed a joint forecasting experiment, with associated models, data and tests, to evaluate earthquake predictability in California over a five-year period. Participating RELM forecast models were based on a range of geophysical datasets, including earthquake catalogs, interseismic strain rates, and geologic fault slip rates. After five years of prospective evaluation, the RELM experiment found that the smoothed seismicity (HKJ) model by Helmstetter et al. (2007) was the most informative. The diversity of competing forecast hypotheses in RELM was suitable for combining multiple models that could provide more informative earthquake forecasts than HKJ. Thus, Rhoades et al. (2014) created multiplicative hybrid models that involve the HKJ model as a baseline and one or more conjugate models. Specifically, the authors fitted two parameters for each conjugate model and an overall normalizing constant to optimize each hybrid model. Then, information gain scores per earthquake were computed using a corrected Akaike Information Criterion that penalized for the number of fitted parameters. According to retrospective analyses, some hybrid models showed significant information gains over the HKJ forecast, despite the penalty. Here, we assess in a prospective setting the predictive skills of 16 hybrid and 6 original RELM forecasts, using a suite of tests of the Collaboratory for the Study of Earthquake Predictability (CSEP). The evaluation dataset contains 40 M≥4.95 events recorded within the California CSEP-testing region from 1 January 2011 to 31 December 2020, including the 2016 Mw 5.6, 5.6, and 5.5 Hawthorne earthquake swarm, and the Mw 6.4 foreshock and Mw 7.1 mainshock of the 2019 Ridgecrest sequence. We evaluate the consistency between the observed and the expected number, spatial, likelihood and magnitude distributions of earthquakes, and compare the performance of each forecast to that of HKJ. Our prospective test results show that none of the hybrid models is significantly more informative than the HKJ baseline forecast. These results are mainly due to the occurrence of the 2016 Hawthorne earthquake cluster and four events from the 2019 Ridgecrest sequence in two forecast bins. These clusters of seismicity are exceptionally unlikely in all models and are insufficiently captured by the Poisson distribution that the likelihood functions of the tests assume. Therefore, we are currently examining alternative likelihood functions that reduce the sensitivity of the evaluations to clustering and that could be used to better understand whether the discrepancies between prospective and retrospective test results for multiplicative hybrid forecasts are due to limitations of the tests or of the methods used to create the hybrid models.
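The per-earthquake information gain used in such comparisons can be illustrated with a minimal Poisson log-likelihood sketch against a baseline standing in for HKJ; the corrected-AIC penalty for fitted parameters described above is omitted here, and all arrays are synthetic placeholders, not RELM forecasts.

```python
# Information gain per earthquake of a candidate gridded forecast relative to a
# baseline (standing in for HKJ), using Poisson log-likelihoods per bin.
# All arrays below are synthetic placeholders.
import numpy as np
from scipy.stats import poisson

def poisson_log_likelihood(rates, counts):
    """Joint log-likelihood of observed bin counts under independent Poisson rates."""
    return poisson.logpmf(counts, rates).sum()

def information_gain_per_eq(candidate, baseline, observed):
    n_events = observed.sum()
    return (poisson_log_likelihood(candidate, observed)
            - poisson_log_likelihood(baseline, observed)) / n_events

rng = np.random.default_rng(42)
baseline = rng.gamma(2.0, 0.01, size=1000)             # baseline rates per bin
candidate = baseline * rng.lognormal(0.0, 0.3, 1000)   # a perturbed "hybrid"
observed = rng.poisson(baseline)                        # synthetic catalog counts
print(information_gain_per_eq(candidate, baseline, observed))
```

A positive value means the candidate assigns, on average, higher probability to the observed events than the baseline does.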


2021 ◽  
Author(s):  
Simone Mancini ◽  
Margarita Segou ◽  
Maximilian J. Werner

Artificial intelligence methods are revolutionizing modern seismology by offering unprecedentedly rich seismic catalogs. Recent developments in short-term aftershock forecasting show that Coulomb rate-and-state (CRS) models hold the potential to achieve operational skills comparable to standard statistical Epidemic-Type Aftershock Sequence (ETAS) models, but only when the near real-time data quality allows a more detailed representation of sources and receiver fault populations to be incorporated. In this framework, the high-resolution reconstructions of seismicity patterns provided by machine-learning-derived earthquake catalogs represent a unique opportunity to test whether they can be exploited to improve the predictive power of aftershock forecasts.

Here, we present a retrospective forecast experiment on the first year of the 2016-2017 Central Italy seismic cascade, in which seven M5.4+ earthquakes occurred between a few hours and five months after the initial Mw 6.0 event, migrating over a 60-km-long normal fault system. As the target dataset, we employ the best available high-density machine-learning catalog recently released for the sequence, which reports ~1 million events in total (~22,000 with M ≥ 2).

First, we develop a CRS model featuring (1) rate-and-state variables optimized on 30 years of pre-sequence regional seismicity, (2) finite-fault slip models for the seven mainshocks of the sequence, (3) spatially heterogeneous receivers informed by pre-existing faults, and (4) receiver fault populations updated with focal planes gradually revealed by aftershocks. We then test the effect of considering stress perturbations from the M2+ events. Using the same high-precision catalog, we produce a standard ETAS model to benchmark the stress-based counterparts. All models are developed on a 3D spatial grid with 2 km spacing; they are updated daily and seek to forecast the space-time occurrence of M2+ seismicity over a total forecast horizon of one year. We formally rank the forecasts with the statistical scoring metrics introduced by the Collaboratory for the Study of Earthquake Predictability and compare their performance to a generation of CRS and ETAS models previously published for the same sequence by Mancini et al. (2019), who used solely real-time data and a minimum triggering magnitude of M=3.

We find that considering secondary triggering effects from events down to M=2 slightly improves model performance. While this result highlights the importance of better seismic catalogs for modelling local triggering mechanisms, it also suggests that, to realize their full potential, future modelling efforts will likely have to incorporate fine-scale rupture characterizations (e.g., smaller source fault geometries retrieved from enhanced focal mechanism catalogs) and introduce denser spatial model discretizations.
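For context, the ETAS benchmark rests on a conditional intensity in which every past event contributes Omori-type triggering scaled by its magnitude; a minimal temporal sketch is given below with arbitrary placeholder parameters, not the fitted Central Italy model (the models described above are, in addition, spatial and gridded).

```python
# Temporal ETAS conditional intensity: background rate plus Omori-type
# aftershock productivity from every earlier event above the minimum magnitude.
# Parameter values are arbitrary placeholders, not a fitted model.
import numpy as np

def etas_intensity(t, event_times, event_mags, mu, K, alpha, c, p, m0=2.0):
    """lambda(t) = mu + sum_i K*exp(alpha*(m_i - m0)) / (t - t_i + c)^p over t_i < t."""
    past = event_times < t
    dt = t - event_times[past]
    productivity = K * np.exp(alpha * (event_mags[past] - m0))
    return mu + np.sum(productivity / (dt + c) ** p)

# e.g. intensity one day after a small sequence (times in days, magnitudes Mw)
times = np.array([0.0, 0.2, 0.5])
mags = np.array([6.0, 4.1, 5.4])
print(etas_intensity(1.0, times, mags, mu=0.1, K=0.05, alpha=1.2, c=0.01, p=1.1))
```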


2021 ◽  
Author(s):  
Yavor Kamer ◽  
Shyam Nandan ◽  
Stefan Hiemer ◽  
Guy Ouillon ◽  
Didier Sornette

Recent advances in machine learning and pattern recognition methods have propagated into various applications in seismology. Phase picking, earthquake location, anomaly detection and classification applications have also benefited from the increased availability of cloud computing and open-source software libraries. However, applications of these new techniques to the problems of earthquake forecasting and prediction have remained relatively stagnant.

The main challenges in this regard have been the testing and validation of the proposed methods. While there are established metrics to quantify the performance of algorithms in common pattern recognition and classification problems, the earthquake prediction problem requires a properly defined reference (null) model to establish the information gain of a proposed algorithm. This complicates the development of new methods, as researchers are required to develop not only a novel algorithm but also a sufficiently robust null model to test it against.

We propose a solution to this problem. We have recently introduced a global real-time earthquake forecasting model that can provide occurrence probabilities for a user-defined time-space-magnitude window anywhere on the globe (Nandan et al. 2020). In addition, we have proposed the Information Ratio (IR) metric, which can rank algorithms producing alarm-based deterministic predictions as well as those producing probabilistic forecasts (Kamer et al. 2020). To provide the community with a retrospective benchmark, we have run our model in a pseudo-prospective fashion for the last 30 years (1990-2020). We have calculated and stored the earthquake occurrence probabilities for each day, for the whole globe (at ~40 km resolution), for various time-space windows (7 to 30 days, 75 to 300 km). These can be queried programmatically via an Application Programming Interface (API), allowing model developers to train and test their algorithms retrospectively. Here we shall present how the Rx TimeMachine API is used for the training of a simple pattern recognition algorithm and show the algorithm's prospective predictive performance.

Nandan, S., Kamer, Y., Ouillon, G., Hiemer, S., Sornette, D. (2020). Global models for short-term earthquake forecasting and predictive skill assessment. European Physical Journal ST. doi: 10.1140/epjst/e2020-000259-3
Kamer, Y., Nandan, S., Ouillon, G., Hiemer, S., Sornette, D. (2020). Democratizing earthquake predictability research: introducing the RichterX platform. European Physical Journal ST. doi: 10.1140/epjst/e2020-000260-2
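Purely as an illustration of the intended workflow, a developer might query such a retrospective service for daily probabilities as sketched below; the URL, parameter names and response fields are hypothetical placeholders and are not the documented Rx TimeMachine interface.

```python
# Hypothetical sketch of querying a retrospective forecast service for the
# occurrence probability in a time-space-magnitude window. The endpoint,
# parameters and response fields are placeholders, NOT a real documented API.
import requests

def query_daily_probability(base_url, day, lat, lon, radius_km, horizon_days, min_mag):
    params = {
        "date": day,               # forecast issue day, e.g. "1999-08-17"
        "lat": lat, "lon": lon,    # centre of the spatial window
        "radius_km": radius_km,    # spatial window radius
        "days": horizon_days,      # forecast horizon length
        "min_mag": min_mag,        # target magnitude threshold
    }
    resp = requests.get(base_url, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["probability"]   # hypothetical response field

# Example call (hypothetical endpoint), kept commented out:
# p = query_daily_probability("https://example.org/timemachine", "1999-08-17",
#                             40.7, 30.0, 150, 7, 5.0)
```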


2021 ◽  
Author(s):  
Francesco Serafini ◽  
Mark Naylor ◽  
Finn Lindgren ◽  
Maximilian Werner

Recent years have seen a growth in the diversity of probabilistic earthquake forecasts as well as the advent of their operational application. The growth of their use demands a deeper look at our ability to rank their performance within a transparent and unified framework. Programs such as the Collaboratory for the Study of Earthquake Predictability (CSEP) have been at the forefront of this effort. Scores are quantitative measures of how well a dataset can be explained by a candidate forecast and allow forecasts to be ranked. A positively oriented score is said to be proper when, on average, the highest score is achieved by the model closest to the data-generating one. Different meanings of closest lead to different proper scoring rules. Here, we prove that the Parimutuel Gambling score, used to evaluate the results of the 2009 Italy CSEP experiment, is generally not proper, and that even in the special case where it is proper, it can still be used improperly. We show in detail the possible consequences of using this score for forecast evaluation. Moreover, we show that other well-established scores can be applied to existing studies to calculate new rankings with no requirement for extra information. We extend the analysis to show how much data are required, in principle, to distinguish candidate forecasts and therefore how likely it is that a preference towards one forecast can be expressed. This introduces the possibility of survey design with regard to the duration and spatial discretisation of earthquake forecasts. Our findings may contribute to more rigorous statements about the ability to distinguish between the predictive skills of candidate forecasts, in addition to simple rankings.
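To make the notion of propriety concrete, the sketch below ranks two synthetic gridded forecasts with the Poisson log score, a standard proper score: averaged over many simulated catalogs, the forecast matching the data-generating rates scores highest, which is the defining property the Parimutuel Gambling score lacks in general. All rate fields are illustrative.

```python
# Ranking gridded forecasts with a proper scoring rule (Poisson log score):
# on average, the forecast closest to the data-generating rates wins.
# Rate fields below are synthetic illustrations.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(7)
true_rates = rng.gamma(2.0, 0.05, size=500)             # hidden data-generating rates
forecast_a = true_rates                                  # the "true" model
forecast_b = true_rates * rng.lognormal(0, 0.5, 500)     # a mis-specified competitor

def log_score(forecast, counts):
    return poisson.logpmf(counts, forecast).sum()

scores_a, scores_b = [], []
for _ in range(200):                                     # average over simulated catalogs
    counts = rng.poisson(true_rates)
    scores_a.append(log_score(forecast_a, counts))
    scores_b.append(log_score(forecast_b, counts))
print(np.mean(scores_a) > np.mean(scores_b))             # True, as a proper score requires
```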


2021 ◽  
Author(s):  
Karina Loviknes ◽  
Danijel Schorlemmer ◽  
Fabrice Cotton ◽  
Sreeram Reddy Kotha

Non-linear site effects are mainly expected for strong ground motions and for sites with soft soils, and more recent ground-motion models (GMMs) have started to include such effects. Observations in this range are, however, sparse, and most non-linear site amplification models are therefore partly or fully based on numerical simulations. We develop a framework for testing non-linear site amplification models using data from the comprehensive Kiban-Kyoshin network in Japan. The test is reproducible, following the vision of the Collaboratory for the Study of Earthquake Predictability (CSEP), and takes advantage of new large datasets to evaluate whether or not the non-linear site effects predicted by site-amplification models are supported by empirical data. The site amplification models are tested using residuals between the observations and predictions from a GMM based only on magnitude and distance. When the GMM is derived without any site term, the site-specific variability extracted from the residuals is expected to capture the site response of a station. The non-linear site amplification models are tested against a linear amplification model on individual well-recording stations. Finally, the results are compared to building codes in which non-linearity is included. The test shows that, for most of the sites selected as having sufficient records, the non-linear site-amplification models do not score better than the linear amplification model. This suggests that including non-linear site amplification in GMMs and building codes may not yet be justified, at least not in the range of ground motions considered in the test (peak ground acceleration < 0.2 g).
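A minimal sketch of the residual-based comparison described above is given here, contrasting a constant (linear) site term with a term that decays as the reference ground motion grows (non-linear); the functional forms and synthetic data are illustrative assumptions, not the study's amplification models.

```python
# Residual-based comparison of a constant (linear) site term versus a
# non-linear term that varies with the reference ground motion. Functional
# forms and synthetic data are illustrative assumptions only.
import numpy as np

def fit_linear(residuals):
    """Constant site term in log space; returns (params, residual sum of squares)."""
    c = residuals.mean()
    return (c,), ((residuals - c) ** 2).sum()

def fit_nonlinear(residuals, pga_rock):
    """Site term c1 + c2*log(pga_rock + 0.1): amplification changing with shaking level."""
    X = np.column_stack([np.ones_like(pga_rock), np.log(pga_rock + 0.1)])
    coef = np.linalg.lstsq(X, residuals, rcond=None)[0]
    return tuple(coef), ((residuals - X @ coef) ** 2).sum()

rng = np.random.default_rng(3)
pga_rock = rng.lognormal(np.log(0.05), 0.8, 300)     # g; mostly weak motion (< 0.2 g)
residuals = 0.4 + rng.normal(0.0, 0.3, 300)          # purely linear site response
(_, rss_lin), (_, rss_nl) = fit_linear(residuals), fit_nonlinear(residuals, pga_rock)
print(rss_lin, rss_nl)                               # nearly identical misfit here
```

With mostly weak motions, the extra non-linear parameter buys little reduction in misfit, which mirrors the outcome reported for most stations.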

