Assessing ‘alarm-based CN’ earthquake predictions in Italy

2017 ◽  
Vol 59 (6) ◽  
Author(s):  
Matteo Taroni ◽  
Warner Marzocchi ◽  
Pamela Roselli

The quantitative assessment of the performance of earthquake prediction and/or forecast models is essential for evaluating their applicability for risk reduction purposes. Here we assess the earthquake prediction performance of the CN model applied to the Italian territory. This model has been widely publicized in the Italian news media, but a careful assessment of its prediction performance is still lacking. In this paper we evaluate the results obtained so far by the CN algorithm applied to the Italian territory, adopting testing procedures that are widely used and under development in the Collaboratory for the Study of Earthquake Predictability (CSEP) network. Our results show that the CN prediction performance is comparable to that of a stationary Poisson model; that is, CN predictions add nothing beyond what may be expected from random chance.
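For readers unfamiliar with how such a comparison is made, the sketch below shows one common way to place an alarm-based predictor on a Molchan diagram, where a stationary Poisson (random-chance) predictor lies on the diagonal ν = 1 − τ. The function names and toy numbers are illustrative only, not from the paper.

```python
# Hypothetical sketch: compare an alarm-based prediction (e.g. CN-style TIPs)
# against a stationary Poisson baseline on a Molchan diagram.
# A Poisson (random-chance) predictor satisfies nu = 1 - tau, where tau is
# the fraction of time covered by alarms and nu is the miss rate.
# All names and numbers below are illustrative, not from the paper.

def molchan_point(alarm_windows, quake_times, t_total):
    """Return (tau, nu): alarm time fraction and fraction of missed events."""
    tau = sum(end - start for start, end in alarm_windows) / t_total
    hits = sum(any(start <= t < end for start, end in alarm_windows)
               for t in quake_times)
    nu = 1.0 - hits / len(quake_times)
    return tau, nu

# Toy example: 3 alarms over a 100-unit period, 4 target earthquakes.
alarms = [(10, 20), (40, 55), (80, 90)]
quakes = [12, 30, 50, 85]
tau, nu = molchan_point(alarms, quakes, t_total=100.0)
print(f"tau = {tau:.2f}, nu = {nu:.2f}, random-chance nu = {1 - tau:.2f}")
# A skilful predictor should sit well below the diagonal nu = 1 - tau.
```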

1999 ◽  
Vol 42 (5) ◽  
Author(s):  
A. Peresan ◽  
G. Costa ◽  
G. F. Panza

A regionalization of the Italian territory, strictly based on seismotectonic zoning and on the main geodynamic features of the Italian area, is proposed for intermediate-term earthquake prediction with the CN algorithm. Three regions, composed of adjacent zones with the same seismogenic behaviour or with transitional properties, are selected for the north, centre and south of Italy, compatibly with the kinematic model. Compared with previous studies, this regionalization allows an average reduction of the spatial uncertainty of about 35% for the northern and central regions, and of about 70% for the southern region. A general reduction of the percentage of total TIPs (times of increased probability), with respect to the results obtained when the seismotectonic zoning is neglected, is observed as well. The seismotectonic model therefore seems to be a useful tool for the selection of the fault systems involved in the preparation of strong earthquakes. The successful upgrading of the catalogue, accomplished using the NEIC Preliminary Determinations of Epicentres, appears to substantiate the robustness of the algorithm against changes in the catalogue.


2021 ◽  
Author(s):  
Yavor Kamer ◽  
Shyam Nandan ◽  
Stefan Hiemer ◽  
Guy Ouillon ◽  
Didier Sornette

Nature is scary. You can be sitting at home and the next thing you know you are trapped under the rubble of your own house or sucked into a sinkhole. For millions of years we have been the figurines of this precarious scene, and we have found our own ways of dealing with the anxiety. It is natural that we create and consume prophecies, conspiracies and false predictions. Information technologies amplify not only our rational but also our irrational deeds. Social media algorithms, tuned to maximize attention, make sure that misinformation spreads much faster than its counterpart.

What can we do to minimize the adverse effects of misinformation, especially in the case of earthquakes? One option could be to designate one authoritative institute, set up a big surveillance network, and cancel or ban every source of misinformation before it spreads. This might have worked a few centuries ago, but not in this day and age. Instead we propose a more inclusive option: embrace all voices and channel them into an actual, prospective earthquake prediction platform (Kamer et al. 2020). The platform is powered by a global state-of-the-art statistical earthquake forecasting model that provides near real-time earthquake occurrence probabilities anywhere on the globe (Nandan et al. 2020). Using this model as a benchmark, with statistical metrics specifically tailored to the prediction problem, we are able to distill all these voices and quantify the essence of predictive skill. This approach has several advantages. Rather than trying to silence or denounce, we listen to and evaluate each claim and report the predictive skill of its source. We engage the public and allow them to take part in a scientific experiment that will increase their risk awareness. We effectively demonstrate that anybody with an internet-connected device can make an earthquake prediction, but that it is not so trivial to achieve skillful predictive performance.

Here we present initial results from the global earthquake prediction experiment that we have been conducting on www.richterx.com for the past two years, yielding more than 10,000 predictions. These results will hopefully demystify the act of predicting an earthquake in the eyes of the public, so that the next time someone forwards a prediction message it arouses more scrutiny than panic or distaste.

Nandan, S., Kamer, Y., Ouillon, G., Hiemer, S., Sornette, D. (2020). Global models for short-term earthquake forecasting and predictive skill assessment. European Physical Journal ST. doi: 10.1140/epjst/e2020-000259-3
Kamer, Y., Nandan, S., Ouillon, G., Hiemer, S., Sornette, D. (2020). Democratizing earthquake predictability research: introducing the RichterX platform. European Physical Journal ST. doi: 10.1140/epjst/e2020-000260-2
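As a rough illustration of how such a platform might score an individual claim against the benchmark model, the sketch below compares the log-likelihood of a binary prediction with that of a reference probability derived from a Poisson expected count. All function names and numbers are hypothetical; the actual RichterX scoring rules are those described in Kamer et al. (2020).

```python
# Minimal sketch of scoring a binary earthquake prediction against a
# benchmark forecast, in the spirit of the platform described above.
# The benchmark probability p_ref (chance of >= 1 qualifying event in the
# predicted space-time-magnitude window) would come from the global
# forecasting model; here it is a stub based on a Poisson expected count.

import math

def reference_probability(rate):
    """P(at least one event) for a Poisson benchmark with expected count `rate`."""
    return 1.0 - math.exp(-rate)

def log_score_gain(predicted_prob, occurred, p_ref):
    """Log-likelihood gain of the prediction over the benchmark.

    Positive values mean the predictor beat the benchmark on this window.
    """
    p, q = (predicted_prob, p_ref) if occurred else (1 - predicted_prob, 1 - p_ref)
    return math.log(p / q)

# Toy numbers: predictor claims 30% chance, benchmark expects 0.05 events.
p_ref = reference_probability(rate=0.05)                  # ~0.049
print(log_score_gain(0.30, occurred=True, p_ref=p_ref))   # rewarded if it happens
print(log_score_gain(0.30, occurred=False, p_ref=p_ref))  # penalized otherwise
```

The asymmetry of the two outcomes is what makes skillful prediction non-trivial: inflated probabilities are punished every time the target event fails to occur.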


Author(s):  
Giovanni Costa ◽  
Antonella Peresan ◽  
Ivanka Orozova ◽  
Giuliano Francesco Panza ◽  
Irina M. Rotwain

1997 ◽  
Vol 74 (4) ◽  
pp. 797-813 ◽  
Author(s):  
Ann M. Major ◽  
L. Erwin Atwood

This study examines public response to, and the perceived believability of, information disseminated in the news media about a real-time earthquake prediction, and extends the body of media credibility research by examining these responses within the context of Taylor's (1983) cognitive adaptation theory. The theory focuses on people's illusions of well-being that, under certain circumstances of threat, can lead to adaptive behaviors, and it provides insights into why some people increased their assessments of message credibility while others lowered their evaluations; still others made no change over time in their assessments of message believability.


2001 ◽  
Vol 38 (A) ◽  
pp. 222-231 ◽  
Author(s):  
Yaolin Shi ◽  
Jie Liu ◽  
Guomin Zhang

The annual earthquake predictions of the China Seismological Bureau (CSB) are evaluated by means of an R score (an R score is approximately 0 for completely random guesses and approximately 1 for completely successful predictions). The average R score of the annual predictions in China in the period 1990–1998 is about 0.184, significantly larger than 0.0. However, background seismicity is higher in seismically active regions. If a ‘random guess’ prediction is chosen to be proportional to the background seismicity, the expected R score is 0.123, and the observed nine-year mean R score of 0.184 is only marginally higher than this background value. Monte Carlo tests indicate that the probability of attaining the R score of the actual predictions through background-seismicity-based random guessing is about . It is concluded that earthquake prediction in China is still at a very preliminary stage, barely above the level of pure chance.
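The sketch below illustrates the kind of evaluation described here: an R score computed as hit rate minus false-alarm rate, together with a Monte Carlo estimate of how often background-seismicity-weighted random alarms reach the observed score. The region weights and counts are invented for illustration, and the paper's exact R-score definition may differ in detail.

```python
# Hedged sketch of the evaluation above: an R score (~0 for random guessing,
# ~1 for perfect prediction) plus a Monte Carlo test in which random alarms
# are drawn in proportion to background seismicity.
# All weights and counts below are invented for illustration.

import random

def r_score(alarmed, quakes, n_regions):
    """R = (hits / total quakes) - (false alarms / quake-free regions)."""
    hits = len(alarmed & quakes)
    false_alarms = len(alarmed - quakes)
    return hits / len(quakes) - false_alarms / (n_regions - len(quakes))

def monte_carlo_pvalue(observed_r, quakes, weights, n_alarms, trials=10000):
    """Fraction of background-weighted random alarm sets with R >= observed."""
    regions = list(range(len(weights)))
    exceed = 0
    for _ in range(trials):
        alarmed = set()
        while len(alarmed) < n_alarms:  # weighted sampling without replacement
            alarmed.add(random.choices(regions, weights=weights)[0])
        exceed += r_score(alarmed, quakes, len(weights)) >= observed_r
    return exceed / trials

# Toy setup: 20 regions, earthquakes in 3 of them, 5 regions put on alarm.
weights = [5, 5, 4, 3, 3, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
quakes = {0, 2, 7}
observed = r_score({0, 1, 2, 5, 7}, quakes, 20)
print(observed, monte_carlo_pvalue(observed, quakes, weights, n_alarms=5))
```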


Author(s):  
N. R. Britton

Proceedings of the seminar on the social and economic effects of earthquake prediction, 12 October, 1977.


2019 ◽  
Vol 23 (4) ◽  
pp. 309-315 ◽  
Author(s):  
Edgar Tapia-Hernández ◽  
Elizabeth A. Reddy ◽  
Laura Josabeth Oros-Aviles

Supporting earthquake risk management with clear seismic communication may necessitate encounters with various popular misapprehensions regarding earthquake prediction. Drawing on technical data as well as insights from anthropology and economics, this paper addresses common and scientifically unsupported ideas about earthquake prediction, as well as the state of science-based studies on statistical forecasting and physical precursors. The authors reflect on documented social and economic effects of unsubstantiated earthquake predictions, and argue that while these may be dangerous, they may also present certain opportunities for outreach and education in formal and informal settings. This paper is written in light of the importance that the United Nations Office for Disaster Risk Reduction has placed on coordination and communication within and among diverse organizations and agencies, as well as of the recent popularity of so-called earthquake prediction in Mexico.


Entropy ◽  
2020 ◽  
Vol 22 (8) ◽  
pp. 859 ◽  
Author(s):  
Peng Han ◽  
Jiancang Zhuang ◽  
Katsumi Hattori ◽  
Chieh-Hung Chen ◽  
Febty Febriani ◽  
...  

In order to clarify ultra-low-frequency (ULF) seismomagnetic phenomena, a sensitive geomagnetic network has been operating in Kanto, Japan, since 2000. In previous studies, we verified the correlation between ULF magnetic anomalies and sizeable local earthquakes. In this study, we use Molchan's error diagram to evaluate the potential earthquake-precursory information in the magnetic data recorded in Kanto, Japan, during 2000–2010. We introduce the probability gain (PG′) and the probability difference (D′) to quantify the forecasting performance and to explore the optimal prediction parameters for a given ULF magnetic station. The results show that earthquake predictions based on magnetic anomalies perform significantly better than random guesses, indicating that the magnetic data contain potentially useful precursory information. Further investigation suggests that the prediction performance depends on the choice of the distance (R) and the size of the target earthquake events (Es). The optimal (R, Es) pairs are about (100 km, 10^8.75) and (180 km, 10^8.75) for the Seikoshi (SKS) station in Izu and the Kiyosumi (KYS) station in Boso, respectively.
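For concreteness, the sketch below computes the two Molchan-diagram summary statistics named above, under the common definitions PG′ = (1 − ν)/τ and D′ = (1 − ν) − τ, where τ is the alarm space-time fraction and ν the miss rate; whether these match the paper's exact definitions is an assumption on my part, and the threshold scan uses toy values rather than the Kanto network's data.

```python
# Illustrative sketch of the Molchan-diagram statistics PG' and D'.
# Under the assumed definitions, PG' = (1 - nu) / tau and D' = (1 - nu) - tau;
# random guessing gives PG' ~ 1 and D' ~ 0. Values below are toy numbers.

def molchan_stats(tau, nu):
    pg = (1.0 - nu) / tau   # probability gain over random guessing
    d = (1.0 - nu) - tau    # probability difference
    return pg, d

# Scan candidate anomaly thresholds and keep the one maximizing D'
# (one way to pick "optimal prediction parameters" for a station).
candidates = [(0.10, 0.60), (0.25, 0.30), (0.40, 0.20)]  # (tau, nu) pairs
best = max(candidates, key=lambda tn: molchan_stats(*tn)[1])
for tau, nu in candidates:
    pg, d = molchan_stats(tau, nu)
    print(f"tau={tau:.2f} nu={nu:.2f} PG'={pg:.2f} D'={d:.2f}")
print("best threshold by D':", best)
```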


2021 ◽  
Author(s):  
Jose A. Bayona ◽  
William Savran ◽  
Maximilian Werner ◽  
David A. Rhoades

Developing testable seismicity models is essential for robust seismic hazard assessments and for quantifying the predictive skills of posited hypotheses about seismogenesis. On this premise, the Regional Earthquake Likelihood Models (RELM) group designed a joint forecasting experiment, with associated models, data and tests, to evaluate earthquake predictability in California over a five-year period. Participating RELM forecast models were based on a range of geophysical datasets, including earthquake catalogs, interseismic strain rates, and geologic fault slip rates. After five years of prospective evaluation, the RELM experiment found that the smoothed seismicity (HKJ) model by Helmstetter et al. (2007) was the most informative. The diversity of competing forecast hypotheses in RELM was suitable for combining multiple models that could provide more informative earthquake forecasts than HKJ. Thus, Rhoades et al. (2014) created multiplicative hybrid models that involve the HKJ model as a baseline and one or more conjugate models. In particular, the authors fitted two parameters for each conjugate model and an overall normalizing constant to optimize each hybrid model. Information gain scores per earthquake were then computed using a corrected Akaike Information Criterion that penalized the number of fitted parameters. According to retrospective analyses, some hybrid models showed significant information gains over the HKJ forecast, despite the penalty. Here, we assess in a prospective setting the predictive skills of 16 hybrid and 6 original RELM forecasts, using a suite of tests of the Collaboratory for the Study of Earthquake Predictability (CSEP). The evaluation dataset contains 40 M≥4.95 events recorded within the California CSEP testing region from 1 January 2011 to 31 December 2020, including the 2016 Mw 5.6, 5.6, and 5.5 Hawthorne earthquake swarm, and the Mw 6.4 foreshock and Mw 7.1 mainshock of the 2019 Ridgecrest sequence. We evaluate the consistency between the observed and expected number, spatial, likelihood and magnitude distributions of earthquakes, and compare the performance of each forecast with that of HKJ. Our prospective test results show that none of the hybrid models is significantly more informative than the HKJ baseline forecast. These results are mainly due to the occurrence of the 2016 Hawthorne earthquake cluster and of four events from the 2019 Ridgecrest sequence in two forecast bins. These clusters of seismicity are exceptionally unlikely in all models, and are insufficiently captured by the Poisson distribution that the likelihood functions of the tests assume. We are therefore examining alternative likelihood functions that reduce the sensitivity of the evaluations to clustering, and that could be used to better understand whether the discrepancies between prospective and retrospective test results for multiplicative hybrid forecasts are due to limitations of the tests or of the methods used to create the hybrid models.
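As background for the clustering problem described here, the sketch below shows the Poisson assumption at work in a CSEP-style N-test, where a forecast's total expected count is compared with the observed count via Poisson tail probabilities. The numbers are illustrative, not the RELM experiment's.

```python
# Sketch of the Poisson assumption underlying the consistency tests above,
# using a CSEP-style N-test as the example: a forecast's total expected
# count lambda_fc is compared with the observed count via Poisson tail
# probabilities. Clustered sequences (e.g. Hawthorne, Ridgecrest) inflate
# the observed count in a way a Poisson model deems exceptionally unlikely.

from scipy.stats import poisson

def n_test(lambda_fc, n_obs):
    """Return (delta1, delta2): P(N >= n_obs) and P(N <= n_obs) under Poisson."""
    delta1 = 1.0 - poisson.cdf(n_obs - 1, lambda_fc)  # too many events?
    delta2 = poisson.cdf(n_obs, lambda_fc)            # too few events?
    return delta1, delta2

# A forecast expecting 25 events, confronted with 40 observed:
d1, d2 = n_test(lambda_fc=25.0, n_obs=40)
print(f"delta1={d1:.4f}, delta2={d2:.4f}")
# A delta1 well below a typical threshold (e.g. 0.025) flags the forecast
# as under-predicting, one symptom of the clustering sensitivity noted above.
```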



