Likelihood- and residual-based evaluation of medium-term earthquake forecast models for California

2014 · Vol. 198(3) · pp. 1307–1318
Author(s): M. Schneider, R. Clements, D. Rhoades, D. Schorlemmer
2012 · Vol. 2012 · pp. 1–6
Author(s): Yiqing Zhu, F. Benjamin Zhan

Gravity changes derived from regional gravity monitoring data in China from 1998 to 2005 exhibited noticeable variations before two large earthquakes that struck China in 2008: the Yutian (Xinjiang) Ms 7.3 earthquake and the Wenchuan (Sichuan) Ms 8.0 earthquake. Based on these gravity variations, a group of researchers at the Second Crust Monitoring and Application Center of the China Earthquake Administration suggested in December 2006 that the Yutian (Xinjiang) and Wenchuan (Sichuan) areas had a high likelihood of experiencing a large earthquake in 2007 or 2008. We review the gravity monitoring data and methods on which the researchers based these medium-term earthquake forecasts. Experience from the medium-term forecasts of the Yutian and Wenchuan earthquakes suggests that gravity changes derived from regional gravity monitoring data could be a useful medium-term precursor of large earthquakes, but significant additional research is needed to validate and evaluate this hypothesis.


2017 · Vol. 211(1) · pp. 239–251
Author(s): Anne Strader, Max Schneider, Danijel Schorlemmer

2021
Author(s): Jose A. Bayona, William Savran, Maximilian Werner, David A. Rhoades

Developing testable seismicity models is essential for robust seismic hazard assessments and for quantifying the predictive skill of posited hypotheses about seismogenesis. On this premise, the Regional Earthquake Likelihood Models (RELM) group designed a joint forecasting experiment, with associated models, data, and tests, to evaluate earthquake predictability in California over a five-year period. Participating RELM forecast models were based on a range of geophysical datasets, including earthquake catalogs, interseismic strain rates, and geologic fault slip rates. After five years of prospective evaluation, the RELM experiment found the smoothed-seismicity (HKJ) model of Helmstetter et al. (2007) to be the most informative. The diversity of competing forecast hypotheses in RELM made it natural to combine multiple models into forecasts potentially more informative than HKJ. Thus, Rhoades et al. (2014) created multiplicative hybrid models that take the HKJ model as a baseline and multiply it by one or more conjugate models. Specifically, the authors fitted two parameters for each conjugate model and an overall normalizing constant to optimize each hybrid. Information gain scores per earthquake were then computed using a corrected Akaike Information Criterion that penalizes the number of fitted parameters. In retrospective analyses, some hybrid models showed significant information gains over the HKJ forecast despite the penalty. Here, we assess in a prospective setting the predictive skills of 16 hybrid and 6 original RELM forecasts, using a suite of tests from the Collaboratory for the Study of Earthquake Predictability (CSEP). The evaluation dataset contains 40 M≥4.95 events recorded within the California CSEP testing region from 1 January 2011 to 31 December 2020, including the 2016 Mw 5.6, 5.6, and 5.5 Hawthorne earthquake swarm and the Mw 6.4 foreshock and Mw 7.1 mainshock of the 2019 Ridgecrest sequence. We evaluate the consistency between the observed and expected number, spatial, likelihood, and magnitude distributions of earthquakes, and compare the performance of each forecast to that of HKJ. Our prospective test results show that none of the hybrid models is significantly more informative than the HKJ baseline forecast. This outcome is driven mainly by the 2016 Hawthorne earthquake cluster and by four events from the 2019 Ridgecrest sequence that fell within two forecast bins. These clusters of seismicity are exceptionally unlikely under all models and are poorly captured by the Poisson distribution that the likelihood functions of the tests assume. We are therefore examining alternative likelihood functions that reduce the sensitivity of the evaluations to clustering and that could help clarify whether the discrepancies between prospective and retrospective test results for multiplicative hybrid forecasts stem from limitations of the tests or of the methods used to create the hybrid models.
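To make the hybrid construction and the scoring concrete, the following is a minimal Python sketch under simplifying assumptions: forecasts are 1-D NumPy arrays of Poisson rates over space-magnitude bins, and the two fitted parameters per conjugate model are represented as a floor s and an exponent a. This is an illustration only, not the exact parameterization or fitting procedure of Rhoades et al. (2014); all function and variable names are hypothetical.

import numpy as np
from scipy.special import gammaln

def multiplicative_hybrid(baseline, conjugates, params, n_target):
    """Combine a baseline rate forecast with conjugate forecasts.

    baseline   -- 1-D array of expected earthquake rates per bin (e.g. HKJ)
    conjugates -- list of 1-D arrays with the same shape as baseline
    params     -- one (s, a) pair per conjugate: floor and exponent
    n_target   -- total rate the hybrid is normalized to
    """
    rate = baseline.astype(float).copy()
    for conj, (s, a) in zip(conjugates, params):
        rel = conj / conj.mean()            # conjugate rescaled to mean 1
        rate *= np.maximum(rel, s) ** a     # floored, then exponentiated
    return rate * (n_target / rate.sum())   # overall normalizing constant

def poisson_log_likelihood(rate, counts):
    # Joint log-likelihood of observed bin counts under independent
    # Poisson rates -- the assumption built into the CSEP likelihood
    # tests. All rates must be strictly positive.
    return np.sum(counts * np.log(rate) - rate - gammaln(counts + 1.0))

def information_gain_per_eq(rate_a, rate_b, counts):
    # Mean log-likelihood difference per observed earthquake; positive
    # values indicate that forecast A is more informative than B.
    return (poisson_log_likelihood(rate_a, counts)
            - poisson_log_likelihood(rate_b, counts)) / counts.sum()

In the retrospective setting described above, the floor, exponent, and normalizing constant would be chosen to maximize the likelihood of a training catalog, with the corrected AIC penalizing each fitted parameter.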


2012 · Vol. 2(1) · p. 2
Author(s): Christine Smyth, Masumi Yamada, Jim Mori

The Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project aimed at testing earthquake forecast models in a fair environment. Various metrics are currently used to evaluate the submitted forecasts, but CSEP still lacks an easily understandable metric with which to rank the overall performance of forecast models. In this research, we adapt a well-known and respected metric from another statistical field, bioinformatics, to make it suitable for evaluating earthquake forecasts such as those submitted to the CSEP initiative. The metric, originally called a gene-set enrichment score, is based on a Kolmogorov-Smirnov statistic. Our modified metric assesses whether, over a certain time period, the forecast values at locations where earthquakes occurred are significantly elevated compared to the values at all locations where earthquakes did not occur. Permutation testing assigns a significance value to the score. Unlike the metrics currently employed by CSEP, the score makes no assumption about the distribution of earthquake occurrence, nor does it require an arbitrary reference forecast. We apply the modified metric to simulated data and real forecast data to show that it is a powerful and robust technique, capable of ranking competing earthquake forecasts.
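As a concrete illustration, here is a minimal Python sketch of a KS-style enrichment score with a permutation test, assuming the forecast is a vector of rates over spatial bins and the observation is a boolean vector marking bins that contained at least one earthquake. The equal-weight step sizes below are one common GSEA-style choice, not necessarily the exact variant the authors use, and all names are illustrative.

import numpy as np

def enrichment_score(forecast, hits):
    """KS-style enrichment score for a gridded earthquake forecast.

    forecast -- 1-D array of forecast rates, one per spatial bin
    hits     -- boolean array, True where at least one earthquake occurred
    """
    order = np.argsort(forecast)[::-1]   # rank bins, highest forecast first
    h = hits[order].astype(float)
    n_hit, n_miss = h.sum(), (1.0 - h).sum()
    # Running sum: step up at hit bins, step down at miss bins.
    walk = np.cumsum(h / n_hit - (1.0 - h) / n_miss)
    return walk[np.argmax(np.abs(walk))]  # maximum deviation from zero

def permutation_pvalue(forecast, hits, n_perm=10000, seed=0):
    # Null distribution: randomly relabel which bins contain earthquakes,
    # keeping the number of hit bins fixed, and recompute the score.
    rng = np.random.default_rng(seed)
    observed = enrichment_score(forecast, hits)
    null = np.array([enrichment_score(forecast, rng.permutation(hits))
                     for _ in range(n_perm)])
    # One-sided: how often does a random labelling score at least as high?
    return (1 + np.sum(null >= observed)) / (1 + n_perm)

Because the null distribution comes from relabelling the same bins, the resulting p-value requires neither a Poisson assumption nor a reference forecast, and competing forecasts can be ranked by their scores on the same catalog.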

