Inadvertent changes in magnitude reported in earthquake catalogs: Their evaluation through b-value estimates

1995 ◽  
Vol 85 (6) ◽  
pp. 1858-1866
Author(s):  
F. Ramón Zúñiga ◽  
Max Wyss

Abstract A simple procedure is presented for analyzing magnitudes and seismicity rates reported in earthquake catalogs in order to discriminate between inadvertently introduced changes in magnitude and real seismicity changes. We assume that the rate and the frequency-magnitude relation of the independent background seismicity do not change with time. Observed differences in the frequency-magnitude relation (a and b values) between data from two periods are modeled as due to a transformation of the magnitude scale. The transformation equation is found by a least-squares-fitting process based on the seismicity data for earthquakes large enough to be reported completely and by comparing the linear relation of one period to the other. For smaller events, an additional factor accounting for increased (decreased) detection is allowed. This fitting technique is tested on a data set from Parkfield for which two types of magnitudes, amplitude and duration, were computed for each earthquake. We found that the b-value fitting technique yielded virtually the same result as a linear regression assuming the same errors in the two magnitudes. The technique is also applied to interpret the nature of reporting rate changes in a local (Guerrero, Mexico) and a regional (Italy) earthquake catalog. In Guerrero, a magnitude change in 1991.37 can be modeled about equally well by Mnew = Mold + 0.5 or by Mnew = 1.02 Mold + 0.38, but residuals with the latter transformation are smaller. In Italy, a magnitude change in 1980.21 cannot be modeled satisfactorily by a simple magnitude shift but is well described by a compression of the magnitude scale given by Mnew = 0.67 Mold + 1.03. The proposed b-slope fitting method provides a means to interpret quantitatively, and in some cases correct for, artificial reporting rate changes in earthquake catalogs.
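The core of the fitting idea can be illustrated with a short sketch: if the background seismicity is stationary, the Gutenberg-Richter fits of two periods can be reconciled by a linear magnitude transformation Mnew = c Mold + d. The code below is a minimal illustration of that step only, assuming numpy arrays of magnitudes and a common completeness magnitude; it omits the paper's additional detection term for smaller events and its residual analysis, and the function names are placeholders.

```python
# Minimal sketch: recover a linear magnitude transformation between two catalog
# periods by comparing their Gutenberg-Richter (GR) fits, assuming the
# underlying seismicity is stationary. Illustrative only; not the authors' code.
import numpy as np

def gr_fit(mags, mc, dm=0.1):
    """Least-squares fit of log10 N(>=M) = a - b*M above completeness mc."""
    mags = np.asarray(mags)
    bins = np.arange(mc, mags.max() + dm, dm)
    cum_n = np.array([(mags >= m).sum() for m in bins])
    keep = cum_n > 0
    slope, intercept = np.polyfit(bins[keep], np.log10(cum_n[keep]), 1)
    return intercept, -slope                      # (a, b), with b > 0

def magnitude_transformation(mags_old, mags_new, mc):
    """(c, d) such that M_new ~= c * M_old + d maps one GR line onto the other."""
    a1, b1 = gr_fit(mags_old, mc)
    a2, b2 = gr_fit(mags_new, mc)
    c = b1 / b2            # stretch (c < 1: compression) of the magnitude scale
    d = (a2 - a1) / b2     # constant magnitude shift
    return c, d
```

For a pure shift (c = 1), d reduces to the difference in a values divided by the common b value, which is the kind of constant correction found for the Guerrero example above.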

Author(s):  
Jeremy Maurer ◽  
Deborah Kane ◽  
Marleen Nyst ◽  
Jessica Velasquez

ABSTRACT The U.S. Geological Survey (USGS) released a one-year seismic hazard map for the central and eastern United States (CEUS) for each year from 2016 to 2018 to address the problem of induced and triggered seismicity (ITS) in the region. ITS in areas with historically low rates of earthquakes poses challenges but also provides opportunities to learn about crustal conditions, yet few scientific studies have considered the financial risk implications of damage caused by ITS. We directly address this issue by modeling earthquake risk in the CEUS using the 1 yr hazard model from the USGS and the RiskLink software package developed by Risk Management Solutions, Inc. We explore the sensitivity of risk to declustering and b-value, and consider whether declustering methods developed for tectonic earthquakes are suitable for ITS. In particular, the Gardner and Knopoff (1974) declustering algorithm has been used in every USGS hazard forecast, including the recent 1 yr forecasts, but leads to the counterintuitive result that earthquake risk in Oklahoma is at its highest level in 2018, even though only one-fifth as many earthquakes occurred as in 2016. Our analysis shows that this result arises from (1) the peculiar behavior of the declustering algorithm when seismicity rates vary in space and time, (2) the fact that the frequency–magnitude distribution of earthquakes in Oklahoma is not well described by a single b-value, and (3) the fact that, at later times, seismicity is more spatially diffuse and rate increases occur closer to more populated areas. ITS in Oklahoma may include a combination of swarm-like and tectonic-style events, which have different frequency–magnitude and aftershock distributions. New algorithms for hazard estimation need to be developed to account for these unique characteristics of ITS.
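For readers unfamiliar with the Gardner and Knopoff (1974) approach discussed above, the sketch below shows a generic window-based declustering pass. The window formulas are a commonly used parameterization of the published Gardner-Knopoff table, and the routine as a whole is an illustration under stated assumptions (projected epicentral coordinates in km, forward-time windows only), not the implementation used in the USGS forecasts.

```python
# Generic window-based declustering in the spirit of Gardner and Knopoff (1974).
# Window sizes follow a common fit to their published table; treat both the
# constants and the bookkeeping as assumptions for illustration.
import numpy as np

def gk_windows(mag):
    """Space (km) and time (days) windows for a given magnitude."""
    dist_km = 10 ** (0.1238 * mag + 0.983)
    time_days = (10 ** (0.032 * mag + 2.7389) if mag >= 6.5
                 else 10 ** (0.5409 * mag - 0.547))
    return dist_km, time_days

def decluster(times_days, x_km, y_km, mags):
    """Largest events claim their aftershocks; True marks retained mainshocks."""
    times_days, x_km, y_km, mags = map(np.asarray, (times_days, x_km, y_km, mags))
    is_mainshock = np.ones(len(mags), dtype=bool)
    for i in np.argsort(mags)[::-1]:              # largest magnitude first
        if not is_mainshock[i]:
            continue                              # already flagged as dependent
        d_km, t_days = gk_windows(mags[i])
        r = np.hypot(x_km - x_km[i], y_km - y_km[i])
        dt = times_days - times_days[i]
        dependent = (r <= d_km) & (dt > 0) & (dt <= t_days) & (mags <= mags[i])
        dependent[i] = False
        is_mainshock &= ~dependent
    return is_mainshock
```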


2009 ◽  
Vol 9 (3) ◽  
pp. 905-912 ◽  
Author(s):  
G. Chouliaras

Abstract. This study compiles and analyzes the earthquake catalog of the National Observatory of Athens (NOA) since the development of the Greek National Seismological Network began in 1964. The b-value and the spatial and temporal variability of the magnitude of completeness of the catalog are determined, together with the times of significant seismicity rate changes. Man-made inhomogeneities and artifacts are known to arise in earthquake catalogs produced by changing seismological networks, and this study therefore documents the chronological sequence of network expansions, instrumental upgrades, and changes in practice and procedures at NOA. The NOA earthquake catalog is the most detailed data set available for the Greek area, and the results of this study may be used to select trustworthy parts of the data for earthquake prediction research.
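Mapping the magnitude of completeness in space and time is often done with simple estimators; the snippet below sketches the widely used maximum-curvature estimate, with the +0.2 correction being a common empirical adjustment rather than a value taken from this study.

```python
# Hypothetical sketch of the maximum-curvature estimate of the magnitude of
# completeness (Mc): Mc is taken as the most populated magnitude bin, plus an
# empirical correction often added to offset the estimator's low bias.
import numpy as np

def mc_maxc(mags, dm=0.1, correction=0.2):
    """Maximum-curvature Mc estimate from a 1D array of magnitudes."""
    mags = np.asarray(mags)
    edges = np.arange(mags.min(), mags.max() + dm, dm)
    counts, _ = np.histogram(mags, bins=edges)
    return edges[np.argmax(counts)] + correction
```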


Author(s):  
Leila Mizrahi ◽  
Shyam Nandan ◽  
Stefan Wiemer

Abstract Declustering aims to divide earthquake catalogs into independent events (mainshocks), and dependent (clustered) events, and is an integral component of many seismicity studies, including seismic hazard assessment. We assess the effect of declustering on the frequency–magnitude distribution of mainshocks. In particular, we examine the dependence of the b-value of declustered catalogs on the choice of declustering approach and algorithm-specific parameters. Using the catalog of earthquakes in California since 1980, we show that the b-value decreases by up to 30% due to declustering with respect to the undeclustered catalog. The extent of the reduction is highly dependent on the declustering method and parameters applied. We then reproduce a similar effect by declustering synthetic earthquake catalogs with known b-value, which have been generated using an epidemic-type aftershock sequence model. Our analysis suggests that the observed decrease in b-value must, at least partially, arise from the application of the declustering algorithm on the catalog, rather than from differences in the nature of mainshocks versus fore- or aftershocks. We conclude that declustering should be considered as a potential source of bias in seismicity and hazard studies.
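The b-value comparisons described above typically rely on the Aki maximum-likelihood estimator; a minimal sketch follows, including Utsu's correction for magnitude binning. The declustering flag is assumed to come from whichever algorithm is being tested.

```python
# Minimal sketch of the Aki (1965) maximum-likelihood b-value estimator with
# Utsu's binning correction, used to compare full and declustered catalogs.
import numpy as np

def b_value_ml(mags, mc, dm=0.1):
    """b = log10(e) / (mean(M) - (Mc - dm/2)) for magnitudes M >= Mc."""
    m = np.asarray(mags)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# Example comparison (is_mainshock from any declustering routine):
# b_full = b_value_ml(mags, mc)
# b_main = b_value_ml(mags[is_mainshock], mc)
# print(f"b drops by {100 * (1 - b_main / b_full):.0f}% after declustering")
```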


Author(s):  
Laura Gulia ◽  
Paolo Gasperini

Abstract Artifacts often affect seismic catalogs. Among them, the presence of man-made contamination such as quarry blasts and explosions is a well-known problem. Using a contaminated dataset reduces the statistical significance of results and can lead to erroneous conclusions; removing such nonnatural events should therefore be a data analyst's first step. Blasts misclassified as natural earthquakes may artificially alter the seismicity rates and, in turn, the b-value of the Gutenberg and Richter relationship, an essential ingredient of several forecasting models. Modern datasets record useful information beyond the parameters needed to locate earthquakes in space and time, allowing users to discriminate between natural and nonnatural events. However, selecting these attributes through webservice queries is neither easy nor transparent, and part of this supplementary but fundamental information can be lost during downloading. As a consequence, most statistical seismologists ignore the presence of explosions and quarry blasts in seismic catalogs, assuming that such events either were not located by the seismic networks or have already been removed. Here we show the example of the Italian Seismological Instrumental and Parametric Database: what happens when artificial seismicity is mixed with natural seismicity?
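One hedged way to guard against such contamination, when the web service does populate the QuakeML event_type field, is to filter on it after download. The sketch below uses ObsPy's FDSN client with the INGV node; whether the field actually survives the query is precisely the issue raised above, so the filter is illustrative rather than a guaranteed fix, and the event-type list is an assumption.

```python
# Hedged sketch: download a catalog via FDSN and drop nonnatural event types.
# The filter only works if the service populates event_type in the QuakeML.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

NON_NATURAL = {"quarry blast", "explosion", "mining explosion",
               "nuclear explosion", "controlled explosion"}

client = Client("INGV")  # Italian FDSN node (assumed reachable)
catalog = client.get_events(starttime=UTCDateTime("2020-01-01"),
                            endtime=UTCDateTime("2020-02-01"),
                            minmagnitude=2.0)

# Events with no declared type are kept and treated as natural here.
natural = [ev for ev in catalog
           if (ev.event_type or "earthquake") not in NON_NATURAL]
print(f"kept {len(natural)} of {len(catalog)} events")
```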


2020 ◽  
Vol 501 (2) ◽  
pp. 1663-1676
Author(s):  
R Barnett ◽  
S J Warren ◽  
N J G Cross ◽  
D J Mortlock ◽  
X Fan ◽  
...  

ABSTRACT We present the results of a new, deeper, and complete search for high-redshift 6.5 < z < 9.3 quasars over 977 deg2 of the VISTA Kilo-Degree Infrared Galaxy (VIKING) survey. This exploits a new list-driven data set providing photometry in all bands Z, Y, J, H, Ks, for all sources detected by VIKING in J. We use the Bayesian model comparison (BMC) selection method of Mortlock et al., producing a ranked list of just 21 candidates. The sources ranked 1, 2, 3, and 5 are the four known z > 6.5 quasars in this field. Additional observations of the other 17 candidates, primarily DESI Legacy Survey photometry and ESO FORS2 spectroscopy, confirm that none is a quasar. This is the first complete sample from the VIKING survey, and we provide the computed selection function. We include a detailed comparison of the BMC method against two other selection methods: colour cuts and minimum-χ2 SED fitting. We find that: (i) BMC produces eight times fewer false positives than colour cuts, while also reaching 0.3 mag deeper, (ii) the minimum-χ2 SED-fitting method is extremely efficient but reaches 0.7 mag less deep than the BMC method, and selects only one of the four known quasars. We show that BMC candidates, rejected because their photometric SEDs have high χ2 values, include bright examples of galaxies with very strong [O iii] λλ4959,5007 emission in the Y band, identified in fainter surveys by Matsuoka et al. This is a potential contaminant population in Euclid searches for faint z > 7 quasars, not previously accounted for, and that requires better characterization.
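As a rough illustration of the minimum-χ2 SED-fitting baseline that BMC is compared against, the sketch below fits each template's free amplitude analytically and classifies a source by the best-fitting template class. The bands, templates, and decision rule are placeholders, not the selection pipeline used in the paper.

```python
# Generic minimum-chi-square SED classification against template fluxes in the
# Z, Y, J, H, Ks bands; templates and thresholds are placeholders.
import numpy as np

def best_fit_chi2(flux, flux_err, template):
    """Chi-square of a template after analytically fitting its amplitude."""
    w = 1.0 / np.asarray(flux_err) ** 2
    amp = np.sum(w * flux * template) / np.sum(w * template ** 2)
    return np.sum(w * (flux - amp * template) ** 2)

def classify(flux, flux_err, quasar_templates, contaminant_templates):
    """Label a source 'quasar' if a quasar template fits best."""
    chi2_q = min(best_fit_chi2(flux, flux_err, t) for t in quasar_templates)
    chi2_c = min(best_fit_chi2(flux, flux_err, t) for t in contaminant_templates)
    return "quasar" if chi2_q < chi2_c else "contaminant"
```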


1983 ◽  
Vol 73 (1) ◽  
pp. 219-236
Author(s):  
M. Wyss ◽  
R. E. Habermann ◽  
Ch. Heiniger

abstract The rate of occurrence of earthquakes shallower than 100 km during the years 1963 to 1980 was studied as a function of time and space along the New Hebrides island arc. Systematic examination of the seismicity rates for different magnitude bands showed that events with mb < 4.8 were not reported consistently over time. The seismicity rate as defined by mb ≧ 4.8 events was examined quantitatively and systematically in the source volumes of three recent main shocks and within two seismic gaps. A clear case of seismic quiescence could be shown to have existed before one of the large main shocks if a major asperity was excluded from the volume studied. The 1980 Ms = 8 rupture in the northern New Hebrides was preceded by a pattern of 9 to 12 yr of quiescence followed by 5 yr of normal rate. This pattern does not conform to the hypothesis that quiescence lasts up to the mainshock which it precedes. The 1980 rupture also did not fully conform to the gap hypothesis: half of its aftershock area covered part of a great rupture which occurred in 1966. A major asperity seemed to play a critical role in the 1966 and 1980 great ruptures: it stopped the 1966 rupture, and both parts of the 1980 double rupture initiated from it. In addition, this major asperity made itself known by a seismicity rate and stress drops higher than in the surrounding areas. Stress drops of 272 earthquakes were estimated by the MS/mb method. Time dependence of stress drops could not be studied because of changes in the world data set of Ms and mb values. Areas of high stress drops did not correlate in general with areas of high seismicity rate. Instead, outstandingly high average stress drops were observed in two plate boundary segments with average seismicity rate where ocean floor ridges are being subducted. The seismic gaps of the central and northern New Hebrides each contain seismically quiet regions. In the central New Hebrides, the 50 to 100 km of the plate boundary near 18.5°S showed an extremely low seismicity rate during the entire observation period. Low seismicity could be a permanent property of this location. In the northern New Hebrides gap, seismic quiescence started in mid-1972, except in a central volume where high stress drops are observed. This volume is interpreted as an asperity, and the quiescence may be interpreted as part of the preparation process to a future large main shock near 13.5°S.
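Rate comparisons of the kind described above are often summarized with a z-statistic for the difference in mean binned rates before and after a candidate quiescence onset. The function below is a generic sketch under assumed yearly binning and multi-year windows, not necessarily the exact procedure of this study.

```python
# Generic z-test for a change in mean seismicity rate at time t_split,
# using yearly bins of event counts; binning choices are assumptions.
import numpy as np

def rate_change_z(times_yr, t_split, bin_yr=1.0):
    """z > 0 indicates a lower mean rate after t_split (possible quiescence)."""
    times_yr = np.asarray(times_yr)

    def binned_counts(t):
        edges = np.arange(t.min(), t.max() + bin_yr, bin_yr)
        counts, _ = np.histogram(t, bins=edges)
        return counts

    r1 = binned_counts(times_yr[times_yr < t_split])
    r2 = binned_counts(times_yr[times_yr >= t_split])
    return (r1.mean() - r2.mean()) / np.sqrt(r1.var() / len(r1) +
                                             r2.var() / len(r2))
```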


1992 ◽  
Vol 82 (3) ◽  
pp. 1306-1349 ◽  
Author(s):  
Javier F. Pacheco ◽  
Lynn R. Sykes

Abstract We compile a worldwide catalog of shallow (depth < 70 km) and large (Ms ≥ 7) earthquakes recorded between 1900 and 1989. The catalog is shown to be complete and uniform at the 20-sec surface-wave magnitude Ms ≥ 7.0. We base our catalog on those of Abe (1981, 1984) and Abe and Noguchi (1983a, b) for events with Ms ≥ 7.0. Those catalogs, however, are not homogeneous in seismicity rates for the entire 90-year period. We assume that global rates of seismicity are constant on a time scale of decades and most inhomogeneities arise from changes in instrumentation and/or reporting. We correct the magnitudes to produce a homogeneous catalog. The catalog is accompanied by a reference list for all the events with seismic moment determined at periods longer than 20 sec. Using these seismic moments for great and giant earthquakes and a moment-magnitude relationship for smaller events, we produce a seismic moment catalog for large earthquakes from 1900 to 1989. The catalog is used to study the distribution of moment released worldwide. Although we assumed a constant rate of seismicity on a global basis, the rate of moment release has not been constant for the 90-year period because the latter is dominated by the few largest earthquakes. We find that the seismic moment released at subduction zones during this century constitutes 90% of all the moment released by large, shallow earthquakes on a global basis. The seismic moment released in the largest event that occurred during this century, the 1960 southern Chile earthquake, represents about 30 to 45% of the total moment released from 1900 through 1989. A frequency-size distribution of earthquakes with seismic moment yields an average slope (b value) that changes from 1.04 for magnitudes between 7.0 and 7.5 to b = 1.51 for magnitudes between 7.6 and 8.0. This change in the b value is attributed to different scaling relationships between bounded (large) and unbounded (small) earthquakes. Thus, the earthquake process does have a characteristic length scale that is set by the downdip width over which rupture in earthquakes can occur. That width is typically greater for thrust events at subduction zones than for earthquakes along transform faults and other tectonic environments.
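The moment bookkeeping described above can be illustrated with the standard Hanks-Kanamori moment-magnitude relation, treating Ms as a proxy for Mw where no direct moment determination exists. The magnitudes in the example are illustrative values, not entries from the catalog.

```python
# Sketch: convert magnitudes to seismic moments with log10(M0) = 1.5*Mw + 9.1
# (M0 in N*m) and compute one event's share of the sample's total moment.
import numpy as np

def moment_from_magnitude(mw):
    """Seismic moment in N*m from moment magnitude (Ms used as a proxy)."""
    return 10 ** (1.5 * np.asarray(mw) + 9.1)

mags = np.array([9.5, 9.2, 9.1, 8.6, 8.5])   # illustrative large events only
m0 = moment_from_magnitude(mags)
print(f"largest event share: {m0[0] / m0.sum():.0%} of this sample's moment")
```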


Author(s):  
Sarah Azar ◽  
Mayssa Dabaghi

ABSTRACT The use of numerical simulations in probabilistic seismic hazard analysis (PSHA) has achieved a promising level of reliability in recent years. One example is the CyberShake project, which incorporates physics-based 3D ground-motion simulations within seismic hazard calculations. Nonetheless, considerable computational time and resources are required due to the significant processing requirements imposed by source-based models on one hand, and the large number of seismic sources and possible rupture variations on the other. This article proposes to use a less computationally demanding simulation-based PSHA framework for CyberShake. The framework can accurately represent the seismic hazard at a site, by only considering a subset of all the possible earthquake scenarios, based on a Monte-Carlo simulation procedure that generates earthquake catalogs having a specified duration. In this case, ground motions need only be simulated for the scenarios selected in the earthquake catalog, and hazard calculations are limited to this subset of scenarios. To validate the method and evaluate its accuracy in the CyberShake platform, the proposed framework is applied to three sites in southern California, and hazard calculations are performed for earthquake catalogs with different lengths. The resulting hazard curves are then benchmarked against those obtained by considering the entire set of earthquake scenarios and simulations, as done in CyberShake. Both approaches yield similar estimates of the hazard curves for elastic pseudospectral accelerations and inelastic demands, with errors that depend on the length of the Monte-Carlo catalog. With 200,000 yr catalogs, the errors are consistently smaller than 5% at the 2% probability of exceedance in 50 yr hazard level, using only ∼3% of the entire set of simulations. Both approaches also produce similar disaggregation patterns. The results demonstrate the potential of the proposed approach in a simulation-based PSHA platform like CyberShake and as a ground-motion selection tool for seismic demand analyses.
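Conceptually, catalog-based PSHA of this kind draws a long synthetic catalog from the scenario rates, attaches one simulated ground motion per occurrence, and reads annual exceedance rates off the resulting sample. The toy example below uses placeholder rates and a lognormal ground-motion stand-in; it is a sketch of the Monte-Carlo idea, not a CyberShake component.

```python
# Conceptual Monte-Carlo PSHA sketch: Poisson occurrences over a long synthetic
# duration, one ground-motion sample per occurrence, empirical exceedance rates.
import numpy as np

rng = np.random.default_rng(42)
duration_yr = 200_000

# Hypothetical scenario list: (annual rate, median SA in g, lognormal sigma)
scenarios = [(1e-2, 0.05, 0.6), (1e-3, 0.20, 0.6), (1e-4, 0.60, 0.6)]

sa_samples = []
for rate, median, sigma in scenarios:
    n = rng.poisson(rate * duration_yr)           # occurrences in the catalog
    sa_samples.append(median * np.exp(sigma * rng.standard_normal(n)))
sa_samples = np.concatenate(sa_samples)

levels = np.logspace(-2, 0.5, 25)                 # SA levels in g
for x in levels:
    annual_rate = (sa_samples >= x).sum() / duration_yr
    print(f"SA >= {x:6.3f} g : {annual_rate:.2e} /yr")
```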


Author(s):  
Nicolas D. DeSalvio ◽  
Maxwell L. Rudolph

Abstract Earthquake precursors have long been sought as a means to predict earthquakes with very limited success. Recently, it has been suggested that a decrease in the Gutenberg–Richter b-value after a magnitude 6 earthquake is predictive of an imminent mainshock of larger magnitude, and a three-level traffic-light system has been proposed. However, this method is dependent on parameters that must be chosen by an expert. We systematically explore the parameter space to find an optimal set of parameters based on the Matthews correlation coefficient. For each parameter combination, we analyze the temporal changes in the frequency–magnitude distribution for every M ≥ 6 earthquake sequence in the U.S. Geological Survey Comprehensive Earthquake Catalog for western North America. We then consider smaller events, those with a foreshock magnitude as small as 5, and repeat the analysis to assess its performance for events that modify stresses over smaller spatial regions. We analyze 25 M ≥ 6 events and 88 M 5–6 events. We find that no perfect parameter combination exists. Although the method generates correct retrodictions for some M 5 events, the predictions are dependent on the retrospectively selected parameters. About 80%–95% of magnitude 5–6 events have too little data to generate a result. Predictions are time dependent and have large uncertainties. Without a precise definition of precursory b-value changes, this and similar prediction schemes are incompatible with the IASPEI criteria for evaluating earthquake precursors. If limitations on measuring precursory changes in seismicity and relating them to the state of stress in the crust can be overcome, real-time forecasting of mainshocks could reduce the loss of lives.
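The Matthews correlation coefficient used to rank parameter combinations is straightforward to compute from confusion-matrix counts; a small sketch with placeholder counts follows.

```python
# Matthews correlation coefficient from confusion-matrix counts; the example
# counts are placeholders, not results from the study above.
import numpy as np

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient; returns 0 when undefined."""
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom > 0 else 0.0

# Example: 10 correct alarms, 60 correct non-alarms, 5 false alarms, 3 misses
print(f"MCC = {mcc(10, 60, 5, 3):.2f}")
```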

