Rupture Characteristics of the 25 November 2016 Aketao Earthquake (Mw 6.6) in Eastern Pamir Revealed by GPS and Teleseismic Data

Author(s):  
Jie Li ◽  
Gang Liu ◽  
Xuejun Qiao ◽  
Wei Xiong ◽  
Xiaoqiang Wang ◽
...  
Keyword(s):

2012 ◽  
Vol 170 (3) ◽  
pp. 391-407 ◽  
Author(s):  
Boi-Yee Liao ◽  
Tian-Wei Sheu ◽  
Yeong-Tien Yeh ◽  
Huey-Chu Huang ◽  
Lien-Shiang Yang

1982 ◽  
Vol 72 (1) ◽  
pp. 93-111
Author(s):  
R. E. Habermann

Abstract Changes in the rate of occurrence of smaller events have been recognized in the rupture zones of upcoming large earthquakes in several postearthquake and one preearthquake study. A data set in which a constant portion of the events in any magnitude band is consistently reported through time is crucial for the recognition of seismicity rate changes which are real (related to some process change in the earth). Such a data set is termed a homogeneous data set. The consistency of reporting of earthquakes in the NOAA Hypocenter Data File (HDF) since 1963 is evaluated by examining the cumulative number of events reported as a function of time for the entire world in eight magnitude bands. It is assumed that the rate of occurrence of events in the entire world is roughly constant on the time scale examined here because of the great size of the worldwide earthquake production system. The rate of reporting of events with magnitudes above mb = 4.5 has been constant or increasing since 1963. Significant decreases in the number of events reported per month in the magnitude bands below mb = 4.4 occurred during 1968 and 1976. These decreases are interpreted as indications of decreases in detection of events for two reasons. First, they occur at times of constant rates of occurrence and reporting of larger events. Second, the decrease during the late 1960s has also been recognized in the teleseismic data reported by the International Seismological Centre (ISC). This suggests that the decrease in the number of small events reported was related to facets of the earthquake reporting system which the ISC and NOAA share. The most obvious candidate is the detection system. During 1968, detection decreased in the United States, Central and South America, and portions of the South Pacific. This decrease is probably due to the closure of the VELA arrays (BMO, TFO, CPO, UBO, and WMO). During 1976, detection decreased in most of the seismically active regions of the western hemisphere, as well as in the region between Kamchatka and Guam. The cause of this detection decrease is unclear. These detection decreases seriously affect the amount of homogeneous background period available for the study of teleseismic seismicity rate changes. If events below the minimum magnitude of homogeneity are eliminated from the teleseismic data sets, the resulting small numbers of events render many regions unsuitable for study. Many authors have reported seismicity rate decreases as possible precursors to great earthquakes. Few of these authors have considered detection decreases as possible explanations for their results. This analysis indicates that such considerations cannot be avoided in studies of teleseismic data.
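The homogeneity check described in this abstract amounts to accumulating event counts per magnitude band through time and looking for kinks in the small-magnitude curves that are absent in the large-magnitude ones. The sketch below is only an illustration of that bookkeeping: the synthetic catalog, array names, and band edges are assumptions, not the NOAA HDF format.

import numpy as np

def cumulative_counts(times, mags, band_edges):
    """Cumulative number of reported events versus time, per magnitude band."""
    order = np.argsort(times)
    times, mags = np.asarray(times)[order], np.asarray(mags)[order]
    curves = {}
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        in_band = (mags >= lo) & (mags < hi)
        curves[(lo, hi)] = (times, np.cumsum(in_band))
    return curves

# Synthetic example: a kink in a small-magnitude band, with flat slopes in the
# larger bands, would point to a detection change rather than a real rate change.
rng = np.random.default_rng(0)
t = rng.uniform(1963, 1980, 5000)
m = rng.uniform(4.0, 6.5, 5000)
curves = cumulative_counts(t, m, band_edges=[4.0, 4.5, 5.0, 5.5, 6.5])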


1994 ◽  
Vol 37 (1) ◽  
Author(s):  
F. Maggio ◽  
F. Malfanti ◽  
M. Bertero ◽  
M. Cattaneo ◽  
C. Eva

In this paper we apply various inversion methods to a set of teleseismic data collected by a network operating along the Ligurian Belt in the transition region between the Alps and the Apennines. In particular, we consider the regularization method, the truncated singular value decomposition, the Landweber method (with the related Simultaneous Iterative Reconstruction Technique), and the conjugate gradient method. All the methods provide rather similar velocity models, which are well approximated by that provided by back-projection (used with an appropriate normalization constant). A drawback of these models seems to be the large discrepancy (of the order of 40%) between the observed time residuals and those computed from the model itself. However, for each station of the network, the azimuth dependence of the computed time residuals reproduces the observed one rather well, so it is believable that the most significant information contained in the data has been exploited. The computed velocity models indicate strong heterogeneities in the first 200 km below the Apennines.
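Of the methods named in this abstract, the Landweber iteration is the simplest to sketch: it repeatedly back-projects the data residual, with early stopping acting as the regularization. The toy kernel matrix and data below are placeholders under assumed dimensions, not the Ligurian network geometry or parameterization.

import numpy as np

def landweber(G, t_obs, n_iter=200, tau=None):
    """Landweber iteration m_{k+1} = m_k + tau * G^T (t_obs - G m_k).
    Converges for 0 < tau < 2 / ||G||_2^2; stopping early regularizes the solution."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(G, 2) ** 2
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        m = m + tau * G.T @ (t_obs - G @ m)
    return m

# Synthetic test: recover a known slowness-perturbation model from noisy residuals.
rng = np.random.default_rng(1)
G = rng.normal(size=(60, 30))          # placeholder ray-path kernel matrix
m_true = rng.normal(size=30)
t_obs = G @ m_true + 0.05 * rng.normal(size=60)
m_est = landweber(G, t_obs)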


1989 ◽  
Vol 79 (2) ◽  
pp. 500-514 ◽  
Author(s):  
Allison L. Bent ◽  
Donald V. Helmberger ◽  
Richard J. Stead ◽  
Phyllis Ho-Liu

Abstract Long-period body-wave data recorded at teleseismic distances and strong-motion data at Pasadena for the Superstition Hills earthquakes of 24 November 1987 are modeled to obtain the source parameters. We will refer to the event that occurred at 0153 UT as EQ1 and the event at 1316 UT as EQ2. At all distances the first earthquake appears to be a simple left-lateral strike-slip event on a fault striking NE. It is a relatively deep event with a source depth of 10 km and a teleseismic moment of 2.7 × 10^25 dyne cm. The second and more complex event was modeled in two ways: by using EQ1 as the Green's function and by using a more traditional forward-modeling technique to create synthetic seismograms. The first method indicated that EQ2 was a double event with both subevents similar, but not identical, to EQ1 and separated by about 7.5 sec. From the synthetic seismogram study we obtained a strike of 305° for the first subevent and 320° for the second. Both have dips of 80° and rakes of 175°. The first subevent has a moment of 3.6 × 10^25 dyne cm, which is half that of the second. We obtain depths of at least 6 km. The teleseismic data indicate a preferred subevent separation of 30 km, with the second almost due south of the first, but the error bounds are substantial. This would suggest that the subevents occurred on conjugate faults. The strong-motion data at PAS, however, imply a much smaller source separation, with the sources probably produced by asperities.
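The abstract quotes the moments in dyne cm without the corresponding magnitudes. As a rough check not taken from the paper, the standard Hanks-Kanamori relation Mw = (2/3) log10(M0) - 10.7 (M0 in dyne cm) converts them to moment magnitude; the numbers below are only this back-of-the-envelope conversion.

import math

def moment_magnitude(m0_dyne_cm):
    # Hanks & Kanamori (1979): Mw = (2/3) * log10(M0 in dyne cm) - 10.7
    return (2.0 / 3.0) * math.log10(m0_dyne_cm) - 10.7

print(round(moment_magnitude(2.7e25), 2))  # EQ1 teleseismic moment -> Mw ~ 6.25
print(round(moment_magnitude(3.6e25), 2))  # first subevent of EQ2  -> Mw ~ 6.34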


1993 ◽  
Vol 20 (3) ◽  
pp. 241-244 ◽  
Author(s):  
Timothy J. Clarke ◽  
Paul G. Silver
