A physical basis for earthquakes based on the elastic rebound model

1976 ◽  
Vol 66 (2) ◽  
pp. 433-451
Author(s):  
Mitiyasu Ohnaka

Abstract The elastic rebound model explaining seismological data quantitatively is derived by developing the original elastic rebound theory proposed by H. F. Reid. Assuming that the dislocation front propagates in one direction along the long axis of the fault plane, the shear strain drop Δɛ, the earthquake volume V, the stiffness of the fault, the inertial mass, and the radiated seismic energy Es are evaluated in terms of the fault-plane dimensions, the dislocation D, the propagation velocity of the dislocation v, and the shear-wave velocity. The elastic strain energy released is evaluated in terms of V, Δɛ, and the initial shear strain. It is shown that the order of magnitude of Es is virtually given by μWD², where μ is the rigidity and W is the fault width. The order of magnitude of the initial slip acceleration is estimated using the formula derived in a previous paper. The moment of the elastic rebound force is calculated. The maximum amplitude of the far-field wave motion is proportional to vM0/L, where M0 is the seismic moment and L is the fault length; this predicts that log (M0/L) is linearly related to the magnitude M if v is assumed to be nearly constant for actual earthquakes. A good linear relation, log (M0/L) = 1.2M + 11.7 (M0/L in dynes), is found empirically over a wide range of M (2 ≤ M ≤ 8.5). A linear relationship between the logarithm of the seismic moment per unit fault length and the magnitude thus appears to hold empirically.
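The empirical relation log (M0/L) = 1.2M + 11.7 quoted above can be expressed as a small conversion helper; a minimal sketch in Python (the function name and sample magnitude are illustrative, not from the paper):

```python
def moment_per_unit_length(magnitude):
    """Seismic moment per unit fault length, M0/L in dynes, from the
    empirical relation log10(M0/L) = 1.2*M + 11.7 (found for 2 <= M <= 8.5)."""
    return 10 ** (1.2 * magnitude + 11.7)

# Example: a magnitude-6 event
print(f"{moment_per_unit_length(6.0):.2e} dynes")  # ~7.94e+18 dynes
```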

Author(s):  
M. A. Sharifi ◽  
A. Bahroudi ◽  
S. Mafi

Abstract. In this study, we investigate the contribution of earthquakes to the deformation of the Zagros province and compare the seismicity and the density of earthquakes in different parts of the province. The mathematics used in this research is based on calculations of moment rates. The seismic moment rate is the average amount of seismic energy released from the tectonic province each year. The geodetic moment rate is the average amount of energy consumed each year to produce deformation in Zagros. The ratio of these two moment rates expresses the contribution of earthquakes to deformation in the Zagros province; according to our calculations, this ratio is estimated to be 13.06%. Along with the information obtained from the moment rates, we can also obtain the shear and dilative strain rates from the strain rate tensors, which show the deformation rate and the volumetric changes, respectively, in different parts of the Zagros. The data used in this study include the focal coordinates and magnitudes of the Zagros earthquakes and the velocity vectors of the Zagros geodynamic network, which are used to calculate the seismic and geodetic moment rates.
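The contribution figure above is simply the ratio of the two moment rates; a minimal sketch (the numeric rates below are placeholders chosen only to reproduce the quoted 13.06%, not the paper's measured values):

```python
def seismic_contribution(seismic_moment_rate, geodetic_moment_rate):
    """Fraction of the geodetic deformation rate accounted for by
    earthquakes: the ratio of seismic to geodetic moment rate."""
    return seismic_moment_rate / geodetic_moment_rate

# Placeholder moment rates (same units, e.g. dyne-cm/yr), illustrative only
ratio = seismic_contribution(1.306e24, 1.0e25)
print(f"{ratio:.2%}")  # 13.06%
```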


TAPPI Journal ◽  
2018 ◽  
Vol 17 (04) ◽  
pp. 231-240
Author(s):  
Douglas Coffin ◽  
Joel Panek

A transverse shear strain was utilized to characterize the severity of creasing for a wide range of tooling configurations. An analytic expression of transverse shear strain, which accounts for tooling geometry, correlated well with relative crease strength and springback as determined from 90° fold tests. The experimental results show a minimum strain (elastic limit) that needs to be exceeded for the relative crease strength to be reduced. The theory predicts a maximum achievable transverse shear strain, which is further limited if the tooling clearance is negative. The elastic limit and maximum strain thus describe the range of interest for effective creasing. In this range, cross direction (CD)-creased samples were more sensitive to creasing than machine direction (MD)-creased samples, but the differences were reduced as the shear strain approached the maximum. The presented development provides the foundation for a quantitative engineering approach to creasing and folding operations.


2020 ◽  
pp. 431-449
Author(s):  
Oleg V. Shekatunov ◽  
Konstantin G. Malykhin

The article is devoted to the specifics of studying the industrial labour force of Russia in the 1920s–1930s in Russian historiography. The various stages of study, from the 1920s–1930s up to recent years, are considered. The relevance of the study is due to several factors: contradictions in the assessments of Bolshevik modernization of the 1920s and 1930s; projected labour force shortages in modern Russia; and the existing labour force shortage in industry at the moment. This determines the relevance of studying a historical period characterized by the most acute personnel problems in the country. The novelty of the study stems from the fact that modern Russian historiography lacks a holistic, integrated view of the problems of the labour force potential formation of Russian industry in the 1920s and 1930s, and no research has been aimed at analyzing the historiography of these problems. The main stages of the study of the industrial labour force are highlighted, the scientific works associated with each stage are analyzed, and the problems and methodology of each stage are considered. A review of a wide range of scientific papers, both articles and theses, is presented.


2016 ◽  
Vol 12 (2) ◽  
pp. 4255-4259
Author(s):  
Michael A Persinger ◽  
David A Vares ◽  
Paula L Corradini

The human brain was assumed to be an elliptical electric dipole. Repeated quantitative electroencephalographic measurements over several weeks were completed for a single subject who sat facing either magnetic east or magnetic south. The predicted potential difference equivalence for the torque while facing perpendicular (west-to-east) to the northward component of the geomagnetic field (relative to facing south) was 4 μV; the actual measurement was 10 μV. The period of oscillation around the central equilibrium, based upon the summed units of neuronal processes within the cerebral cortices for the moment of inertia, was 1 to 2 ms, which spans the duration of the action potential of axons and the latencies for diffusion of neurotransmitters. The calculated additional energy available to each neuron within the human cerebrum during the torque condition was ~10⁻²⁰ J, which is the same order of magnitude as the energy associated with action potentials, resting membrane potentials, and ligand-receptor binding. It is also the basic energy at the level of the neuronal cell membrane that originates from gravitational forces upon a single cell and the local expression of the uniaxial magnetic anisotropy constant for ferritin, which occurs in the brain. These results indicate that the more complex electrophysiological functions that are strongly correlated with cognitive and related human properties can be described by basic physics and may respond to specific geomagnetic spatial orientation.


2019 ◽  
Vol 26 (23) ◽  
pp. 4403-4434 ◽  
Author(s):  
Susimaire Pedersoli Mantoani ◽  
Peterson de Andrade ◽  
Talita Perez Cantuaria Chierrito ◽  
Andreza Silva Figueredo ◽  
Ivone Carvalho

Neglected Diseases (NDs) affect millions of people, especially the poorest populations around the world. Several efforts toward an effective treatment have so far proved insufficient. In this context, triazole derivatives have shown great relevance in medicinal chemistry due to their wide range of biological activities. This review describes some of the most relevant and recent research on 1,2,3- and 1,2,4-triazole-based molecules targeting four expressive NDs: Chagas disease, malaria, tuberculosis and leishmaniasis.


2021 ◽  
pp. 002224372110329
Author(s):  
Nicolas Padilla ◽  
Eva Ascarza

The success of Customer Relationship Management (CRM) programs ultimately depends on the firm's ability to identify and leverage differences across customers — a very difficult task when firms attempt to manage new customers, for whom only the first purchase has been observed. For those customers, the lack of repeated observations poses a structural challenge to inferring unobserved differences across them. This is what we call the “cold start” problem of CRM, whereby companies have difficulties leveraging existing data when they attempt to make inferences about customers at the beginning of their relationship. We propose a solution to the cold start problem by developing a probabilistic machine learning modeling framework that leverages the information collected at the moment of acquisition. The main aspect of the model is that it flexibly captures latent dimensions that govern the behaviors observed at acquisition as well as future propensities to buy and to respond to marketing actions using deep exponential families. The model can be integrated with a variety of demand specifications and is flexible enough to capture a wide range of heterogeneity structures. We validate our approach in a retail context and empirically demonstrate the model's ability to identify high-value customers as well as those most sensitive to marketing actions, right after their first purchase.


Geophysics ◽  
1986 ◽  
Vol 51 (1) ◽  
pp. 12-19 ◽  
Author(s):  
James F. Mitchell ◽  
Richard J. Bolander

Subsurface structure can be mapped using refraction information from marine multichannel seismic data. The method uses velocities and thicknesses of shallow sedimentary rock layers computed from refraction first arrivals recorded along the streamer. A two‐step exploration scheme is described which can be set up on a personal computer and used routinely in any office. It is straightforward and requires only a basic understanding of refraction principles. Two case histories from offshore Peru exploration demonstrate the scheme. The basic scheme is: step (1) shallow sedimentary rock velocities are computed and mapped over an area. Step (2) structure is interpreted from the contoured velocity patterns. Structural highs, for instance, exhibit relatively high velocities, “retained” by buried, compacted, sedimentary rocks that are uplifted to the near‐surface. This method requires that subsurface structure be relatively shallow because the refracted waves probe to depths of one hundred to over one thousand meters, depending upon the seismic energy source, streamer length, and the subsurface velocity distribution. With this one requirement met, we used the refraction method over a wide range of sedimentary rock velocities, water depths, and seismic survey types. The method is particularly valuable because it works well in areas with poor seismic reflection data.


1989 ◽  
Vol 79 (4) ◽  
pp. 1177-1193
Author(s):  
Jacques Talandier ◽  
Emile A. Okal

Abstract We have developed a new magnitude scale, Mm, based on the measurement of mantle Rayleigh-wave energy in the 50 to 300 sec period range, and directly related to the seismic moment through Mm = log10M0 − 20. Measurements are taken on the first passage of Rayleigh waves, recorded on-scale on broadband instruments with adequate dynamical range. This allows estimation of the moment of an event within minutes of the arrival of the Rayleigh wave, and with a standard deviation of ±0.2 magnitude units. In turn, the knowledge of the seismic moment allows computation of an estimate of the high-seas amplitude of a range of expectable tsunami heights. The latter, combined with complementary data from T-wave duration and historical references, have been integrated into an automated procedure of tsunami warning by the Centre Polynésien de Prévention des Tsunamis (CPPT), in Papeete, Tahiti.
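The Mm scale above is a simple logarithmic transform of the seismic moment, which makes it easy to convert in both directions; a minimal sketch (function names are illustrative):

```python
import math

def mantle_magnitude(m0_dyn_cm):
    """Mantle magnitude Mm = log10(M0) - 20, with M0 in dyne-cm."""
    return math.log10(m0_dyn_cm) - 20.0

def moment_from_mm(mm):
    """Invert the scale: recover the seismic moment M0 (dyne-cm) from Mm."""
    return 10 ** (mm + 20.0)

# A great earthquake with M0 = 1e28 dyne-cm
print(mantle_magnitude(1e28))  # 8.0
```

The quoted standard deviation of ±0.2 magnitude units corresponds to roughly a factor of 10**0.2 ≈ 1.6 uncertainty in the recovered moment.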


1992 ◽  
Vol 82 (3) ◽  
pp. 1306-1349 ◽  
Author(s):  
Javier F. Pacheco ◽  
Lynn R. Sykes

Abstract We compile a worldwide catalog of shallow (depth < 70 km) and large (Ms ≥ 7) earthquakes recorded between 1900 and 1989. The catalog is shown to be complete and uniform at the 20-sec surface-wave magnitude Ms ≥ 7.0. We base our catalog on those of Abe (1981, 1984) and Abe and Noguchi (1983a, b) for events with Ms ≥ 7.0. Those catalogs, however, are not homogeneous in seismicity rates for the entire 90-year period. We assume that global rates of seismicity are constant on a time scale of decades and most inhomogeneities arise from changes in instrumentation and/or reporting. We correct the magnitudes to produce a homogeneous catalog. The catalog is accompanied by a reference list for all the events with seismic moment determined at periods longer than 20 sec. Using these seismic moments for great and giant earthquakes and a moment-magnitude relationship for smaller events, we produce a seismic moment catalog for large earthquakes from 1900 to 1989. The catalog is used to study the distribution of moment released worldwide. Although we assumed a constant rate of seismicity on a global basis, the rate of moment release has not been constant for the 90-year period because the latter is dominated by the few largest earthquakes. We find that the seismic moment released at subduction zones during this century constitutes 90% of all the moment released by large, shallow earthquakes on a global basis. The seismic moment released in the largest event that occurred during this century, the 1960 southern Chile earthquake, represents about 30 to 45% of the total moment released from 1900 through 1989. A frequency-size distribution of earthquakes with seismic moment yields an average slope (b value) that changes from 1.04 for magnitudes between 7.0 and 7.5 to b = 1.51 for magnitudes between 7.6 and 8.0. This change in the b value is attributed to different scaling relationships between bounded (large) and unbounded (small) earthquakes. 
Thus, the earthquake process does have a characteristic length scale that is set by the downdip width over which rupture in earthquakes can occur. That width is typically greater for thrust events at subduction zones than for earthquakes along transform faults and other tectonic environments.
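The change in b value described above implies a piecewise frequency-size (Gutenberg-Richter) relation with a kink near the magnitude where rupture width saturates; a hypothetical sketch (the a value and the exact join point are arbitrary, chosen only to make the two segments continuous):

```python
def log_cumulative_count(m, a=8.0):
    """Piecewise Gutenberg-Richter relation log10 N(M) = a - b*M, using the
    b values reported above: b = 1.04 for M <= 7.5 and b = 1.51 above,
    with the two segments joined continuously at M = 7.5.
    The a value is arbitrary and purely illustrative."""
    if m <= 7.5:
        return a - 1.04 * m
    # Offset chosen so both segments agree at M = 7.5
    return (a + (1.51 - 1.04) * 7.5) - 1.51 * m
```

The steeper slope above the kink reflects that "bounded" large ruptures, limited by the downdip fault width, become rarer faster than unbounded small ones.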


1980 ◽  
Vol 70 (5) ◽  
pp. 1759-1770
Author(s):  
Kris Kaufman ◽  
L. J. Burdick

Abstract The largest swarm of earthquakes of the last few decades accompanied the collapse of the Fernandina caldera in the Galapagos Islands in June of 1968. Many of the events were relatively large (the largest 21 had moments ranging from 6 × 10²⁴ to 12 × 10²⁴ dyne-cm). They produced teleseismic WWSSN records that were spectacularly consistent from event to event: the entire wave trains of the signals were nearly identical on any given component at any given station, indicating that the mode of strain release in the region was unusually stable and coherent. The body waveforms of the events have been modeled with synthetic seismograms. The best fault-plane solution was found to be: strike = 335°, dip = 47°, and rake = 247°. The depths of all the larger shocks were close to 14 km. Previous work had suggested that the seismic energy was radiated by the collapsing caldera block at a depth of about 1 km; the new results indicate that large-scale extensional faulting at depth was an important part of the multifaceted event during which the caldera collapsed.

