Numerical Simulation of the Earthquake Generation Process

Author(s):  
Mircea Radulian ◽  
Cezar-Ioan Trifu ◽  
Florin Octavian Cărbunar
1991 ◽  
Vol 136 (4) ◽  
pp. 499-514

Micromachines ◽  
2019 ◽  
Vol 10 (2) ◽  
pp. 94 ◽  
Author(s):  
Yanqiao Pan ◽  
Liangcai Zeng

The droplet generation process directly affects process regulation and output performance of electrohydrodynamic jet (E-jet) printing when fabricating micro- to nanoscale functional structures. This paper proposes a numerical simulation model for the whole droplet generation process of E-jet printing based on the Taylor-Melcher leaky-dielectric model. The complete process is simulated over a full cycle, including Taylor cone formation, jet onset, jet breakup, and jet retraction. The feasibility and accuracy of the model are validated by E-jet printing experiments with a 30G stainless-steel nozzle with an inner diameter of ~160 μm. Comparing numerical simulations with experimental results, the period, velocity magnitude, the four steps of an injection cycle, and the jet shape in each step are in good agreement. Further simulations reveal three design constraints with respect to applied voltage, flow rate, and nozzle diameter, respectively. The established cone-jet simulation model paves the way for investigating the influence of process parameters and for guiding the design of high-performance E-jet printheads.
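As a rough complement to the design constraints on applied voltage and nozzle diameter mentioned above, the sketch below (not from the paper; all parameter values are hypothetical) estimates a simple electric Bond number, the ratio of electric stress to capillary pressure at the nozzle tip, a common back-of-envelope check for whether the applied field can deform the meniscus toward a cone-jet.

```python
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def electric_bond_number(voltage, gap, nozzle_diameter, surface_tension):
    """Crude ratio of electric stress (~ eps0 * E^2) to capillary pressure (~ gamma / r)
    at the nozzle tip, using the plate approximation E = V / gap."""
    E = voltage / gap
    r = nozzle_diameter / 2.0
    return EPS0 * E ** 2 * r / surface_tension

# hypothetical operating point for a ~160 um ID nozzle (values are illustrative only)
print(electric_bond_number(voltage=2000.0, gap=200e-6,
                           nozzle_diameter=160e-6, surface_tension=0.03))  # ~2.4
```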


2019 ◽  
Vol 220 (3) ◽  
pp. 1845-1856 ◽  
Author(s):  
W Marzocchi ◽  
I Spassiani ◽  
A Stallone ◽  
M Taroni

SUMMARY An unbiased estimation of the b-value and of its variability is essential to verify empirically its physical contribution to the earthquake generation process, and its capability to improve earthquake forecasting and seismic hazard assessment. Notwithstanding the vast literature on b-value estimation, we note that some potential sources of bias that may lead to non-physical b-value variations are too often ignored in seismological common practice. The aim of this paper is to discuss some of them in detail, when the b-value is estimated through the popular Aki’s formula. Specifically, we describe how a finite data set can lead to biased evaluations of the b-value and its uncertainty, which are caused by the correlation between the b-value and the maximum magnitude of the data set; we quantify analytically the bias on the b-value caused by the magnitude binning; we show how departures from the exponential distribution of the magnitude, caused by a truncated Gutenberg–Richter law and by catalogue incompleteness, can affect the b-value estimation and the search for statistically significant variations; we derive explicitly the statistical distribution of the magnitude affected by random symmetrical error, showing that the magnitude error does not induce any further significant bias, at least for reasonable amplitudes of the measurement error. Finally, we provide some recipes to minimize the impact of these potential sources of bias.
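As a minimal illustration of the Aki formula discussed above, the sketch below (our code, not the authors') computes the maximum-likelihood b-value with Utsu's half-bin correction for binned magnitudes and the first-order uncertainty b/sqrt(n); it deliberately ignores the finite-catalogue, truncation and incompleteness effects the paper analyzes.

```python
import numpy as np

def aki_b_value(mags, m_c, delta_m=0.1):
    """Maximum-likelihood b-value (Aki, 1965) with Utsu's half-bin correction
    for magnitudes binned at width delta_m, plus the first-order uncertainty b/sqrt(n)."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_c]                                   # use only the complete part of the catalogue
    n = len(m)
    # the -delta_m/2 term accounts for magnitudes being reported as bin centres
    b = np.log10(np.e) / (m.mean() - (m_c - delta_m / 2.0))
    return b, b / np.sqrt(n)

# synthetic Gutenberg-Richter-like test: continuous magnitudes above m_c - delta_m/2,
# rounded to delta_m bins so that the lowest bin is centred on m_c
rng = np.random.default_rng(1)
m_c, b_true, delta_m = 2.0, 1.0, 0.1
beta = b_true * np.log(10.0)
m_cont = (m_c - delta_m / 2.0) + rng.exponential(1.0 / beta, size=20000)
mags = np.round(m_cont / delta_m) * delta_m
print(aki_b_value(mags, m_c, delta_m))                # should recover b close to 1
```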


2008 ◽  
Vol 38 ◽  
pp. 23-28
Author(s):  
D. Shanker ◽  
Harihar Paudyal ◽  
H. N. Singh ◽  
V. P. Singh

Annually, about 100,000 earthquakes of magnitude greater than three strike the Earth. As a result, more than 15 million human lives have been lost and damage worth hundreds of billions of dollars has been inflicted in recorded history by these disasters. More than a dozen earthquakes of Ms > 7.5 have occurred in the Himalayan region since 1897. The seismic activity in the Himalayan frontal arc results from the continued collision between the Indian and Eurasian plates. Most of the earthquake generation models currently used for seismic hazard evaluation are based on the assumption of a Poisson or other memoryless distribution, i.e. low-magnitude earthquakes follow the Poisson distribution (random events) and large-magnitude events follow the exponential distribution (non-random). The study suggests that the region has low probabilities and long mean return periods for higher-magnitude earthquakes. The earthquake generation process in the Nepal Central Himalayas supports the time- and magnitude-predictable model, which is valid for 5.5 < Ms < 8.6. The analysis suggests that the probability of occurrence of moderate earthquakes (Ms = 5.8-6.5) in the next decade in the Central Himalayan region is very high (0.59-0.91), whereas it is very low (<0.40) for southern Tibet.
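For context on probability figures of this kind, the toy function below (ours, not from the paper, which uses a time- and magnitude-predictable model rather than a pure Poisson process) shows how an occurrence probability over a fixed window follows from a mean return period under the memoryless Poisson assumption mentioned in the abstract.

```python
import math

def poisson_occurrence_prob(return_period_years, window_years):
    """Probability of at least one event in the window under a Poisson model
    with the given mean return period (rate = 1 / return period)."""
    return 1.0 - math.exp(-window_years / return_period_years)

# e.g. a 15-year mean return period evaluated over a 10-year window
print(poisson_occurrence_prob(15.0, 10.0))   # ~0.49
```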


2020 ◽  
Author(s):  
Emile Okal ◽  
Costas Synolakis

The classic approach to tsunami simulation by earthquake sources consists of computing the vertical static deformation of the ocean bottom due to the dislocation, using formalisms such as Mansinha and Smylie's [1971] or Okada's [1985], and of transposing that field directly to the ocean's surface as the initial condition of the numerical simulation. We look into the limitations of this approach by developing a very simple general formula for the energy of a tsunami, expressed as the work performed against the hydrostatic pressure at the bottom of the ocean, in excess of the simple increase in potential energy of the displaced water, due to the irreversibility of the process. We successfully test our results against the exact analytical solution obtained by Hammack [1972] for the amplitude of a tsunami generated by the exponentially decaying uplift of a circular plug on the ocean bottom. We define a "tsunami efficiency" by scaling the resulting energy to its classical value derived, e.g., by Kajiura [1963]. As expected, we find that sources with shorter rise times are more efficient tsunami generators; however, an important new result is that the efficiency is asymptotically limited, for fast sources, to a value depending on the radius of the source scaled to the depth of the water column; as this ratio increases, it becomes more difficult to flush the water out of the source area during the generation process, resulting in greater tsunami efficiency. Fortunately, this result should not significantly affect the generation of tsunamis by mega-earthquakes.
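For reference, the "classical value" against which the authors scale their tsunami efficiency is the potential energy of the initial sea-surface displacement. The sketch below (our illustration with a hypothetical Gaussian uplift; it is not the authors' formulation of the excess work against hydrostatic pressure) evaluates that classical estimate on a grid.

```python
import numpy as np

RHO = 1025.0   # sea-water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def classical_tsunami_energy(eta, dx, dy):
    """Classical potential-energy estimate (e.g. Kajiura, 1963) for an initial
    sea-surface displacement eta(x, y): E = (1/2) * rho * g * integral(eta^2) dA."""
    return 0.5 * RHO * G * np.sum(eta**2) * dx * dy

# hypothetical axisymmetric uplift: 1 m peak, ~50 km Gaussian radius
x = np.linspace(-200e3, 200e3, 401)
y = np.linspace(-200e3, 200e3, 401)
X, Y = np.meshgrid(x, y)
eta0 = 1.0 * np.exp(-(X**2 + Y**2) / (2.0 * (50e3) ** 2))
print(f"{classical_tsunami_energy(eta0, x[1] - x[0], y[1] - y[0]):.2e} J")
```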


2013 ◽  
Vol 13 (1) ◽  
pp. 125-139 ◽  
Author(s):  
Y. F. Contoyiannis ◽  
S. M. Potirakis ◽  
K. Eftaxias

Abstract. The new field of complex systems supports the view that a number of systems arising from disciplines as diverse as physics, biology, engineering, and economics may have certain quantitative features that are intriguingly similar. The Earth is a living planet where many complex systems run perfectly without stopping at all. Earthquake generation is a fundamental sign that the Earth is a living planet. Recent analyses have shown that human-brain-type disease appears during the earthquake generation process. Herein, we show that human-heart-type disease appears during the preparation stage of the earthquake process. The investigation is mainly attempted by means of critical phenomena, which have been proposed as the likely paradigm to explain the origins of both heart electric fluctuations and fracture-induced electromagnetic fluctuations. We show that a time window of the damage evolution within the heterogeneous Earth's crust and the healthy heart's electrical action present the characteristic features of the critical point of a thermal second-order phase transition. A dramatic breakdown of critical characteristics appears in the tail of the fracture process of the heterogeneous system and in the injured heart's electrical action. Analyses by means of the Hurst exponent and wavelet decomposition further support the hypothesis that a dynamical analogy exists between the geological and biological systems under study.
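Since the final claim above rests on Hurst-exponent analysis, here is a minimal rescaled-range (R/S) estimator as a generic illustration of that tool (our sketch, not the authors' processing pipeline; the window sizes and the white-noise check are arbitrary).

```python
import numpy as np

def hurst_rs(x, min_window=8):
    """Estimate the Hurst exponent of a 1-D series by rescaled-range (R/S) analysis:
    R/S grows roughly as window^H, so H is the slope of the log-log fit."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    windows, rs_means = [], []
    size = n
    while size >= min_window:
        rs = []
        for i in range(n // size):                    # non-overlapping windows
            seg = x[i * size:(i + 1) * size]
            z = np.cumsum(seg - seg.mean())           # cumulative deviation profile
            s = seg.std()
            if s > 0:
                rs.append((z.max() - z.min()) / s)    # rescaled range of this window
        if rs:
            windows.append(size)
            rs_means.append(np.mean(rs))
        size //= 2
    slope, _ = np.polyfit(np.log(windows), np.log(rs_means), 1)
    return slope

# sanity check: uncorrelated white noise should give H close to 0.5
rng = np.random.default_rng(0)
print(hurst_rs(rng.standard_normal(4096)))
```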


Author(s):  
K. Truyaert ◽  
S. Delrue ◽  
V. Aleshin ◽  
K. Van Den Abeele

2019 ◽  
Vol 16 (4) ◽  
pp. 707-716
Author(s):  
Li Zhang ◽  
Huawei Yu ◽  
Wenbao Jia ◽  
Yinhui Wang ◽  
Jingkai Qu

Abstract The D-D source has promising prospects for application in the field of controllable-source density logging. However, the spatial distribution of a D-D 'induced γ-ray source' varies significantly, and such a source is more susceptible to the influence of various formation factors, resulting in relatively low accuracy of density measurement. This study investigated the spatial distribution of the induced γ-ray source. First, the principle of D-D controllable-source density measurement was analyzed. Second, the generation process of a D-D induced γ-ray source and its spatial distribution under different formation conditions were simulated and studied. Finally, the associated influencing factors were summarized. The results indicate that the spatial position and intensity of the induced γ-ray source were susceptible to the influence of various formation factors, such as hydrogen index (HI), lithology and salinity. Among these factors, HI had the greatest impact on the spatial position of the induced γ-ray source; in particular, when formation HI varied within the range 0-0.1, the spatial positions of capture γ-rays changed significantly. In addition, as HI increased, the intensity of γ-rays also increased gradually. Formation lithology and salinity had a greater impact on the intensity of induced γ-rays than on their spatial distribution. For formations of different lithologies, as the types and contents of the main elements differed, the intensity of capture γ-rays also varied. This research provides basic data for correcting the effects on a D-D induced γ-ray source and for establishing a method of density measurement using a D-D controllable source.
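As background to the density-measurement goal described above, the sketch below illustrates the generic single-detector gamma-attenuation relation used in density logging (our illustration with hypothetical calibration constants; the paper's D-D-specific corrections for HI, lithology and salinity are not modelled).

```python
import math

def bulk_density_from_counts(count_rate, count_rate_ref, rho_ref, k):
    """Single-detector gamma-attenuation relation used in density logging:
    N = N_ref * exp(-k * (rho - rho_ref))  =>  rho = rho_ref + ln(N_ref / N) / k.
    k lumps detector spacing and the mass attenuation coefficient and would
    normally be fixed by calibration in blocks of known density."""
    return rho_ref + math.log(count_rate_ref / count_rate) / k

# hypothetical calibration: k = 2.0 (g/cm^3)^-1, reference 2.20 g/cm^3 at 1000 cps
print(bulk_density_from_counts(700.0, 1000.0, rho_ref=2.20, k=2.0))   # ~2.38 g/cm^3
```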

