How the choice between nodal planes affects the estimate of tsunami hazard of an earthquake

Author(s):  
Anna Bolshakova ◽  
Mikhail Nosov ◽  
Sergey Kolesov ◽  
Gulnaz Nurislamova ◽  
Kirill Sementsov

<p>Usually a tsunami warning is issued when a submarine earthquake is registered whose magnitude exceeds a threshold; the value of the threshold varies depending on the region where the earthquake took place and on the earthquake depth. Being simple and fast, this approach provides rather low accuracy in estimating tsunami run-up heights. The forecast accuracy can be improved if, instead of the magnitude, we use the potential energy of the initial elevation in the tsunami source, calculated taking into account the earthquake focal mechanism. An automatic system for estimating tsunami hazard using the focal mechanism (Tsunami Observer, http://ocean.phys.msu.ru/projects/tsunami-observer/) was recently developed and implemented. A focal mechanism derived from analysis of recorded seismic waveforms has two possible solutions, i.e. two nodal planes. Shortly after an earthquake it is not possible to determine automatically which of the nodal planes is in fact the fault plane.</p><p>The main purpose of this study is to reveal the difference between estimates of the potential energy of the initial elevation obtained using the first (NP1) and the second (NP2) nodal planes. All earthquake data, including focal mechanism solutions, were extracted from the Bulletin of the International Seismological Centre. In total we processed nearly 6000 earthquakes of Mw>6 that occurred within the period 1976–2019. All calculations were performed by means of the Tsunami Observer system. It was established that the potential energies calculated using the NP1 (E<sub>NP1</sub>) and NP2 (E<sub>NP2</sub>) solutions can differ by more than an order of magnitude. However, for the overwhelming majority of seismic events (96.3%) the difference does not exceed a factor of two, and for a significant number of events (74.1%) it does not exceed a factor of 1.2. In our presentation, we shall provide a detailed description of the calculation methods we use and of the distribution of the ratio E<sub>NP1</sub>/E<sub>NP2</sub>. We shall also discuss the influence of focal depth and magnitude on the ratio E<sub>NP1</sub>/E<sub>NP2</sub>.</p><p>Acknowledgements</p><p>This work was supported by the Russian Foundation for Basic Research, projects 19-05-00351, 20-07-01098, 20-35-70038.</p>
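The potential energy of an initial sea-surface elevation field η is conventionally taken as E = ½ ρ g ∫ η² dA. A minimal numerical sketch of this quantity follows; the Gaussian uplift, grid size, and constants are illustrative assumptions, not the Tsunami Observer implementation:

```python
import numpy as np

RHO = 1025.0  # sea-water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def potential_energy(eta, dx, dy):
    """Potential energy (J) of an initial sea-surface elevation eta (m)
    on a regular grid with cell size dx x dy (m):
    E = 1/2 * rho * g * sum(eta^2) * dx * dy."""
    return 0.5 * RHO * G * np.sum(eta**2) * dx * dy

# Hypothetical Gaussian uplift: 1 m peak, 25 km width, 200x200 km grid
x = np.linspace(-100e3, 100e3, 401)
X, Y = np.meshgrid(x, x)
eta = np.exp(-(X**2 + Y**2) / (2 * 25e3**2))
dx = dy = x[1] - x[0]
E = potential_energy(eta, dx, dy)  # ~1e13 J for this source
```

For this source the analytic value is ½ ρ g π σ² A² ≈ 9.9·10¹² J, well above the 10⁹ J significance level discussed in the companion abstract below.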

2020 ◽  
Author(s):  
Viacheslav Karpov ◽  
Sergey Kolesov ◽  
Mikhail Nosov ◽  
Anna Bolshakova ◽  
Gulnaz Nurislamova ◽  
...  

<p>In this talk, a fully automatic system for estimating the tsunamigenicity of an earthquake is presented. The system is focused on simplicity and speed, using a minimum of input data. The input dataset includes (1) earthquake coordinates, (2) earthquake depth, (3) seismic moment, and (4) focal mechanism. We use datasets provided by USGS and GEOFON. Upon receiving the earthquake data, the system performs the following consecutive actions. First, the vector field of co-seismic bottom deformation is obtained from the earthquake fault parameters and empirical relationships. Then the initial elevation in the tsunami source is calculated and the Soloviev-Imamura tsunami intensity is estimated. The initial elevation is calculated taking into account the vertical and horizontal components of bottom deformation, the local bathymetry (GEBCO), and the smoothing effect of the water layer. An auxiliary study was conducted to obtain a relationship between the potential energy of the initial water elevation in the tsunami source and the intensity of the resulting tsunami. More than 200 historical events from the HTDB/WLD and NGDC/WDS databases were statistically processed. The obtained relationship is used to assess the intensity of the tsunami generated by the earthquake under consideration. Finally, if the event is considered significant (energy > 10<sup>9</sup> J), numerical simulation of tsunami wave propagation is performed. As a result of the numerical simulation, animations of wave propagation, distributions of maximum tsunami heights, and water-surface time histories at a number of given points are produced. Details of the implementation, physical constraints, and future development of the system, as well as two years of experience operating it, will be discussed during the talk.</p><p><strong>Acknowledgements</strong></p><p>This work was supported by the Russian Foundation for Basic Research, projects 20-07-01098, 20-35-70038, 19-05-00351.</p>
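The consecutive steps and the 10⁹ J simulation trigger described above can be sketched as a pipeline. The function name and the stand-in callables below are hypothetical placeholders, not the actual system code:

```python
ENERGY_THRESHOLD_J = 1e9  # simulation trigger quoted in the abstract

def process_event(event, deform, elevation, energy, simulate):
    """Schematic of the consecutive steps: the four callables stand in
    for the actual computations (co-seismic bottom deformation, initial
    elevation with bathymetry and water-layer smoothing, potential
    energy, and the tsunami-propagation model)."""
    u = deform(event)           # co-seismic bottom deformation field
    eta = elevation(u)          # initial elevation in the tsunami source
    e = energy(eta)             # potential energy of the elevation
    if e > ENERGY_THRESHOLD_J:  # significant event: run full simulation
        return simulate(eta)
    return None                 # below threshold: no propagation run
```

The point of the sketch is the control flow: the expensive propagation model runs only after the cheap energy estimate crosses the threshold.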


2014 ◽  
Vol 197 (1) ◽  
pp. 620-629 ◽  
Author(s):  
Yan Y. Kagan ◽  
David D. Jackson

2021 ◽  
Author(s):  
Alberto Armigliato ◽  
Martina Zanetti ◽  
Stefano Tinti ◽  
Filippo Zaniboni ◽  
Glauco Gallotti ◽  
...  

<p>It is well known that for earthquake-generated tsunamis impacting near-field coastlines, the focal mechanism, the position of the fault with respect to the coastline, and the on-fault slip distribution are key factors in determining the efficiency of the generation process and the distribution of the maximum run-up and inundation along the nearby coasts. The time needed to obtain this information from the analysis of seismic records is usually too long compared to the time required to issue a timely tsunami warning/alert for the nearest coastlines. In the context of tsunami early warning systems, a big challenge is hence to be able to define 1) the relative position of the hypocenter and of the fault and 2) the earthquake focal mechanism, based only on the preliminary earthquake localization and magnitude estimation, which are made available by seismic networks soon after the earthquake occurs.</p><p>In this study, the intrinsic unpredictability of the position of the hypocenter on the fault plane is studied through a probabilistic approach based on the analysis of two finite-fault model datasets (SRCMOD and USGS), limiting the analysis to moderate-to-large shallow earthquakes (Mw ≥ 6 and depth ≤ 50 km). After a homogenization procedure needed to define a common geometry for all samples in the two datasets, the hypocentral positions are fitted with different probability density functions (PDFs), separately in the along-dip and along-strike directions.</p><p>Regarding the focal mechanism determination, different approaches have been tested; the most successful is restricted to subduction-type earthquakes. It defines average values and uncertainties for the strike, dip and rake angles based on a combination of a zonation of the main tsunamigenic subduction areas worldwide and of subduction-zone geometries available from public databases.</p><p>The general workflow that we propose can be schematically outlined as follows.
Once an earthquake occurs and the magnitude and hypocentral solutions are made available by seismic networks, it is possible to assign the focal mechanism by selecting the characteristic values of strike, dip and rake of the zone into which the hypocenter falls. The fault length and width, as well as the slip distribution on the fault plane, are computed through regression laws against magnitude proposed by previous studies. The resulting rectangular fault plane can be discretized into a matrix of subfaults: the position of the center of each subfault can be considered as a “realization” of the hypocenter position, which can then be assigned a probability. In this way, we can define a number of earthquake fault scenarios, each of which is assigned a probability, and we can run tsunami numerical simulations for each scenario to quantify the classical observables, such as water-elevation time series at selected offshore/coastal tide gauges, flow depth, run-up, and inundation distance. The final results can be provided as probabilistic distributions of the different observables.</p><p>The general approach, which is still at a proof-of-concept stage, is applied to the 16 September 2015 Illapel (Chile) tsunamigenic earthquake (Mw = 8.2). The comparison with the available tsunami observations is discussed, with special attention devoted to the early-warning perspective.</p>
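The subfault-probability step of the workflow can be sketched as follows; the separable along-strike and along-dip PDFs used here are illustrative placeholders, not the distributions actually fitted to the SRCMOD and USGS datasets:

```python
import numpy as np

def subfault_probabilities(n_strike, n_dip, pdf_strike, pdf_dip):
    """Probability of each subfault centre being the hypocentre
    'realization', assuming independent along-strike and along-dip
    PDFs evaluated at normalized centre positions in (0, 1)."""
    xs = (np.arange(n_strike) + 0.5) / n_strike
    ys = (np.arange(n_dip) + 0.5) / n_dip
    w = np.outer(pdf_dip(ys), pdf_strike(xs))  # (n_dip, n_strike) weights
    return w / w.sum()  # normalize so scenario probabilities sum to 1

# Example: uniform along strike, triangular along dip (peaked mid-depth)
P = subfault_probabilities(10, 5,
                           pdf_strike=lambda x: np.ones_like(x),
                           pdf_dip=lambda y: 1.0 - np.abs(2.0 * y - 1.0))
```

Each entry of `P` is the weight attached to the corresponding fault scenario, and the weights carry through unchanged to the probabilistic distributions of the simulated observables.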


2021 ◽  
Author(s):  
Qiuyun Liu ◽  
Lipeng Liao ◽  
Chanyuk Lam David ◽  
Yuhan Lin ◽  
Man Tang

The interior of the Earth has a smaller linear velocity than the Earth's surface, but a larger inertia due to gravity. This generates longer periods of deceleration or acceleration in the interior, producing strain with vertical and horizontal components. A faster linear velocity results in larger strain. The focal depth is a compromise between these two factors. Slender potential energy produces focal depths of hundreds of kilometers.


2021 ◽  
Author(s):  
Guido Maria Adinolfi ◽  
Raffaella De Matteis ◽  
Rita De Nardis ◽  
Aldo Zollo

Abstract. Improving the knowledge of seismogenic faults requires the integration of geological, seismological, and geophysical information. Among several analyses, the determination of earthquake focal mechanisms plays an essential role in providing information about the geometry of individual faults and the stress regime acting in a region. Fault-plane solutions can be retrieved by several techniques operating in specific magnitude ranges, both in the time and frequency domains and using different data. For earthquakes of low magnitude, the limited number of available data and their uncertainties can compromise the stability of fault-plane solutions. In this work, we propose a methodology to evaluate how well a seismic network used to monitor natural and/or induced micro-seismicity estimates focal mechanisms as a function of the magnitude, location, and kinematics of the seismic source, and consequently how reliable those solutions are for defining seismotectonic models. To study the consistency of focal mechanism solutions, we use a Bayesian approach that jointly inverts the P/S long-period spectral-level ratios and the P polarities to infer the fault-plane solutions. We applied this methodology, using synthetic data, to the local seismic network operated in the Campania-Lucania Apennines (Southern Italy) to monitor the complex normal-fault system activated during the Ms 6.9, 1980 earthquake. We demonstrate that the proposed method can serve a double purpose: it is a valid tool to design or to test the performance of local seismic networks, and more generally it can be used to assign an absolute uncertainty to focal mechanism solutions, which is fundamental for seismotectonic studies.
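The idea of jointly inverting two independent data types can be sketched schematically as a Bayesian combination over a discrete set of candidate fault-plane solutions; the uniform prior and the toy log-likelihoods below are assumptions standing in for the actual spectral-ratio and polarity misfit functions:

```python
import numpy as np

def joint_posterior(solutions, loglike_ratios, loglike_polarities):
    """Schematic Bayesian joint inversion: posterior over candidate
    fault-plane solutions proportional to the product of the two
    data likelihoods, assuming a uniform prior."""
    logp = np.array([loglike_ratios(s) + loglike_polarities(s)
                     for s in solutions])
    p = np.exp(logp - logp.max())  # subtract max for numerical stability
    return p / p.sum()             # normalized posterior probabilities

# Toy example: three candidate (strike, dip, rake) solutions
cands = [(0, 30, -90), (180, 60, -90), (90, 45, 0)]
post = joint_posterior(cands,
                       loglike_ratios=lambda s: -0.01 * s[1],
                       loglike_polarities=lambda s: -0.01 * abs(s[2]))
```

The spread of `post` over the candidate grid is what supplies the absolute uncertainty on the focal mechanism that the abstract refers to.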


1999 ◽  
Vol 32 (5) ◽  
pp. 864-870 ◽  
Author(s):  
H. Putz ◽  
J. C. Schön ◽  
M. Jansen

A new direct-space method for ab initio solution of crystal structures from powder diffraction diagrams is presented. The approach consists of a combined global optimization (`Pareto optimization') of the difference between the calculated and the measured diffraction pattern and of the potential energy of the system. This concept has been tested successfully on a large variety of ionic and intermetallic compounds.
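One common way to realize such a two-objective (Pareto) optimization is to retain the candidate structures that are not dominated in both objectives simultaneously; a minimal sketch of that dominance filter, with hypothetical (diffraction residual, potential energy) pairs rather than real refinement output:

```python
def pareto_front(points):
    """Indices of candidates not dominated in both objectives, i.e. no
    other candidate has a smaller-or-equal diffraction residual AND
    smaller-or-equal potential energy, with at least one strictly smaller."""
    front = []
    for i, (r_i, e_i) in enumerate(points):
        dominated = any(r_j <= r_i and e_j <= e_i and (r_j < r_i or e_j < e_i)
                        for j, (r_j, e_j) in enumerate(points) if j != i)
        if not dominated:
            front.append(i)
    return front

# Hypothetical (residual, energy) pairs for five candidate structures
candidates = [(0.9, 5.0), (0.5, 6.0), (0.4, 8.0), (0.7, 4.0), (0.6, 7.0)]
front = pareto_front(candidates)  # candidates 1, 2 and 3 survive
```

Candidates 0 and 4 are discarded because another structure is at least as good on both criteria, which is the essence of trading off pattern match against energetic plausibility.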


2019 ◽  
Vol 1 (1) ◽  
pp. 117-152
Author(s):  
Iulia Vescan ◽  
Bianca Vitalaru

Basic research has shown that some differences between educational aspects of Spanish and American culture, such as perceptions about roles, attitudes, communication, teaching methods and even expectations, can manifest as actual academic difficulties for American Language Assistants in Spanish bilingual schools. This paper focuses on describing the elements that, when analyzed, outline the role of Instituto Franklin-UAH as an intercultural and academic mediator between two cultures and education systems (Spain and the US), and the context that justifies the different measures taken to attend to the particular needs or circumstances of the agents involved (students, teachers and academic advisors). Two perspectives are included: a) a historical one, related to Instituto Franklin-UAH’s background and context in bilingual teaching; b) an analytical one, focusing, on the one hand, on the perceptions of the agents involved and, on the other hand, on the actions that have turned Instituto Franklin-UAH into an actual mediator between its students and the schools where they act as Language Assistants. Ultimately, the paper underlines the difference in how the same aspects are perceived by the groups involved and the need for measures to improve the communication process between American LAs and Spanish lead teachers in bilingual schools.


2019 ◽  
Vol 45 ◽  
Author(s):  
Mark H.R. Bussin

Problemification: Some academics join the profession from the private sector late in their careers. They are sometimes referred to fondly as practical academics or ‘pracademics’ because they still work in the private sector and also act as visiting professors in academia. I sit on eight boards and chair nearly half of them, and serve on audit committees and HR remuneration committees. I am an example of a ‘pracademic’, and my induction into academia was one sentence: publish or perish. In the private sector, induction can take up to a week. I had one minute.

Implications: The implication is that I had to find out what a peer-reviewed journal was and trip into the fact that some peer-reviewed journals are scams and others A-rated. Telling the difference in my initial years took its toll. I continually had to ask colleagues: is this journal real? Eventually I realised the DHET list was a good starting point and I started submitting articles. I got more rejections than acceptances at first, with very little explanation. So I learnt nothing and did not know what to do to improve. I had to waste another thousand reviewer hours to learn what the requirements were.

Research writing is guided by a personal philosophy, and it is about what types of research issues one is inclined towards. For instance, some people are naturally inclined towards basic research and others towards applied research. Others are more oriented towards theory-building and theory-testing for the purpose of creating knowledge for the sake of knowledge. Still others are pragmatic or realist types and believe real-world problems do not come neatly packaged and are somewhat untidy in presentation, calling for discretion or judgement on what to prioritise for research and how to carry out the research. Some are scientist-practitioners (evidence-informed researchers) and others are practitioner-scientists (practice-led science). Perhaps this kind of orientation to research is what early-career researchers need initially; then they can worry about reproducibility of research findings down the line, after grounding themselves in the research space they perceive themselves to belong to and where they feel invested.

Purpose: The purpose of this opinion article is to share my journey and sow some doubt in reply to the opinion piece circulated by Efendic and Van Zyl. Whilst I agree with everything that is said in their article, I believe that there is additional information that needs to be considered. Context is important. Not all academics who submit articles have been in academia for many years. We need to do more to support budding authors.

Recommendations: We need to be much more helpful to budding authors than just publishing a page or two called author submission guidelines. These are mostly cosmetic style guides. If we want higher-quality submissions, and plenty of them, then I believe we need to educate our budding authors about the requirements. Perhaps we need a detailed guide, similar in content and depth to the article of Efendic and Van Zyl (2019). We could consider a podcast setting out the technical guidelines and statistical requirements. Running courses on article publishing by the reviewers is important because that is from the horse’s mouth. Trust me; it is not just a case of sticking to the style guide. You need to really understand some of the undercurrents of article publishing, for example, quoting as many authors from that particular journal’s list of articles as possible.

