Fault network reconstruction using agglomerative clustering: applications to southern Californian seismicity

2020
Vol 20 (12)
pp. 3611-3625
Author(s):  
Yavor Kamer ◽  
Guy Ouillon ◽  
Didier Sornette

Abstract. In this paper we introduce a method for fault network reconstruction based on the 3D spatial distribution of seismicity. One of the major drawbacks of statistical earthquake models is their inability to account for the highly anisotropic distribution of seismicity. Fault reconstruction has been proposed as a pattern recognition method aiming to extract this structural information from seismicity catalogs. Current methods start from simple large-scale models and gradually increase the complexity, trying to explain the small-scale features. In contrast, the method introduced here uses a bottom-up approach that relies on initial sampling of the small-scale features and reduction of this complexity by optimal local merging of substructures. First, we describe the implementation of the method through illustrative synthetic examples. We then apply the method to the probabilistic absolute hypocenter catalog KaKiOS-16, which contains three decades of southern Californian seismicity. To reduce data size and increase computation efficiency, the new approach builds upon the previously introduced catalog condensation method, which exploits the heterogeneity of the hypocenter uncertainties. We validate the obtained fault network through a pseudo-prospective spatial forecast test and discuss possible improvements for future studies. The performance of the presented methodology attests to the importance of the non-linear techniques used to quantify location uncertainty information, which is a crucial input for the large-scale application of the method. We envision that the results of this study can be used to construct improved models for the spatiotemporal evolution of seismicity.
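The sketch below illustrates the bottom-up idea on synthetic data: start from individual hypocenters, merge nearby substructures agglomeratively, and summarize each resulting cluster with a fitted plane. It uses off-the-shelf Ward linkage from SciPy and is only a schematic stand-in for the paper's merging criterion; the synthetic fault clouds and the two-cluster cutoff are illustrative assumptions, not the authors' algorithm.

```python
# Minimal sketch of bottom-up clustering on synthetic 3D hypocenters.
# NOT the paper's merging criterion; it only illustrates the bottom-up
# idea with off-the-shelf Ward linkage.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Two synthetic "faults": noisy planar clouds of hypocenters (x, y, z in km).
t = rng.uniform(0, 10, size=(200, 2))
fault_a = np.c_[t[:, 0], t[:, 1], 0.3 * rng.standard_normal(200)]       # horizontal plane
fault_b = np.c_[t[:, 0], 0.3 * rng.standard_normal(200) + 12, t[:, 1]]  # vertical plane, offset
events = np.vstack([fault_a, fault_b])

# Bottom-up merging: start from individual events, merge closest substructures.
tree = linkage(events, method="ward")
labels = fcluster(tree, t=2, criterion="maxclust")

for k in np.unique(labels):
    pts = events[labels == k]
    # A plane fit via SVD gives each cluster's orientation (normal vector),
    # i.e. a crude "fault segment" of the reconstructed network.
    centered = pts - pts.mean(axis=0)
    normal = np.linalg.svd(centered, full_matrices=False)[2][-1]
    print(f"cluster {k}: {len(pts)} events, normal ~ {np.round(normal, 2)}")
```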


2021
Vol 25 (5)
pp. 1153-1168
Author(s):  
Bentian Li ◽  
Dechang Pi ◽  
Yunxia Lin ◽  
Izhar Ahmed Khan

Biological network classification is an eminently challenging task in the domain of data mining, since the networks contain complex structural information. Conventional biochemical experimental methods and existing intelligent algorithms still suffer from limitations such as immense experimental cost and inferior accuracy. To solve these problems, in this paper we propose a novel framework for biological graph classification named Biogc, specifically developed to predict the labels of both small-scale and large-scale biological network data flexibly and efficiently. Our framework first applies a simplified graph kernel method to capture the structural information of each graph. The resulting informative features are then used to train classifiers oriented to biological network data of different scales, yielding the prediction model. Extensive experiments on five benchmark biological network datasets show that the proposed Biogc model outperforms state-of-the-art methods, with an accuracy of 98.90% on a larger dataset and 99.32% on a smaller dataset.
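As a hedged illustration of the pipeline the abstract outlines (a structural feature vector per graph, then a trained classifier), the sketch below substitutes a deliberately simple degree-histogram feature for the Biogc kernel, which is not reproduced here; the random toy graphs and all parameter choices are assumptions.

```python
# Sketch of the generic graph-classification pipeline: structural features
# per graph, then a classifier. Degree histograms stand in for the (much
# richer) graph kernel described in the paper.
import numpy as np
import networkx as nx
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def degree_histogram_features(g: nx.Graph, max_degree: int = 10) -> np.ndarray:
    """Fixed-length, normalized degree histogram as a cheap structural summary."""
    hist = np.zeros(max_degree + 1)
    for _, d in g.degree():
        hist[min(d, max_degree)] += 1
    return hist / g.number_of_nodes()

# Toy dataset: label 0 = sparse random graphs, label 1 = denser random graphs.
graphs = [nx.gnp_random_graph(30, 0.05, seed=i) for i in range(50)] + \
         [nx.gnp_random_graph(30, 0.20, seed=i) for i in range(50)]
labels = np.array([0] * 50 + [1] * 50)

X = np.array([degree_histogram_features(g) for g in graphs])
print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())
```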


2021
Author(s):  
Camilla Fiorini ◽  
Long Li ◽  
Étienne Mémin

In this work we consider the surface quasi-geostrophic (SQG) system under location uncertainty (LU) and propose a Milstein-type scheme for these equations. The LU framework, first introduced in [1], is based on the decomposition of the Lagrangian velocity into two components: a large-scale smooth component and a small-scale stochastic one. This decomposition leads to a stochastic transport operator, and one can, in turn, derive the stochastic LU version of every classical fluid-dynamics system.

SQG is a simple 2D oceanic model consisting of one partial differential equation, which models the stochastic transport of the buoyancy, and an operator which relates the velocity to the buoyancy.

For this kind of equation, the Euler-Maruyama scheme converges with weak order 1 and strong order 0.5. Our aim is to develop higher-order schemes in time: the first step is to consider the Milstein scheme, which improves the strong convergence to order 1. To do this, it is necessary to simulate or estimate the Lévy area [2].

We show with some numerical results how the Milstein scheme is able to capture some of the smaller structures of the dynamics even at a poor resolution.

References

[1] E. Mémin. Fluid flow dynamics under location uncertainty. Geophysical & Astrophysical Fluid Dynamics, 108.2 (2014): 119-146.

[2] J. Foster, T. Lyons and H. Oberhauser. An optimal polynomial approximation of Brownian motion. SIAM Journal on Numerical Analysis, 58.3 (2020): 1393-1421.
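For intuition about the strong-order gap, the sketch below compares the two schemes on a scalar test equation (geometric Brownian motion), not on the SQG system itself; for scalar noise the Milstein correction needs no Lévy area, which only becomes necessary for multidimensional noise as in the SQG-LU setting. All parameters are illustrative.

```python
# Euler-Maruyama vs Milstein on dX = mu*X dt + sigma*X dW, whose exact
# solution is known, so the strong (pathwise) error can be measured directly.
import numpy as np

mu, sigma, x0, T = 0.5, 1.0, 1.0, 1.0
n_paths, n_steps = 20000, 64
dt = T / n_steps
rng = np.random.default_rng(1)

x_em = np.full(n_paths, x0)
x_mil = np.full(n_paths, x0)
w = np.zeros(n_paths)  # accumulate the Brownian path for the exact solution
for _ in range(n_steps):
    dw = rng.standard_normal(n_paths) * np.sqrt(dt)
    w += dw
    x_em += mu * x_em * dt + sigma * x_em * dw
    # Milstein adds the 0.5*sigma^2*X*((dW)^2 - dt) correction term.
    x_mil += mu * x_mil * dt + sigma * x_mil * dw \
             + 0.5 * sigma**2 * x_mil * (dw**2 - dt)

x_exact = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * w)
print("strong error, Euler-Maruyama:", np.mean(np.abs(x_em - x_exact)))
print("strong error, Milstein:     ", np.mean(np.abs(x_mil - x_exact)))
```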


Geophysics
2013
Vol 78 (5)
pp. B287-B303
Author(s):  
Anne-Sophie Høyer ◽  
Ingelise Møller ◽  
Flemming Jørgensen

Glaciotectonic complexes have been recognized worldwide, traditionally described on the basis of outcrops or geomorphological observations. In the past few decades, geophysics has become an integral part of geologic mapping, which enables the mapping of buried glaciotectonic complexes. The geophysical methods provide different types of information and degrees of resolution and, thus, differ in their ability to resolve glaciotectonic structures. We evaluated these abilities through an integrated application of four commonly used geophysical methods: airborne transient electromagnetics, high-resolution reflection seismic, geoelectrical, and ground-penetrating radar (GPR). We covered an area of [Formula: see text] in a formerly glaciated region in the western part of Denmark. The geologic setting was highly heterogeneous, with glaciotectonic deformation ranging from large-scale structures observed in the seismic and airborne transient electromagnetic data to small-scale structures seen in the GPR and geoelectrical data. The seismic and GPR data provided detailed structural information, whereas the geoelectrical and electromagnetic data provided indirect lithological information through resistivities. A combination of methods with a wide span of resolution capabilities is therefore recommended to characterize and understand the geologic setting. The sequence in which the different methods are applied is primarily determined by the gross expenditure required for acquisition and processing, e.g., per kilometer of survey. Our experience suggests that airborne electromagnetic data should be acquired first to obtain a 3D impression of the geologic setting; based on these data, areas can be selected for further investigation with the more detailed but also more expensive and time-consuming methods.


Author(s):  
Pengcheng Ye ◽  
Guang Pan ◽  
Shan Gao

In engineering design optimization, optimal sampling design methods are usually used to solve large-scale and complex system problems. A fast optimal Latin hypercube sampling design (FOLHD) method is proposed to overcome the time consumption and poor efficiency of traditional optimal sampling design methods. The FOLHD algorithm is based on the insight that a near-optimal large-scale Latin hypercube design can be built from a small-scale initial sample generated using the Successive Local Enumeration method and the Translational Propagation algorithm. Moreover, a sample resizing strategy is presented to generate samples of arbitrary size with good space-filling and projective properties. Compared with several existing sampling design methods, FOLHD is much more efficient in terms of computation cost and sampling properties.
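The sketch below shows plain Latin hypercube sampling, the building block that FOLHD optimizes; the Successive Local Enumeration and Translational Propagation steps themselves are not reproduced, and the function shown is an illustrative assumption rather than the authors' algorithm.

```python
# Plain Latin hypercube sampling: n points in [0,1]^dim with exactly one
# point per axis-aligned stratum in every dimension.
import numpy as np

def latin_hypercube(n: int, dim: int, seed=None) -> np.ndarray:
    rng = np.random.default_rng(seed)
    # Jitter one sample inside each of the n equal bins, per dimension...
    u = (rng.random((n, dim)) + np.arange(n)[:, None]) / n
    # ...then shuffle the bin order independently in every dimension.
    for d in range(dim):
        rng.shuffle(u[:, d])
    return u

pts = latin_hypercube(8, 2, seed=0)
# Each row and column stratum of the 8x8 grid contains exactly one point,
# so the sorted bin indices per dimension are 0..7.
print(np.sort((pts * 8).astype(int), axis=0))
```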


2020
Vol 1 (1)
pp. 1-10
Author(s):  
Evi Rahmawati ◽  
Irnin Agustina Dwi Astuti ◽  
N Nurhayati

Integrated science (IPA) offers students a way to study themselves and their surrounding environment and to apply that knowledge in daily life. Integrated IPA learning gives students direct experience through the use and development of scientific skills and attitudes. Because integrated IPA matters, the learning must be packaged well: combining modules with a learning strategy can maximize the learning process in school. At SMP 209 Jakarta, the integrated IPA scores of 34 students showed 10 students passing and 24 not passing, because they scored below the KKM of 68. This research is a development study using the ADDIE model (Analysis, Design, Development, Implementation, and Evaluation). The integrated IPA module based on KPS (Science Process Skills), on the theme of the rainbow phenomenon, obtained an average media expert validation score of 84.38%, an average material expert score of 82.18%, and an average linguist score of 75.37%. The overall average of 80.55% across all aspects means the module is suitable for use and for testing with students. Teacher responses yielded a score of 88.69%, an excellent rating. Student responses averaged 85.19% on a small scale and 86.44% on a large scale, both with strongly-agree ratings. It can be concluded that the module received a good response from teachers and students.


2019
Vol 61 (1)
pp. 5-13
Author(s):  
Loretta Lees

Abstract. Gentrification is no longer, if it ever was, a small-scale process of urban transformation. Globally, gentrification is more often practised as large-scale urban redevelopment, and it is state-led or state-induced. The results are clear: the displacement and disenfranchisement of low-income groups in favour of wealthier in-movers. So why has gentrification come to dominate policy making worldwide, and what can be done about it?

