Joint seismic and geodetic transdimensional earthquake source optimization guided by multi-array teleseismic backprojection

Author(s):  
Andreas Steinberg ◽  
Henriette Sudhaus ◽  
Frank Krüger ◽  
Hannes Vasyura-Bathke ◽  
Simon Daout ◽  
...  

<p>Earthquakes have been observed to initiate and terminate near geometrical irregularities (bends, step-overs, branching of secondary faults). Rupture segmentation influences the seismic radiation and therefore the related seismic hazard. Good imaging of rupture segmentation helps to characterize fault geometries at depth for follow-up tectonic, stress-field or other analyses. From reported earthquake source models it appears that large earthquakes with magnitudes above 7 are most often segmented, while earthquakes with magnitudes below 6.5 most often are not. Whether this observation reflects nature, or is rather an artifact of our ability to observe and infer earthquake sources well, cannot be answered without an objective strategy to constrain rupture complexity. However, data-driven analyses of rupture segmentation are rarely conducted in source modeling, as segmentation is mostly pre-defined through a given and fixed number of sources. </p><p>Here, we propose a segmentation-sensitive source analysis by combining model-independent teleseismic back-projection and image segmentation methods with a kinematic fault inversion. Our approach is twofold. We first develop a time-domain multi-array back-projection of teleseismic data with robust estimation of uncertainties based on bootstrapping of the travel-time models and array weights (Palantiri software, https://braunfuss.github.io/Palantiri/). Back-projection has proven to be a powerful tool to infer rupture propagation from teleseismic data and to identify irregularities of the rupture process over time.</p><p>We then model the earthquake sources with the results obtained from the back-projection and additional information obtained from applying image segmentation methods to the InSAR displacement maps. 
For this second step, we combine different observations (teleseismic waveforms and InSAR-based surface displacement maps) to increase the resolution on the spatio-temporal evolution of fault slip. We develop a novel information-criterion-based transdimensional optimization scheme to model an adequate representation of the source complexity. We present our method on two case studies: the 2016 Mw 6.7 Muji earthquake (Pamir) and the 2008-2009 Qaidam (Tibet) earthquake sequence. We find that the 2008 Qaidam earthquake ruptured one segment, the 2016 Muji earthquake two segments, and the 2009 Qaidam earthquake two or three segments.</p><p>This work is based on the open-source, Python-based Pyrocko toolbox and is conducted within the project “Bridging Geodesy and Seismology” (www.bridges.uni-kiel.de) funded by the DFG through an Emmy Noether grant.</p>
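The delay-and-sum idea behind teleseismic back-projection can be sketched as follows (an illustrative toy, not the Palantiri implementation; the station names, delays, and grid points are hypothetical): each station's waveform is shifted by the predicted travel time from a candidate source point and stacked, and the grid point with the highest stack energy marks the inferred radiator.

```python
def backproject(waveforms, delays, grid, dt):
    """Toy delay-and-sum back-projection.

    waveforms: dict station -> list of samples
    delays:    dict (station, grid_point) -> travel time in seconds
    grid:      list of candidate source grid points
    dt:        sample interval in seconds
    """
    best_point, best_energy = None, -1.0
    n = min(len(trace) for trace in waveforms.values())
    for point in grid:
        stack_energy = 0.0
        for i in range(n):
            # Shift each trace back by its predicted delay and sum (the beam).
            beam = 0.0
            for station, trace in waveforms.items():
                idx = i + int(round(delays[(station, point)] / dt))
                if idx < len(trace):
                    beam += trace[idx]
            stack_energy += beam * beam
        if stack_energy > best_energy:
            best_point, best_energy = point, stack_energy
    return best_point, best_energy
```

A pulse arriving at each station with exactly the delays predicted for one grid point stacks coherently there and incoherently elsewhere, which is what makes the stack energy a usable locator.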

2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Changyong Li ◽  
Yongxian Fan ◽  
Xiaodong Cai

Abstract Background With the development of deep learning (DL), more and more DL-based methods have been proposed and achieve state-of-the-art performance in biomedical image segmentation. However, these methods are usually complex and require the support of powerful computing resources. In practice, it is unrealistic to rely on huge computing resources in clinical settings. Thus, it is important to develop accurate DL-based biomedical image segmentation methods that work under resource-constrained computing. Results A lightweight and multiscale network called PyConvU-Net is proposed to work with low-resource computing. Through strictly controlled experiments, PyConvU-Net performs well on three biomedical image segmentation tasks with the fewest parameters. Conclusions Our experimental results preliminarily demonstrate the potential of the proposed PyConvU-Net for biomedical image segmentation under resource-constrained computing.
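The pyramidal-convolution idea that gives PyConvU-Net its name can be illustrated as follows (a minimal sketch, not the authors' code: the paper's PyConv layers use learned, grouped kernels, whereas fixed box filters stand in here for brevity). The same input is convolved with kernels of several sizes in parallel and the resulting feature maps are concatenated, capturing several scales in one layer.

```python
def conv2d(image, kernel):
    """'Same'-padded 2D convolution on a list-of-lists grayscale image."""
    h, w = len(image), len(image[0])
    k = len(kernel)
    pad = k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(k):
                for dx in range(k):
                    yy, xx = y + dy - pad, x + dx - pad
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += image[yy][xx] * kernel[dy][dx]
            out[y][x] = acc
    return out

def pyconv(image, kernel_sizes=(3, 5)):
    """One pyramidal-convolution layer: filters at several scales applied
    in parallel, returned as a list of feature maps (the 'channel'
    concatenation)."""
    maps = []
    for k in kernel_sizes:
        box = [[1.0 / (k * k)] * k for _ in range(k)]  # stand-in kernel
        maps.append(conv2d(image, box))
    return maps
```

Because smaller kernels dominate the parameter count, the multiscale layer stays cheap, which matches the paper's lightweight design goal.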


2011 ◽  
Vol 07 (01) ◽  
pp. 155-171 ◽  
Author(s):  
H. D. CHENG ◽  
YANHUI GUO ◽  
YINGTAO ZHANG

Image segmentation is an important component of image processing, pattern recognition and computer vision. Many segmentation algorithms have been proposed. However, segmentation methods for both noisy and noise-free images have not been studied in much detail. Neutrosophic set (NS), a part of neutrosophy theory, studies the origin, nature, and scope of neutralities, as well as their interaction with different ideational spectra. However, the neutrosophic set needs to be specified and clarified from a technical point of view for a given application or field to demonstrate its usefulness. In this paper, we apply the neutrosophic set and define some operations on it. The neutrosophic set is integrated with an improved fuzzy c-means method and employed for image segmentation. A new operation, the α-mean operation, is proposed to reduce the indeterminacy of the set. An improved fuzzy c-means (IFCM) algorithm is proposed based on the neutrosophic set, and the computation of membership and the convergence criterion of clustering are redefined accordingly. We have conducted experiments on a variety of images. The experimental results demonstrate that the proposed approach can segment images accurately and effectively. In particular, it can segment clean images as well as images with different gray levels and complex objects, which is among the most difficult tasks in image segmentation.
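For reference, the baseline the paper improves on, standard fuzzy c-means, can be sketched on 1-D pixel intensities as follows (the neutrosophic α-mean indeterminacy handling itself is omitted; `m` is the usual fuzzifier exponent):

```python
def fuzzy_c_means(values, c=2, m=2.0, iters=50, eps=1e-9):
    """Plain fuzzy c-means on scalar values (e.g. gray levels).

    Returns the cluster centers and the membership matrix u, where
    u[i][j] is the degree to which values[i] belongs to cluster j.
    """
    lo, hi = min(values), max(values)
    # Initialize centers evenly across the intensity range.
    centers = [lo + (hi - lo) * j / (c - 1) for j in range(c)]
    for _ in range(iters):
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)).
        u = []
        for v in values:
            d = [abs(v - ck) + eps for ck in centers]
            u.append([
                1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0)) for k in range(c))
                for j in range(c)
            ])
        # Center update: means weighted by u^m.
        centers = []
        for j in range(c):
            num = sum((u[i][j] ** m) * values[i] for i in range(len(values)))
            den = sum(u[i][j] ** m for i in range(len(values)))
            centers.append(num / den)
    return centers, u
```

In the paper's IFCM both the membership computation and the convergence criterion are redefined on top of the neutrosophic representation; the soft memberships above are the part being generalized.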


2014 ◽  
Vol 945-949 ◽  
pp. 1899-1902
Author(s):  
Yuan Yuan Fan ◽  
Wei Jiang Li ◽  
Feng Wang

Image segmentation is one of the basic problems of image processing and the first essential, fundamental issue in solar image analysis and pattern recognition. This paper systematically summarizes the image segmentation techniques used in solar image retrieval and recent applications of image segmentation. The merits and demerits of each method are then discussed, so that methods can be combined to achieve better segmentation results in astronomy. Finally, according to the characteristics of solar images, the more appropriate image segmentation methods are summed up, and some remarks on the prospects and development of image segmentation are presented.
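As a concrete instance of the thresholding family such surveys cover, Otsu's method picks the gray level that maximizes the between-class variance of the image histogram (an illustrative sketch; whether it suits a particular solar image depends on that image's histogram):

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level maximizing between-class variance.

    pixels: flat iterable of integer gray levels in [0, levels).
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * hist[i] for i in range(levels))
    best_t, best_var = 0, -1.0
    w0 = 0       # pixels at or below candidate threshold
    sum0 = 0.0   # their intensity sum
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

For a bright feature on a dark disk this yields a clean two-class split; images with more complex histograms are where the combined methods the survey recommends come in.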


2014 ◽  
Vol 1 (2) ◽  
pp. 62-74 ◽  
Author(s):  
Payel Roy ◽  
Srijan Goswami ◽  
Sayan Chakraborty ◽  
Ahmad Taher Azar ◽  
Nilanjan Dey

In the domain of image processing, image segmentation has become one of the key applications involved in most image-based operations. Image segmentation refers to the process of partitioning an image into regions. Like several other image processing operations, image segmentation faces problems and issues when the segmentation process becomes much more complicated. Previous work has shown that rough-set theory can be a useful method to overcome such complications during image segmentation. Rough-set theory helps achieve very fast convergence and avoid the local-minima problem, thereby enhancing the performance of the EM algorithm and yielding better results. During rough-set-theoretic rule generation, each band is individualized by using fuzzy-correlation-based gray-level thresholding. The use of rough sets in image segmentation can therefore be very useful. In this paper, previous rough-set-based image segmentation methods are described in detail and categorized accordingly. Rough-set-based image segmentation provides a stable and better framework for image segmentation.
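The rough-set notions the surveyed methods build on can be sketched directly: given equivalence classes of pixels (e.g. grouped by quantized gray level), a target region is bracketed by its lower approximation (classes fully inside it) and upper approximation (classes touching it); their difference is the boundary region, where segmentation is uncertain. A minimal illustration:

```python
def rough_approximations(classes, target):
    """Lower and upper rough-set approximations of a pixel set.

    classes: list of sets partitioning the pixels (equivalence classes)
    target:  set of pixels forming the region of interest
    """
    lower, upper = set(), set()
    for block in classes:
        if block <= target:      # block entirely inside the region
            lower |= block
        if block & target:       # block overlapping the region at all
            upper |= block
    return lower, upper
```

Pixels in the boundary region (upper minus lower) are the ones a rough-set-based segmenter treats as ambiguous rather than forcing an immediate hard assignment.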


2014 ◽  
Vol 548-549 ◽  
pp. 1179-1184 ◽  
Author(s):  
Wen Ting Yu ◽  
Jing Ling Wang ◽  
Long Ye

Image segmentation with a low computational burden is an important goal for researchers. One of the popular image segmentation methods is the normalized cut algorithm, but it is unfavorable for high-resolution image segmentation because the amount of computation is very large [1]. To solve this problem, we propose a novel approach for high-resolution image segmentation based on normalized cuts. The proposed method preprocesses an image with the normalized cut algorithm to form segmented regions, and then applies k-means clustering to the regions. The experimental results verify that the proposed algorithm achieves improved performance compared with the normalized cut algorithm.
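The second stage of such a pipeline can be sketched as plain k-means on region features (illustrative only; the normalized-cut presegmentation is assumed to have already produced the region mean intensities used as input here):

```python
def kmeans_1d(values, k=2, iters=100):
    """k-means on scalar region features (e.g. region mean intensities).

    Returns the final cluster centers; regions whose features fall near
    the same center would be merged into one final segment.
    """
    # Initialize with evenly spaced samples of the sorted values.
    centers = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        # Assignment step: each value goes to its nearest center.
        groups = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda j: abs(v - centers[j]))
            groups[j].append(v)
        # Update step: recompute centers as group means.
        new_centers = [sum(g) / len(g) if g else centers[j]
                       for j, g in enumerate(groups)]
        if new_centers == centers:
            break
        centers = new_centers
    return centers
```

Running the expensive graph cut only once, on a coarse presegmentation, and letting cheap clustering do the final merge is what keeps the overall cost manageable for high-resolution images.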


2014 ◽  
Vol 2014 ◽  
pp. 1-13 ◽  
Author(s):  
Yann Gavet ◽  
Jean-Charles Pinoli

The cornea is the front of the eye. Its inner cell layer, called the endothelium, is important because it is closely related to the light transparency of the cornea. An in vivo observation of this layer is performed by using specular microscopy to evaluate the health of the cells: a high spatial density results in good transparency. Thus, the main criterion required by ophthalmologists is the cell density of the corneal endothelium, mainly obtained by an image segmentation process. Different methods can perform the image segmentation of these cells, and the three best-performing methods are studied here. The question for ophthalmologists is how to choose the best algorithm and obtain the best possible results with it. This paper presents a methodology to compare these algorithms. Moreover, by means of geometric dissimilarity criteria, the algorithms are tuned, and the best parameter values are proposed to expert ophthalmologists.
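A simple geometric dissimilarity criterion of the kind used to compare and tune segmentation algorithms against a reference is one minus the Jaccard index of two binary masks (an illustrative choice; the paper defines its own criteria):

```python
def jaccard_dissimilarity(mask_a, mask_b):
    """1 - Jaccard index between two binary masks.

    mask_a, mask_b: flat lists of 0/1 pixels of equal length.
    0.0 means identical segmentations; 1.0 means no overlap at all.
    """
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return 0.0 if union == 0 else 1.0 - inter / union
```

Sweeping an algorithm's parameters and keeping the setting that minimizes such a dissimilarity against expert-annotated references is the tuning procedure the methodology makes systematic.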

