Fast location of seismicity: A migration-type approach with application to hydraulic-fracturing data

Geophysics ◽  
2007 ◽  
Vol 72 (1) ◽  
pp. S33-S40 ◽  
Author(s):  
S. Rentsch ◽  
S. Buske ◽  
S. Lüth ◽  
S. A. Shapiro

We propose a new approach for the location of seismic sources using a technique inspired by Gaussian-beam migration of three-component data. This approach requires only the preliminary picking of time intervals around a detected event and is much less sensitive to the picking precision than standard location procedures. Furthermore, this approach is characterized by a high degree of automation. The polarization information of three-component data is estimated and used to perform initial-value ray tracing. By weighting the energy of the signal using Gaussian beams around these rays, the stacking is restricted to physically relevant regions only. Event locations correspond to regions of maximum energy in the resulting image. We have successfully applied the method to synthetic data examples with 20%–30% white noise and to real data of a hydraulic-fracturing experiment, where events with comparatively small magnitudes [Formula: see text] were recorded.
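The abstract does not include an implementation, so the following is only a rough sketch of the general idea (not the authors' Gaussian-beam migration code): straight rays are shot from each receiver along the back-azimuth estimated from polarization, signal energy is smeared onto a grid with Gaussian weights around each ray, and the maximum of the stacked image marks the candidate source location. The receiver geometry, beam width, and noise levels below are placeholder assumptions.

```python
import numpy as np

def locate_by_beam_stack(receivers, directions, energies, grid, beam_width=50.0):
    """Toy Gaussian-beam-style stack: smear each receiver's signal energy onto
    grid nodes, weighted by their perpendicular distance to a straight ray shot
    from the receiver along the estimated polarization direction."""
    image = np.zeros(len(grid))
    for r, d, e in zip(receivers, directions, energies):
        d = d / np.linalg.norm(d)                 # unit ray direction
        v = grid - r                              # receiver-to-node vectors
        along = v @ d                             # distance along the ray
        perp = np.linalg.norm(v - np.outer(along, d), axis=1)
        w = np.exp(-0.5 * (perp / beam_width) ** 2)
        w[along < 0] = 0.0                        # only image in front of the receiver
        image += e * w
    return grid[np.argmax(image)], image

# Hypothetical surface array and source (units: metres)
rng = np.random.default_rng(0)
grid = np.stack(np.meshgrid(np.linspace(0, 1000, 41),
                            np.linspace(0, 1000, 41),
                            np.linspace(0, 1000, 41), indexing="ij"), -1).reshape(-1, 3)
true_src = np.array([400.0, 600.0, 500.0])
receivers = rng.uniform(0, 1000, (8, 3)); receivers[:, 2] = 0.0
directions = true_src - receivers + rng.normal(0, 30, (8, 3))   # noisy polarizations
loc, _ = locate_by_beam_stack(receivers, directions, np.ones(8), grid)
print("estimated source location:", loc)
```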

Geophysics ◽  
1994 ◽  
Vol 59 (2) ◽  
pp. 297-308 ◽  
Author(s):  
Pierre D. Thore ◽  
Eric de Bazelaire ◽  
Marisha P. Rays

We compare the three-term equation to the normal moveout (NMO) equation for several synthetic data sets to analyze whether or not it is worth making the additional computational effort in the stacking process within various exploration contexts. In our evaluation we have selected two criteria: (1) the quality of the stacked image, and (2) the reliability of the stacking parameters and their usefulness for further computation such as interval velocity estimation. We have simulated the stacking process very precisely, despite using only the traveltimes and not the full waveform data. The procedure searches for maximum coherency along the traveltime curve rather than performing a least-squares regression to it. This technique, which we call the Gaussian-weighted least square, avoids most of the shortcomings of the least-squares method. Our conclusions are as follows: (1) The three-term equation gives a better stack than the regular NMO; the increase in stacking energy can be more than 30 percent. (2) The calculation of interval velocities using a Dix formula rewritten for the three-parameter equation is much more stable and accurate than the standard Dix formula. (3) The search for the three parameters is feasible in an efficient way, since the shifted hyperbola requires only static corrections rather than dynamic ones. (4) Noise alters the parameters of the maximum-energy stack in a way that depends on the noise type. The estimates obtained remain accurate enough for interval velocity estimation (where only two parameters are needed), but the use of the three parameters in direct inversion may be hazardous because of noise corruption. These conclusions should, however, be verified on real data examples.
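As a rough illustration of the coherency-based parameter search the authors simulate, the sketch below scans stacking velocity with a plain semblance measure over the standard two-term NMO curve. It is not the paper's Gaussian-weighted least-square fit to the three-term (shifted hyperbola) equation; the gather, sampling interval, and velocity range are assumptions.

```python
import numpy as np

def nmo_time(t0, v, offsets):
    """Two-term NMO traveltime: t(x) = sqrt(t0^2 + x^2 / v^2)."""
    return np.sqrt(t0**2 + (offsets / v) ** 2)

def semblance(data, dt, offsets, t0, velocities):
    """Coherency of a CMP gather along candidate NMO curves (simplified)."""
    scores = []
    for v in velocities:
        idx = np.round(nmo_time(t0, v, offsets) / dt).astype(int)
        samples = np.array([tr[i] for tr, i in zip(data, idx)])
        num = samples.sum() ** 2
        den = len(samples) * (samples ** 2).sum() + 1e-12
        scores.append(num / den)
    return np.array(scores)

# Hypothetical CMP gather: one reflection at t0 = 0.8 s, v = 2000 m/s
dt, offsets, nt = 0.004, np.arange(100.0, 2100.0, 200.0), 500
data = np.zeros((len(offsets), nt))
for k, t in enumerate(nmo_time(0.8, 2000.0, offsets)):
    data[k, int(round(t / dt))] = 1.0            # spike on the reflection curve
velocities = np.arange(1500.0, 2600.0, 50.0)
best_v = velocities[np.argmax(semblance(data, dt, offsets, 0.8, velocities))]
print("best stacking velocity:", best_v)         # expect ~2000 m/s
```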


Author(s):  
P. Agrafiotis ◽  
D. Skarlatos ◽  
T. Forbes ◽  
C. Poullis ◽  
M. Skamantzari ◽  
...  

In this paper, the main challenges of underwater photogrammetry in shallow waters are described and analysed. The very short camera-to-object distance in such cases, as well as buoyancy issues, wave effects, and water turbidity, are challenges to be resolved. Additionally, the major challenge of all, caustics, is addressed by a new approach for caustics removal (Forbes et al., 2018), which is applied in order to investigate its performance in terms of SfM-MVS and 3D reconstruction results. In the proposed approach, the complex problem of removing caustics effects is addressed by classifying and then removing them from the images. We propose and test a novel solution based on two small and easily trainable Convolutional Neural Networks (CNNs). Real ground truth for caustics is not easily available. We show how a small set of synthetic data can be used to train the network and later transfer the learning to real data with robustness to intra-class variation. The proposed solution results in caustic-free images which can be further used for other tasks as needed.
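The two-CNN pipeline of Forbes et al. (2018) is not reproduced here; purely as an illustration of the classification step, a small patch classifier could look roughly like the PyTorch sketch below. The architecture, 64x64 patch size, and training setup are assumptions.

```python
import torch
import torch.nn as nn

class CausticPatchClassifier(nn.Module):
    """Small CNN that labels an RGB image patch as caustic / non-caustic.
    Layer sizes are illustrative assumptions, not the paper's network."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)   # two classes: caustic vs. background

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Hypothetical training step on synthetic patches (labels 0/1)
model = CausticPatchClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
patches = torch.rand(8, 3, 64, 64)            # stand-in for rendered synthetic patches
labels = torch.randint(0, 2, (8,))
loss = loss_fn(model(patches), labels)
opt.zero_grad(); loss.backward(); opt.step()
print("training loss:", float(loss))
```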


Author(s):  
Cheng-Han (Lance) Tsai ◽  
Jen-Yuan (James) Chang

Abstract
Artificial Intelligence (AI) has been widely used in different domains such as self-driving, automated optical inspection, and detection of object locations for robotic pick-and-place operations. Although the current results of using AI in these fields are good, the biggest bottleneck for AI is the need for a vast amount of data and the labeling of the corresponding answers for sufficient training. Evidently, these efforts still require significant manpower. If the quality of the labeling is unstable, the trained AI model becomes unstable and, as a consequence, so do the results. To resolve this issue, an auto-annotation system is proposed in this paper, with methods including (1) highly realistic model generation with real texture, (2) a domain randomization algorithm in the simulator to automatically generate abundant and diverse images, and (3) a visibility tracking algorithm to calculate the occlusion effect objects cause on each other for different picking-strategy labels. From our experiments, we show that 10,000 images can be generated per hour, each having multiple objects and each object being labeled in different classes based on its visibility. Instance segmentation AI models can also be trained with these methods to verify the gap between training on synthetic data and testing on real data, with the mean average precision (mAP) reaching 70%.
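The visibility-tracking idea can be illustrated with simple mask arithmetic: given each object's full silhouette (as if rendered alone) and a depth ordering, the visible fraction is the part of the silhouette not covered by nearer objects. The per-object scalar depth and the toy scene below are simplifying assumptions, not the paper's algorithm.

```python
import numpy as np

def visibility_ratios(full_masks, depths):
    """For each object, the fraction of its rendered silhouette that is not
    hidden by nearer objects. `full_masks` are boolean HxW masks of each object
    as if rendered alone; `depths` give each object's distance to the camera
    (smaller = closer). Both are assumed to come from the simulator."""
    order = np.argsort(depths)                 # nearest first
    occupied = np.zeros_like(full_masks[0], dtype=bool)
    vis = np.zeros(len(full_masks))
    for i in order:
        mask = full_masks[i]
        visible = mask & ~occupied             # pixels not already covered
        vis[i] = visible.sum() / max(mask.sum(), 1)
        occupied |= mask
    return vis

# Hypothetical scene: two overlapping squares, the first closer to the camera
h = w = 100
a = np.zeros((h, w), bool); a[20:60, 20:60] = True
b = np.zeros((h, w), bool); b[40:80, 40:80] = True
print(visibility_ratios([a, b], depths=[1.0, 2.0]))   # -> [1.0, 0.75]
```

The resulting ratios could then be thresholded to assign the visibility-based class labels mentioned in the abstract.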


Author(s):  
P.L. Nikolaev

This article deals with a method for binary classification of images containing small text. The classification is based on the fact that the text can have two orientations: it can be positioned horizontally and read from left to right, or it can be rotated by 180 degrees, in which case the image must be rotated before the text can be read. This type of text can be found on the covers of a variety of books, so when recognizing covers it is necessary first to determine the orientation of the text before recognizing it directly. The article presents the development of a deep neural network for determining text orientation in the context of book-cover recognition. The results of training and testing a convolutional neural network on synthetic data, as well as examples of the network operating on real data, are presented.
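Synthetic training data for such an orientation classifier could be produced along the lines of the sketch below, which renders short text snippets and flips half of them by 180 degrees. The image size, sample titles, and use of PIL's default font are assumptions for illustration only.

```python
import random
from PIL import Image, ImageDraw

def make_text_sample(text, flipped, size=(128, 32)):
    """Render black text on a white background; rotate the image 180 degrees
    when `flipped` is True. Returns (image, label) with label 0 = upright,
    1 = upside down. PIL's built-in default font is used for portability."""
    img = Image.new("L", size, color=255)
    ImageDraw.Draw(img).text((4, 8), text, fill=0)
    if flipped:
        img = img.rotate(180)
    return img, int(flipped)

# Hypothetical synthetic dataset of book-title snippets
titles = ["Deep Learning", "War and Peace", "Python Tricks", "The Sea Wolf"]
dataset = [make_text_sample(random.choice(titles), random.random() < 0.5)
           for _ in range(1000)]
print(dataset[0][0].size, dataset[0][1])
```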


2020 ◽  
Author(s):  
Eduardo Atem De Carvalho ◽  
Rogerio Atem De Carvalho

BACKGROUND: Since the beginning of the COVID-19 pandemic, researchers and health authorities have sought to identify the different parameters that govern its infection and death cycles, in order to make better decisions. In particular, a series of reproduction number estimation models have been presented, with different practical results.
OBJECTIVE: This article aims to present an effective and efficient model for estimating the reproduction number and to discuss the impacts of sub-notification (under-reporting) on these calculations.
METHODS: The concept of the Moving Average Method with Initial value (MAMI) is used, and a model for Rt, the reproduction number, is derived from experimental data. The models are applied to real data and their performance is presented.
RESULTS: Analyses of Rt and sub-notification effects for Germany, Italy, Sweden, the United Kingdom, South Korea, and the State of New York are presented to show the performance of the methods introduced here.
CONCLUSIONS: We show that, with relatively simple mathematical tools, it is possible to obtain reliable values for time-dependent, incubation-period-independent reproduction numbers (Rt). We also show that the impact of sub-notification is relatively low after the initial phase of the epidemic cycle has passed.
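The MAMI formulation itself is not given in the abstract. Purely as a generic illustration of a moving-average-based Rt estimate (not the authors' model), one common simple proxy compares smoothed daily incidence to the smoothed incidence one serial interval earlier; the case counts, window, and serial interval below are assumptions.

```python
import numpy as np

def moving_average(x, window=7):
    """Trailing moving average, with a shorter window during the initial ramp-up."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = x[max(0, i - window + 1): i + 1].mean()
    return out

def simple_rt(new_cases, serial_interval=4, window=7):
    """Crude Rt proxy: smoothed incidence divided by the smoothed incidence
    one serial interval earlier. Not the paper's MAMI model."""
    s = moving_average(new_cases, window)
    rt = np.full(len(s), np.nan)
    rt[serial_interval:] = s[serial_interval:] / np.maximum(s[:-serial_interval], 1e-9)
    return rt

# Hypothetical daily case counts
cases = [10, 12, 15, 20, 26, 30, 41, 52, 60, 75, 80, 95, 110, 120]
print(np.round(simple_rt(cases), 2))
```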


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
João Lobo ◽  
Rui Henriques ◽  
Sara C. Madeira

Abstract
Background: Three-way data have started to gain popularity due to their increasing capacity to describe inherently multivariate and temporal events, such as biological responses, social interactions along time, urban dynamics, or complex geophysical phenomena. Triclustering, the subspace clustering of three-way data, enables the discovery of patterns corresponding to data subspaces (triclusters) with values correlated across the three dimensions (observations × features × contexts). With an increasing number of algorithms being proposed, effectively comparing them with state-of-the-art algorithms is paramount. These comparisons are usually performed using real data, without a known ground truth, thus limiting the assessments. In this context, we propose a synthetic data generator, G-Tric, allowing the creation of synthetic datasets with configurable properties and the possibility to plant triclusters. The generator is prepared to create datasets resembling real three-way data from biomedical and social data domains, with the additional advantage of further providing the ground truth (triclustering solution) as output.
Results: G-Tric can replicate real-world datasets and create new ones that match researchers' needs across several properties, including data type (numeric or symbolic), dimensions, and background distribution. Users can tune the patterns and structure that characterize the planted triclusters (subspaces) and how they interact (overlapping). Data quality can also be controlled by defining the amount of missing values, noise, or errors. Furthermore, a benchmark of datasets resembling real data is made available, together with the corresponding triclustering solutions (planted triclusters) and generating parameters.
Conclusions: Triclustering evaluation using G-Tric makes it possible to combine both intrinsic and extrinsic metrics to compare solutions, producing more reliable analyses. A set of predefined datasets, mimicking widely used three-way data and exploring crucial properties, was generated and made available, highlighting G-Tric's potential to advance the triclustering state of the art by easing the process of evaluating the quality of new triclustering approaches.
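The core idea of planting a tricluster in a numeric three-way dataset can be sketched in a few lines of NumPy; this is not the G-Tric API, and the shapes, background distribution, and constant-valued pattern are assumptions.

```python
import numpy as np

def plant_tricluster(shape=(100, 50, 8), tric_shape=(10, 5, 3),
                     background=(0.0, 1.0), signal=5.0, noise=0.1, seed=0):
    """Generate a numeric 3-way dataset (observations x features x contexts)
    from a Gaussian background and plant one constant-valued tricluster with
    additive noise. Returns the tensor and the planted index sets (ground truth)."""
    rng = np.random.default_rng(seed)
    data = rng.normal(*background, size=shape)
    rows = rng.choice(shape[0], tric_shape[0], replace=False)
    cols = rng.choice(shape[1], tric_shape[1], replace=False)
    ctxs = rng.choice(shape[2], tric_shape[2], replace=False)
    block = signal + rng.normal(0.0, noise, size=tric_shape)
    data[np.ix_(rows, cols, ctxs)] = block
    return data, (rows, cols, ctxs)

data, truth = plant_tricluster()
print(data.shape, [len(t) for t in truth])   # (100, 50, 8) [10, 5, 3]
```

Returning the planted indices alongside the tensor is what makes extrinsic evaluation possible, which is the main point of a generator like G-Tric.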


2021 ◽  
Vol 40 (3) ◽  
pp. 1-12
Author(s):  
Hao Zhang ◽  
Yuxiao Zhou ◽  
Yifei Tian ◽  
Jun-Hai Yong ◽  
Feng Xu

Reconstructing hand-object interactions is a challenging task due to strong occlusions and complex motions. This article proposes a real-time system that uses a single depth stream to simultaneously reconstruct hand poses, object shape, and rigid/non-rigid motions. To achieve this, we first train a joint learning network to segment the hand and object in a depth image, and to predict the 3D keypoints of the hand. With most layers shared by the two tasks, computation cost is saved for the real-time performance. A hybrid dataset is constructed here to train the network with real data (to learn real-world distributions) and synthetic data (to cover variations of objects, motions, and viewpoints). Next, the depth of the two targets and the keypoints are used in a uniform optimization to reconstruct the interacting motions. Benefitting from a novel tangential contact constraint, the system not only solves the remaining ambiguities but also keeps the real-time performance. Experiments show that our system handles different hand and object shapes, various interactive motions, and moving cameras.
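A joint network that shares most layers between the segmentation and keypoint tasks could look roughly like the PyTorch sketch below. The channel counts, three-class segmentation (background / hand / object), and 21 hand keypoints are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class JointSegKeypointNet(nn.Module):
    """Shared encoder with two light heads: a per-pixel 3-class segmentation map
    and 3D coordinates for a fixed set of hand keypoints. Sizes are illustrative."""
    def __init__(self, n_keypoints=21):
        super().__init__()
        self.n_keypoints = n_keypoints
        self.encoder = nn.Sequential(                      # shared by both tasks
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Sequential(                     # per-pixel labels
            nn.Conv2d(128, 3, 1),
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
        )
        self.kp_head = nn.Sequential(                      # global 3D keypoints
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, n_keypoints * 3),
        )

    def forward(self, depth):
        feat = self.encoder(depth)
        return self.seg_head(feat), self.kp_head(feat).view(-1, self.n_keypoints, 3)

# Hypothetical forward pass on a single 224x224 depth frame
net = JointSegKeypointNet()
seg, kps = net(torch.rand(1, 1, 224, 224))
print(seg.shape, kps.shape)    # (1, 3, 224, 224) and (1, 21, 3)
```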


2021 ◽  
Vol 11 (9) ◽  
pp. 3863
Author(s):  
Ali Emre Öztürk ◽  
Ergun Erçelebi

A large amount of training image data is required for solving image classification problems using deep learning (DL) networks. In this study, we aimed to train DL networks with synthetic images generated using a game engine and to determine the effect on performance when these networks solve real-image classification problems. The study presents the results of using corner detection and nearest three-point selection (CDNTS) layers to classify bird and rotary-wing unmanned aerial vehicle (RW-UAV) images, provides a comprehensive comparison of two different experimental setups, and emphasizes the significant performance improvements that the inclusion of a CDNTS layer brings to deep-learning-based networks. Experiment 1 corresponds to training commonly used deep-learning-based networks with synthetic data and testing image classification on real data. Experiment 2 corresponds to training the CDNTS layer and commonly used deep-learning-based networks with synthetic data and testing image classification on real data. In experiment 1, the best area under the curve (AUC) value for image classification test accuracy was measured as 72%. In experiment 2, using the CDNTS layer, the AUC value for image classification test accuracy was measured as 88.9%. A total of 432 different training combinations were investigated in the experimental setups. The networks were trained with various DL architectures using four different optimizers, considering all combinations of batch size, learning rate, and dropout hyperparameters. The test accuracy AUC values for networks in experiment 1 ranged from 55% to 74%, whereas the test accuracy AUC values for experiment 2 networks with a CDNTS layer ranged from 76% to 89.9%. The CDNTS layer was observed to have a considerable effect on the image classification accuracy of deep-learning-based networks. AUC, F-score, and test accuracy measures were used to validate the success of the networks.
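The 432-combination sweep presumably enumerates every setting of optimizer, batch size, learning rate, and dropout for each network; a minimal sketch of such a grid is shown below. The concrete hyperparameter values are not listed in the abstract, so the ones here are placeholders (with these values the grid has 108 entries, not 432).

```python
from itertools import product

# Placeholder hyperparameter grid; the paper's actual values are not given.
optimizers = ["adam", "sgd", "rmsprop", "adagrad"]    # four optimizers
batch_sizes = [16, 32, 64]
learning_rates = [1e-2, 1e-3, 1e-4]
dropouts = [0.0, 0.25, 0.5]

grid = list(product(optimizers, batch_sizes, learning_rates, dropouts))
print(len(grid), "combinations")                      # 4 * 3 * 3 * 3 = 108 here

for opt_name, bs, lr, p in grid[:3]:
    # A hypothetical train_and_evaluate(...) would wrap the network training and
    # the AUC / F-score / test-accuracy measurement described in the abstract.
    print(f"optimizer={opt_name} batch={bs} lr={lr} dropout={p}")
```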


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 726
Author(s):  
Lamya A. Baharith ◽  
Wedad H. Aljuhani

This article presents a new method for generating distributions. The method combines two techniques, the transformed-transformer and alpha power transformation approaches, allowing for tremendous flexibility in the resulting distributions. The new approach is applied to introduce the alpha power Weibull-exponential distribution. The density of this distribution can take asymmetric and near-symmetric shapes. Various shapes, such as decreasing, increasing, L-shaped, near-symmetrical, and right-skewed, are observed for the related failure rate function, making the distribution more tractable for many modeling applications. Some significant mathematical features of the suggested distribution are determined. Estimates of the unknown parameters of the proposed distribution are obtained using the maximum likelihood method. Furthermore, some numerical studies were carried out in order to evaluate the estimation performance. Three practical datasets are considered to analyze the usefulness and flexibility of the introduced distribution. The proposed alpha power Weibull-exponential distribution can outperform other well-known distributions, showing its great adaptability in the context of real data analysis.
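The exact construction of the alpha power Weibull-exponential distribution is not spelled out in the abstract. As an illustration of the alpha power transformation ingredient alone, the commonly used form maps a baseline CDF F to G(x) = (alpha^F(x) - 1) / (alpha - 1) for alpha > 0, alpha != 1; the sketch below applies it to a plain Weibull baseline as a stand-in, with parameter values chosen arbitrarily.

```python
import numpy as np

def alpha_power_cdf(F_vals, alpha):
    """Alpha power transformation of baseline CDF values F(x):
    G(x) = (alpha**F(x) - 1) / (alpha - 1) for alpha > 0, alpha != 1,
    and G(x) = F(x) when alpha == 1."""
    F_vals = np.asarray(F_vals, dtype=float)
    if np.isclose(alpha, 1.0):
        return F_vals
    return (alpha ** F_vals - 1.0) / (alpha - 1.0)

def weibull_cdf(x, k, lam):
    """Baseline Weibull CDF, used here only as a stand-in baseline."""
    x = np.asarray(x, dtype=float)
    return 1.0 - np.exp(-(x / lam) ** k)

# Hypothetical parameter values for inspection
x = np.linspace(0.0, 5.0, 6)
G = alpha_power_cdf(weibull_cdf(x, k=1.5, lam=2.0), alpha=3.0)
print(np.round(G, 3))    # monotone, from 0 towards 1
```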


Author(s):  
Alma Andersson ◽  
Joakim Lundeberg

Abstract
Motivation: Collection of spatial signals in large numbers has become a routine task in multiple omics fields, but parsing these rich datasets still poses certain challenges. In whole or near-full transcriptome spatial techniques, spurious expression profiles are intermixed with those exhibiting an organized structure. To distinguish profiles with spatial patterns from the background noise, a metric that enables quantification of spatial structure is desirable. Current methods designed for similar purposes tend to be built around a framework of statistical hypothesis testing, hence we were compelled to explore a fundamentally different strategy.
Results: We propose a previously unexplored approach to analyze spatial transcriptomics data, simulating diffusion of individual transcripts to extract genes with spatial patterns. The method performed as expected when presented with synthetic data. When applied to real data, it identified genes with distinct spatial profiles, involved in key biological processes or characteristic for certain cell types. Compared to existing methods, ours seemed to be less informed by the genes' expression levels and showed better time performance when run with multiple cores.
Availability and implementation: An open-source Python package with a command line interface (CLI) is freely available at https://github.com/almaan/sepal under an MIT licence. A mirror of the GitHub repository can be found at Zenodo, doi: 10.5281/zenodo.4573237.
Supplementary information: Supplementary data are available at Bioinformatics online.
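The diffusion idea can be illustrated independently of the sepal package: run explicit heat-equation steps on a gridded expression map and count the iterations needed for the field to flatten; spatially structured maps take longer to diffuse than noise. The grid size, time step, boundary handling, and convergence criterion below are assumptions, not the package's implementation.

```python
import numpy as np

def diffusion_time(grid, dt=0.1, tol=1e-3, max_iter=10_000):
    """Iterate explicit heat-equation updates on a 2D expression map and return
    the number of steps until the field is nearly uniform (periodic boundaries)."""
    u = np.asarray(grid, dtype=float).copy()
    for step in range(max_iter):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u += dt * lap
        if np.max(u) - np.min(u) < tol:
            return step
    return max_iter

rng = np.random.default_rng(0)
noise = rng.random((40, 40))                               # unstructured "gene"
patterned = noise + 5.0 * (np.arange(40)[:, None] > 20)    # clear spatial pattern
print(diffusion_time(noise), diffusion_time(patterned))    # patterned takes longer
```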

