Estimation of primaries by sparse inversion with scattering-based multiple predictions for data with large gaps

Geophysics ◽  
2016 ◽  
Vol 81 (3) ◽  
pp. V183-V197 ◽  
Author(s):  
Tim T.Y. Lin ◽  
Felix J. Herrmann

We have solved the estimation of primaries by sparse inversion problem for a seismic record with large near-offset gaps and other contiguous holes in the acquisition grid without relying on explicit reconstruction of the missing data. Eliminating the unknown data as an explicit inversion variable is desirable because it sidesteps possible issues arising from overfitting the primary model to the estimated data. Instead, we have simulated their multiple contributions by augmenting the forward prediction model for the total wavefield with a scattering series that mimics the action of the free surface reflector within the area of the unobserved trace locations. Each term in this scattering series involves convolution of the total predicted wavefield once more with the current estimated Green’s function for a medium without the free surface at these unobserved locations. It is important to note that our method cannot by itself mitigate regular undersampling issues that result in significant aliases when computing the multiple contributions, such as source-receiver sampling differences or crossline spacing issues in 3D acquisition. We have investigated algorithms that handle the nonlinearity in the modeling operator due to the scattering terms, and we also determined that just a few of the terms can be enough to satisfactorily mitigate the effects of near-offset data gaps during the inversion process. Numerical experiments on synthetic data found that the final derived method can significantly outperform explicit data reconstruction for large near-offset gaps, with a similar computational cost and better memory efficiency. We have also found on real data that our scheme outperforms the unmodified primary estimation method that uses an existing Radon-based interpolation of the near-offset gap.
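The truncated scattering series described above can be illustrated with a toy single-frequency model. In the sketch below, `G`, `S`, and `R` are invented stand-ins (a random matrix over trace positions, an impulsive source, and a scalar free-surface reflectivity of -1); the real method applies such terms only at the unobserved trace locations, which is omitted here.

```python
import numpy as np

# Toy single-frequency sketch: the total wavefield P satisfies P = G S + G R P,
# with G a surface-free Green's function (matrix over trace positions),
# S the source, and R = -1 the free-surface reflectivity. All names are
# illustrative stand-ins, not the actual operators of the method.
rng = np.random.default_rng(0)
n = 6
G = 0.1 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
S = np.eye(n)
R = -1.0

def predict_total(G, S, R, n_terms):
    # Truncated scattering (Neumann) series: each extra term convolves the
    # current prediction once more with G acting through the free surface.
    P0 = G @ S
    P, term = P0.copy(), P0.copy()
    for _ in range(n_terms - 1):
        term = G @ (R * term)
        P += term
    return P

# With enough terms the series approaches the exact solution
# P = (I - R G)^{-1} G S (valid when the series converges).
P_exact = np.linalg.solve(np.eye(n) - R * G, G @ S)
P_series = predict_total(G, S, R, n_terms=15)
err = np.linalg.norm(P_series - P_exact) / np.linalg.norm(P_exact)
```

As the abstract notes, only a few terms are needed in practice when the gap contributions are weak relative to the full wavefield.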

Geophysics ◽  
2009 ◽  
Vol 74 (3) ◽  
pp. A23-A28 ◽  
Author(s):  
G. J. van Groenestijn ◽  
D. J. Verschuur

Accurate removal of surface-related multiples remains a challenge in many cases. To overcome typical inaccuracies in current multiple-removal techniques, we have developed a new primary-estimation method: estimation of primaries by sparse inversion (EPSI). EPSI is based on the same primary-multiple model as surface-related multiple elimination (SRME) and also requires no subsurface model. Unlike SRME, EPSI estimates the primaries as unknowns in a multidimensional inversion process rather than in a subtraction process. Furthermore, it does not depend on interpolated missing near-offset data because it can reconstruct missing data simultaneously. Sparseness plays a key role in the new primary-estimation procedure. The method was tested on 2D synthetic data.
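The shared primary-multiple model behind SRME and EPSI can be written, per frequency, as total data = primaries + free-surface multiples predicted from the primaries and the total data. A minimal numeric sketch, with invented sizes and an identity-like source wavelet `Q`, checks that the relation is self-consistent (the actual EPSI inversion for the primary impulse response with a sparsity prior is not shown):

```python
import numpy as np

# Single-frequency toy of the primary-multiple model: P = X0 Q + R X0 P,
# with X0 the primary impulse response, Q the source wavelet, and R = -1
# the free-surface operator. Names and sizes are illustrative only.
rng = np.random.default_rng(1)
n = 5
X0 = 0.1 * rng.standard_normal((n, n))
Q = 2.0 * np.eye(n)   # flat-spectrum source at this frequency
R = -1.0

# Forward model: (I - R X0) P = X0 Q
P = np.linalg.solve(np.eye(n) - R * X0, X0 @ Q)

# The same relation splits the data into primaries and surface multiples:
primaries = X0 @ Q
multiples = R * (X0 @ P)
resid = np.linalg.norm(P - (primaries + multiples))
```

EPSI inverts this relation for `X0` directly, which is why missing near offsets enter only through the multiple-prediction term rather than through an explicit interpolation step.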


2013 ◽  
Vol 748 ◽  
pp. 590-594
Author(s):  
Li Liao ◽  
Yong Gang Lu ◽  
Xu Rong Chen

We propose a novel density estimation method using both the k-nearest neighbor (KNN) graph and the potential field of the data points to capture local and global data distribution information, respectively. Clustering is performed based on the computed density values: a forest of trees is built with each data point as a tree node, and clusters are formed according to the trees in the forest. The new method is evaluated by comparison with three popular clustering methods: K-means++, Mean Shift, and DBSCAN. Experiments on two synthetic data sets and one real data set show that our approach effectively improves the clustering results.
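A simplified sketch of the forest-building idea described above: estimate a kNN-based local density, then attach each point to its nearest higher-density neighbor (within a linking radius), so each resulting root tree becomes one cluster. The potential-field component for global structure, and all parameter choices, are omitted or invented here.

```python
import numpy as np

def knn_density_cluster(X, k=5, link_radius=2.0):
    # Pairwise distances and kNN-based density estimate.
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    knn_d = np.sort(D, axis=1)[:, 1:k + 1]        # distances to k nearest
    density = 1.0 / (knn_d.mean(axis=1) + 1e-12)

    # Forest: each point's parent is its nearest higher-density neighbor,
    # provided that neighbor is within the linking radius; else it is a root.
    parent = np.full(n, -1)
    for i in range(n):
        higher = np.where(density > density[i])[0]
        if len(higher) and D[i, higher].min() <= link_radius:
            parent[i] = higher[np.argmin(D[i, higher])]

    def root(i):
        while parent[i] != -1:
            i = parent[i]
        return i

    roots = np.array([root(i) for i in range(n)])
    _, labels = np.unique(roots, return_inverse=True)
    return labels

# Two well-separated blobs should come out as two clusters.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(8, 0.3, (30, 2))])
labels = knn_density_cluster(X, k=5, link_radius=2.0)
```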


Geophysics ◽  
2003 ◽  
Vol 68 (2) ◽  
pp. 641-655 ◽  
Author(s):  
Anders Sollid ◽  
Bjørn Ursin

Scattering‐angle migration maps seismic prestack data directly into angle‐dependent reflectivity at the image point. The method automatically accounts for triplicated rayfields and is easily extended to handle anisotropy. We specify scattering‐angle migration integrals for PP and PS ocean‐bottom seismic (OBS) data in 3D and 2.5D elastic media exhibiting weak contrasts and weak anisotropy. The derivation is based on the anisotropic elastic Born‐Kirchhoff‐Helmholtz surface scattering integral. The true‐amplitude weights are chosen such that the amplitude versus angle (AVA) response of the angle gather is equal to the Born scattering coefficient or, alternatively, the linearized reflection coefficient. We implement scattering‐angle migration by shooting a fan of rays from the subsurface point to the acquisition surface, followed by integrating the phase‐ and amplitude‐corrected seismic data over the migration dip at the image point while keeping the scattering‐angle fixed. A dense summation over migration dip only adds a minor additional cost and enhances the coherent signal in the angle gathers. The 2.5D scattering‐angle migration is demonstrated on synthetic data and on real PP and PS data from the North Sea. In the real data example we use a transversely isotropic (TI) background model to obtain depth‐consistent PP and PS images. The aim of the succeeding AVA analysis is to predict the fluid type in the reservoir sand. Specifically, the PS stack maps the contrasts in lithology while being insensitive to the fluid fill. The PP large‐angle stack maps the oil‐filled sand but shows no response in the brine‐filled zones. A comparison to common‐offset Kirchhoff migration demonstrates that, for the same computational cost, scattering‐angle migration provides common image gathers with less noise and fewer artifacts.
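The true-amplitude, angle-domain machinery above is well beyond a short example, but the diffraction-summation idea underlying any Kirchhoff-style migration can be sketched in a constant-velocity, zero-offset setting (all parameters invented; no amplitude weights, ray tracing, or angle binning):

```python
import numpy as np

# Minimal zero-offset diffraction-stack migration in a constant-velocity
# medium: sum each image point's contributions along its diffraction curve.
v = 2000.0                      # m/s
dx, dt = 10.0, 0.001
nx, nt = 101, 400
xs, zs = 50, 150.0              # scatterer at trace index 50, depth 150 m

# Synthesize zero-offset data: a spike per trace at the two-way time.
data = np.zeros((nx, nt))
xpos = np.arange(nx) * dx
for ix in range(nx):
    r = np.hypot(xpos[ix] - xs * dx, zs)
    it = int(round(2 * r / v / dt))
    if it < nt:
        data[ix, it] = 1.0

# Migrate: for every image point, stack data along the diffraction curve.
nz, dz = 40, 10.0
image = np.zeros((nx, nz))
for ixi in range(nx):
    for izi in range(nz):
        z = (izi + 1) * dz
        r = np.hypot(xpos - xpos[ixi], z)
        it = np.rint(2 * r / v / dt).astype(int)
        ok = it < nt
        image[ixi, izi] = data[np.arange(nx)[ok], it[ok]].sum()

peak = np.unravel_index(np.argmax(image), image.shape)
```

The energy focuses at the true scatterer position; scattering-angle migration reorganizes exactly this kind of summation so that the stack over migration dip is performed at fixed scattering angle.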


2019 ◽  
Vol 214 ◽  
pp. 06003 ◽  
Author(s):  
Kamil Deja ◽  
Tomasz Trzciński ◽  
Łukasz Graczykowski

Simulating the detector response is a key component of every high-energy physics experiment. The methods currently used for this purpose provide high-fidelity results, but this precision comes at the price of a high computational cost. In this work, we introduce our research aiming at fast generation of the possible responses of detector clusters to particle collisions. We present results for the real-life example of the Time Projection Chamber in the ALICE experiment at CERN. The essential component of our solution is a generative model that allows us to simulate synthetic data points that bear high similarity to the real data. Leveraging recent advancements in machine learning, we propose to use conditional Generative Adversarial Networks. We present a method to simulate data samples that could be recorded in the detector, based on the initial information about the particles. We propose and evaluate several models based on convolutional or recursive networks. The main advantage of the proposed method is a significant speed-up in execution time, reaching up to a factor of 10² with respect to the currently used simulation tool. Nevertheless, this speed-up comes at the price of lower simulation quality. We adapt the available methods and show their quantitative and qualitative limitations.
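The conditioning mechanism of a conditional GAN can be sketched structurally: the generator receives noise concatenated with the conditioning vector (the initial particle information), and the discriminator scores a (response, condition) pair. All layer sizes below are invented, and the adversarial training loop is omitted.

```python
import numpy as np

# Architectural sketch of conditional generation (sizes are invented).
rng = np.random.default_rng(3)
noise_dim, cond_dim, out_dim, hidden = 8, 4, 16, 32

def layer(n_in, n_out):
    # He-style random initialization for a dense layer.
    return rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in)

G1, G2 = layer(noise_dim + cond_dim, hidden), layer(hidden, out_dim)
D1, D2 = layer(out_dim + cond_dim, hidden), layer(hidden, 1)

def generator(z, c):
    # Noise and condition are concatenated before the first dense layer.
    h = np.maximum(np.concatenate([z, c]) @ G1, 0.0)   # ReLU
    return h @ G2

def discriminator(x, c):
    # The discriminator also sees the condition, so it judges consistency.
    h = np.maximum(np.concatenate([x, c]) @ D1, 0.0)
    return 1.0 / (1.0 + np.exp(-(h @ D2)))             # sigmoid score

cond = rng.standard_normal(cond_dim)   # e.g. particle momentum and angle
fake = generator(rng.standard_normal(noise_dim), cond)
score = discriminator(fake, cond)
```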


2020 ◽  
Author(s):  
Brydon Lowney ◽  
Ivan Lokmer ◽  
Gareth Shane O'Brien ◽  
Christopher Bean

Diffractions are a useful aspect of the seismic wavefield and are often underutilised. By separating the diffractions from the rest of the wavefield, they can be used for various applications such as velocity analysis, structural imaging, and wavefront tomography. However, separating the diffractions is a challenging task due to the comparatively low amplitudes of diffractions as well as the overlap between reflection and diffraction energy. Whilst there are existing analytical methods for separation, these act to remove reflections, leaving a volume which contains diffractions and noise. On top of this, analytical separation techniques can be computationally costly and require manual parameterisation. To alleviate these issues, a deep neural network has been trained to automatically identify and separate diffractions from reflections and noise on pre-migration data.

Here, a Generative Adversarial Network (GAN) has been trained for the automated separation. This is a type of deep neural network architecture containing two neural networks which compete against one another. One network acts as a generator, creating new data which appears visually similar to the real data, while the second acts as a discriminator, trying to identify whether the given data is real or fake. As the generator improves, so too does the discriminator, giving a deeper understanding of the data. To avoid overfitting to a specific dataset, and to improve the cross-data applicability of the network, data from several seismic datasets from geologically distinct locations has been used in training. Comparing a network trained on a single dataset with one trained on several datasets shows that providing additional data improves the separation on both the original and new datasets.

The automatic separation technique is then compared with a conventional analytical separation technique: plane-wave destruction (PWD). The computational cost of the GAN separation is far lower than that of PWD, performing a separation on a 3-D dataset in minutes rather than hours. Although in some complex areas the GAN separation is of higher quality than the PWD separation, as it does not rely on the dip, there are also areas where PWD outperforms the GAN separation. The GAN may be enhanced by adding more training data, as well as by improving the initial separation used to create the training data, which is based around PWD and thus is imperfect and can introduce bias into the network. One possibility is to train the GAN entirely on synthetic data, which allows for a perfect separation because the diffraction points are known; however, the synthetic data must be of sufficient volume for training and of sufficient quality for real-data applicability.
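The dip dependence of PWD mentioned above comes from its core operation: with a known local slope, a plane wave is annihilated by subtracting each trace from its neighbour shifted along that slope. A toy version with a given integer slope (real PWD estimates the slope adaptively with all-pass filters) shows the residual vanishing on a perfect plane wave:

```python
import numpy as np

# Toy plane-wave destruction: a plane wave d(t, x) = f(t - p x) is destroyed
# by r[t, x] = d[t, x+1] - d[t - p, x]. The slope p is assumed known here.
nt, nx, p = 100, 20, 2            # slope: 2 time samples per trace
t = np.arange(nt)
data = np.zeros((nt, nx))
for x in range(nx):
    data[:, x] = np.exp(-0.05 * (t - 30 - p * x) ** 2)  # shifted wavelet

def pwd_residual(d, p):
    # Shift each trace down by p samples and subtract from its neighbour.
    return d[:, 1:] - np.roll(d, p, axis=0)[:, :-1]

resid = pwd_residual(data, p)
```

Diffractions, whose slope varies rapidly with offset, leave a large residual under any single slope, which is what makes PWD a separation tool in the first place.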


Geophysics ◽  
2008 ◽  
Vol 73 (2) ◽  
pp. T1-T10 ◽  
Author(s):  
Gerrit Toxopeus ◽  
Jan Thorbecke ◽  
Kees Wapenaar ◽  
Steen Petersen ◽  
Evert Slob ◽  
...  

The simulation of migrated and inverted data is hampered by the high computational cost of generating 3D synthetic data, followed by processes of migration and inversion. For example, simulating the migrated seismic signature of subtle stratigraphic traps demands the expensive exercise of 3D forward modeling, followed by 3D migration of the synthetic seismograms. This computational cost can be overcome using a strategy for simulating migrated and inverted data by filtering a geologic model with 3D spatial-resolution and angle filters, respectively. A key property of the approach is this: the geologic model that describes a target zone is decoupled from the macrovelocity model used to compute the filters. The process enables a target-oriented approach, by which a geologically detailed earth model describing a reservoir is adjusted without having to recalculate the filters. Because a spatial-resolution filter combines the results of the modeling and migration operators, the simulated images can be compared directly to a real migration image. We decompose the spatial-resolution filter into two parts and show that applying one of those parts produces output directly comparable to 1D inverted real data. Two-dimensional synthetic examples that include seismic uncertainties demonstrate the usefulness of the approach. Results from a real data example show that horizontal smearing, which is not simulated by the 1D convolution model result, is essential to understand the seismic expression of the deformation related to sulfate dissolution and karst collapse.
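The filtering strategy can be sketched with a stand-in resolution filter: a separable point-spread function built from a Ricker wavelet in depth and a Gaussian lateral smoother (both invented; the actual filters are derived from the modeling and migration operators for a given macrovelocity model).

```python
import numpy as np

# Simulate a "migrated" image by filtering a geologic model with a
# separable 2D spatial-resolution filter (stand-in shapes and sizes).
def ricker(n, f=0.15):
    t = np.arange(n) - n // 2
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

def gauss(n, s=2.0):
    x = np.arange(n) - n // 2
    return np.exp(-0.5 * (x / s) ** 2)

nz, nx = 64, 64
model = np.zeros((nz, nx))
model[32, :] = 1.0                      # one flat reflector

# Separable filtering: wavelet along depth, smoother along the lateral axis.
tmp = np.apply_along_axis(
    lambda c: np.convolve(c, ricker(21), mode='same'), 0, model)
simulated = np.apply_along_axis(
    lambda r: np.convolve(r, gauss(21), mode='same'), 1, tmp)
```

The point of the decoupling in the abstract is that `model` can be edited at will while the filter kernels, which are the expensive part, stay fixed.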


Author(s):  
P.L. Nikolaev

This article deals with a method for binary classification of images containing small text. The classification is based on the fact that the text can have two orientations: it can be positioned horizontally and read from left to right, or it can be turned 180 degrees, so that the image must be rotated to read it. This type of text can be found on the covers of a variety of books, so when recognizing covers it is necessary to first determine the orientation of the text before recognizing it directly. The article proposes a deep neural network for determining the text orientation in the context of book-cover recognition. The results of training and testing a convolutional neural network on synthetic data, as well as examples of the network operating on real data, are presented.
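The train-on-synthetic, test-on-real workflow can be illustrated with a deliberately tiny stand-in: synthetic "text line" images whose ink is biased toward the top when upright (an invented proxy for ascender shapes), a 180-degree turn via `np.rot90`, and a plain logistic regression instead of the article's CNN.

```python
import numpy as np

# Toy stand-in for the orientation task: upright images have ink in the top
# half; rotated ones are np.rot90 applied twice. A tiny logistic regression
# on raw pixels is trained on one part and evaluated on a held-out part.
rng = np.random.default_rng(4)

def make_image(upright):
    img = np.zeros((8, 8))
    rows = rng.integers(0, 4, size=10)      # ink concentrated in top half
    cols = rng.integers(0, 8, size=10)
    img[rows, cols] = 1.0
    return img if upright else np.rot90(img, 2)

X = np.array([make_image(i % 2 == 0).ravel() for i in range(200)])
y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(200)])

w, b = np.zeros(64), 0.0
for _ in range(300):                        # plain gradient descent
    prob = 1.0 / (1.0 + np.exp(-(X[:160] @ w + b)))
    g = prob - y[:160]
    w -= 0.1 * (X[:160].T @ g) / 160
    b -= 0.1 * g.mean()

pred = (1.0 / (1.0 + np.exp(-(X[160:] @ w + b))) > 0.5).astype(float)
accuracy = (pred == y[160:]).mean()
```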


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
João Lobo ◽  
Rui Henriques ◽  
Sara C. Madeira

Background: Three-way data started to gain popularity due to their increasing capacity to describe inherently multivariate and temporal events, such as biological responses, social interactions along time, urban dynamics, or complex geophysical phenomena. Triclustering, subspace clustering of three-way data, enables the discovery of patterns corresponding to data subspaces (triclusters) with values correlated across the three dimensions (observations × features × contexts). With an increasing number of algorithms being proposed, effectively comparing them with the state of the art is paramount. These comparisons are usually performed using real data, without a known ground truth, thus limiting the assessments. In this context, we propose a synthetic data generator, G-Tric, allowing the creation of synthetic datasets with configurable properties and the possibility to plant triclusters. The generator is prepared to create datasets resembling real three-way data from biomedical and social data domains, with the additional advantage of providing the ground truth (triclustering solution) as output. Results: G-Tric can replicate real-world datasets and create new ones that match researchers' needs across several properties, including data type (numeric or symbolic), dimensions, and background distribution. Users can tune the patterns and structure that characterize the planted triclusters (subspaces) and how they interact (overlapping). Data quality can also be controlled by defining the amount of missing values, noise, or errors. Furthermore, a benchmark of datasets resembling real data is made available, together with the corresponding triclustering solutions (planted triclusters) and generating parameters. Conclusions: Triclustering evaluation using G-Tric provides the possibility to combine both intrinsic and extrinsic metrics to compare solutions, yielding more reliable analyses.
A set of predefined datasets, mimicking widely used three-way data and exploring crucial properties was generated and made available, highlighting G-Tric’s potential to advance triclustering state-of-the-art by easing the process of evaluating the quality of new triclustering approaches.
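The core idea of planting a tricluster with a known ground truth can be shown in a few lines (G-Tric supports many pattern types, overlap structures, and quality controls; this sketch plants a single constant-valued subspace in Gaussian background noise):

```python
import numpy as np

# Minimal three-way dataset (observations x features x contexts) with one
# planted constant tricluster; the index sets form the ground-truth solution.
rng = np.random.default_rng(5)
n_obs, n_feat, n_ctx = 50, 30, 10
data = rng.normal(0.0, 1.0, (n_obs, n_feat, n_ctx))

obs = np.array([3, 7, 11, 19])        # planted tricluster indices
feat = np.array([2, 5, 8])
ctx = np.array([1, 4])
planted_value = 5.0
data[np.ix_(obs, feat, ctx)] = planted_value

# The ground truth is exactly what a triclustering benchmark needs as output.
truth = {"obs": obs, "feat": feat, "ctx": ctx, "value": planted_value}
```

An extrinsic evaluation then scores a candidate algorithm by how well its recovered triclusters match `truth`, which is impossible with real data lacking a known solution.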


2021 ◽  
Vol 40 (3) ◽  
pp. 1-12
Author(s):  
Hao Zhang ◽  
Yuxiao Zhou ◽  
Yifei Tian ◽  
Jun-Hai Yong ◽  
Feng Xu

Reconstructing hand-object interactions is a challenging task due to strong occlusions and complex motions. This article proposes a real-time system that uses a single depth stream to simultaneously reconstruct hand poses, object shape, and rigid/non-rigid motions. To achieve this, we first train a joint learning network to segment the hand and object in a depth image and to predict the 3D keypoints of the hand. With most layers shared by the two tasks, computation cost is reduced, aiding real-time performance. A hybrid dataset is constructed to train the network with real data (to learn real-world distributions) and synthetic data (to cover variations of objects, motions, and viewpoints). Next, the depths of the two targets and the keypoints are used in a unified optimization to reconstruct the interacting motions. Benefiting from a novel tangential contact constraint, the system not only resolves the remaining ambiguities but also maintains real-time performance. Experiments show that our system handles different hand and object shapes, various interactive motions, and moving cameras.
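One simplified reading of a tangential contact constraint: at a contact point, remove the velocity component along the object's surface normal, so the fingertip can slide tangentially but neither penetrates nor separates. The paper embeds the constraint in its joint optimization; this is only the geometric projection.

```python
import numpy as np

def project_tangential(v, n):
    # Project a velocity onto the tangent plane of the contact surface:
    # subtract the component along the (normalized) surface normal.
    n = n / np.linalg.norm(n)
    return v - np.dot(v, n) * n

v = np.array([0.3, -0.5, 0.2])           # fingertip velocity (illustrative)
n = np.array([0.0, 0.0, 1.0])            # surface normal at the contact
v_t = project_tangential(v, n)
```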


2021 ◽  
Vol 11 (9) ◽  
pp. 3863
Author(s):  
Ali Emre Öztürk ◽  
Ergun Erçelebi

A large amount of training image data is required for solving image classification problems using deep learning (DL) networks. In this study, we aimed to train DL networks with synthetic images generated by using a game engine and determine the effects of the networks on performance when solving real-image classification problems. The study presents the results of using corner detection and nearest three-point selection (CDNTS) layers to classify bird and rotary-wing unmanned aerial vehicle (RW-UAV) images, provides a comprehensive comparison of two different experimental setups, and emphasizes the significant improvements in the performance in deep learning-based networks due to the inclusion of a CDNTS layer. Experiment 1 corresponds to training the commonly used deep learning-based networks with synthetic data and an image classification test on real data. Experiment 2 corresponds to training the CDNTS layer and commonly used deep learning-based networks with synthetic data and an image classification test on real data. In experiment 1, the best area under the curve (AUC) value for the image classification test accuracy was measured as 72%. In experiment 2, using the CDNTS layer, the AUC value for the image classification test accuracy was measured as 88.9%. A total of 432 different combinations of trainings were investigated in the experimental setups. The experiments were trained with various DL networks using four different optimizers by considering all combinations of batch size, learning rate, and dropout hyperparameters. The test accuracy AUC values for networks in experiment 1 ranged from 55% to 74%, whereas the test accuracy AUC values in experiment 2 networks with a CDNTS layer ranged from 76% to 89.9%. It was observed that the CDNTS layer has considerable effects on the image classification accuracy performance of deep learning-based networks. AUC, F-score, and test accuracy measures were used to validate the success of the networks.
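A guessed reading of a CDNTS-style preprocessing step, for illustration only: score corner candidates with a crude Harris response, then keep the three corner points nearest the image centre as a compact geometric input for the classifier. The window sizes, threshold, and selection rule below are all invented.

```python
import numpy as np

def harris(img, k=0.05):
    # Crude Harris corner response from a box-filtered structure tensor.
    gy, gx = np.gradient(img.astype(float))

    def box(a):
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    sxx, syy, sxy = box(gx * gx), box(gy * gy), box(gx * gy)
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2

def nearest_three(points, centre):
    # "Nearest three-point selection": keep the 3 candidates closest to centre.
    d = np.linalg.norm(points - centre, axis=1)
    return points[np.argsort(d)[:3]]

img = np.zeros((32, 32))
img[8:24, 10:22] = 1.0                   # a bright rectangle with 4 corners
resp = harris(img)
ys, xs = np.where(resp > 0.5 * resp.max())
pts = np.stack([ys, xs], axis=1).astype(float)
three = nearest_three(pts, np.array([16.0, 16.0]))
```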

