Determination of a 2D Velocity Macro-Model from Prestack Traveltime Inversion of a Marine Seismic Data Set in Ionian Sea, Greece
Author(s): I. F. Louis, A. P. Vafidis, C. C. Macropoulos
Geophysics, 2021, pp. 1-83

Author(s): Mohammed Outhmane Faouzi Zizi, Pierre Turquais

For a marine seismic survey, the recorded and processed data can reach several terabytes in size. Storing seismic data sets is costly, and transferring them between storage devices can be challenging. Dictionary learning has been shown to provide representations with a high level of sparsity: the shape of each redundant event is stored once, and every occurrence of that event is represented by a single sparse coefficient. Therefore, an efficient dictionary-learning-based compression workflow, designed specifically for seismic data, is developed here. This compression method differs from conventional methods in three respects: 1) the transform domain is not predefined but data-driven; 2) the redundancy in seismic data is fully exploited by learning small dictionaries from local windows of the seismic shot gathers; 3) two modes are proposed, depending on the geophysical application. On a test seismic data set, we demonstrate the superior performance of the proposed workflow in terms of compression ratio over a wide range of signal-to-residual ratios, compared with standard seismic data compression tools such as the zfp software or algorithms from the Seismic Unix package. Using a more realistic marine seismic acquisition data set, we evaluate the capability of the proposed workflow to preserve the seismic signal for different applications. For applications such as near-real-time transmission and long-term data storage, we observe insignificant signal leakage on a 2D line stack when the dictionary learning method reaches a compression ratio of 24.85. For other applications, such as visual QC of shot gathers, our method preserves the visual aspect of the data even at a compression ratio of 95.
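The sparse-coding idea behind such a workflow can be sketched with a fixed orthonormal dictionary standing in for a learned one (the paper learns its dictionaries from local data windows; the DCT basis, the synthetic window, and the hard threshold below are illustrative assumptions only):

```python
import numpy as np

def dct_dictionary(n):
    """Orthonormal DCT-II basis, standing in for a learned dictionary."""
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0] /= np.sqrt(2)
    return D * np.sqrt(2.0 / n)

def sparse_compress(window, D, keep):
    """Represent a data window over D, keeping the `keep` largest coefficients."""
    coeffs = D @ window                      # analysis transform
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0    # hard thresholding -> sparsity
    return coeffs

# Synthetic "shot gather" window: one coherent event repeated across 16 traces.
rng = np.random.default_rng(0)
n = 64
t = np.arange(n)
event = np.cos(np.pi * 6 * (2 * t + 1) / (2 * n))    # aligned with basis atom 6
window = event[:, None] * np.ones((1, 16)) + 0.01 * rng.standard_normal((n, 16))

D = dct_dictionary(n)
coeffs = sparse_compress(window, D, keep=64)         # 64 of 1024 samples kept
recon = D.T @ coeffs                                 # synthesis (D orthonormal)

cr = window.size / np.count_nonzero(coeffs)                           # compression ratio
srr = 10 * np.log10(np.sum(window**2) / np.sum((window - recon)**2))  # dB
```

Storing only the non-zero coefficients and their positions yields the compression ratio; the signal-to-residual ratio measures how much signal leaks into the discarded residual.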


2020, Vol 223 (3), pp. 1888-1898
Author(s): Kirill Gadylshin, Ilya Silvestrov, Andrey Bakulin

SUMMARY: We propose an advanced version of non-linear beamforming assisted by artificial intelligence (NLBF-AI) that includes the additional steps of encoding and interpolating wavefront attributes using inpainting with a deep neural network (DNN). Inpainting can efficiently and accurately fill the holes in wavefront attributes caused by acquisition geometry gaps and data quality issues. Inpainting with a DNN delivers excellent interpolation quality with negligible computational effort and performs particularly well in the challenging case of irregular holes, where other interpolation methods struggle. Since conventional brute-force attribute estimation is very costly, we can also intentionally create additional holes, or masks, to restrict the expensive conventional estimation to a smaller subvolume and obtain the missing attributes with cost-effective inpainting. Using a marine seismic data set with ocean-bottom nodes, we show that inpainting can reliably recover wavefront attributes even with masked areas reaching 50–75 per cent. We validate the quality of the results by comparing attributes and enhanced data from NLBF-AI and conventional NLBF using full-density data without decimation.
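The masking-and-filling step can be illustrated with a crude numpy stand-in for the DNN: iterative neighbour averaging fills randomly masked cells of a smooth attribute map. The synthetic map, the 60 per cent mask fraction, and the diffusion fill are illustrative assumptions; the paper's inpainting uses a trained network with a learned prior.

```python
import numpy as np

def diffusion_inpaint(attr, mask, n_iter=500):
    """Fill masked cells of a 2D attribute map by iterated neighbour averaging.
    mask is True where the attribute is missing; observed cells are never changed."""
    filled = np.where(mask, attr[~mask].mean(), attr)   # init holes with the mean
    for _ in range(n_iter):
        p = np.pad(filled, 1, mode='edge')
        avg = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        filled[mask] = avg[mask]                        # update only the holes
    return filled

# Smooth synthetic "wavefront attribute" map with 60 per cent of cells masked.
rng = np.random.default_rng(1)
y, x = np.mgrid[0:50, 0:50]
attr = np.sin(x / 12.0) + 0.5 * np.cos(y / 9.0)
mask = rng.random(attr.shape) < 0.6

recovered = diffusion_inpaint(attr, mask)
rms_err = np.sqrt(np.mean((recovered[mask] - attr[mask]) ** 2))
```

Because wavefront attributes vary smoothly, even this simple harmonic fill recovers the holes well; the DNN matters for sharper features and the larger irregular gaps described in the abstract.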


Geophysics, 2019, Vol 84 (6), pp. T347-T362
Author(s): Elsa Cecconello, Endrias G. Asgedom, Walter Söllner

Seismic source deghosting and sea-surface-related demultiple have been long-standing problems in marine seismic data processing. Although the receiver ghost problem may be considered solved by using collocated measurements of the pressure and normal-velocity wavefields, source deghosting and demultiple algorithms are still limited by assumptions about the sea-surface condition. We have investigated the impact of a time-varying rough sea surface on source deghosting and demultiple. Starting from Rayleigh's reciprocity theorem for time-varying sea surfaces, we uncover a fundamental limitation of source deghosting for time-dependent wavefields, such as marine seismic data that contain a receiver ghost or sea-surface-related multiples. We use simple synthetic examples to study the impact of source deghosting on sea-surface-related multiples. To overcome this limitation, we derive a method for simultaneous source deghosting and sea-surface-related demultiple for time-variant wavefields. Finally, we use the complex geologic model Sigsbee 2B, first to illustrate that the source deghosting operation introduces significant errors when applied to a data set containing sea-surface multiples, and second to show that this problem can be resolved by performing the source deghosting and demultiple operations simultaneously, even in the presence of time-varying sea surfaces.
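For a flat, time-invariant sea surface the source ghost is a simple convolutional operator, which makes clear why deghosting is a notch-limited deconvolution (the depth, velocity, and wavelet below are illustrative assumptions; the paper's point is precisely that this convolutional model breaks down once the surface varies in time):

```python
import numpy as np

# Flat-sea source ghost as a convolutional operator (time-invariant case only).
fs, n = 500.0, 1024
z_s, c = 6.0, 1500.0                              # source depth (m), water velocity (m/s)
tau = 2.0 * z_s / c                               # two-way delay to the free surface

f = np.fft.rfftfreq(n, d=1.0 / fs)
G = 1.0 - np.exp(-2j * np.pi * f * tau)           # ghost operator, notches at k/tau

# Ricker wavelet as the ghost-free source signature
t = np.arange(n) / fs - 0.2
f0 = 30.0
w = (1 - 2 * (np.pi * f0 * t) ** 2) * np.exp(-(np.pi * f0 * t) ** 2)

ghosted = np.fft.irfft(np.fft.rfft(w) * G, n)     # data with source ghost

# Naive deghosting: stabilized spectral division; unstable near the notches,
# and invalid altogether for a time-varying sea surface.
eps = 1e-3 * np.max(np.abs(G))
deghosted = np.fft.irfft(np.fft.rfft(ghosted) * np.conj(G) / (np.abs(G)**2 + eps), n)
rel_err = np.linalg.norm(deghosted - w) / np.linalg.norm(w)
```

The spectral notches (here at 0 Hz and 125 Hz) are where the division is regularized; data containing receiver ghosts or surface multiples violate this operator model, which is the limitation the paper formalizes.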


2017, Vol 39 (6), pp. 106-121
Author(s): A. O. Verpahovskaya, V. N. Pilipenko, E. V. Pylypenko

2016, Vol 33 (3)
Author(s): Lourenildo W.B. Leite, J. Mann, Wildney W.S. Vieira

ABSTRACT. The present case study results from consistent processing and imaging of a marine seismic data set collected over sedimentary basins of the East Brazilian Atlantic. Our general aim is...


2019
Author(s): Ian W.D. Dalziel, Robert Smalley, Lawrence A. Lawver, Demian Gomez, ...

2021, Vol 11 (11), pp. 4874
Author(s): Milan Brankovic, Eduardo Gildin, Richard L. Gibson, Mark E. Everett

Seismic data provide essential information in geophysical exploration, both for locating hydrocarbon-rich areas and for fracture monitoring during well stimulation. Because of its high-frequency acquisition rate and dense spatial sampling, distributed acoustic sensing (DAS) has seen increasing application in microseismic monitoring. Given the large volumes of data to be analyzed in real time and the impractical memory and storage requirements, fast compression and accurate interpretation methods are necessary for real-time monitoring campaigns using DAS. In response to these developments in data acquisition, we have created shifted-matrix decomposition (SMD), which compresses seismic data by storing it as pairs of singular vectors coupled with shift vectors. This is achieved by shifting the columns of a matrix of seismic data before applying singular value decomposition (SVD) to extract a pair of singular vectors. SMD serves for denoising as well as compression, since reconstructing seismic data from its compressed form yields a denoised version of the original data. By analyzing the data in its compressed form, we can also run signal detection and velocity estimation. The developed algorithm can therefore simultaneously compress and denoise seismic data while estimating signal presence and wave velocities from the compressed representation. To show its efficiency, we compare SMD with local SVD and structure-oriented SVD, similar SVD-based methods used only for denoising seismic data. While the development of SMD was motivated by the increasing use of DAS, SMD can be applied to seismic data obtained from any large number of receivers. As an example, we present initial applications of SMD to readily available marine seismic data.
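The shift-then-SVD idea can be sketched in a few lines of numpy (a hypothetical toy implementation, not the authors' code; the linear moveout model, the cross-correlation shift estimate, and the noise level are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n_t, n_x = 256, 32
t = np.arange(n_t)
wavelet = np.exp(-0.5 * ((t - 60) / 5.0) ** 2)        # event at sample 60
true_shift = (0.8 * np.arange(n_x)).astype(int)        # linear moveout in samples

clean = np.stack([np.roll(wavelet, s) for s in true_shift], axis=1)
data = clean + 0.05 * rng.standard_normal(clean.shape)

# Step 1: estimate per-trace shifts by cross-correlation with the first trace.
ref = data[:, 0]
shifts = np.array([int(np.argmax(np.correlate(data[:, j], ref, mode='full'))) - (n_t - 1)
                   for j in range(n_x)])

# Step 2: shift the columns into alignment, keep one singular pair, shift back.
aligned = np.stack([np.roll(data[:, j], -shifts[j]) for j in range(n_x)], axis=1)
U, s, Vt = np.linalg.svd(aligned, full_matrices=False)
rank1 = s[0] * np.outer(U[:, 0], Vt[0])
denoised = np.stack([np.roll(rank1[:, j], shifts[j]) for j in range(n_x)], axis=1)

# Stored form: one pair of singular vectors plus the integer shift vector.
compression_ratio = data.size / (n_t + n_x + n_x)
rel_err = np.linalg.norm(denoised - clean) / np.linalg.norm(clean)
```

Aligning the event before the decomposition is what lets a single singular pair capture it; the same pair reconstructs a denoised gather, illustrating the joint compression-and-denoising behaviour described in the abstract.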


Animals, 2020, Vol 11 (1), pp. 50
Author(s): Jennifer Salau, Jan Henning Haas, Wolfgang Junge, Georg Thaller

Machine learning methods have become increasingly important in animal science, and the success of an automated application using machine learning often depends on the right choice of method for the respective problem and data set. The recognition of objects in 3D data is still a widely studied topic and is especially challenging when it comes to partitioning objects into predefined segments. In this study, two machine learning approaches were utilized for the recognition of body parts of dairy cows from 3D point clouds, i.e., sets of data points in space. The low-cost, off-the-shelf depth sensor Microsoft Kinect V1 has been used in various studies related to dairy cows. The 3D data were gathered from a multi-Kinect recording unit designed to record freely walking Holstein Friesian cows from both sides at three different camera positions. For the determination of the body parts head, rump, back, legs and udder, five properties of the pixels in the depth maps (row index, column index, depth value, variance, mean curvature) were used as features in the training data set. For each camera position, a k-nearest-neighbour classifier and a neural network were trained and then compared. Both methods showed small Hamming losses (between 0.007 and 0.027 for k-nearest-neighbour (kNN) classification and between 0.045 and 0.079 for the neural networks) and can be considered successful with regard to classifying pixels to body parts. However, the kNN classifier was superior, reaching overall accuracies of 0.888 to 0.976, varying with the camera position. Precision and recall values associated with individual body parts ranged from 0.84 to 1 and from 0.83 to 1, respectively. Once trained, however, kNN classification incurs higher runtime costs in terms of computation time and memory than the neural networks. The cost vs. accuracy trade-off of each methodology needs to be taken into account when deciding which method to implement in the application.
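The per-pixel kNN step can be sketched with synthetic stand-in data (the two classes, the feature centres, and the noise level below are invented for illustration; the study used the five depth-map features per pixel and five body-part labels):

```python
import numpy as np

rng = np.random.default_rng(3)

def make_pixels(n):
    """Synthetic 5-feature pixels (row, column, depth, variance, curvature)
    drawn from two hypothetical body-part classes."""
    labels = rng.integers(0, 2, n)
    centres = np.array([[0.0, 0.0, 1.0, 0.1, 0.2],
                        [2.0, 2.0, 1.5, 0.3, 0.6]])
    return centres[labels] + 0.3 * rng.standard_normal((n, 5)), labels

def knn_predict(X_train, y_train, X_test, k=5):
    """Majority vote among the k nearest training pixels (Euclidean distance)."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return (y_train[nearest].mean(axis=1) > 0.5).astype(int)

X_train, y_train = make_pixels(400)
X_test, y_test = make_pixels(200)
pred = knn_predict(X_train, y_train, X_test)
hamming_loss = np.mean(pred != y_test)   # fraction of misclassified pixels
```

The full pairwise distance matrix computed here is also the source of the runtime cost the abstract notes: kNN defers all work to prediction time, whereas a trained network predicts with a few fixed matrix multiplications.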

