Automatic velocity analysis using convolutional neural network and transfer learning

Geophysics, 2019, Vol. 85 (1), pp. V33-V43
Author(s): Min Jun Park, Mauricio D. Sacchi

Velocity analysis can be a time-consuming task when performed manually. Methods have been proposed to automate the process, but they typically still require significant manual effort. We have developed a convolutional neural network (CNN) to estimate stacking velocities directly from the semblance. Our CNN model takes two images as a single input for training: an entire semblance (guide image) and a small patch (target image) extracted from the semblance at a specific time step. The label for each input pair is the corresponding root-mean-square velocity. We generate the training data set using synthetic data. After training the CNN model with synthetic data, we test it on a separate synthetic data set that was not used during training. The results indicate that the model can predict a consistent velocity model. We also noticed that when the input data differ greatly from the training data, the CNN model can hardly pick the correct velocities. In this case, we adopt transfer learning to update the trained model (base model) with a small portion of the target data to improve the accuracy of the predicted velocity model. A marine data set from the Gulf of Mexico is used for validating our new model. The updated model performed a reasonable velocity analysis in seconds.
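
A minimal sketch (not the authors' code) of the two-image idea described above: the full semblance panel (guide image) and a small patch around one time step (target image) are encoded by separate convolutional branches, concatenated, and regressed onto a single RMS velocity; the closing lines show the transfer-learning update. All layer sizes and input shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VelocityPicker(nn.Module):
    def __init__(self):
        super().__init__()
        # branch for the entire semblance panel (guide image)
        self.guide = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        # branch for the small patch at one time step (target image)
        self.target = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.head = nn.Sequential(
            nn.Linear(16 * 4 * 4 + 8 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 1))  # one RMS velocity per input pair

    def forward(self, guide, target):
        feats = torch.cat([self.guide(guide), self.target(target)], dim=1)
        return self.head(feats)

model = VelocityPicker()
vel = model(torch.randn(2, 1, 128, 64), torch.randn(2, 1, 16, 64))

# transfer learning in the spirit of the base-model update: freeze the
# encoders and fine-tune the head on a small portion of the field data
for branch in (model.guide, model.target):
    for p in branch.parameters():
        p.requires_grad = False
```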

2021, Vol. 40 (11), pp. 831-836
Author(s): Aina Juell Bugge, Andreas K. Evensen, Jan Erik Lie, Espen H. Nilsen

Some of the key tasks in seismic processing involve suppressing multiples and noise that interfere with primary events. Conventional multiple attenuation on seismic prestack data is time-consuming and subjective. As an alternative, we propose model-driven processing using a convolutional neural network trained on synthetically modeled training data. The crucial part of our approach is to generate appropriate training data. Here, we compute a generic data set with pairs of synthetic gathers with and without multiples. Because we generate the primaries first and then add multiples, we ensure that we have perfect target data without any multiple energy. To compute generic and realistic training data, we include elements of wave propagation physics and implement a randomized flexibility of settings such as the wavelet, frequency content, degree of random noise, and amplitude variation with offset effects with each gather pair. A fully convolutional neural network is trained on the synthetic data in order to learn to suppress the noise and multiples. Evaluations of the approach on benchmark data indicate that our trained network is faster than conventional multiple attenuation because it can be run efficiently on a modern GPU, and it has the potential to better preserve primary amplitudes. Multiple removal with model-driven processing is demonstrated on seismic field data, and the results are compared to conventional multiple attenuation using a commercial Radon algorithm. The model-driven approach performs well when applied to real common-depth point gathers, and it successfully removes multiples, even where the multiples interfere with the primary signals on the near offsets.
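
The crucial part, per the abstract, is the training-pair generation. The following is a hedged sketch of computing gather pairs with and without multiples; the geometry, moveout, wavelet, and multiple model here are simplified assumptions, not the authors' modeling code.

```python
import numpy as np

def ricker(f, dt=0.004, n=64):
    t = (np.arange(n) - n // 2) * dt
    return (1 - 2 * (np.pi * f * t) ** 2) * np.exp(-(np.pi * f * t) ** 2)

def make_pair(nt=512, nx=64, dt=0.004, seed=None):
    rng = np.random.default_rng(seed)
    offsets = np.linspace(0.0, 3000.0, nx)
    wav = ricker(rng.uniform(15, 40), dt)            # randomized wavelet
    spikes = np.zeros((nt, nx))
    for _ in range(rng.integers(2, 6)):              # a few primary events
        t0, v = rng.uniform(0.3, 1.5), rng.uniform(1500.0, 3000.0)
        avo = rng.uniform(-0.3, 0.3)                 # crude AVO slope
        for ix, h in enumerate(offsets):
            it = int(np.sqrt(t0 ** 2 + (h / v) ** 2) / dt)  # hyperbolic moveout
            if it < nt:
                spikes[it, ix] += 1.0 + avo * h / offsets[-1]
    prim = np.apply_along_axis(np.convolve, 0, spikes, wav, 'same')
    # crude surface multiple: a delayed, attenuated copy of the primaries,
    # added after the primaries so the target is guaranteed multiple-free
    delay = int(rng.uniform(0.2, 0.5) / dt)
    mult = 0.5 * np.roll(prim, delay, axis=0)
    mult[:delay] = 0.0                               # avoid wrap-around
    noisy = prim + mult + rng.normal(0.0, 0.05, prim.shape)
    return noisy, prim                               # network input, clean target
```

A fully convolutional network trained on many such pairs then learns the mapping from the noisy, multiple-contaminated gather to the clean primaries.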


2020
Author(s): Sriram Srinivasan, Shashank A, Vinayakumar R, Soman KP

In the present era, cyberspace is growing tremendously, and intrusion detection systems (IDS) play a key role in ensuring information security. An IDS, operating at the network or host level, should be capable of identifying various malicious attacks. The job of a network-based IDS is to differentiate between normal and malicious traffic and to raise an alert in case of an attack. Apart from the traditional signature- and anomaly-based approaches, many researchers have employed various deep learning (DL) techniques for detecting intrusions, as DL models are capable of automatically extracting salient features from the input data. The deep convolutional neural network (DCNN), which is used quite often for research problems in image processing and vision, has not been explored much for IDS. In this paper, a DCNN architecture for IDS, trained on the KDDCUP 99 data set, is proposed. This work also shows that the DCNN-IDS model outperforms other existing works.
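
As an illustration only (the paper's exact DCNN-IDS architecture is not reproduced here), a small deep 1-D CNN over the 41 preprocessed KDDCUP 99 features might look like this:

```python
import torch
import torch.nn as nn

dcnn_ids = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * 20, 128), nn.ReLU(),
    nn.Linear(128, 2),           # normal vs. attack (the alert decision)
)
logits = dcnn_ids(torch.randn(8, 1, 41))  # batch of 8 preprocessed records
```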


Geophysics, 1993, Vol. 58 (1), pp. 91-100
Author(s): Claude F. Lafond, Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.
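
A toy illustration of the backprojection step (not the authors' algorithm): a residual measured on a migrated event is spread back along the ray as a slowness update, weighted by the ray's path length in each velocity cell, as in tomographic reconstruction.

```python
import numpy as np

def backproject(slowness, ray_cells, ray_lengths, t_residual):
    """Spread one ray's traveltime residual over the cells it crosses
    (minimum-norm solution of sum_i l_i * ds_i = t_residual)."""
    update = np.zeros_like(slowness)
    update[ray_cells] = t_residual * ray_lengths / (ray_lengths ** 2).sum()
    return slowness + update

# toy usage: a 10-cell slowness model, one ray crossing cells 2..5
s = np.full(10, 1.0 / 2000.0)                        # slowness in s/m
s = backproject(s, np.arange(2, 6), np.full(4, 250.0), t_residual=0.004)
```

In practice such updates are accumulated over many rays and iterations, with the rays traced through the heterogeneous model rather than assumed.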


Geophysics, 2005, Vol. 70 (1), pp. S1-S17
Author(s): Alison E. Malcolm, Maarten V. de Hoop, Jérôme H. Le Rousseau

Reflection seismic data continuation is the computation of data at source and receiver locations that differ from those in the original data, using whatever data are available. We develop a general theory of data continuation in the presence of caustics and illustrate it with three examples: dip moveout (DMO), azimuth moveout (AMO), and offset continuation. This theory does not require knowledge of the reflector positions. We construct the output data set from the input through the composition of three operators: an imaging operator, a modeling operator, and a restriction operator. This results in a single operator that maps directly from the input data to the desired output data. We use the calculus of Fourier integral operators to develop this theory in the presence of caustics. For both DMO and AMO, we compute impulse responses in a constant-velocity model and in a more complicated model in which caustics arise. This analysis reveals errors that can be introduced by assuming, for example, a model with a constant vertical velocity gradient when the true model is laterally heterogeneous. Data continuation uses as input a subset (common offset, common angle) of the available data, which may introduce artifacts in the continued data. One could suppress these artifacts by stacking over a neighborhood of input data (using a small range of offsets or angles, for example). We test data continuation on synthetic data from a model known to generate imaging artifacts. We show that stacking over input scattering angles suppresses artifacts in the continued data.
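
As a much simplified, hedged example of offset continuation: in a constant-velocity medium without caustics, the composition of imaging, modeling, and restriction operators collapses to removing and re-applying hyperbolic moveout. Amplitude and operator corrections are omitted here.

```python
import numpy as np

def continue_offset(trace, h1, h2, v, dt):
    """Map one common-offset trace from offset h1 to offset h2."""
    t_out = np.arange(trace.size) * dt
    t0 = np.sqrt(np.maximum(t_out ** 2 - (h2 / v) ** 2, 0.0))  # zero-offset time
    t_in = np.sqrt(t0 ** 2 + (h1 / v) ** 2)     # event time at the old offset
    return np.interp(t_in, t_out, trace)

# usage: continue a trace from 1000 m to 2000 m offset at v = 2000 m/s
out = continue_offset(np.random.randn(1000), 1000.0, 2000.0, 2000.0, 0.004)
```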


2020, Vol. 83 (6), pp. 602-614
Author(s): Hidir Selcuk Nogay, Hojjat Adeli

Introduction: The diagnosis of epilepsy is a lengthy process that depends entirely on the attending physician, and the human factor can cause erroneous diagnoses in the analysis of the EEG signal. In the past two decades, many advanced signal processing and machine learning methods have been developed for the detection of epileptic seizures; however, many of these methods require large data sets and complex operations. Methods: In this study, an end-to-end machine learning model is presented for the detection of epileptic seizures using a pretrained deep two-dimensional convolutional neural network (CNN) and the concept of transfer learning. The EEG signal is converted into visual data with a spectrogram and used directly as input data. Results: The authors analyzed the results of training the proposed pretrained AlexNet CNN model. Both binary and ternary classifications were performed without any extra procedure such as feature extraction. By building the data set from short-term spectrogram images, the authors achieved 100% accuracy for both binary and ternary classification of epileptic seizures. Discussion/Conclusion: The proposed automatic identification and classification model can help in the early diagnosis of epilepsy, thus providing the opportunity for effective early treatment.
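
A sketch of this pipeline under stated assumptions (a 178 Hz sampling rate as in common public EEG data sets, torchvision's ImageNet-pretrained AlexNet, and nearest-neighbor resizing; the paper's preprocessing is not reproduced):

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram
from torchvision import models

def eeg_to_image(signal, fs=178):
    # EEG segment -> log-spectrogram -> normalized 3-channel 224x224 image
    _, _, sxx = spectrogram(signal, fs=fs, nperseg=64, noverlap=48)
    img = np.log1p(sxx)
    img = (img - img.min()) / (img.max() - img.min() + 1e-9)
    img = torch.tensor(img, dtype=torch.float32)
    img = img.unsqueeze(0).repeat(3, 1, 1)       # replicate to RGB channels
    return nn.functional.interpolate(img.unsqueeze(0), size=(224, 224))[0]

# pretrained AlexNet with its final layer replaced (requires recent torchvision)
net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
net.classifier[6] = nn.Linear(4096, 2)           # 2 classes; use 3 for ternary
x = torch.stack([eeg_to_image(np.random.randn(1024))])
logits = net(x)
```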


Geophysics, 2011, Vol. 76 (5), pp. WB191-WB207
Author(s): Yaxun Tang, Biondo Biondi

We present a new strategy for efficient wave-equation migration-velocity analysis in complex geological settings. The proposed strategy has two main steps: simulating a new data set using an initial unfocused image and performing wavefield-based tomography with this data set. We demonstrate that the new data set can be synthesized using generalized Born wavefield modeling for a specific target region where velocities are inaccurate. Because of the target-oriented modeling strategy, the new data set can be much smaller than the original one while still containing the velocity information necessary for successful velocity analysis. These features make the new data set suitable for target-oriented, fast, and interactive velocity model building. We demonstrate the performance of our method on both a synthetic data set and a field data set acquired from the Gulf of Mexico, where we update the subsalt velocity in a target-oriented fashion and obtain a subsalt image with improved continuity, signal-to-noise ratio, and flattened angle-domain common-image gathers.
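
A heavily reduced, hypothetical sketch of the first step, simulating data from an image: single-frequency Born modeling over a constant background, with the image acting as a scattering potential in a target window. The paper's generalized Born modeling is considerably more general than this.

```python
import numpy as np
from scipy.special import hankel1

def born_data(image, cells, src, rec, freq, c0=2000.0):
    """Single-frequency scattered data for one source/receiver pair:
    d = w^2 * sum_x G(rec, x) * m(x) * G(x, src), constant background c0."""
    w = 2.0 * np.pi * freq
    G = lambda a, b: 0.25j * hankel1(0, w / c0 * np.linalg.norm(a - b, axis=-1))
    return (w ** 2 * G(rec, cells) * image * G(cells, src)).sum()

# usage: one point scatterer in a 50-cell target window at 500 m depth
img = np.zeros(50); img[25] = 1.0
cells = np.column_stack([np.linspace(0.0, 1000.0, 50), np.full(50, 500.0)])
d = born_data(img, cells, np.array([0.0, 0.0]), np.array([1000.0, 0.0]), 10.0)
```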


The project “Disease Prediction Model” focuses on predicting the type of skin cancer, a disease that takes a huge toll on human well-being. It constructs a sequential convolutional neural network (CNN) model to identify the type of skin cancer, since automated methods can greatly increase identification accuracy. The data set considered for this project was collected from NCBI and is well known as the HAM10000 data set; it consists of a large number of dermatoscopic images of the most common pigmented skin lesions, gathered from different patients. Once the data set is collected and cleaned, it is split into training and testing sets. We trained the CNN model on the training data and then evaluated it on the testing data. Once the model is applied to the testing data, plots are made to analyze the relation between epochs and the loss function, as well as between epochs and accuracy, for both training and testing data.
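
A minimal sketch of this kind of sequential CNN (layer sizes and the 64 × 64 input resolution are assumptions, not the project's exact model); HAM10000 has seven lesion classes:

```python
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
    nn.Linear(128, 7),                   # 7 HAM10000 lesion classes
)
out = cnn(torch.randn(4, 3, 64, 64))    # batch of 4 dermatoscopic images
```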


2021, Vol. 2021, pp. 1-11
Author(s): Xieyi Chen, Dongyun Wang, Jinjun Shao, Jun Fan

To automatically detect plastic gasket defects, a visual inspection system based on GoogLeNet Inception-V2 transfer learning was designed and built in this study. The GoogLeNet Inception-V2 deep convolutional neural network (DCNN) was adopted to extract and classify the defect features of plastic gaskets, addressing the large variety of surface defects and the difficulty of extracting and classifying their features. Deep learning applications require a large amount of training data to avoid model overfitting, but few data sets of plastic gasket defects exist; to address this issue, data augmentation was applied to our data set. Finally, the performance of three convolutional neural networks was comprehensively compared. The results showed that the GoogLeNet Inception-V2 transfer learning model performed better in less time, achieving higher accuracy, reliability, and efficiency on the data set used in this paper.
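
A hedged sketch of the transfer-learning setup: torchvision ships GoogLeNet (Inception v1) rather than Inception-V2, so it stands in for the paper's backbone here; the defect class count and the augmentation choices are assumptions.

```python
import torch.nn as nn
from torchvision import models, transforms

# simple augmentation to compensate for the small defect data set
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

net = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
for p in net.parameters():          # freeze the pretrained feature extractor
    p.requires_grad = False
net.fc = nn.Linear(1024, 4)         # e.g. 4 gasket defect categories (assumed)
```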


2020, Vol. 13 (6), pp. 2631-2644
Author(s): Georgy Ayzel, Tobias Scheffer, Maik Heistermann

In this study, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. Its design was inspired by the U-Net and SegNet families of deep learning models, which were originally designed for binary segmentation tasks. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km × 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In order to achieve a lead time of 1 h, a recursive approach was implemented by using RainNet predictions at 5 min lead times as model inputs for longer lead times. In the verification experiments, trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks. The latter is available in the rainymotion library and had previously been shown to outperform DWD's operational nowcasting model for the same set of verification events. RainNet significantly outperforms the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and the critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm h⁻¹. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm h⁻¹). The limited ability of RainNet to predict heavy rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below. Obviously, RainNet had learned an optimal level of smoothing to produce a nowcast at 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact (an analogue to numerical diffusion) that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance in terms of a binary segmentation task. Furthermore, we suggest additional input data that could help to better identify situations with imminent precipitation dynamics. The model code, pretrained weights, and training data are provided in open repositories as an input for such future studies.
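
A sketch of the recursive scheme described above: a model trained for a 5 min lead time is applied to its own output twelve times to reach 60 min. A Gaussian blur stands in for the trained network here (the real pretrained weights live in the paper's open repositories), and it conveniently mimics the cumulative smoothing the authors report.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rainnet_stub(field):
    # placeholder for the trained CNN: one 5 min nowcast step
    return gaussian_filter(field, sigma=1.0)

def recursive_nowcast(field, steps=12):   # 12 x 5 min = 60 min lead time
    forecasts = []
    for _ in range(steps):
        field = rainnet_stub(field)       # feed the prediction back in
        forecasts.append(field)
    return forecasts

radar = np.random.gamma(0.2, 5.0, size=(900, 900))  # mm/h composite, 1 km grid
fcst = recursive_nowcast(radar)
```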

