Automatic fault detection on seismic images using a multiscale attention convolutional neural network

Geophysics ◽  
2021 ◽  
pp. 1-99
Author(s):  
Kai Gao ◽  
Lianjie Huang ◽  
Yingcai Zheng ◽  
Rongrong Lin ◽  
Hao Hu ◽  
...  

High-fidelity fault detection on seismic images is one of the most important and challenging topics in automatic seismic interpretation. Conventional hand-picking-based and semi-automated fault-detection approaches are being replaced by fully automatic methods thanks to advances in machine learning. We develop a novel multiscale attention convolutional neural network (MACNN for short) to improve machine-learning-based, automatic, end-to-end fault detection on seismic images. The most important characteristic of our MACNN fault-detection method is that it employs a multiscale spatial-channel attention mechanism to merge and refine encoder feature maps of different spatial resolutions. The new architecture enables our MACNN to more effectively learn and exploit contextual information embedded in the encoder feature maps. We demonstrate through several synthetic- and field-data examples that our MACNN tends to produce higher-resolution, higher-fidelity fault maps from complex seismic images than the conventional fault-detection convolutional neural network, leading to improved geological fidelity and interpretability of the detected faults.
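The abstract does not detail the spatial-channel attention design; as a generic illustration only (a squeeze-and-excitation-style channel gate, not the authors' MACNN mechanism), one channel-wise gating step might look like:

```python
import math

def channel_attention(feature_maps):
    """Generic channel-attention gate (squeeze-and-excitation style).

    feature_maps : list of channels, each a 2-D list of activations.
    Each channel is summarized by its global average ("squeeze"),
    passed through a sigmoid to obtain a weight in (0, 1), and the
    channel is rescaled by that weight ("excitation").
    """
    gated = []
    for ch in feature_maps:
        mean = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        w = 1.0 / (1.0 + math.exp(-mean))      # sigmoid gate
        gated.append([[w * v for v in row] for row in ch])
    return gated
```

A real implementation would learn the squeeze-to-weight mapping with small fully connected layers rather than a bare sigmoid, and would apply it across multiple encoder scales.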

Geophysics ◽  
2019 ◽  
Vol 84 (3) ◽  
pp. IM35-IM45 ◽  
Author(s):  
Xinming Wu ◽  
Luming Liang ◽  
Yunzhi Shi ◽  
Sergey Fomel

Delineating faults from seismic images is a key step in seismic structural interpretation, reservoir characterization, and well placement. In conventional methods, faults are treated as seismic reflection discontinuities and are detected by calculating attributes that estimate reflection continuity or discontinuity. We instead consider fault detection as a binary image segmentation problem of labeling a 3D seismic image with ones on faults and zeros elsewhere. We perform efficient image-to-image fault segmentation using a supervised fully convolutional neural network. To train the network, we automatically create 200 3D synthetic seismic images and corresponding binary fault-label images, which prove sufficient to train a good fault segmentation network. Because a binary fault image is highly imbalanced between zeros (nonfault) and ones (fault), we use a class-balanced binary cross-entropy loss function to compensate for the imbalance, so that the network does not converge to predicting only zeros. After training with only the synthetic data sets, the network automatically learns to compute rich and appropriate features that are important for fault detection. Multiple field examples indicate that the neural network (trained only on synthetic data sets) can predict faults from 3D seismic images much more accurately and efficiently than conventional methods. With a TITAN Xp GPU, the training process takes approximately 2 h, and predicting faults in a [Formula: see text] seismic volume takes only milliseconds.
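The class-balanced binary cross-entropy described above can be sketched in a few lines (a minimal pure-Python illustration, not the authors' implementation; the function name and the convention of weighting positives by the negative-class fraction are assumptions):

```python
import math

def balanced_bce(preds, labels, eps=1e-7):
    """Class-balanced binary cross-entropy over a batch of voxels.

    preds  : predicted fault probabilities in (0, 1)
    labels : ground-truth labels, 1 on faults, 0 elsewhere
    The rare positive (fault) class is weighted by beta, the fraction
    of negative samples, which keeps the network from collapsing to
    the trivial all-zero prediction.
    """
    n = len(labels)
    beta = sum(1 for y in labels if y == 0) / n  # fraction of negatives
    loss = 0.0
    for p, y in zip(preds, labels):
        p = min(max(p, eps), 1 - eps)            # clamp for numerical safety
        loss += -(beta * y * math.log(p)
                  + (1 - beta) * (1 - y) * math.log(1 - p))
    return loss / n
```

With a typical fault volume, beta is close to 1, so a missed fault voxel costs far more than a missed background voxel.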


IoT ◽  
2021 ◽  
Vol 2 (2) ◽  
pp. 222-235
Author(s):  
Guillaume Coiffier ◽  
Ghouthi Boukli Hacene ◽  
Vincent Gripon

Deep neural networks are state-of-the-art on a large number of machine learning challenges. However, to reach the best performance they require a huge pool of parameters. Indeed, typical deep convolutional architectures contain an increasing number of feature maps as we go deeper in the network, whereas the spatial resolution of the inputs is decreased through downsampling operations. This means that most of the parameters lie in the final layers, while a large portion of the computations is performed by a small fraction of the total parameters in the first layers. In an effort to use every parameter of a network to its fullest, we propose a new convolutional neural network architecture, called ThriftyNet. In ThriftyNet, only one convolutional layer is defined and used recursively, leading to a maximal parameter factorization. In complement, normalization, non-linearities, downsampling, and shortcut connections ensure sufficient expressivity of the model. ThriftyNet achieves competitive performance on a tiny parameter budget, exceeding 91% accuracy on CIFAR-10 with fewer than 40k parameters in total, 74.3% on CIFAR-100 with fewer than 600k parameters, and 67.1% on ImageNet ILSVRC 2012 with no more than 4.15M parameters. However, the proposed method typically requires more computations than existing counterparts.
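The parameter savings from reusing a single convolutional layer can be illustrated with a toy count (the helper names and channel/depth figures here are hypothetical, not ThriftyNet's actual configuration):

```python
def conv_params(c_in, c_out, k=3):
    # parameters of one 2-D convolution: weights plus biases
    return c_in * c_out * k * k + c_out

def standard_stack(channels, depth, k=3):
    # `depth` distinct conv layers, each with its own weights
    return depth * conv_params(channels, channels, k)

def thrifty_stack(channels, depth, k=3):
    # one conv layer applied recursively `depth` times:
    # the parameter count is independent of depth
    return conv_params(channels, channels, k)
```

For example, with 128 channels and 30 iterations, a standard stack needs 30 × (128·128·9 + 128) = 4,427,520 parameters, while the recursive layer needs only 147,584, at the cost of the same (or more) computation per forward pass.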


2021 ◽  
Author(s):  
Yu Hao ◽  
Biao Zhang ◽  
Xiaohua Wan ◽  
Rui Yan ◽  
Zhiyong Liu ◽  
...  

Motivation: Cryo-electron tomography (cryo-ET) with sub-tomogram averaging (STA) is indispensable for studying macromolecular structures and functions in their native environments. However, current tomographic reconstructions suffer from a low signal-to-noise ratio (SNR) and missing-wedge artifacts. Hence, automatic and accurate macromolecule localization and classification have become the bottleneck for structural determination by STA. Here, we propose a 3D multi-scale dense convolutional neural network (MSDNet) for voxel-wise annotation of tomograms. A weighted focal loss is adopted as the loss function to address class imbalance. The proposed network combines 3D hybrid dilated convolutions (HDC) and dense connectivity to ensure accurate performance with relatively few trainable parameters. 3D HDC expands the receptive field without losing resolution or learning extra parameters. Dense connectivity facilitates the reuse of feature maps, generating fewer intermediate feature maps and trainable parameters. We then design a 3D-MSDNet-based approach for fully automatic macromolecule localization and classification, called VP-Detector (Voxel-wise Particle Detector). VP-Detector is efficient because classification is performed on pre-calculated coordinates rather than with a sliding window. Results: We evaluated VP-Detector on simulated tomograms. Compared to state-of-the-art methods, our method achieved competitive localization performance with the highest F1-score. We also demonstrated that the weighted focal loss improves the classification of hard classes. We trained the network on a portion of the training sets to show that it can be trained on relatively small datasets. Moreover, experiments show that VP-Detector has a fast particle-detection speed, taking less than 14 minutes on a test tomogram.
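The weighted focal loss can be sketched per voxel as follows (a minimal illustration following the standard focal-loss formula, not the authors' exact weighting scheme; the function name is hypothetical):

```python
import math

def weighted_focal_loss(probs, label, weights, gamma=2.0, eps=1e-7):
    """Weighted focal loss for a single voxel.

    probs   : predicted class probabilities (softmax output)
    label   : index of the true class
    weights : per-class weights; rare macromolecule classes get larger
              weights to counter class imbalance
    gamma   : focusing parameter; (1 - p_t)^gamma down-weights easy,
              well-classified voxels so training concentrates on hard ones
    """
    p_t = min(max(probs[label], eps), 1 - eps)
    return -weights[label] * (1 - p_t) ** gamma * math.log(p_t)
```

With gamma = 0 this reduces to weighted cross-entropy; increasing gamma shrinks the loss on confident correct predictions much faster than on uncertain ones.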


2021 ◽  
pp. 90-100
Author(s):  
M.O. Kuchma ◽  
V.V. Voronin ◽  
V.D. Bloshchinskiy ◽  
...  

We describe a convolutional-neural-network-based algorithm that detects cloud and snow cover in satellite images. The algorithm's accuracy was evaluated using machine learning metrics. The proposed algorithm is fully automatic.


2019 ◽  
Vol 219 (3) ◽  
pp. 2097-2109 ◽  
Author(s):  
Xinming Wu ◽  
Luming Liang ◽  
Yunzhi Shi ◽  
Zhicheng Geng ◽  
Sergey Fomel

Summary Fault detection in a seismic image is a key step of structural interpretation. Structure-oriented, edge-preserving smoothing removes noise while enhancing seismic structures and sharpening structural edges in a seismic image, thereby facilitating and accelerating seismic structural interpretation. Estimating seismic normal vectors or reflection slopes is a basic step for many other seismic data processing tasks. All three seismic image processing tasks are related to each other, as they all involve the analysis of seismic structural features. In conventional seismic image processing schemes, however, these three tasks are often performed independently by different algorithms, and challenges remain in each of them. We propose to perform all three tasks simultaneously using a single convolutional neural network (CNN). To train the network, we automatically create thousands of 3-D noisy synthetic seismic images and the corresponding ground truth: fault images, clean seismic images, and seismic normal vectors. Although trained with only the synthetic data sets, the network automatically learns to accurately perform all three image processing tasks on a general seismic image. Multiple field examples show that the network is significantly superior to conventional methods in all three tasks, computing a more accurate and sharper fault detection, a smoothed seismic volume with better-enhanced structures and structural edges, and more accurate seismic normal vectors or reflection slopes. Using a Titan Xp GPU, the training process takes about 8 hr, and the trained model takes only half a second to process a seismic volume with $128\, \times \, 128\, \times \, 128$ image samples.


2020 ◽  
Vol 9 (12) ◽  
pp. 4013
Author(s):  
Sebastian Ziegelmayer ◽  
Georgios Kaissis ◽  
Felix Harder ◽  
Friederike Jungmann ◽  
Tamara Müller ◽  
...  

The differentiation of autoimmune pancreatitis (AIP) and pancreatic ductal adenocarcinoma (PDAC) poses a relevant diagnostic challenge and can lead to misdiagnosis and consequently poor patient outcomes. Recent studies have shown that radiomics-based models can achieve high sensitivity and specificity in predicting both entities. However, radiomic features can only capture low-level representations of the input image. In contrast, convolutional neural networks (CNNs) can learn and extract more complex representations, which have been used for image classification with great success. In our retrospective observational study, we performed deep-learning-based feature extraction on CT scans of both entities and compared the predictive value against traditional radiomic features. In total, 86 patients, 44 with AIP and 42 with PDAC, were analyzed. Whole-pancreas segmentation was performed automatically on CT scans acquired during the portal venous phase. The segmentation masks were manually checked and corrected where necessary. In total, 1411 radiomic features were extracted using PyRadiomics, and 256 features (deep features) were extracted from an intermediate layer of a convolutional neural network (CNN). After feature selection and normalization, an extremely randomized trees algorithm was trained and tested using two-fold shuffle-split cross-validation with a test sample of 20% (n = 18) to discriminate between AIP and PDAC. Feature maps were plotted and visual differences were noted. The machine learning (ML) model achieved a sensitivity, specificity, and ROC-AUC of 0.89 ± 0.11, 0.83 ± 0.06, and 0.90 ± 0.02 for the deep features and 0.72 ± 0.11, 0.78 ± 0.06, and 0.80 ± 0.01 for the radiomic features. Visualization of the feature maps indicated different activation patterns for AIP and PDAC. We successfully trained a machine learning model using deep feature extraction from CT images to differentiate between AIP and PDAC. Compared with traditional radiomic features, deep features achieved higher sensitivity, specificity, and ROC-AUC. Visualization of deep features could further improve the diagnostic accuracy of non-invasive differentiation of AIP and PDAC.
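The two-fold shuffle-split evaluation can be sketched as follows (a minimal stdlib version of the scheme; in practice scikit-learn's `ShuffleSplit` would be used, and the function name here is hypothetical):

```python
import math
import random

def shuffle_split(n, test_frac=0.2, n_splits=2, seed=0):
    """Shuffle-split cross-validation over n samples.

    Each split re-shuffles all indices and holds out ceil(n * test_frac)
    of them as the test set; with n = 86 patients and a 20% test
    fraction this gives 18 test cases per split, as in the study.
    """
    rng = random.Random(seed)
    idx = list(range(n))
    n_test = math.ceil(n * test_frac)
    splits = []
    for _ in range(n_splits):
        rng.shuffle(idx)
        splits.append((idx[n_test:], idx[:n_test]))  # (train, test)
    return splits
```

Unlike k-fold cross-validation, the test sets of different shuffle splits may overlap, which is why the reported metrics carry a ± spread across the two splits.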


Author(s):  
P.L. Nikolaev

This article deals with a method for binary classification of images containing small text. The classification is based on the fact that the text can have two orientations: it can be positioned horizontally and read from left to right, or it can be rotated 180 degrees, so that the image must be rotated before the text can be read. This type of text can be found on the covers of a variety of books, so when recognizing covers it is necessary to first determine the orientation of the text before recognizing it. The article describes the development of a deep neural network for determining text orientation in the context of book-cover recognition. Results of training and testing a convolutional neural network on synthetic data, as well as examples of the network operating on real data, are presented.


Author(s):  
Satoru Tsuiki ◽  
Takuya Nagaoka ◽  
Tatsuya Fukuda ◽  
Yuki Sakamoto ◽  
Fernanda R. Almeida ◽  
...  

Abstract Purpose In 2-dimensional lateral cephalometric radiographs, patients with severe obstructive sleep apnea (OSA) exhibit a more crowded oropharynx than non-OSA subjects. We tested the hypothesis that machine learning, an application of artificial intelligence (AI), could be used to detect patients with severe OSA from 2-dimensional images. Methods A deep convolutional neural network was developed (n = 1258; 90%) and tested (n = 131; 10%) using data from 1389 (100%) lateral cephalometric radiographs obtained from individuals diagnosed with severe OSA (n = 867; apnea-hypopnea index > 30 events/h of sleep) or non-OSA (n = 522; apnea-hypopnea index < 5 events/h of sleep) at a single sleep-disorders center. Three kinds of data sets were prepared by changing the area of interest within a single image: the original image without any modification (full image), an image containing the facial profile, upper airway, and craniofacial soft/hard tissues (main region), and an image containing part of the occipital region (head only). A radiologist also performed a conventional manual cephalometric analysis of the full image for comparison. Results The sensitivity/specificity was 0.87/0.82 for the full image, 0.88/0.75 for the main region, 0.71/0.63 for head only, and 0.54/0.80 for the manual analysis. The area under the receiver-operating-characteristic curve was highest for the main region (0.92); it was 0.89 for the full image, 0.70 for head only, and 0.75 for the manual cephalometric analysis. Conclusions A deep convolutional neural network identified individuals with severe OSA with high accuracy. These results encourage further research on the use of AI and imaging for the triage of OSA.
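The sensitivity and specificity figures reported above come from the standard confusion-matrix counts; a minimal illustration (the function name is mine, not from the paper):

```python
def sensitivity_specificity(preds, labels):
    """Sensitivity and specificity from binary predictions.

    Sensitivity = TP / (TP + FN): fraction of severe-OSA patients
    correctly flagged. Specificity = TN / (TN + FP): fraction of
    non-OSA subjects correctly cleared.
    """
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    tn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 0)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    return tp / (tp + fn), tn / (tn + fp)
```

The manual analysis above (0.54/0.80) trades sensitivity for specificity relative to the network on the full image (0.87/0.82), which is why the AUC comparison favors the network.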


Author(s):  
Yao Wang ◽  
Linming Hou ◽  
Kamal Chandra Paul ◽  
Yunsheng Ban ◽  
Chen Chen ◽  
...  
