Building realistic structure models to train convolutional neural networks for seismic structural interpretation

Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. WA27-WA39 ◽  
Author(s):  
Xinming Wu ◽  
Zhicheng Geng ◽  
Yunzhi Shi ◽  
Nam Pham ◽  
Sergey Fomel ◽  
...  

Seismic structural interpretation involves highlighting and extracting faults and horizons that are apparent as geometric features in a seismic image. Although many seismic image processing methods have been proposed to automate fault and horizon interpretation, each of them still requires significant human effort today. We improve automatic structural interpretation in seismic images by using convolutional neural networks (CNNs), which have recently shown excellent performance in detecting and extracting useful image features and objects. The main limitation of applying CNNs to seismic interpretation is the preparation of many training data sets and, especially, the corresponding geologic labels. Manually labeling geologic features in a seismic image is highly time-consuming and subjective, and often results in incompletely or inaccurately labeled training images. To solve this problem, we have developed a workflow to automatically build diverse structure models with realistic folding and faulting features. In this workflow, with some assumptions about typical folding and faulting patterns, we simulate structural features in a 3D model by using a set of parameters. By randomly choosing the parameters from predefined ranges, we can automatically generate numerous structure models with realistic and diverse structural features. From these structure models with known structural information, we further automatically create numerous synthetic seismic images and the corresponding ground-truth structural labels to train CNNs for structural interpretation in field seismic images. Accurate structural interpretation in multiple field seismic images indicates that our workflow simulates realistic and generalized structure models from which the CNNs effectively learn to recognize real structures in field images.
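
The generator itself is not published alongside the abstract, but the core idea of drawing structural parameters from predefined ranges can be sketched as follows; all parameter names and ranges here are illustrative placeholders, not the authors' values:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def sample_structure_params():
    """Draw one random set of folding/faulting parameters.

    Every name and range below is an illustrative assumption,
    not the paper's actual parameterization.
    """
    return {
        # folding controlled by a few smooth deformations
        "n_folds": int(rng.integers(2, 8)),
        "fold_amplitude": rng.uniform(5.0, 40.0),     # samples
        "fold_wavelength": rng.uniform(50.0, 300.0),  # samples
        # planar fault geometry
        "n_faults": int(rng.integers(0, 5)),
        "fault_dip": rng.uniform(60.0, 90.0),         # degrees
        "fault_strike": rng.uniform(0.0, 180.0),      # degrees
        "fault_throw": rng.uniform(5.0, 30.0),        # samples
        # wavelet used to convolve reflectivity into a synthetic image
        "peak_frequency": rng.uniform(20.0, 40.0),    # Hz
    }

# Generating many diverse models is then just repeated sampling:
training_specs = [sample_structure_params() for _ in range(1000)]
```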

2021 ◽  
Vol 7 ◽  
pp. e738
Author(s):  
Mumtaz Ali ◽  
Riaz Ali

Conventionally, convolutional neural networks (CNNs) have been used to identify and detect thorax diseases on chest X-ray images. To identify thorax diseases, CNNs typically learn two types of information: disease-specific features and generic anatomical features. CNNs focus on disease-specific features while ignoring the rest of the anatomical features during their operation. Current research offers no evidence on whether generic anatomical features improve or worsen the performance of CNNs for thorax disease classification. This study therefore investigates the relevance of general anatomical features in boosting the performance of CNNs for thorax disease classification. We employ a dual-stream CNN model to learn anatomical features before training the model for thorax disease classification. The dual-stream technique is used to compel the model to learn structural information, because the initial layers of CNNs often learn features of edges and boundaries. As a result, a dual-stream model with minimal layers learns structural and anatomical features as a priority. To make the technique more comprehensive, we first train the model to identify gender and age and then classify thorax diseases using the information acquired: only when the model learns the anatomical features can it detect gender and age. We also use non-negative matrix factorization (NMF) and contrast-limited adaptive histogram equalization (CLAHE) to pre-process the training data, which suppresses disease-related information while amplifying general anatomical features, allowing the model to acquire anatomical features considerably faster. Finally, the model previously trained for gender and age detection is retrained for thorax disease classification using the original data. Experiments on the ChestX-ray14 dataset show that the proposed technique increases the performance of CNNs for thorax disease classification. By visualizing the learned features, we can also see which parts of the image contribute most to gender, age, and a given thorax disease. The proposed study achieves two goals: first, it produces novel gender and age identification results on chest X-ray images that may be used in biometrics, forensics, and anthropology, and second, it highlights the importance of general anatomical features in thorax disease classification. The proposed work also produces results competitive with the state of the art.
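
A rough sketch of the NMF + CLAHE preprocessing idea described above, assuming OpenCV's CLAHE and scikit-learn's NMF; the rank and CLAHE settings are guesses, not the paper's configuration:

```python
import cv2
import numpy as np
from sklearn.decomposition import NMF

def preprocess_cxr(path, n_components=8):
    """Low-rank NMF reconstruction blurs out fine (disease-related)
    detail, and CLAHE then boosts coarse anatomical contrast.
    n_components=8 and clipLimit=2.0 are illustrative assumptions."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float64)
    # rank-k non-negative factorization of the (non-negative) image matrix
    nmf = NMF(n_components=n_components, init="nndsvda", max_iter=400)
    W = nmf.fit_transform(img)               # (rows, k)
    low_rank = W @ nmf.components_           # (rows, cols) reconstruction
    low_rank = np.clip(low_rank, 0, 255).astype(np.uint8)
    # contrast-limited adaptive histogram equalization
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(low_rank)
```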


2021 ◽  
Author(s):  
Shreya Hardaha ◽  
Damodar Reddy ◽  
Saidi Reddy Parne

Abstract Recently, convolutional neural networks (CNNs) have been very successful in segmentation and classification tasks. Magnetic resonance imaging (MRI) is a favored medical imaging method that provides rich information for the diagnosis of different diseases. MRI has become extremely popular because it is non-invasive, and for this reason automated processing of this sort of image is attracting attention. MRI is widely used for tumor detection, and brain tumor detection is a popular medical application of MRI. Automating segmentation with CNNs helps radiologists reduce the heavy manual workload of tumor evaluation. CNN classification accuracy depends on network parameters and training data. CNNs have the benefit of learning image features automatically, directly from multi-modal MRI images. In this survey paper, we present a summary of recent advances in CNN techniques applied to MRI. The aim of this survey is to discuss various architectures and factors affecting the performance of CNNs for learning features from different available MRI datasets. Based on the survey, section III (CNN for MRI analysis) comprises three subsections: A) MRI data and processing, B) CNN dimensionality, C) CNN architectures.
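
To make the CNN-dimensionality distinction the survey covers (subsection B) concrete, a minimal sketch contrasting 2D slice-based and 3D volumetric convolutions in PyTorch; the channel counts and volume sizes are illustrative (four modalities, BraTS-like slices), not drawn from any specific surveyed work:

```python
import torch
import torch.nn as nn

# 2D CNN: one MRI slice, with the modalities stacked as channels
conv2d = nn.Conv2d(in_channels=4, out_channels=16, kernel_size=3, padding=1)
slice_batch = torch.randn(8, 4, 240, 240)      # (N, modalities, H, W)
print(conv2d(slice_batch).shape)               # torch.Size([8, 16, 240, 240])

# 3D CNN: a volumetric patch, modalities again as channels
conv3d = nn.Conv3d(in_channels=4, out_channels=16, kernel_size=3, padding=1)
volume_batch = torch.randn(2, 4, 64, 64, 64)   # (N, modalities, D, H, W)
print(conv3d(volume_batch).shape)              # torch.Size([2, 16, 64, 64, 64])
```

The trade-off the survey discusses follows directly: 3D convolutions see through-slice context but cost far more memory per example, which is why patch sizes shrink as dimensionality grows.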


2021 ◽  
Vol 423 ◽  
pp. 639-650
Author(s):  
Tinghuai Ma ◽  
Hongmei Wang ◽  
Lejun Zhang ◽  
Yuan Tian ◽  
Najla Al-Nabhan

Geophysics ◽  
2019 ◽  
Vol 84 (2) ◽  
pp. N29-N40
Author(s):  
Modeste Irakarama ◽  
Paul Cupillard ◽  
Guillaume Caumon ◽  
Paul Sava ◽  
Jonathan Edwards

Structural interpretation of seismic images can be highly subjective, especially in complex geologic settings. A single seismic image will often support multiple geologically valid interpretations, yet it is usually difficult to determine which of those interpretations are more likely than others. We refer to this problem as structural model appraisal. We have developed the use of misfit functions to rank and appraise multiple interpretations of a given seismic image. Given a set of possible interpretations, we compute synthetic data for each structural interpretation, and then we compare these synthetic data against observed seismic data; this allows us to assign a data-misfit value to each structural interpretation. Our aim is to find data-misfit functions that enable a ranking of interpretations. To do so, we formalize the problem of appraising structural interpretations using seismic data, and we derive a set of conditions to be satisfied by the data-misfit function for a successful appraisal. We investigate vertical seismic profiling (VSP) and surface seismic configurations. An application of the proposed method to a realistic synthetic model shows promising results for appraising structural interpretations using VSP data, provided that the target region is well illuminated. However, we find appraising structural interpretations using surface seismic data to be more challenging, mainly due to the difficulty of computing phase-shift data misfits.
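
The appraisal loop the abstract describes reduces to forward-modeling each interpretation and sorting by misfit. A minimal sketch, with a placeholder forward model and a plain L2 misfit standing in for the phase-shift misfits the authors discuss:

```python
import numpy as np

def rank_interpretations(interpretations, observed, forward_model):
    """Rank structural interpretations by data misfit.

    `forward_model` maps an interpretation to synthetic seismic data on
    the same grid as `observed`; both it and the L2 misfit used here are
    placeholders for whichever modeling engine and misfit function one
    actually adopts.
    """
    scores = []
    for interp in interpretations:
        synthetic = forward_model(interp)
        misfit = np.sum((synthetic - observed) ** 2)   # simple L2 misfit
        scores.append(misfit)
    order = np.argsort(scores)                         # lowest misfit first
    return [(interpretations[i], scores[i]) for i in order]
```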


Author(s):  
Y. A. Lumban-Gaol ◽  
K. A. Ohori ◽  
R. Y. Peters

Abstract. Satellite-derived bathymetry (SDB) has been used in many applications related to coastal management. SDB can efficiently fill gaps in data obtained from traditional echo-sounding measurements. However, it still requires numerous training data, which are not available in many areas. Furthermore, accuracy problems persist because a linear model cannot capture the nonlinear relationship between reflectance and depth caused by bottom variations and noise. Convolutional neural networks (CNNs) offer the ability to capture both the connection between neighbouring pixels and this nonlinear relationship. These characteristics make CNNs compelling for shallow-water depth extraction. We investigate the accuracy of different architectures using different window sizes and band combinations. We use Sentinel-2 Level 2A images to provide reflectance values, and Lidar and multibeam echo sounder (MBES) datasets as depth references to train and test the model. A set of Sentinel-2 and in-situ depth subimage pairs is extracted to perform CNN training. The model is compared to the linear transform and applied to two other study areas. The resulting accuracy ranges from 1.3 m to 1.94 m, and the coefficient of determination reaches 0.94. The SDB model generated using a window size of 9 × 9 indicates compatibility with the reference depths, especially in areas deeper than 15 m. Adding both short-wave infrared bands to the four visible bands in training improves the overall accuracy of SDB. Applying the pre-trained model to the other study areas provides similar results, depending on the water conditions.
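
A minimal sketch of a patch-based depth regressor consistent with the setup described above (six bands = four visible + two short-wave infrared, 9 × 9 windows); the layer sizes are assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class SDBNet(nn.Module):
    """Toy CNN regressing water depth from a 9x9 Sentinel-2 window."""
    def __init__(self, n_bands=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 32, kernel_size=3), nn.ReLU(),  # 9x9 -> 7x7
            nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(),       # 7x7 -> 5x5
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 5 * 5, 64), nn.ReLU(),
            nn.Linear(64, 1),                                   # depth in metres
        )

    def forward(self, x):                    # x: (N, bands, 9, 9)
        return self.head(self.features(x)).squeeze(-1)

# One training batch: reflectance windows paired with Lidar/MBES depths.
model = SDBNet()
windows = torch.randn(16, 6, 9, 9)
pred_depth = model(windows)                  # shape (16,)
```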


Author(s):  
N Seijdel ◽  
N Tsakmakidis ◽  
EHF De Haan ◽  
SM Bohte ◽  
HS Scholte

Abstract Feedforward deep convolutional neural networks (DCNNs) are, under specific conditions, matching and even surpassing human performance in object recognition in natural scenes. This performance suggests that the analysis of a loose collection of image features could support the recognition of natural object categories, without dedicated systems to solve specific visual subtasks. Research in humans, however, suggests that while feedforward activity may suffice for sparse scenes with isolated objects, additional visual operations (‘routines’) that aid the recognition process (e.g. segmentation or grouping) are needed for more complex scenes. Linking human visual processing to the performance of DCNNs of increasing depth, we explored if, how, and when object information is differentiated from the backgrounds objects appear on. To this end, we controlled the information in both objects and backgrounds, as well as the relationship between them, by adding noise, manipulating background congruence, and systematically occluding parts of the image. Results indicate that with an increase in network depth, there is an increase in the distinction between object and background information. For shallower networks, results indicated a benefit of training on segmented objects. Overall, these results indicate that, de facto, scene segmentation can be performed by a network of sufficient depth. We conclude that the human brain could perform scene segmentation in the context of object identification without an explicit mechanism, by selecting or “binding” features that belong to the object and ignoring other features, in a manner similar to a very deep convolutional neural network.
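
One way to probe the claimed depth effect is sketched below, with a pretrained VGG16 and random tensors standing in for the controlled stimuli; this illustrates the analysis idea only, not the authors' actual method or stimuli:

```python
import torch
import torchvision.models as models

# Pretrained VGG16; we read out activations at several depths and ask how
# far apart "object" and "background" inputs are at each stage.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

object_imgs = torch.randn(8, 3, 224, 224)      # placeholder object stimuli
background_imgs = torch.randn(8, 3, 224, 224)  # placeholder background stimuli

def mean_activation(x, up_to):
    """Average feature vector after running layers 0..up_to."""
    with torch.no_grad():
        for layer in vgg[: up_to + 1]:
            x = layer(x)
    return x.flatten(1).mean(0)

for depth in [4, 9, 16, 23, 30]:               # ends of conv blocks 1..5
    obj = mean_activation(object_imgs, depth)
    bg = mean_activation(background_imgs, depth)
    sim = torch.nn.functional.cosine_similarity(obj, bg, dim=0)
    print(f"layer {depth}: object/background cosine similarity {sim:.3f}")
```

Lower similarity at deeper layers would indicate greater object/background differentiation, in line with the depth trend the abstract reports.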


2018 ◽  
Author(s):  
Shori Nishimoto ◽  
Yuta Tokuoka ◽  
Takahiro G Yamada ◽  
Noriko F Hiroi ◽  
Akira Funahashi

Summary Image-based deep learning systems, such as convolutional neural networks (CNNs), have recently been applied to cell classification, producing impressive results; however, the application of CNNs has been confined to classification of the current cell state from the image. Here, we focused on cell movement, where current and/or past cell shape can influence the future cell fate. We demonstrate that CNNs prospectively predicted the future direction of cell movement with high accuracy from a single image patch of a cell at a certain time. Furthermore, by visualizing the image features learned by the CNNs, we could identify morphological features, e.g., the protrusions and trailing edge, that have been experimentally reported to determine the direction of cell movement. Our results indicate that CNNs have the potential to predict the future cell fate from current cell shape and can be used to automatically identify the morphological features that influence future cell fate.
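
Framing the task as classification over discretized movement directions, a toy version of such a predictor might look as follows; the architecture, patch size, and 8-bin discretization are assumptions for illustration, not the paper's model:

```python
import torch
import torch.nn as nn

class DirectionNet(nn.Module):
    """Toy CNN mapping a single-channel cell image patch to one of
    8 movement-direction bins."""
    def __init__(self, n_directions=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, n_directions),   # logits over directions
        )

    def forward(self, patch):            # patch: (N, 1, 64, 64)
        return self.net(patch)

model = DirectionNet()
patch = torch.randn(4, 1, 64, 64)        # cell patches at time t
direction_logits = model(patch)           # predicted movement for t+1
```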


Geophysics ◽  
2021 ◽  
pp. 1-45
Author(s):  
Runhai Feng ◽  
Dario Grana ◽  
Niels Balling

Segmentation of faults based on seismic images is an important step in reservoir characterization. With recent developments in deep-learning methods and the availability of massive computing power, automatic interpretation of seismic faults has become possible. The likelihood of occurrence of a fault can be quantified using a sigmoid function. Our goal is to quantify the fault-model uncertainty that is generally not captured by deep-learning tools. We propose to use the dropout approach, a regularization technique to prevent overfitting and co-adaptation in hidden units, to approximate Bayesian inference and estimate a principled uncertainty over functions. In particular, the variance of the learned model is decomposed into aleatoric and epistemic parts. The proposed method is applied to a real dataset from the Netherlands F3 block with two different dropout ratios in the convolutional neural networks. The aleatoric uncertainty is irreducible, since it relates to the stochastic dependency within the input observations. As the number of Monte-Carlo realizations increases, the epistemic uncertainty asymptotically converges and the model standard deviation decreases, because the variability of the model parameters is better simulated or explained with a larger sample size. This analysis quantifies the confidence with which fault predictions can be used, favoring predictions with less uncertainty. Additionally, the analysis suggests where more training data are needed to reduce the uncertainty in low-confidence regions.
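
A minimal Monte-Carlo dropout sketch of the mean/variance estimation described above; the toy network and dropout ratio are illustrative, and the aleatoric term (which requires a separately learned noise output) is omitted:

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model, x, n_samples=50):
    """Keep dropout active at inference and average over stochastic
    forward passes: the mean approximates the fault probability map,
    and the variance across passes gives the epistemic uncertainty."""
    model.train()                         # keeps nn.Dropout layers stochastic
    with torch.no_grad():
        samples = torch.stack([torch.sigmoid(model(x))
                               for _ in range(n_samples)])
    return samples.mean(0), samples.var(0)   # mean map, epistemic variance

# Toy fault-probability network with dropout after the conv layer.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Dropout2d(p=0.5),
    nn.Conv2d(16, 1, 3, padding=1),
)
mean_map, epistemic_var = mc_dropout_predict(model, torch.randn(1, 1, 128, 128))
```

As the abstract notes, increasing `n_samples` makes the epistemic estimate converge, since more realizations better sample the variability of the model parameters.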

