Use of multiattribute transforms to predict log properties from seismic data

Geophysics ◽  
2001 ◽  
Vol 66 (1) ◽  
pp. 220-236 ◽  
Author(s):  
Daniel P. Hampson ◽  
James S. Schuelke ◽  
John A. Quirein

We describe a new method for predicting well‐log properties from seismic data. The analysis data consist of a series of target logs from wells which tie a 3-D seismic volume. The target logs theoretically may be of any type; however, the greatest success to date has been in predicting porosity logs. From the 3-D seismic volume a series of sample‐based attributes is calculated. The objective is to derive a multiattribute transform, which is a linear or nonlinear transform between a subset of the attributes and the target log values. The selected subset is determined by a process of forward stepwise regression, which derives increasingly larger subsets of attributes. An extension of conventional crossplotting involves the use of a convolutional operator to resolve frequency differences between the target logs and the seismic data. In the linear mode, the transform consists of a series of weights derived by least‐squares minimization. In the nonlinear mode, a neural network is trained, using the selected attributes as inputs. Two types of neural networks have been evaluated: the multilayer feedforward network (MLFN) and the probabilistic neural network (PNN). Because of its mathematical simplicity, the PNN appears to be the network of choice. To estimate the reliability of the derived multiattribute transform, crossvalidation is used. In this process, each well is systematically removed from the training set, and the transform is rederived from the remaining wells. The prediction error for the hidden well is then calculated. The validation error, which is the average error for all hidden wells, is used as a measure of the likely prediction error when the transform is applied to the seismic volume. The method is applied to two real data sets. In each case, we see a continuous improvement in predictive power as we progress from single‐attribute regression to linear multiattribute prediction to neural network prediction. 
This improvement is evident not only on the training data but, more importantly, on the validation data. In addition, the neural network shows a significant improvement in resolution over that from linear regression.
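The core loop of the method in its linear mode, forward stepwise attribute selection with least-squares weights followed by leave-one-well-out crossvalidation, can be sketched as follows. This is a minimal illustration only, with hypothetical array inputs (`attrs` as samples-by-attributes, `wells` as a per-sample well index); the convolutional operator and the neural network stages are omitted.

```python
import numpy as np

def fit_weights(A, t):
    # Least-squares weights (with an intercept) mapping attributes A to target t.
    A1 = np.column_stack([np.ones(len(A)), A])
    w, *_ = np.linalg.lstsq(A1, t, rcond=None)
    return w

def predict(A, w):
    return np.column_stack([np.ones(len(A)), A]) @ w

def rms(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

def stepwise_select(attrs, target, n_select):
    # Forward stepwise regression: at each step, add the attribute that most
    # reduces the training RMS error of the multiattribute transform.
    chosen, remaining = [], list(range(attrs.shape[1]))
    for _ in range(n_select):
        errs = {j: rms(predict(attrs[:, chosen + [j]],
                               fit_weights(attrs[:, chosen + [j]], target)),
                       target) for j in remaining}
        best = min(errs, key=errs.get)
        chosen.append(best)
        remaining.remove(best)
    return chosen

def leave_one_well_out(attrs, target, wells, chosen):
    # Crossvalidation: hide each well in turn, rederive the transform from the
    # remaining wells, and average the prediction error over all hidden wells.
    errs = []
    for wid in np.unique(wells):
        tr, te = wells != wid, wells == wid
        w = fit_weights(attrs[tr][:, chosen], target[tr])
        errs.append(rms(predict(attrs[te][:, chosen], w), target[te]))
    return float(np.mean(errs))
```

On synthetic data in which the target is exactly linear in two of the attributes, the selection recovers those two attributes and the validation error is near zero.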

Geophysics ◽  
2021 ◽  
pp. 1-63
Author(s):  
Wenqian Fang ◽  
Lihua Fu ◽  
Shaoyong Liu ◽  
Hongwei Li

Deep learning (DL) technology has emerged as a new approach for seismic data interpolation. DL-based methods can automatically learn the mapping between regularly subsampled and complete data from a large training dataset; the trained network can then be used to directly interpolate new data. Therefore, compared with traditional methods, DL-based methods reduce the manual workload and make the interpolation process efficient and automatic by avoiding hyperparameter selection. However, DL-based approaches have two limitations. First, the generalization performance of the neural network is inadequate when processing new data whose structure differs from that of the training data. Second, the trained networks are very difficult to interpret. To overcome these limitations, we combine deep neural networks with classic prediction-error filter methods, proposing a novel de-aliased seismic data interpolation framework termed PEFNet (Prediction-Error Filter Network). PEFNet uses convolutional neural networks to learn the relationship between the subsampled data and the prediction-error filters; the filters estimated by the trained network are then used to recover the missing traces. Learning filters enables the network to better extract the local dip of seismic data and gives it good generalization ability. In addition, PEFNet has the same interpretability as traditional prediction-error-filter-based methods. The applicability and effectiveness of the proposed method are demonstrated by synthetic and field data examples.
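As a minimal, network-free illustration of the underlying idea, the sketch below estimates a two-lag spatial prediction-error filter from a gather by least squares and uses it to predict a missing trace from its neighbours. PEFNet itself replaces the least-squares estimation with a convolutional network, which is not reproduced here.

```python
import numpy as np

def estimate_pef(data, nlag=2):
    # Least-squares fit of spatial prediction coefficients a such that each
    # trace is predicted from its nlag left neighbours:
    #   data[:, x] ~ sum_k a[k] * data[:, x - 1 - k]
    rows, rhs = [], []
    for x in range(nlag, data.shape[1]):
        rows.append(np.stack([data[:, x - 1 - k] for k in range(nlag)], axis=1))
        rhs.append(data[:, x])
    A, b = np.concatenate(rows), np.concatenate(rhs)
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

def predict_trace(data, x, a):
    # Recover a (missing) trace at position x from its left neighbours.
    return sum(a[k] * data[:, x - 1 - k] for k in range(len(a)))
```

For a gather containing a single plane-wave event, a two-lag filter predicts the next trace exactly, which is the property the network is trained to generalize.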


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
S Gao ◽  
D Stojanovski ◽  
A Parker ◽  
P Marques ◽  
S Heitner ◽  
...  

Abstract Background Correctly identifying views acquired in a 2D echocardiographic examination is paramount to the post-processing and quantification steps performed as part of most clinical workflows. In many exams, particularly in stress echocardiography, microbubble contrast is used, which greatly affects the appearance of the cardiac views. Here we present a bespoke, fully automated convolutional neural network (CNN) that identifies apical 2-, 3-, and 4-chamber and short-axis (SAX) views acquired with and without contrast. The CNN was tested on a completely independent, external dataset acquired in a different country from that used to train the neural network. Methods Training data, comprising 2D echocardiograms from 1,014 subjects in a prospective multisite, multivendor UK trial, included more than 17,500 frames per view. Prior to view-classification model training, images were processed using standard techniques to ensure homogeneous and normalised inputs to the training pipeline. A bespoke CNN was built using the minimum number of convolutional layers required, with batch normalisation and dropout to reduce overfitting. Before processing, the data were split into 90% for model training (211,958 frames) and 10% for validation (23,946 frames); frames from any given subject appeared in only one of the two sets. Further, a separate trial dataset of 240 studies acquired in the USA was used as an independent test dataset (39,401 frames). Results Figure 1 shows the confusion matrices for the validation data (left) and independent test data (right), with an overall accuracy of 96% and 95%, respectively. The accuracy for the non-contrast cardiac views (>99%) exceeds that reported in other works. The combined datasets included images acquired across ultrasound manufacturers and models from 12 clinical sites.
Conclusion We have developed a CNN capable of automatically and accurately identifying all relevant cardiac views used in “real world” echo exams, including views acquired with contrast. Use of the CNN in a routine clinical workflow could improve the efficiency of quantification steps performed after image acquisition. The network performed similarly on an independent dataset acquired in a different country from that used for training, indicating the generalisability of the model. Figure 1. Confusion matrices Funding Acknowledgement Type of funding source: Private company. Main funding source(s): Ultromics Ltd.
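The reported metrics follow from a confusion matrix, as in the sketch below; the view names and counts are illustrative, not the paper's data.

```python
import numpy as np

VIEWS = ["A2C", "A3C", "A4C", "SAX"]  # apical 2/3/4-chamber, short axis

def confusion_matrix(true, pred, n_classes):
    # cm[i, j] = number of frames of true class i predicted as class j.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(true, pred):
        cm[t, p] += 1
    return cm

def overall_accuracy(cm):
    # Fraction of frames on the diagonal (correctly classified).
    return cm.trace() / cm.sum()

def per_class_accuracy(cm):
    # Per-view recall: diagonal divided by row sums.
    return cm.diagonal() / cm.sum(axis=1)
```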


2021 ◽  
Author(s):  
Yash Chauhan ◽  
Prateek Singh

Coin recognition systems have countless applications, from vending and slot machines to banking and management firms, which translates directly into a high volume of research on methods for such classification. In recent years, academic research has shifted towards computer vision approaches to sorting coins, owing to advances in the field of deep learning. However, most of the documented work uses what is known as ‘transfer learning’, in which a pre-trained model of fixed architecture is reused as the starting point for training. While such an approach saves considerable time and effort, the generic nature of the pre-trained model can become a performance bottleneck on a specialized problem such as coin classification. This study develops a convolutional neural network (CNN) model from scratch and tests it against a widely used general-purpose architecture known as GoogLeNet. By comparing the performance of our model with that of GoogLeNet (documented in various previous studies), we show that a simpler, specialized architecture suits the coin classification problem better than a more complex general one. The model developed in this study is trained and tested on 720 and 180 images of Indian coins of different denominations, respectively. The final accuracy is 91.62% on the training data and 90.55% on the validation data.


Jurnal INFORM ◽  
2021 ◽  
Vol 6 (1) ◽  
pp. 61-64
Author(s):  
Mohammad Zoqi Sarwani ◽  
Dian Ahkam Sani

The Internet creates a new space where people can interact and communicate efficiently. Social media is one type of media used to interact on the Internet, and Facebook and Twitter are two such platforms. Many people are unaware of how much of their personal life they bring into public view, and so unconsciously reveal information about their personality. The Big Five model is a widely used personality assessment method and serves as the reference in this study. The data used are social media statuses from both Facebook and Twitter, taken from 50 social media users; each user's statuses were collected as text. Tests performed using the probabilistic neural network algorithm achieved an average accuracy of 86.99% during training and 83.66% during testing, with 30 training samples and 20 test samples.
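A probabilistic neural network reduces to a Parzen-window (Gaussian-kernel) classifier. The sketch below shows the classification rule on generic numeric feature vectors; the feature extraction from status text used in the study is not reproduced, and `sigma` is the kernel's smoothing parameter.

```python
import numpy as np

def pnn_classify(X_train, y_train, X_test, sigma=1.0):
    # Probabilistic neural network: every training sample contributes a
    # Gaussian kernel of width sigma; a test point is assigned to the class
    # with the largest mean kernel response (a Parzen density estimate).
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        k = np.exp(-np.sum((X_train - x) ** 2, axis=1) / (2.0 * sigma ** 2))
        scores = [k[y_train == c].mean() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)
```

There is no iterative training: the "network" is the training set itself, which is why PNNs are mathematically simple to set up.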


2021 ◽  
Vol 73 (02) ◽  
pp. 68-69
Author(s):  
Chris Carpenter

This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 200577, “Applications of Artificial Neural Networks for Seismic Facies Classification: A Case Study From the Mid-Cretaceous Reservoir in a Supergiant Oil Field,” by Ali Al-Ali, Karl Stephen, SPE, and Asghar Shams, Heriot-Watt University, prepared for the 2020 SPE Europec featured at the 82nd EAGE Conference and Exhibition, originally scheduled to be held in Amsterdam, 1-3 December. The paper has not been peer reviewed. Facies classification using data from sources such as wells and outcrops cannot fully characterize the interwell region. Therefore, as an alternative approach, seismic facies classification schemes are applied to reduce the uncertainties in the reservoir model. In this study, a machine-learning neural network was introduced to predict the lithology required for building a full-field Earth model for carbonate reservoirs in southern Iraq. The work and the methodology provide a significant improvement in facies classification and reveal the capability of a probabilistic neural network technique. Introduction The use of machine learning in seismic facies classification has increased gradually during the past decade in the interpretation of 3D and 4D seismic volumes and in reservoir characterization workflows. The complete paper provides a literature review on this topic. Previous seismic reservoir characterization revealed the heterogeneity of the Mishrif reservoir and its distribution in terms of the pore system and the structural model. The main objective of this work, however, is to classify and predict the heterogeneous facies of the carbonate Mishrif reservoir in a giant oil field using a multilayer feed-forward network (MLFN) and a probabilistic neural network (PNN) as nonlinear facies classification techniques. A related objective was to find any domain-specific causal relationships among input and output variables.
These two methods have been applied to classify and predict the presence of different facies in Mishrif reservoir rock types. Case Study Reservoir and Data Set Description. The West Qurna field is a giant, multibillion-barrel oil field in the southern Mesopotamian Basin with multiple carbonate and clastic reservoirs. The overall structure of the field is a north/south-trending anticline, steep on the western flank and gentle on the eastern flank. Many producing reservoirs have been developed in this oil field; however, the Mid-Cretaceous Mishrif reservoir is the main one. The reservoir consists of thick carbonate strata (roughly 250 m) deposited on a shallow-water platform adjacent to more-distal, deeper-water nonreservoir carbonate facies, forming three second-order stratigraphic sequence units. Mishrif facies are characterized by a porosity greater than 20% and a large permeability contrast from grainstones to microporosity (10-1,000 md). The first full-field 3D seismic data set was acquired over 500 km2 during 2012 and 2013 in order to plan the development of all field reservoirs. A detailed description of the reservoir has been determined from well logs and core and seismic data. This study is mainly based on facies logs (22 wells) and a high-resolution 3D seismic volume used to generate seismic attributes as the input data for training the neural network model. The model is used to evaluate lithofacies in wells without core data but with appropriate facies logs. Testing was also carried out in parallel with the core data to verify the results of the facies classification.


Energies ◽  
2020 ◽  
Vol 13 (12) ◽  
pp. 3074 ◽  
Author(s):  
Shulin Pan ◽  
Ke Yan ◽  
Haiqiang Lan ◽  
José Badal ◽  
Ziyu Qin

Conventional sparse-spike deconvolution algorithms based on the iterative shrinkage-thresholding algorithm (ISTA) are widely used. Algorithms of this type require accurate seismic wavelets; when that requirement is not fulfilled, the processing is no longer optimal. Using a recurrent neural network (RNN) as a deep learning method and applying backpropagation to ISTA, we have developed an RNN-like ISTA as an alternative sparse-spike deconvolution algorithm. The algorithm is tested with both synthetic and real seismic data. It first builds a training dataset from existing well-log and seismic data and then extracts wavelets from the seismic data for further processing. Based on the extracted wavelets, the new method uses ISTA to calculate the reflection coefficients. Next, inspired by the backpropagation-through-time (BPTT) algorithm, backward error correction is performed on the wavelets using the errors between the calculated reflection coefficients and those corresponding to the training dataset. Finally, after backward correction over multiple iterations, a set of acceptable seismic wavelets is obtained, which is then used to deduce the sequence of reflection coefficients of the real data. The new algorithm improves the accuracy of the deconvolution results by reducing the effect of the erroneous seismic wavelets produced by conventional ISTA. In this study, we describe the mechanism and derivation of the proposed algorithm and verify its effectiveness through experiments on theoretical and real data.
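The ISTA core that the RNN-like method unrolls can be sketched as follows: soft-thresholded gradient steps on the reflectivity, with the wavelet embedded in a causal convolution matrix. This is the conventional ISTA baseline only, with an illustrative wavelet; the backpropagation-based wavelet correction is not reproduced.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the L1 penalty.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_deconv(trace, wavelet, lam=0.01, n_iter=1000):
    # ISTA for sparse reflectivity r minimizing
    #   0.5 * ||W r - trace||^2 + lam * ||r||_1,
    # where W performs causal convolution with the wavelet.
    n = len(trace)
    W = np.zeros((n, n))
    for k, wk in enumerate(wavelet):      # W[i, j] = wavelet[i - j]
        for j in range(n - k):
            W[j + k, j] = wk
    L = np.linalg.norm(W, 2) ** 2         # Lipschitz constant of the gradient
    r = np.zeros(n)
    for _ in range(n_iter):
        grad = W.T @ (W @ r - trace)
        r = soft_threshold(r - grad / L, lam / L)
    return r
```

With an accurate wavelet, the iteration recovers the positions and approximate amplitudes of isolated spikes; an inaccurate wavelet degrades this result, which motivates the backward wavelet correction in the proposed method.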


2019 ◽  
Vol 7 (3) ◽  
pp. SE269-SE280
Author(s):  
Xu Si ◽  
Yijun Yuan ◽  
Tinghua Si ◽  
Shiwen Gao

Random noise often contaminates seismic data and reduces its signal-to-noise ratio. Therefore, the removal of random noise has been an essential step in seismic data processing. The f-x predictive filtering method is one of the most widely used methods for suppressing random noise. However, when the subsurface structure becomes complex, this method suffers from higher prediction errors owing to the large number of different dip components that need to be predicted. Here, we used a denoising convolutional neural network (DnCNN) algorithm to attenuate random noise in seismic data. This method does not assume the linearity and stationarity of the signal required by the conventional f-x-domain prediction technique. It involves creating a set of training data by data processing, feeding the neural network with those training data, and performing deep network learning and training. During learning and training, the activation function and batch normalization are used to address the vanishing- and exploding-gradient problems, and the residual learning technique is used to improve the calculation precision. After learning and training, the network is able to separate the residual image from noisy seismic data; clean images are then obtained by subtracting the residual image from the raw noisy data. Tests on synthetic and real data demonstrate that the DnCNN algorithm is very effective for random noise attenuation in seismic data.


Geophysics ◽  
2019 ◽  
Vol 84 (6) ◽  
pp. B403-B417 ◽  
Author(s):  
Hao Wu ◽  
Bo Zhang ◽  
Tengfei Lin ◽  
Danping Cao ◽  
Yihuai Lou

The seismic horizon is a critical input for the structure and stratigraphy modeling of reservoirs. It is extremely hard to automatically obtain an accurate horizon interpretation for seismic data in which the lateral continuity of reflections is interrupted by faults and unconformities. The process of seismic horizon interpretation can be viewed as segmenting the seismic traces into different parts, each of which is a unique object. Thus, we treat horizon interpretation as an object-detection problem. We use an encoder-decoder convolutional neural network (CNN) to detect the “objects” contained in the seismic traces, and the boundaries of the objects are regarded as the horizons. The training data are the seismic traces located on a user-defined coarse grid. We give a unique training label to the time window of seismic traces bounded by two manually picked horizons. To efficiently learn the waveform pattern bounded by two adjacent horizons, we use variable sizes for the convolution filters, which differs from current CNN-based image-segmentation methods. Two field-data examples demonstrate that our method is capable of producing accurate horizons across fault surfaces and near unconformities, which is beyond the current capability of automatic horizon-picking methods.
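The labeling step, turning manually picked horizons into a per-sample class along a trace so that each horizon becomes a class boundary, can be sketched as follows (picks are given in samples; the function is an illustration, not the authors' code):

```python
import numpy as np

def trace_labels(n_samples, horizon_picks):
    # Convert the picked horizon times on one trace (in samples, any order)
    # into a per-sample class label: all samples between two adjacent
    # horizons share one label, so each horizon is a class boundary.
    labels = np.zeros(n_samples, dtype=int)
    for cls, pick in enumerate(sorted(horizon_picks), start=1):
        labels[pick:] = cls
    return labels
```

A segmentation network trained on such labels recovers horizons as the boundaries between predicted classes.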


2019 ◽  
Vol 30 (1) ◽  
pp. 45-66 ◽  
Author(s):  
Anette Rantanen ◽  
Joni Salminen ◽  
Filip Ginter ◽  
Bernard J. Jansen

Purpose User-generated social media comments can be a useful source of information for understanding online corporate reputation. However, the manual classification of these comments is challenging due to their high volume and unstructured nature. The purpose of this paper is to develop a classification framework and machine learning model to overcome these limitations. Design/methodology/approach The authors create a multi-dimensional classification framework for online corporate reputation that includes six main dimensions synthesized from prior literature: quality, reliability, responsibility, successfulness, pleasantness and innovativeness. To evaluate the classification framework’s performance on real data, the authors retrieve 19,991 social media comments about two Finnish banks and use a convolutional neural network (CNN) to automatically classify the comments based on manually annotated training data. Findings After parameter optimization, the neural network achieves an accuracy between 52.7 and 65.2 percent on real-world data, which is reasonable given the high number of classes. The findings also indicate that prior work has not captured all the facets of online corporate reputation. Practical implications For practical purposes, the authors provide a comprehensive classification framework for online corporate reputation, which companies and organizations operating in various domains can use. Moreover, the authors demonstrate that a limited amount of training data can yield a satisfactory multiclass classifier when using a CNN. Originality/value This is the first attempt to automatically classify online corporate reputation using an online-specific classification framework.

