Machine learning-based fracture-hit detection algorithm using LFDAS signal

2019 ◽  
Vol 38 (7) ◽  
pp. 520-524 ◽  
Author(s):  
Ge Jin ◽  
Kevin Mendoza ◽  
Baishali Roy ◽  
Darryl G. Buswell

The low-frequency distributed acoustic sensing (LFDAS) signal has been used to detect fracture hits at offset monitor wells during hydraulic fracturing operations. Typically, fracture hits are identified manually, which can be subjective and inefficient. We implemented supervised machine learning models to automatically identify fracture zones that show a high probability of fracture hits. Several features are designed and calculated from the LFDAS data to highlight fracture-hit characteristics. A simple neural network model is trained to fit the manually picked fracture hits. The fracture-hit probability predicted by the model agrees well with the manual picks in the training, validation, and test data sets. The algorithm was applied in a case study of an unconventional reservoir, and the results indicate that a smaller cluster spacing design creates denser fractures.
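
As a rough illustration of the approach described above, the sketch below trains a small neural network on hand-engineered LFDAS features to output a fracture-hit probability. The feature choices, synthetic stand-in data, and network size are assumptions for demonstration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a small neural network mapping
# hand-engineered LFDAS features to a fracture-hit probability.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder feature matrix: one row per (channel, time-window) sample.
# Columns could be, e.g., low-frequency strain-rate amplitude, polarity
# contrast across neighboring channels, and temporal gradient (assumed features).
X = rng.normal(size=(5000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)   # stand-in for manual picks

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

scaler = StandardScaler().fit(X_train)
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
model.fit(scaler.transform(X_train), y_train)

# Predicted fracture-hit probability per sample, to be compared with manual picks.
proba = model.predict_proba(scaler.transform(X_test))[:, 1]
print("held-out accuracy:", model.score(scaler.transform(X_test), y_test))
```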

2021 ◽  
Author(s):  
Chinh Luu ◽  
Quynh Duy Bui ◽  
Romulus Costache ◽  
Luan Thanh Nguyen ◽  
Thu Thuy Nguyen ◽  
...  

Author(s):  
Gediminas Adomavicius ◽  
Yaqiong Wang

Numerical predictive modeling is widely used in different application domains. Although many modeling techniques have been proposed, and a number of different aggregate accuracy metrics exist for evaluating the overall performance of predictive models, other important aspects, such as the reliability (or confidence and uncertainty) of individual predictions, have been underexplored. We propose to use estimated absolute prediction error as the indicator of individual prediction reliability, which has the benefits of being intuitive and providing highly interpretable information to decision makers, as well as allowing for more precise evaluation of reliability estimation quality. As importantly, the proposed reliability indicator allows the reframing of reliability estimation itself as a canonical numeric prediction problem, which makes the proposed approach general-purpose (i.e., it can work in conjunction with any outcome prediction model), alleviates the need for distributional assumptions, and enables the use of advanced, state-of-the-art machine learning techniques to learn individual prediction reliability patterns directly from data. Extensive experimental results on multiple real-world data sets show that the proposed machine learning-based approach can significantly improve individual prediction reliability estimation as compared with a number of baselines from prior work, especially in more complex predictive scenarios.
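
The core idea, estimating the absolute prediction error of a base model with a second, general-purpose learner, can be sketched as follows; the data set and model choices are illustrative assumptions, not the authors' experimental setup.

```python
# Minimal sketch: a second "reliability" model is trained to predict the
# absolute error of a base outcome-prediction model, so each new prediction
# comes with an individual error estimate.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=3000, n_features=10, noise=10.0, random_state=0)
X_fit, X_rel, y_fit, y_rel = train_test_split(X, y, test_size=0.5, random_state=0)

# 1) Fit the base outcome model on one half of the data.
base = GradientBoostingRegressor(random_state=0).fit(X_fit, y_fit)

# 2) On the other half, compute absolute prediction errors and treat them as
#    the target of a canonical numeric prediction problem.
abs_err = np.abs(y_rel - base.predict(X_rel))
reliability = GradientBoostingRegressor(random_state=0).fit(X_rel, abs_err)

# 3) For a new case, report both the prediction and its estimated absolute error.
x_new = X_rel[:1]
print("prediction:", base.predict(x_new)[0],
      "estimated |error|:", reliability.predict(x_new)[0])
```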


2021 ◽  
pp. 1-67
Author(s):  
Stewart Smith ◽  
Olesya Zimina ◽  
Surender Manral ◽  
Michael Nickel

Seismic fault detection using machine learning techniques, in particular the convolutional neural network (CNN), is becoming a widely accepted practice in the field of seismic interpretation. Machine learning algorithms are trained to mimic the capabilities of an experienced interpreter by recognizing patterns within seismic data and classifying them. Regardless of the method of seismic fault detection, interpretation or extraction of 3D fault representations from edge evidence or fault probability volumes is routine. Extracted fault representations are important to the understanding of the subsurface geology and are a critical input to upstream workflows including structural framework definition, static reservoir and petroleum system modeling, and well planning and de-risking activities. Efforts to automate the detection and extraction of geological features from seismic data have evolved in line with advances in computer algorithms, hardware, and machine learning techniques. We have developed an assisted fault interpretation workflow for seismic fault detection and extraction, demonstrated through a case study from the Groningen gas field of the Upper Permian, Dutch Rotliegend: a heavily faulted, subsalt gas field located onshore, NE Netherlands. Supervised using interpreter-led labeling, we apply a 2D multi-CNN to detect faults within a 3D pre-stack depth migrated seismic dataset. After prediction, we apply a geometric evaluation of predicted faults, using a principal component analysis (PCA) to produce geometric attribute representations (strike azimuth and planarity) of the fault prediction. Strike azimuth and planarity attributes are used to validate and automatically extract consistent 3D fault geometries, providing geological context to the interpreter and more efficient input to dependent workflows.
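
The PCA-based geometric evaluation step can be illustrated roughly as follows: for a connected set of predicted fault points, the eigenvectors and eigenvalues of the coordinate covariance yield a strike azimuth and a planarity measure. The function below is a minimal sketch under assumed conventions (x east, y north, one common planarity definition), not the authors' implementation.

```python
# Minimal sketch: strike azimuth and planarity of one predicted fault point set
# via principal component analysis of its (x, y, z) coordinates.
import numpy as np

def fault_geometry(points_xyz: np.ndarray):
    """points_xyz: (n, 3) array of x (east), y (north), z (depth) coordinates."""
    centered = points_xyz - points_xyz.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # sort eigenvalues descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # The smallest-eigenvalue eigenvector approximates the fault-plane normal;
    # strike is horizontal and perpendicular to the normal's horizontal projection.
    normal = eigvecs[:, 2]
    strike_azimuth = (np.degrees(np.arctan2(normal[0], normal[1])) + 90.0) % 180.0

    # Planarity: values near 1 indicate points lying close to a single plane
    # (one of several common point-cloud definitions).
    planarity = 1.0 - eigvals[2] / eigvals[1]
    return strike_azimuth, planarity

# Example: points scattered about a plane striking roughly N45E.
rng = np.random.default_rng(1)
u, v = rng.uniform(-100, 100, (2, 500))
pts = np.column_stack([u, u, v]) + rng.normal(scale=1.0, size=(500, 3))
print(fault_geometry(pts))
```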


2011 ◽  
Vol 16 (9) ◽  
pp. 1059-1067 ◽  
Author(s):  
Peter Horvath ◽  
Thomas Wild ◽  
Ulrike Kutay ◽  
Gabor Csucs

Imaging-based high-content screens often rely on single cell-based evaluation of phenotypes in large data sets of microscopic images. Traditionally, these screens are analyzed by extracting a few image-related parameters and using their ratios (linear single or multiparametric separation) to classify the cells into various phenotypic classes. In this study, the authors show how machine learning–based classification of individual cells outperforms those classical ratio-based techniques. Using fluorescence intensity, morphological, and texture features, they evaluated how the performance of data analysis increases with increasing feature numbers. Their findings are based on a case study involving an siRNA screen monitoring nucleoplasmic and nucleolar accumulation of a fluorescently tagged reporter protein. For the analysis, they developed a complete workflow incorporating image segmentation, feature extraction, cell classification, hit detection, and visualization of the results. For the classification task, the authors have established a new graphical framework, the Advanced Cell Classifier, which provides very accurate high-content screen analysis with minimal user interaction, offering access to a variety of advanced machine learning methods.
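
The contrast between ratio-based separation and multiparametric machine learning classification can be sketched on synthetic per-cell features as follows; the feature names, phenotype rule, and classifier are illustrative assumptions, not the Advanced Cell Classifier itself.

```python
# Minimal sketch: a single intensity-ratio threshold versus a multiparametric
# machine learning classifier on per-cell features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
# Assumed per-cell features: nucleolar and nucleoplasmic reporter intensity,
# nuclear area, and a texture measure.
nucleolar = rng.gamma(2.0, 1.0, n)
nucleoplasmic = rng.gamma(2.0, 1.0, n)
area = rng.normal(100, 15, n)
texture = rng.normal(0, 1, n)
labels = (nucleolar / nucleoplasmic + 0.1 * texture > 1.2).astype(int)  # stand-in phenotype

# Classical linear separation on a single intensity ratio.
ratio_pred = (nucleolar / nucleoplasmic > 1.2).astype(int)
print("ratio-threshold accuracy:", (ratio_pred == labels).mean())

# Multiparametric ML classification on all features.
X = np.column_stack([nucleolar, nucleoplasmic, area, texture])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("ML cross-val accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```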


Author(s):  
Manjunath K. E. ◽  
Yogeen S. Honnavar ◽  
Rakesh Pritmani ◽  
Sethuraman K.

The objective of this work is to develop methodologies to detect and report non-compliant images with respect to Indian Space Research Organisation (ISRO) recruitment requirements. The recruitment software hosted at the U. R. Rao Satellite Centre (URSC) is responsible for handling ISRO's recruitment activities. A large number of online applications is received for each advertised post, and in many cases candidates upload either wrong or non-compliant images of the required documents. By non-compliant images, we mean images that contain no face or in which the faces are not sufficiently clear. In this work, we address two specific problems: 1) to determine whether an image uploaded to the recruitment portal contains a human face, which is addressed using a face detection algorithm; and 2) to check whether the images uploaded by two or more applications are the same, which is achieved by using machine learning (ML) algorithms to generate a similarity score between two images and then identify the duplicates. Screening valid applications is very challenging because manual verification of such images is time consuming and requires substantial human effort. Hence, we propose novel ML techniques to determine duplicate and non-face images in the applications received by the recruitment portal.
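
A minimal sketch of the two checks is given below, assuming a Haar-cascade face detector and a difference-hash similarity score rather than the system's actual models; file names and thresholds are hypothetical.

```python
# Minimal sketch (assumed approach): face detection plus an image-similarity
# score for flagging duplicate uploads.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def has_face(path: str) -> bool:
    """True if at least one face is detected in the image."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        return False                      # unreadable upload -> non-compliant
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

def dhash(path: str, size: int = 8) -> np.ndarray:
    """Difference hash: compares adjacent pixels of a downscaled grayscale image."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    small = cv2.resize(gray, (size + 1, size))
    return (small[:, 1:] > small[:, :-1]).flatten()

def similarity(path_a: str, path_b: str) -> float:
    """1.0 = identical hashes; values near 1 suggest duplicate images."""
    return float((dhash(path_a) == dhash(path_b)).mean())

# Example usage (hypothetical file names):
# print(has_face("applicant_123_photo.jpg"))
# print(similarity("applicant_123_photo.jpg", "applicant_456_photo.jpg"))
```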


An intrusion is a major threat in which data or a legitimate network is accessed without authorization by using a legitimate user's identity or by exploiting back doors and vulnerabilities in the network. Intrusion detection system (IDS) mechanisms are developed to detect intrusions at various levels. The objective of this research work is to improve intrusion detection system performance by applying decision tree-based machine learning techniques for the detection and classification of attacks. The adopted methodology processes the data sets in three stages. The experimentation is conducted on the KDDCUP99 data sets with varying numbers of features. Three Bayesian models are analyzed for data sets of different sizes, based on the total number of attacks. The time consumed by the classifier to build the model is analyzed, as is its accuracy.
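
A minimal sketch of a decision-tree classifier on the 10% KDDCUP99 subset, reporting model-building time and accuracy, is shown below; the preprocessing (ordinal encoding of the three symbolic features) and the train/test split are assumptions, not the paper's exact methodology.

```python
# Minimal sketch: decision-tree attack classification on KDDCUP99 (10% subset),
# timing the model build and reporting test accuracy.
import time
import numpy as np
from sklearn.datasets import fetch_kddcup99
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# Downloads the 10% KDDCUP99 subset on first use.
data = fetch_kddcup99(percent10=True)
X_raw, y = data.data, data.target

# Columns 1-3 (protocol_type, service, flag) are symbolic; the rest are numeric.
cat_idx = [1, 2, 3]
num_idx = [i for i in range(X_raw.shape[1]) if i not in cat_idx]
X = np.hstack([
    X_raw[:, num_idx].astype(float),
    OrdinalEncoder().fit_transform(X_raw[:, cat_idx]),
])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier(random_state=0)
t0 = time.time()
clf.fit(X_train, y_train)                 # time consumed to build the model
print(f"model build time: {time.time() - t0:.2f} s")
print("test accuracy:", clf.score(X_test, y_test))
```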

