Successful leveraging of image processing and machine learning in seismic structural interpretation: A review

2018
Vol 37 (6)
pp. 451-461
Author(s):
Zhen Wang
Haibin Di
Muhammad Amir Shafiq
Yazeed Alaudah
Ghassan AlRegib

As a process that identifies geologic structures of interest such as faults, salt domes, or elements of petroleum systems in general, seismic structural interpretation depends heavily on the domain knowledge and experience of interpreters as well as visual cues of geologic structures, such as texture and geometry. With the dramatic increase in size of seismic data acquired for hydrocarbon exploration, structural interpretation has become more time consuming and labor intensive. By treating seismic data as images rather than signal traces, researchers have been able to utilize advanced image-processing and machine-learning techniques to assist interpretation directly. In this paper, we mainly focus on the interpretation of two important geologic structures, faults and salt domes, and summarize interpretation workflows based on typical or advanced image-processing and machine-learning algorithms. In recent years, increasing computational power and the massive amount of available data have led to the rise of deep learning. Deep-learning models that simulate the human brain's biological neural networks can achieve state-of-the-art accuracy and even exceed human-level performance on numerous applications. The convolutional neural network — a form of deep-learning model that is effective in analyzing visual imagery — has been applied in fault and salt dome interpretation. At the end of this review, we provide insight and discussion on the future of structural interpretation.
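To make the imaging-oriented workflow above concrete, the following is a minimal sketch (not the authors' implementation) of a convolutional neural network that classifies small seismic amplitude patches into structural classes such as fault, salt boundary, or background; the patch size, class set, layer sizes, and synthetic stand-in data are assumptions chosen only for illustration.

```python
# Minimal sketch (not the authors' implementation) of a CNN patch classifier
# for seismic structures: each input is a small amplitude patch labelled as
# "fault", "salt boundary", or "other". Patch size and class set are assumptions.
import numpy as np
from tensorflow.keras import layers, models

PATCH = 64          # assumed patch size (pixels)
N_CLASSES = 3       # assumed classes: fault / salt boundary / other

model = models.Sequential([
    layers.Input(shape=(PATCH, PATCH, 1)),        # single-channel amplitude patch
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in data; real patches would be cut from a labelled seismic volume.
x = np.random.randn(256, PATCH, PATCH, 1).astype("float32")
y = np.random.randint(0, N_CLASSES, size=256)
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
```

In practice, the predicted class of each patch would be mapped back onto the seismic section to highlight faults or salt boundaries for the interpreter.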

Author(s):  
Anna Ferrari
Daniela Micucci
Marco Mobilio
Paolo Napoletano

Abstract Human activity recognition (HAR) is a line of research whose goal is to design and develop automatic techniques for recognizing activities of daily living (ADLs) using signals from sensors. HAR is an active research field, driven by the ever-increasing need to collect information about ADLs remotely for diagnostic and therapeutic purposes. Traditionally, HAR used environmental or wearable sensors to acquire signals and relied on traditional machine-learning techniques to classify ADLs. In recent years, HAR has been moving towards the use of both wearable devices (such as smartphones and fitness trackers, since they are used daily by people and include reliable inertial sensors) and deep-learning techniques (given the encouraging results obtained in computer vision). One of the major challenges in HAR is population diversity, which makes it difficult for traditional machine-learning algorithms to generalize. Recently, researchers have successfully addressed this problem with techniques that combine personalization with traditional machine learning. To date, no effort has been directed at investigating the benefits that personalization can bring to deep-learning techniques in the HAR domain. The goal of our research is to verify whether personalization, applied to both traditional and deep-learning techniques, can lead to better performance than classical approaches (i.e., without personalization). The experiments were conducted on three datasets that are extensively used in the literature and that contain metadata related to the subjects. AdaBoost is the technique chosen for traditional machine learning, while a convolutional neural network is the one chosen for deep learning; both have been shown to offer good performance. Personalization considers both the physical characteristics of the subjects and the inertial signals they generate. Results suggest that personalization is most effective when applied to traditional machine-learning techniques rather than to deep-learning ones. Moreover, results show that deep learning without personalization performs better than any other method evaluated in the paper when the number of training samples is high and the samples are heterogeneous (i.e., they represent a wider spectrum of the population). This suggests that traditional deep learning can be more effective, provided a large and heterogeneous dataset is available, since it intrinsically models population diversity during training.
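As a hedged illustration of the personalization idea described above (weighting training data by how similar each training subject is to the target user), the sketch below applies similarity-based sample weights to scikit-learn's AdaBoost classifier; the metadata features, similarity kernel, and data shapes are assumptions, not the authors' exact pipeline.

```python
# Minimal sketch (assumptions, not the authors' exact pipeline): personalize
# AdaBoost by weighting training samples from subjects whose physical
# characteristics (e.g. age, height, weight) are close to the target subject's.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

# Hypothetical data: 5 training subjects, 40 windows each, 12 features per window.
n_subj, n_win, n_feat = 5, 40, 12
X = rng.normal(size=(n_subj * n_win, n_feat))
y = rng.integers(0, 4, size=n_subj * n_win)          # 4 assumed activity classes
subj_meta = rng.normal(size=(n_subj, 3))             # age, height, weight (scaled)
subj_of_sample = np.repeat(np.arange(n_subj), n_win)

target_meta = rng.normal(size=3)                      # metadata of the new user

# Similarity-based sample weights: closer subjects contribute more.
dist = np.linalg.norm(subj_meta - target_meta, axis=1)
subj_weight = np.exp(-dist)                           # assumed similarity kernel
sample_weight = subj_weight[subj_of_sample]

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X, y, sample_weight=sample_weight)
```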


Water
2020
Vol 12 (11)
pp. 2996
Author(s):
Georgios Etsias
Gerard A. Hamill
Eric M. Benner
Jesús F. Águila
Mark C. McDonnell
...  

Deriving saltwater concentrations from the light intensity values of dyed saline solutions is a long-established image-processing practice in laboratory-scale investigations of saline intrusion. The current paper presents a novel methodology that employs the predictive ability of machine-learning algorithms to determine saltwater concentration fields. The proposed approach consists of three distinct parts: image pre-processing, porous medium classification (glass bead structure recognition), and saltwater field generation (regression). It minimizes the need for aquifer-specific calibrations, shortening the experimental procedure by up to 50%. A series of typical saline intrusion experiments was conducted in homogeneous and heterogeneous aquifers, consisting of glass beads of varying sizes, to generate the necessary laboratory data. An innovative method was formulated to distinguish and filter out the common experimental error introduced by both backlighting and the optical irregularities of the glass bead medium. This enabled quality predictions by classical, easy-to-use machine-learning techniques, such as feedforward artificial neural networks, using a limited amount of training data, proving the applicability of the procedure. The new process was benchmarked against a traditional regression algorithm, and a series of variables was used to quantify the variance between the results generated by the two procedures. No compromise in the quality of the derived concentration fields was found, and the proposed image-processing technique proved robust when applied to homogeneous and heterogeneous domains alike, outperforming the classical approach in all test cases. Moreover, the method minimized the impact of experimental errors introduced by small movements of the camera and by air bubbles trapped in the porous medium.
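A rough sketch of the two-stage classify-then-regress idea is given below, using scikit-learn feedforward networks: one network recognizes the glass-bead structure from local image features, and a bead-class-specific network then regresses salt concentration from light-intensity features. The feature set and synthetic data are assumptions made only to show the structure of the approach, not the published code.

```python
# Minimal sketch of the two-stage idea (assumed details, not the published code):
# 1) classify the glass-bead structure from local image features, then
# 2) use a bead-class-specific feedforward ANN to regress concentration
#    from pixel light intensity.
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(1)

# Hypothetical training data: per-pixel feature vectors (intensity statistics).
X_feat = rng.uniform(size=(1000, 5))
bead_class = rng.integers(0, 2, size=1000)            # e.g. coarse vs fine beads
concentration = rng.uniform(0.0, 1.0, size=1000)      # normalised salt concentration

structure_clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500)
structure_clf.fit(X_feat, bead_class)

# One regressor per bead class, trained on that class's pixels only.
regressors = {}
for c in np.unique(bead_class):
    mask = bead_class == c
    regressors[c] = MLPRegressor(hidden_layer_sizes=(20,), max_iter=500).fit(
        X_feat[mask], concentration[mask])

# Prediction: classify the medium first, then apply the matching regressor.
x_new = rng.uniform(size=(1, 5))
c_pred = structure_clf.predict(x_new)[0]
conc_pred = regressors[c_pred].predict(x_new)[0]
```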


Author(s):  
R. Kanthavel et al.

Osteoarthritis is a common form of arthritis that develops when cartilage, the elastic tissue that cushions the ends of the bones, breaks down. A person with osteoarthritis may experience joint pain, stiffness, or inflammation. There is no single test for osteoarthritis; physicians combine the medical and clinical record with X-ray imaging analysis to make a diagnosis. Osteoarthritis is generally detected only after pain and bone damage have occurred, whereas earlier analysis could allow timely intervention to prevent cartilage deterioration and bone injury. With machine-learning algorithms, a system can be trained to automatically distinguish people who will develop osteoarthritis from those who will not by detecting subtle biochemical differences in the middle of the knee's cartilage. Such techniques can flag individuals who are still pre-symptomatic at the time of baseline imaging, as well as reductions in fluid concentration. In this study, we present an analysis of various deep-learning techniques for the timely detection of osteoarthritis. Deep learning, a subset of machine learning, has already been applied to the early detection of the disease, so a comparative analysis is needed to choose the best technique in terms of accuracy and reliability.


2021
Vol 11 (4)
pp. 286-290
Author(s):
Md. Golam Kibria
Mehmet Sevkli

The increasing number of credit card defaulters has forced companies to think carefully before approving credit applications. Credit card companies usually use their judgment to determine whether a credit card should be issued to a customer who satisfies certain criteria, and some machine-learning algorithms have also been used to support the decision. The main objective of this paper is to build a deep-learning model based on UCI (University of California, Irvine) data sets that can support the credit card approval decision. Secondly, the performance of the built model is compared with that of two traditional machine-learning algorithms: logistic regression (LR) and support vector machine (SVM). Our results show that the overall performance of our deep-learning model is slightly better than that of the other two models.
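The comparison described above might be sketched as follows, with synthetic data standing in for the UCI credit-approval records and a small multilayer perceptron standing in for the paper's deep model; none of the hyperparameters are taken from the paper.

```python
# Illustrative sketch of the LR / SVM / deep-model comparison; synthetic data
# stands in for the UCI credit-approval records, and an MLP stands in for the
# paper's deep learning model.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf"),
    "Deep model (MLP)": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```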


Author(s):  
Anisha M. Lal
B. Koushik Reddy
Aju D.

Machine learning can be defined as the ability of a computer to learn to solve a problem without being explicitly programmed. The efficiency of the program increases with experience on the specified task. In traditional programming, the program and the input are specified to obtain the output; in machine learning, the targets and predictors are provided to the algorithm, which learns the mapping from data during training. This chapter focuses on various machine-learning techniques and their performance on commonly used datasets. A supervised learning algorithm has a target variable that is to be predicted from a given set of predictors: a function is learned that maps the predictors to the targets. The training process continues until the model achieves the desired level of accuracy on the training data. Supervised methods are usually categorized as classification and regression. This chapter discusses some of the popular supervised machine-learning algorithms and their performance on everyday datasets. It also discusses some non-linear regression techniques and offers insights on deep learning with respect to object recognition.
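The following short sketch illustrates the supervised setting described in this chapter: predictors and targets are supplied, one model is fitted for classification and one for regression, and performance is measured on held-out data. The datasets and the random-forest models are illustrative choices, not prescriptions from the chapter.

```python
# Sketch of supervised learning: fit on training data, score on held-out data.
from sklearn.datasets import load_iris, load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Classification: predict a discrete class label.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("classification accuracy:", clf.score(X_te, y_te))

# Regression: predict a continuous target.
X, y = load_diabetes(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
reg = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
print("regression R^2:", reg.score(X_te, y_te))
```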


Sensors
2020
Vol 20 (16)
pp. 4583
Author(s):
Vibekananda Dutta
Michał Choraś
Marek Pawlicki
Rafał Kozik

Currently, expert systems and applied machine-learning algorithms are widely used to automate network intrusion detection. In critical infrastructure applications of communication technologies, the interaction among various industrial control systems and the Internet environment intrinsic to IoT technology makes them susceptible to cyber-attacks. Given the enormous volume of network traffic in critical Cyber-Physical Systems (CPSs), traditional machine-learning methods for network anomaly detection are inefficient. Therefore, recently developed machine-learning techniques, with an emphasis on deep learning, are finding successful application in the detection and classification of anomalies at both the network and host levels. This paper presents an ensemble method that leverages deep models such as a Deep Neural Network (DNN) and Long Short-Term Memory (LSTM) together with a meta-classifier (i.e., logistic regression), following the principle of stacked generalization. To enhance the capabilities of the proposed approach, the method uses a two-step process for the detection of network anomalies. In the first stage, data pre-processing, a Deep Sparse AutoEncoder (DSAE) is employed for feature engineering. In the second stage, a stacking ensemble learning approach is used for classification. The efficiency of the proposed method is tested on heterogeneous datasets, including data gathered in IoT environments, namely IoT-23, LITNET-2020, and NetML-2020. The results of the evaluation are discussed, statistical significance is tested, and the method is compared to state-of-the-art approaches in network anomaly detection.
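A compact sketch of the stacked-generalization step is shown below: a DNN and an LSTM are trained as base learners on labelled traffic windows, and a logistic-regression meta-classifier is fitted on their predicted probabilities. The shapes, layer sizes, and single hold-out split (rather than out-of-fold predictions) are simplifications, and the DSAE pre-processing stage is omitted.

```python
# Minimal sketch of stacked generalization with a DNN and an LSTM as base
# learners and logistic regression as the meta-classifier. Shapes and sizes
# are assumptions, not the paper's exact setup.
import numpy as np
from tensorflow.keras import layers, models
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, t, f = 600, 10, 8                     # samples, time steps, features per step
X = rng.normal(size=(n, t, f)).astype("float32")
y = rng.integers(0, 2, size=n)           # binary: normal vs anomalous traffic

# Split into base-training and meta-training sets.
X_base, y_base = X[:400], y[:400]
X_meta, y_meta = X[400:], y[400:]

dnn = models.Sequential([
    layers.Input(shape=(t, f)), layers.Flatten(),
    layers.Dense(32, activation="relu"), layers.Dense(1, activation="sigmoid")])
lstm = models.Sequential([
    layers.Input(shape=(t, f)), layers.LSTM(16),
    layers.Dense(1, activation="sigmoid")])

for base in (dnn, lstm):
    base.compile(optimizer="adam", loss="binary_crossentropy")
    base.fit(X_base, y_base, epochs=2, batch_size=32, verbose=0)

# Meta-features: the base models' predicted probabilities on held-out data.
meta_features = np.hstack([dnn.predict(X_meta, verbose=0),
                           lstm.predict(X_meta, verbose=0)])
meta_clf = LogisticRegression().fit(meta_features, y_meta)
```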


2020
Vol 2020
pp. 1-15
Author(s):
Mehedi Masud
Hesham Alhumyani
Sultan S. Alshamrani
Omar Cheikhrouhou
Saleh Ibrahim
...  

Malaria is a contagious disease that affects millions of lives every year. Traditional diagnosis of malaria in the laboratory requires an experienced person and careful inspection to discriminate healthy from infected red blood cells (RBCs). It is also very time-consuming and may produce inaccurate reports due to human error. Cognitive computing and deep-learning algorithms simulate human intelligence to support better decisions in applications such as sentiment analysis, speech recognition, face detection, and disease detection and prediction. Owing to advances in cognitive computing and machine learning, these techniques are now widely used to detect and predict early disease symptoms in the healthcare field. With early prediction results, healthcare professionals can make better decisions for patient diagnosis and treatment. Machine-learning algorithms also help humans process huge, complex medical datasets and turn them into clinical insights. This paper leverages deep-learning algorithms to detect a deadly disease, malaria, as part of an effective mobile healthcare solution for patients. The objective is to show how a deep-learning architecture such as a convolutional neural network (CNN) can detect malaria effectively and accurately in real time from input images and reduce manual labor through a mobile application. To this end, we evaluate the performance of a custom CNN model using a cyclical stochastic gradient descent (SGD) optimizer with an automatic learning rate finder and obtain an accuracy of 97.30% in classifying healthy and infected cell images with a high degree of precision and sensitivity. This outcome will bring microscopy-based diagnosis of malaria to a mobile application, improving the reliability of treatment and mitigating the lack of medical expertise.
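As a hedged sketch of the training recipe mentioned above, the code below trains a small CNN with SGD under a triangular cyclical learning-rate schedule; the image size, architecture, and learning-rate bounds are assumptions, and the automatic learning-rate finder is not reproduced.

```python
# Rough sketch of a small CNN trained with SGD and a triangular cyclical
# learning-rate schedule, in the spirit of the approach described above.
# Image size, layer sizes, and LR bounds are assumptions, not the paper's values.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

class CyclicalLR(tf.keras.callbacks.Callback):
    """Triangular cyclical learning rate, updated every training batch."""
    def __init__(self, base_lr=1e-4, max_lr=1e-2, step_size=200):
        super().__init__()
        self.base_lr, self.max_lr, self.step_size = base_lr, max_lr, step_size
        self.iteration = 0

    def on_train_batch_begin(self, batch, logs=None):
        cycle = np.floor(1 + self.iteration / (2 * self.step_size))
        x = abs(self.iteration / self.step_size - 2 * cycle + 1)
        lr = self.base_lr + (self.max_lr - self.base_lr) * max(0.0, 1 - x)
        self.model.optimizer.learning_rate.assign(lr)
        self.iteration += 1

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),                  # assumed RBC image size
    layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(), layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid")])           # healthy vs infected
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

# Synthetic stand-in images; real inputs would be segmented cell images.
x = np.random.rand(128, 64, 64, 3).astype("float32")
y = np.random.randint(0, 2, size=128)
model.fit(x, y, epochs=1, batch_size=16, callbacks=[CyclicalLR()], verbose=0)
```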


The aim of the study is to compare and assess the optimum tools, techniques, and advanced features for the prediction of diabetes diagnosis based on machine learning and of diabetic retinopathy using Artificial Intelligence. The literature on data science and Artificial Intelligence (AI) contains important knowledge and understanding of AI entities such as data science, machine learning, deep learning, medical image processing, feature extraction, and classification techniques. Diabetes is a condition that impacts individuals around the globe. Now, with diabetes affecting people from children to the elderly, the outdated approaches to diabetes diagnosis should be replaced with new, time-saving technologies. Several studies have been carried out by researchers to recognise and predict diabetes. Many machine-learning classifiers can be used here, such as KNN and Random Tree; these techniques save time and yield more precise outcomes when predicting diabetes. Diabetic retinopathy (DR) is a typical complication of diabetes that induces vision-impacting lesions in the retina and can lead to visual impairment if it is not addressed early; DR therapy can only preserve vision. Deep learning has recently become one of the most widely used approaches, achieving superior outcomes in many fields, especially in the analysis and classification of medical images. In medical image processing, convolutional neural networks (CNNs) using transfer learning are a commonly used deep-learning approach and are extremely beneficial. Key words: Diab
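A minimal sketch of the transfer-learning approach mentioned above is given below: a pretrained CNN backbone is frozen and a new classification head is trained for retinal images. The backbone (MobileNetV2), input size, and the assumed five DR severity grades are illustrative choices, not findings of the study.

```python
# Illustrative sketch of CNN transfer learning for retinal image classification.
# Backbone, image size, and number of DR grades are assumptions for the example;
# the "imagenet" weights are downloaded on first use.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                       # freeze the pretrained features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(5, activation="softmax"),   # assumed 5 DR severity grades
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(...) would then be called on labelled fundus images.
```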


2021
Author(s):
V. N. Aditya Datta Chivukula
Sri Keshava Reddy Adupala

Machine-learning techniques have become a vital part of ongoing research in technical areas, and in recent times the world has witnessed many impressive practical applications of machine learning. This paper asks whether we should always rely on deep-learning techniques, or whether simple statistical machine-learning algorithms can outperform deep-learning models when the application is well understood and the data are processed in a way that improves the algorithm's performance by a notable amount. The paper emphasizes that data pre-processing can matter more than the selection of the algorithm. It discusses functions involving trigonometric, logarithmic, and exponential terms, as well as functions that are purely trigonometric. Finally, we discuss regression analysis on music signals.
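The point about pre-processing can be illustrated with a small sketch: engineering trigonometric, logarithmic, and exponential features lets a plain linear regression fit a non-linear signal that the raw feature cannot. The target function below is an invented example, not the paper's music data.

```python
# Sketch: feature engineering with trig / log / exp terms before a plain
# linear regression, compared with fitting the raw feature directly.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 10.0, size=(500, 1))
y = 3.0 * np.sin(x[:, 0]) + 0.5 * np.log(x[:, 0]) + rng.normal(0, 0.1, 500)

X_raw = x
X_eng = np.column_stack([np.sin(x[:, 0]), np.cos(x[:, 0]),
                         np.log(x[:, 0]), np.exp(-x[:, 0])])

print("raw R^2:       ", LinearRegression().fit(X_raw, y).score(X_raw, y))
print("engineered R^2:", LinearRegression().fit(X_eng, y).score(X_eng, y))
```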


2019
Vol 35 (24)
pp. 5235-5242
Author(s):
Jun Wang
Liangjiang Wang

Abstract Motivation Circular RNAs (circRNAs) are a new class of endogenous RNAs in animals and plants. During pre-RNA splicing, the 5′ and 3′ termini of exon(s) can be covalently ligated to form circRNAs through back-splicing (head-to-tail splicing). CircRNAs can be conserved across species, show tissue- and developmental stage-specific expression patterns, and may be associated with human disease. However, the mechanism of circRNA formation is still unclear although some sequence features have been shown to affect back-splicing. Results In this study, by applying state-of-the-art machine learning techniques, we have developed the first deep learning model, DeepCirCode, to predict back-splicing for human circRNA formation. DeepCirCode utilizes a convolutional neural network (CNN) with nucleotide sequence as the input, and shows superior performance over conventional machine learning algorithms such as support vector machine and random forest. Relevant features learnt by DeepCirCode are represented as sequence motifs, some of which match known human motifs involved in RNA splicing, transcription or translation. Analysis of these motifs shows that their distribution in RNA sequences can be important for back-splicing. Moreover, some of the human motifs appear to be conserved in mouse and fruit fly. The findings provide new insight into the back-splicing code for circRNA formation. Availability and implementation All the datasets and source code for model construction are available at https://github.com/BioDataLearning/DeepCirCode. Supplementary information Supplementary data are available at Bioinformatics online.
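As a rough, hedged sketch of the kind of model described (the released DeepCirCode code at the URL above is the authoritative implementation), the snippet below one-hot encodes nucleotide sequences and feeds them to a 1D convolutional network for a binary back-splicing label; the sequence length, filter sizes, and random stand-in data are assumptions.

```python
# Minimal sketch (not the released DeepCirCode code) of a 1D CNN over
# one-hot-encoded nucleotide sequence for a binary back-splicing label.
import numpy as np
from tensorflow.keras import layers, models

BASES = "ACGT"

def one_hot(seq):
    """Encode an RNA/DNA string as a (len, 4) one-hot matrix."""
    m = np.zeros((len(seq), 4), dtype="float32")
    for i, b in enumerate(seq):
        if b in BASES:
            m[i, BASES.index(b)] = 1.0
    return m

SEQ_LEN = 200                                  # assumed window around the junction
model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, 4)),
    layers.Conv1D(32, 12, activation="relu"),  # filters act like sequence motifs
    layers.GlobalMaxPooling1D(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random sequences as stand-ins for positive/negative training examples.
rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list(BASES), SEQ_LEN)) for _ in range(64)]
X = np.stack([one_hot(s) for s in seqs])
y = rng.integers(0, 2, size=64)
model.fit(X, y, epochs=1, verbose=0)
```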

