Lithology identification from well log curves via neural networks with additional geological constraint

Geophysics ◽  
2021 ◽  
pp. 1-77
Author(s):  
Chunbi Jiang ◽  
Dongxiao Zhang ◽  
Shifeng Chen

We propose a machine learning framework to solve the lithology classification problem from well log curves by incorporating an additional geological constraint. The constraint is a stratigraphic unit, and we use it as an additional feature. This method demonstrates the possibility of solving the lithology identification problem from a multi-scale data source, because stratigraphic unit information can be obtained by tying well logs to seismic data. Our experiments show that adding this geological constraint significantly improves model performance. Currently, most researchers use their own well log curves to solve the lithology classification problem. The well log data used in our experiment, which come from the North Sea area, are publicly available, and thus future studies can continue to utilize them for further comparisons. We evaluated different types of recurrent neural networks, i.e., the bidirectional long short-term memory (Bi-LSTM), the bidirectional gated recurrent unit (Bi-GRU), and a GRU-based encoder-decoder architecture with attention (ABi-GRU); one-dimensional convolutional networks, i.e., the temporal convolutional network (TCN) and the multi-scale residual network (MsRNet); and a multi-layer perceptron (MLP) on the task. Our experiments revealed that the overall performance of RNN-based networks is better and more consistent. Since our experiments are based on a single dataset, additional experiments are required in the future to better elucidate how each network performs on the lithofacies classification problem.
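To make the "stratigraphic unit as an additional feature" idea concrete, the following is a minimal sketch (not the authors' code) of a bidirectional GRU classifier that appends a learned stratigraphic-unit embedding to each depth step of the well-log sequence; all layer sizes, the number of log curves, and the facies count are illustrative assumptions.

```python
# Hedged sketch: Bi-GRU lithology classifier with a stratigraphic-unit embedding
# concatenated to the per-depth well-log features. Sizes are assumptions.
import torch
import torch.nn as nn

class BiGRULithology(nn.Module):
    def __init__(self, n_logs=7, n_units=12, unit_dim=8, hidden=64, n_facies=12):
        super().__init__()
        self.unit_emb = nn.Embedding(n_units, unit_dim)          # stratigraphic unit -> vector
        self.gru = nn.GRU(n_logs + unit_dim, hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_facies)              # per-depth facies logits

    def forward(self, logs, unit_ids):
        # logs: (batch, depth_steps, n_logs); unit_ids: (batch, depth_steps) integer codes
        x = torch.cat([logs, self.unit_emb(unit_ids)], dim=-1)
        h, _ = self.gru(x)
        return self.head(h)                                      # (batch, depth_steps, n_facies)

model = BiGRULithology()
logits = model(torch.randn(4, 200, 7), torch.randint(0, 12, (4, 200)))
print(logits.shape)  # torch.Size([4, 200, 12])
```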

2019 ◽  
Vol 9 (10) ◽  
pp. 2042 ◽  
Author(s):  
Rachida Tobji ◽  
Wu Di ◽  
Naeem Ayoub

Recent work in deep learning shows that neural networks have high potential in the field of biometric security. The advantage of using this type of architecture, in addition to its robustness, is that the network learns the characteristic vectors automatically by creating intelligent filters through its convolution layers. In this paper, we propose an algorithm, "FMnet", for iris recognition using a Fully Convolutional Network (FCN) and a Multi-scale Convolutional Neural Network (MCNN). By taking into consideration the ability of convolutional neural networks to learn and work at different resolutions, our proposed iris recognition method overcomes the limitations of classical methods, which rely only on handcrafted feature extraction, by performing feature extraction and classification together. Our proposed algorithm shows better classification results compared to other state-of-the-art iris recognition approaches.
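As a rough illustration of the multi-scale component (this is not the published FMnet; the input size, number of branches, and class count are assumptions), one can run a shared convolutional branch on several downsampled copies of the iris image and fuse the features for classification:

```python
# Hedged sketch of a multi-scale CNN: one shared branch applied at full, half and
# quarter resolution, features concatenated before the classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleIrisNet(nn.Module):
    def __init__(self, n_classes=100):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32 * 3, n_classes)    # three scales fused

    def forward(self, x):
        feats = []
        for scale in (1.0, 0.5, 0.25):            # full, half and quarter resolution
            xi = x if scale == 1.0 else F.interpolate(
                x, scale_factor=scale, mode="bilinear", align_corners=False)
            feats.append(self.branch(xi).flatten(1))
        return self.fc(torch.cat(feats, dim=1))

print(MultiScaleIrisNet()(torch.randn(2, 1, 128, 128)).shape)  # torch.Size([2, 100])
```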


ScienceRise ◽  
2020 ◽  
pp. 10-16
Author(s):  
Svitlana Shapovalova ◽  
Yurii Moskalenko

Object of research: basic architectures of deep learning neural networks. Investigated problem: insufficient accuracy of solving the classification problem based on the basic architectures of deep learning neural networks. An increase in accuracy requires a significant complication of the architecture, which, in turn, leads to an increase in the required computing resources, as well as in video memory consumption and training/inference time. Therefore, the problem arises of determining methods for modifying basic architectures that improve classification accuracy while requiring insignificant additional computing resources. Main scientific results: based on an analysis of existing methods for improving classification accuracy on convolutional networks of basic architectures, the most effective ones are determined: scaling the ScanNet architecture, learning an ensemble of TreeNet models, and integrating several CBNet backbone networks. For the computational experiments, these modifications of the basic architectures are implemented, as well as their combinations: ScanNet + TreeNet and ScanNet + CBNet. The effectiveness of these methods in comparison with basic architectures has been proven when solving the problem of recognizing malignant tumors in diagnostic images (SIIM-ISIC Melanoma Classification), the train/test set of which is available on the Kaggle platform. The accuracy under the area-under-the-ROC-curve metric increased from 0.94489 (basic architecture network) to 0.96317 (network with ScanNet + CBNet modifications). At the same time, inference time compared to the basic architecture (EfficientNet-b5) increased from 440 to 490 seconds, and video memory consumption increased from 8 to 9.2 gigabytes, which is acceptable. Innovative technological product: methods for achieving high recognition accuracy from a diagnostic signal based on deep learning neural networks of basic architectures. Scope of application of the innovative technological product: automatic diagnostics systems in the following areas: medicine, seismology, astronomy (classification by images); onboard control systems and systems for monitoring transport flows, vehicle flows, or visitors (recognition of scenes in camera frames).
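A minimal sketch of the ensemble ingredient only (averaging the melanoma probabilities of several independently trained classifiers); the backbones below are placeholders, not the ScanNet/TreeNet/CBNet configurations evaluated in the paper:

```python
# Hedged sketch: average the sigmoid outputs of an ensemble of binary classifiers.
# The backbone is a stand-in; in the paper it would be an EfficientNet-scale model.
import torch
import torch.nn as nn

def make_backbone():
    return nn.Sequential(nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

ensemble = [make_backbone() for _ in range(3)]
x = torch.randn(4, 3, 224, 224)
with torch.no_grad():
    prob = torch.stack([m(x).sigmoid() for m in ensemble]).mean(dim=0)
print(prob.shape)  # torch.Size([4, 1]) -- averaged malignancy probability per image
```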


2020 ◽  
Vol 34 (07) ◽  
pp. 10770-10777 ◽  
Author(s):  
Yuchen Fan ◽  
Jiahui Yu ◽  
Ding Liu ◽  
Thomas S. Huang

While scale-invariant modeling has substantially boosted the performance of visual recognition tasks, it remains largely under-explored in deep network-based image restoration. Naively applying scale-invariant techniques (e.g., multi-scale testing, random-scale data augmentation) to image restoration tasks usually leads to inferior performance. In this paper, we show that properly modeling scale-invariance in neural networks can bring significant benefits to image restoration performance. Inspired by spatial-wise convolution for shift-invariance, "scale-wise convolution" is proposed to convolve across multiple scales for scale-invariance. In our scale-wise convolutional network (SCN), we first map the input image to the feature space and then build a feature pyramid representation via progressive bi-linear down-scaling. The feature pyramid is then passed to a residual network with scale-wise convolutions. The proposed scale-wise convolution learns to dynamically activate and aggregate features from different input scales in each residual building block, in order to exploit contextual information at multiple scales. In experiments, we compare the restoration accuracy and parameter efficiency of our model against many different variants of multi-scale neural networks. The proposed network with scale-wise convolution achieves superior performance in multiple image restoration tasks, including image super-resolution, image denoising, and image compression artifact removal. Code and models are available at: https://github.com/ychfan/scn_sr.
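A hedged sketch of the general idea (not the released SCN implementation; sizes, scale count, and the aggregation rule are assumptions): build a bilinear feature pyramid, apply a convolution with weights shared across scales, and let each scale aggregate features from its neighbouring scales resized to match.

```python
# Hedged sketch: shared convolution applied across a feature pyramid, with each scale
# aggregating the responses of its neighbouring scales.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleWiseBlock(nn.Module):
    def __init__(self, ch=32, n_scales=3):
        super().__init__()
        self.n_scales = n_scales
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)    # weights shared across scales

    def forward(self, pyramid):
        out = []
        for i, f in enumerate(pyramid):
            acc = self.conv(f)
            for j in (i - 1, i + 1):                   # neighbouring scales
                if 0 <= j < self.n_scales:
                    g = F.interpolate(self.conv(pyramid[j]), size=f.shape[-2:],
                                      mode="bilinear", align_corners=False)
                    acc = acc + g
            out.append(F.relu(acc))
        return out

x = torch.randn(1, 32, 64, 64)
pyr = [x] + [F.interpolate(x, scale_factor=0.5 ** s, mode="bilinear",
                           align_corners=False) for s in (1, 2)]
print([t.shape[-1] for t in ScaleWiseBlock()(pyr)])    # [64, 32, 16]
```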


Author(s):  
Elshan Mustafayev ◽  
Rustam Azimov

Introduction. The implementation of information technologies in various spheres of public life dictates the creation of efficient and productive systems for entering information into computer systems. In such systems it is important to build an effective recognition module. At present, the most effective approach to this problem is the use of artificial multilayer neural networks and convolutional networks. The purpose of the paper. This paper is devoted to a comparative analysis of the recognition results of handwritten characters of the Azerbaijani alphabet using neural and convolutional neural networks. Results. The analysis of the dependence of the recognition results on the following parameters is carried out: the architecture of the neural networks, the size of the training base, the choice of the subsampling algorithm, and the use of a feature extraction algorithm. To enlarge the training sample, an image augmentation technique was used. Based on the real base of 14000 characters, bases of 28000, 42000 and 72000 characters were formed. A description of the feature extraction algorithm is given. Conclusions. Analysis of the recognition results on the test sample showed the following: as expected, convolutional neural networks showed higher results than multilayer neural networks; the classical convolutional network LeNet-5 showed the highest results among all types of neural networks; however, the 3-layer multilayer network that received the feature extraction results as input showed rather high results, comparable with convolutional networks; there is no definite advantage in the choice of the method in the subsampling layer, and the subsampling method (max-pooling or average-pooling) for a particular model can be selected experimentally; increasing the training database for this task did not give a tangible improvement in recognition results for convolutional networks and networks with preliminary feature extraction, although for networks learning without feature extraction, an increase in the size of the database led to a noticeable improvement in performance. Keywords: neural networks, feature extraction, OCR.
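For reference, a minimal LeNet-5-style sketch with the subsampling method left as a parameter, mirroring the max-pooling vs. average-pooling comparison described above; the 32x32 input size and the class count are assumptions, not details taken from the paper.

```python
# Hedged sketch: LeNet-5-like character classifier with selectable subsampling layer.
import torch
import torch.nn as nn

def lenet5(n_classes=32, pooling="max"):
    Pool = nn.MaxPool2d if pooling == "max" else nn.AvgPool2d
    return nn.Sequential(
        nn.Conv2d(1, 6, 5), nn.Tanh(), Pool(2),      # 32x32 -> 28x28 -> 14x14
        nn.Conv2d(6, 16, 5), nn.Tanh(), Pool(2),     # 14x14 -> 10x10 -> 5x5
        nn.Flatten(),
        nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
        nn.Linear(120, 84), nn.Tanh(),
        nn.Linear(84, n_classes),
    )

print(lenet5(pooling="avg")(torch.randn(8, 1, 32, 32)).shape)  # torch.Size([8, 32])
```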


2021 ◽  
Author(s):  
Danh Bui-Thi ◽  
Emmanuel Rivière ◽  
Pieter Meysman ◽  
Kris Laukens

Motivation: Convolutional neural networks have enabled unprecedented breakthroughs in a variety of computer vision tasks. They have also drawn much attention from other domains, including drug discovery and drug development. In this study, we develop a computational method based on convolutional neural networks to tackle a fundamental question in drug discovery and development, i.e. the prediction of compound-protein interactions based on compound structure and protein sequence. We propose a hierarchical graph convolutional network (HGCN) to encode small molecules. The HGCN aggregates a molecule embedding from substructure embeddings, which are in turn synthesized from atom embeddings. As small molecules usually share substructures, computing a molecule embedding from those common substructures allows us to learn better generic models. We then combine the HGCN with a one-dimensional convolutional network to construct a complete model for predicting compound-protein interactions. Furthermore, we apply an explanation technique, Grad-CAM, to visualize the contribution of each amino acid to the prediction. Results: Experiments using different datasets show the improvement of our model compared to other GCN-based methods and a sequence-based method, DeepDTA, in predicting compound-protein interactions. Each prediction made by the model is also explainable and can be used to identify critical residues mediating the interaction. Availability and implementation: https://github.com/banhdzui/cpi_hgcn.git
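A hedged sketch of the overall layout only: a placeholder molecule encoder (here a simple mean over atom features, not the paper's HGCN) is combined with a 1D convolutional protein-sequence encoder, and the two embeddings are concatenated for interaction prediction. Dimensions and vocabulary sizes are assumptions.

```python
# Hedged sketch: compound encoder + 1D CNN protein encoder -> interaction logit.
import torch
import torch.nn as nn

class CPIModel(nn.Module):
    def __init__(self, atom_dim=34, n_amino=26, emb=64):
        super().__init__()
        self.atom_proj = nn.Linear(atom_dim, emb)                # stand-in molecule encoder
        self.aa_emb = nn.Embedding(n_amino, emb)
        self.seq_conv = nn.Sequential(nn.Conv1d(emb, emb, 7, padding=3), nn.ReLU(),
                                      nn.AdaptiveMaxPool1d(1))
        self.head = nn.Linear(2 * emb, 1)

    def forward(self, atom_feats, protein_ids):
        mol = self.atom_proj(atom_feats).mean(dim=1)             # (batch, emb)
        seq = self.seq_conv(self.aa_emb(protein_ids).transpose(1, 2)).squeeze(-1)
        return self.head(torch.cat([mol, seq], dim=-1))          # interaction logit

model = CPIModel()
print(model(torch.randn(2, 40, 34), torch.randint(0, 26, (2, 500))).shape)  # [2, 1]
```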


2020 ◽  
Vol 34 (05) ◽  
pp. 8409-8416
Author(s):  
Xien Liu ◽  
Xinxin You ◽  
Xiao Zhang ◽  
Ji Wu ◽  
Ping Lv

Compared to sequential learning models, graph-based neural networks exhibit some excellent properties, such as the ability to capture global information. In this paper, we investigate graph-based neural networks for the text classification problem. A new framework, TensorGCN (tensor graph convolutional networks), is presented for this task. A text graph tensor is first constructed to describe semantic, syntactic, and sequential contextual information. Then, two kinds of propagation learning are performed on the text graph tensor. The first is intra-graph propagation, used for aggregating information from neighborhood nodes within a single graph. The second is inter-graph propagation, used for harmonizing heterogeneous information between graphs. Extensive experiments are conducted on benchmark datasets, and the results illustrate the effectiveness of our proposed framework. Our proposed TensorGCN presents an effective way to harmonize and integrate heterogeneous information from different kinds of graphs.
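A hedged sketch of the two propagation steps described above: intra-graph propagation is an ordinary GCN-style update within each graph of the tensor, and inter-graph propagation is illustrated here by averaging the node states of the other graphs. This is a simplification for illustration, not the published TensorGCN, and all sizes are made up.

```python
# Hedged sketch: intra-graph GCN update per graph, then a simple inter-graph exchange.
import torch
import torch.nn as nn

class TensorGraphLayer(nn.Module):
    def __init__(self, dim=64, n_graphs=3):
        super().__init__()
        self.intra = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_graphs))

    def forward(self, adjs, feats):
        # adjs, feats: lists of (N, N) normalized adjacencies and (N, dim) node features,
        # one per graph (semantic, syntactic, sequential).
        intra = [torch.relu(a @ w(h)) for a, h, w in zip(adjs, feats, self.intra)]
        inter = [h + sum(o for j, o in enumerate(intra) if j != i) / (len(intra) - 1)
                 for i, h in enumerate(intra)]
        return inter

N, d = 10, 64
adjs = [torch.eye(N) for _ in range(3)]
feats = [torch.randn(N, d) for _ in range(3)]
print([h.shape for h in TensorGraphLayer()(adjs, feats)])  # three (10, 64) tensors
```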


2019 ◽  
Vol 6 (1) ◽  
Author(s):  
Si Zhang ◽  
Hanghang Tong ◽  
Jiejun Xu ◽  
Ross Maciejewski

Graphs naturally appear in numerous application domains, ranging from social analysis and bioinformatics to computer vision. The unique capability of graphs to capture structural relations among data allows harvesting more insights than analyzing data in isolation. However, it is often very challenging to solve learning problems on graphs, because (1) many types of data are not originally structured as graphs, such as images and text data, and (2) for graph-structured data, the underlying connectivity patterns are often complex and diverse. On the other hand, representation learning has achieved great success in many areas. Thereby, a potential solution is to learn the representation of graphs in a low-dimensional Euclidean space such that the graph properties can be preserved. Although tremendous efforts have been made to address the graph representation learning problem, many of them still suffer from shallow learning mechanisms. Deep learning models on graphs (e.g., graph neural networks) have recently emerged in machine learning and other related areas, and demonstrated superior performance in various problems. In this survey, although there are numerous types of graph neural networks, we conduct a comprehensive review specifically of the emerging field of graph convolutional networks, which is one of the most prominent graph deep learning models. First, we group the existing graph convolutional network models into two categories based on the types of convolutions and highlight some graph convolutional network models in detail. Then, we categorize different graph convolutional networks according to the areas of their applications. Finally, we present several open challenges in this area and discuss potential directions for future research.
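For readers new to the area, a minimal sketch of the propagation rule used by the most common spectral-style graph convolutional layer, H' = sigma(D^(-1/2)(A+I)D^(-1/2) H W); this is a generic illustration, not tied to any specific model surveyed above, and the graph and dimensions are toy assumptions.

```python
# Hedged sketch: one symmetric-normalized GCN layer on a toy graph.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, adj, h):
        a_hat = adj + torch.eye(adj.size(0))                 # add self-loops
        d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        a_norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
        return torch.relu(a_norm @ self.weight(h))

adj = torch.randint(0, 2, (6, 6)).float()
adj = ((adj + adj.t()) > 0).float()                          # symmetric toy graph
print(GCNLayer(8, 4)(adj, torch.randn(6, 8)).shape)          # torch.Size([6, 4])
```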


2021 ◽  
Vol 13 (7) ◽  
pp. 1404
Author(s):  
Hongying Liu ◽  
Derong Xu ◽  
Tianwen Zhu ◽  
Fanhua Shang ◽  
Yuanyuan Liu ◽  
...  

Classification of polarimetric synthetic aperture radar (PolSAR) images has achieved good results due to the excellent fitting ability of neural networks given a large number of training samples. However, the performance of most convolutional neural networks (CNNs) degrades dramatically when only a few labeled training samples are available. As one well-known class of semi-supervised learning methods, graph convolutional networks (GCNs) have recently gained much attention for addressing the classification problem with only a few labeled samples. As the number of layers in the network grows, the parameters increase dramatically, and it is challenging to determine an optimal architecture manually. In this paper, we propose a neural architecture search-based GCN (ASGCN) for the classification of PolSAR images. We construct a novel graph whose nodes combine both the physical features and the spatial relations between pixels or samples to represent the image. Then we build a new search space whose components are empirically selected from several graph neural networks and develop a differentiable architecture search method to construct our ASGCN. Moreover, to address the training of large-scale images, we present a new weighted mini-batch algorithm to reduce the computing memory consumption and ensure a balanced sample distribution, and we analyze and compare it with other similar training strategies. Experiments on several real-world PolSAR datasets show that our method improves the overall accuracy by as much as 3.76% over state-of-the-art methods.
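One hedged way to realize the "weighted mini-batch" goal of balanced sample distribution is to sample each batch with per-class weights inversely proportional to class frequency, so under-represented classes still appear regularly; this illustrates the balancing idea only and is not the authors' exact algorithm, and the data below are toy placeholders.

```python
# Hedged sketch: class-frequency-weighted mini-batch sampling with PyTorch utilities.
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

features = torch.randn(1000, 9)                       # toy per-pixel PolSAR features
labels = torch.randint(0, 4, (1000,))                 # 4 toy land-cover classes
class_counts = torch.bincount(labels, minlength=4).float()
sample_weights = 1.0 / class_counts[labels]           # rare classes get larger weight

loader = DataLoader(TensorDataset(features, labels), batch_size=64,
                    sampler=WeightedRandomSampler(sample_weights,
                                                  num_samples=len(labels)))
xb, yb = next(iter(loader))
print(torch.bincount(yb, minlength=4))                # roughly balanced mini-batch
```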


2020 ◽  
Vol 34 (04) ◽  
pp. 5363-5370 ◽  
Author(s):  
Aldo Pareja ◽  
Giacomo Domeniconi ◽  
Jie Chen ◽  
Tengfei Ma ◽  
Toyotaro Suzumura ◽  
...  

Graph representation learning resurges as a trending research subject owing to the widespread use of deep learning for Euclidean data, which inspires various creative designs of neural networks in the non-Euclidean domain, particularly graphs. With the success of these graph neural networks (GNNs) in the static setting, we approach further practical scenarios where the graph dynamically evolves. Existing approaches typically resort to node embeddings and use a recurrent neural network (RNN, broadly speaking) to regulate the embeddings and learn the temporal dynamics. These methods require knowledge of a node over the full time span (including both training and testing) and are less applicable to frequent changes of the node set. In some extreme scenarios, the node sets at different time steps may completely differ. To resolve this challenge, we propose EvolveGCN, which adapts the graph convolutional network (GCN) model along the temporal dimension without resorting to node embeddings. The proposed approach captures the dynamism of the graph sequence by using an RNN to evolve the GCN parameters. Two architectures are considered for the parameter evolution. We evaluate the proposed approach on tasks including link prediction, edge classification, and node classification. The experimental results indicate a generally higher performance of EvolveGCN compared with related approaches. The code is available at https://github.com/IBM/EvolveGCN.
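A hedged sketch of the core idea (simplified, not matching either published EvolveGCN variant exactly): a recurrent cell treats the GCN weight matrix as its hidden state and evolves it from one graph snapshot to the next, so no per-node embeddings are carried across time. Dimensions and the toy snapshots are illustrative.

```python
# Hedged sketch: a GRU cell evolves the GCN weight matrix across graph snapshots.
import torch
import torch.nn as nn

class EvolvingGCNLayer(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(dim, dim) * 0.1)  # initial GCN weight
        self.evolve = nn.GRUCell(dim, dim)                        # updates the weight rows

    def forward(self, snapshots):
        w = self.weight
        outputs = []
        for adj, h in snapshots:                                  # one (A_t, H_t) per step
            w = self.evolve(w, w)                                 # evolve weights over time
            outputs.append(torch.relu(adj @ h @ w))
        return outputs

T, N, d = 3, 20, 16
snaps = [(torch.eye(N), torch.randn(N, d)) for _ in range(T)]
print([o.shape for o in EvolvingGCNLayer()(snaps)])  # three (20, 16) tensors
```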


Author(s):  
K. Jairam Naik ◽  
Annukriti Soni

Since video includes both temporal and spatial features, video classification has become a fascinating problem. Each frame within a video holds important spatial information, while the context of that frame relative to the frames before it in time carries temporal information. Several methods have been invented for video classification, but each suffers from its own drawbacks. One such method is the convolutional neural network (CNN) model, a category of deep learning neural network that can operate directly on raw inputs. However, such models have typically been limited to handling two-dimensional inputs only. This chapter implements a three-dimensional convolutional neural network (CNN) model for video classification and analyses the classification accuracy gained using the 3D CNN model. 3D convolutional networks are preferred for video classification since they inherently apply convolutions in the 3D space.
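A minimal sketch of such a 3D convolutional classifier: the kernels slide over height, width, and time at once, so motion across frames is captured directly. The clip size and number of classes below are arbitrary assumptions for illustration, not values from the chapter.

```python
# Hedged sketch: tiny 3D CNN operating on video clips of shape (batch, C, frames, H, W).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv3d(3, 16, kernel_size=(3, 3, 3), padding=1), nn.ReLU(),
    nn.MaxPool3d((1, 2, 2)),                  # pool space only, keep temporal length
    nn.Conv3d(16, 32, kernel_size=(3, 3, 3), padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(32, 10),                        # 10 hypothetical action classes
)

clip = torch.randn(2, 3, 16, 112, 112)        # (batch, channels, frames, H, W)
print(model(clip).shape)                      # torch.Size([2, 10])
```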

