Using Graph Neural Networks for 3-D Structural Geological Modelling

Author(s):  
Michael Hillier ◽  
Florian Wellmann ◽  
Boyan Brodaric ◽  
Eric de Kemp ◽  
Ernst Schetselaar

A new approach for constrained 3-D structural geological modelling using graph neural networks (GNNs) has been developed that is driven by a learning-through-training paradigm. Graph neural networks are an emerging deep learning model for graph-structured data that can produce vector embeddings of graph elements, including nodes, edges, and entire graphs, useful for various learning objectives. In this work our graphs represent unstructured volumetric meshes. Our GNN architecture can generate spatially interpolated implicit scalar fields and discrete geological unit predictions on graph nodes (e.g. mesh vertices) to construct 3-D structural models. Interpolations are constrained by scattered point data sampling geological units and interfaces, as well as linear and planar orientation measurements. Interpolation constraints are incorporated into the neural architecture using loss functions, one per constraint type, that measure the error between the network's predictions and the data observations. This presentation will describe key concepts involved in this approach, including vector embeddings, spatial convolutions on graphs, and loss functions for structural geological features. In addition, several modelling results will be given that demonstrate the capabilities and potential of GNNs for representing geological structures.
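The per-constraint loss idea can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the function names (`interface_loss`, `orientation_loss`) and the specific forms (squared error for interface points, one minus cosine similarity for planar orientations) are assumptions chosen to mirror the description above.

```python
import numpy as np

# Toy sketch of two constraint-specific loss terms for an implicit
# scalar field f predicted at scattered points (illustrative only).

def interface_loss(f_pred, f_ref):
    """Interface points on the same horizon should share a scalar value."""
    return np.mean((f_pred - f_ref) ** 2)

def orientation_loss(grad_pred, normals):
    """Predicted field gradients should align with measured plane normals
    (1 - cosine similarity, averaged over measurements)."""
    g = grad_pred / np.linalg.norm(grad_pred, axis=1, keepdims=True)
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    return np.mean(1.0 - np.sum(g * n, axis=1))

f_pred = np.array([0.5, 0.5, 0.5])    # predicted scalar at interface samples
grads = np.array([[0.0, 0.0, 2.0]])   # predicted gradient at an orientation site
norms = np.array([[0.0, 0.0, 1.0]])   # measured unit normal of the plane
total = interface_loss(f_pred, 0.5) + orientation_loss(grads, norms)
```

Predictions that exactly honour the data drive both terms to zero; in training, the weighted sum of such terms is minimised by backpropagation.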

Author(s):  
Michael Hillier ◽  
Florian Wellmann ◽  
Boyan Brodaric ◽  
Eric de Kemp ◽  
Ernst Schetselaar

Three-dimensional structural geomodels are increasingly being used for a wide variety of scientific and societal purposes. Most advanced methods for generating these models are implicit approaches, but they suffer limitations in the types of interpolation constraints permitted, which can lead to poor modeling in structurally complex settings. A geometric deep learning approach, using graph neural networks, is presented in this paper as an alternative to classical implicit interpolation that is driven by a learning through training paradigm. The graph neural network approach consists of a developed architecture utilizing unstructured meshes as graphs on which coupled implicit and discrete geological unit modeling is performed, with the latter treated as a classification problem. The architecture generates three-dimensional structural models constrained by scattered point data, sampling geological units and interfaces as well as planar and linear orientations. The modeling capacity of the architecture for representing geological structures is demonstrated from its application on two diverse case studies. The benefits of the approach are (1) its ability to provide an expressive framework for incorporating interpolation constraints using loss functions and (2) its capacity to deal with both continuous and discrete properties simultaneously. Furthermore, a framework is established for future research for which additional geological constraints can be integrated into the modeling process.
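The discrete half of the coupled modeling task, geological-unit prediction treated as per-node classification, is typically trained with a softmax cross-entropy loss. A minimal sketch follows; `unit_ce_loss` is a hypothetical name, and the three-unit example data are invented for illustration.

```python
import numpy as np

# Sketch: per-node softmax cross-entropy over geological-unit classes,
# the standard loss for a classification head (illustrative, not the
# paper's exact architecture).

def unit_ce_loss(logits, labels):
    """Mean softmax cross-entropy for per-node unit predictions."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -np.mean(log_probs[np.arange(len(labels)), labels])

logits = np.log(np.array([[0.7, 0.2, 0.1],    # node 1: favours unit 0
                          [0.1, 0.8, 0.1]]))  # node 2: favours unit 1
labels = np.array([0, 1])                     # observed unit samples
loss = unit_ce_loss(logits, labels)           # small when predictions match
```

In a coupled setup, a term like this is summed with the continuous (implicit scalar field) losses so that both property types are fit simultaneously.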


2021 ◽  
Author(s):  
Sayan Nag

Self-supervised learning and pre-training strategies have developed over the last few years, especially for convolutional neural networks (CNNs). Recently, such methods have also been applied to graph neural networks (GNNs). In this paper, we use a graph-based self-supervised learning strategy with different loss functions (Barlow Twins, HSIC, VICReg) that have previously shown promising results when applied with CNNs. We also propose a hybrid loss function, named VICRegHSIC, combining the advantages of VICReg and HSIC. The performance of these methods is compared on two different datasets, MUTAG and PROTEINS. Moreover, the impact of different batch sizes, projector dimensions, and data augmentation strategies is explored. The results are preliminary, and we will continue to experiment with other datasets.
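As a reference point for one of the reused objectives, the Barlow Twins loss cross-correlates the embeddings of two augmented views and pushes the diagonal toward 1 (invariance) and the off-diagonal toward 0 (redundancy reduction). The sketch below works on plain arrays; in the graph setting the inputs would be GNN embeddings of two augmented views of the same graph, and the weight `lam` follows the commonly used default.

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Barlow Twins objective on two batches of embeddings (n x d):
    standardise each feature, cross-correlate the two views, then
    penalise diagonal deviation from 1 and any off-diagonal mass."""
    n = z_a.shape[0]
    za = (z_a - z_a.mean(0)) / z_a.std(0)
    zb = (z_b - z_b.mean(0)) / z_b.std(0)
    c = za.T @ zb / n                              # d x d cross-correlation
    on_diag = np.sum((np.diag(c) - 1.0) ** 2)      # invariance term
    off_diag = np.sum((c - np.diag(np.diag(c))) ** 2)  # redundancy term
    return on_diag + lam * off_diag

# Identical views with decorrelated features give zero loss.
z = np.array([[1., 1.], [1., -1.], [-1., 1.], [-1., -1.]])
loss = barlow_twins_loss(z, z)
```

The HSIC and VICReg variants swap in different decorrelation terms; a hybrid such as VICRegHSIC would combine terms from both into one objective.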


2021 ◽  
Author(s):  
Gabriel Jonas Duarte ◽  
Tamara Arruda Pereira ◽  
Erik Jhones Nascimento ◽  
Diego Mesquita ◽  
Amauri Holanda Souza Junior

Graph neural networks (GNNs) have become the de facto approach for supervised learning on graph data. To train these networks, most practitioners employ the categorical cross-entropy (CE) loss. We can attribute this largely to the probabilistic interpretation of CE, since it corresponds to the negative log of the categorical/softmax likelihood. Nonetheless, loss functions are a modeling choice, and other training criteria can be employed, e.g., hinge loss and mean absolute error (MAE). Indeed, recent works have shown that deep learning models can benefit from other losses; for instance, neural networks trained with symmetric losses (e.g., MAE) are robust to label noise. Perhaps surprisingly, the effect of using different losses on GNNs has not been explored. In this preliminary work, we gauge the impact of different loss functions on the performance of GNNs for node classification under (i) noisy labels and (ii) different sample sizes. In contrast to findings on Euclidean domains, our results show no significant difference between GNNs trained with CE and with other classical loss functions in either scenario.
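The label-noise argument for symmetric losses can be seen on a single toy sample (an illustration of the general point, not the paper's experiment): on a confidently classified example whose label has been flipped, CE grows without bound with model confidence, while the absolute error against a one-hot target is capped at 2.

```python
import numpy as np

# Toy contrast between CE and (unnormalised) MAE on one mislabelled sample.

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ce(p, y):
    """Cross-entropy for true class index y; unbounded as p[y] -> 0."""
    return -np.log(p[y])

def mae(p, y):
    """Absolute error against a one-hot target; always <= 2."""
    t = np.zeros_like(p)
    t[y] = 1.0
    return np.abs(p - t).sum()

p = softmax(np.array([8.0, 0.0, 0.0]))  # model is confident in class 0
noisy_label = 1                          # a flipped (noisy) label
ce_val = ce(p, noisy_label)              # large: grows with confidence
mae_val = mae(p, noisy_label)            # bounded regardless of confidence
```

Because the per-sample MAE penalty is bounded, a few noisy labels cannot dominate the gradient the way they can under CE.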


2020 ◽  
Author(s):  
Artur Schweidtmann ◽  
Jan Rittig ◽  
Andrea König ◽  
Martin Grohe ◽  
Alexander Mitsos ◽  
...  

Prediction of combustion-related properties of (oxygenated) hydrocarbons is an important and challenging task for which quantitative structure-property relationship (QSPR) models are frequently employed. Recently, graph neural networks (GNNs), a machine learning method, have shown promising results for the prediction of structure-property relationships. GNNs utilize a graph representation of molecules, where atoms correspond to nodes and bonds to edges carrying information about the molecular structure. More specifically, GNNs learn physico-chemical properties as a function of the molecular graph in a supervised learning setup using a backpropagation algorithm. This end-to-end learning approach eliminates the need to select molecular descriptors or structural groups, as it learns optimal fingerprints through graph convolutions and maps the fingerprints to the physico-chemical properties by deep learning. We develop GNN models for predicting three fuel ignition quality indicators, i.e., the derived cetane number (DCN), the research octane number (RON), and the motor octane number (MON), of oxygenated and non-oxygenated hydrocarbons. In light of the limited experimental data, on the order of hundreds of measurements, we propose a combination of multi-task learning, transfer learning, and ensemble learning. The results show competitive performance of the proposed GNN approach compared to state-of-the-art QSPR models, making GNNs a promising avenue for future research. The prediction tool is available via a web front-end at www.avt.rwth-aachen.de/gnn.
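The atoms-as-nodes, bonds-as-edges idea and one graph-convolution step can be sketched as follows. This is a generic mean-aggregation layer with a random weight matrix and a sum-pool readout, assumed for illustration; it is not the paper's architecture, and the final property regression layer is only indicated in a comment.

```python
import numpy as np

# Minimal sketch of one graph-convolution step on a molecular graph
# (atoms = nodes, bonds = edges). W is learned in practice; random here.

adj = np.array([[0, 1, 0],       # 3-atom chain A-B-C: bonds A-B and B-C
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
x = np.eye(3)                    # one-hot atom-type features per node

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))      # placeholder learned weight matrix

# Message passing: mean-aggregate each node's neighbourhood (plus self),
# apply the linear transform, then a ReLU nonlinearity.
a_hat = adj + np.eye(3)
deg = a_hat.sum(axis=1, keepdims=True)
h = np.maximum((a_hat / deg) @ x @ W, 0.0)

# Sum-pool readout produces a learned molecular fingerprint; a final
# dense layer would map it to property predictions such as DCN/RON/MON.
fingerprint = h.sum(axis=0)
```

Stacking several such layers lets information propagate beyond immediate bonds, which is how the learned fingerprint comes to encode larger structural motifs.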


2020 ◽  
Author(s):  
Zheng Lian ◽  
Jianhua Tao ◽  
Bin Liu ◽  
Jian Huang ◽  
Zhanlei Yang ◽  
...  
