MultiResU-Net: Neural Network for Salt Bodies Delineation and QC Manual Interpretation

2021 ◽  
Author(s):  
Yesser HajNasser

Abstract Accurate delineation of salt bodies is essential for the characterization of hydrocarbon accumulation and seal efficiency in offshore reservoirs. The interpretation of these subsurface features is heavily dependent on visual picking, which in turn can introduce systematic bias into the task of salt body interpretation. In this study, we introduce a novel deep-neural-network approach that mimics an experienced geophysical interpreter's intellect in interpreting salt bodies. Here, the benefits of using machine learning are demonstrated by implementing the MultiResU-Net network. The network is an improved form of the classic U-Net and presents two key architectural improvements. First, it replaces the simple convolutional layers with inception-like blocks with varying kernel sizes to reconcile the spatial features learned from different seismic image contexts. Second, it incorporates residual convolutional layers along the skip connections between the downsampling and the upsampling paths. This aims at compensating for the disparity between the lower-level features coming from the early stages of the downsampling path and the much higher-level features coming from the upsampling path. In the primary results on the TGS Salt Identification Challenge dataset, the MultiResU-Net outperformed the classic U-Net in identifying salt bodies and showed good agreement with the ground truth. Additionally, in the case of complex salt body geometries, the MultiResU-Net predictions exhibited some intriguing differences with the ground truth interpretation. Although the network validation accuracy is about 95%, some of these occasional discrepancies between the neural network predictions and the ground truth highlighted the subjectivity of the manual interpretation. Consequently, this raises the need to incorporate such neural networks, which are subject to random rather than systematic perturbations, into the QC of manual geophysical interpretation.
To bridge the gap between the human interpretation and the machine learning predictions, we propose a closed-loop machine learning workflow that aims at optimizing the training dataset by incorporating both the consistency of the neural network and the intellect of an experienced geophysical interpreter.
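The abstract's first architectural change, inception-like blocks that chain small convolutions to cover several receptive-field sizes, can be sketched in a minimal single-channel numpy form. The function names and shapes below are ours, not the paper's, and a real MultiResU-Net would use learned multi-channel kernels in a deep-learning framework:

```python
import numpy as np

def conv2d(x, k):
    """'Same'-padded 2D convolution of a single-channel image x with kernel k."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def multires_block(x, k3):
    """Chain 3x3 convolutions so successive outputs see effective 3x3, 5x5,
    and 7x7 receptive fields, then stack them as channels (the inception-like
    'varying kernel sizes' idea)."""
    f3 = np.maximum(conv2d(x, k3), 0.0)   # effective 3x3 context
    f5 = np.maximum(conv2d(f3, k3), 0.0)  # effective 5x5 context
    f7 = np.maximum(conv2d(f5, k3), 0.0)  # effective 7x7 context
    return np.stack([f3, f5, f7], axis=0)
```

Chaining two 3x3 kernels instead of using one 5x5 kernel is the standard trick for getting larger context at lower parameter cost.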

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Idris Kharroubi ◽  
Thomas Lim ◽  
Xavier Warin

Abstract We study the approximation of backward stochastic differential equations (BSDEs for short) with a constraint on the gains process. We first discretize the constraint by applying a so-called facelift operator at times of a grid. We show that this discretely constrained BSDE converges to the continuously constrained one as the mesh grid converges to zero. We then focus on the approximation of the discretely constrained BSDE. For that we adopt a machine learning approach. We show that the facelift can be approximated by an optimization problem over a class of neural networks under constraints on the neural network and its derivative. We then derive an algorithm converging to the discretely constrained BSDE as the number of neurons goes to infinity. We end by numerical experiments.
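For intuition about the facelift operator the abstract discretizes: in the special case of a symmetric box constraint Z ∈ [-K, K] on a one-dimensional gains process, the facelift reduces to the smallest K-Lipschitz function dominating the terminal condition, F[φ](x) = sup_y (φ(y) - K|y - x|). The grid sketch below is our illustration of that special case, not the paper's algorithm:

```python
import numpy as np

def facelift_box(phi, xs, K):
    """Discrete facelift on grid xs for the box constraint [-K, K]:
    F[phi](x_i) = max_j ( phi(x_j) - K * |x_j - x_i| ),
    i.e. the smallest K-Lipschitz function dominating phi."""
    xs = np.asarray(xs, dtype=float)
    phi = np.asarray(phi, dtype=float)
    return np.array([np.max(phi - K * np.abs(xs - x)) for x in xs])
```

Two properties worth checking numerically: the output dominates the input (take y = x in the supremum), and its grid increments never exceed K times the grid spacing, mirroring the derivative constraint imposed on the approximating neural networks.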



2020 ◽  
Vol 13 (5) ◽  
pp. 2185-2196
Author(s):  
Stephan Rasp

Abstract. Over the last couple of years, machine learning parameterizations have emerged as a potential way to improve the representation of subgrid processes in Earth system models (ESMs). So far, all studies were based on the same three-step approach: first a training dataset was created from a high-resolution simulation, then a machine learning algorithm was fitted to this dataset, before the trained algorithm was implemented in the ESM. The resulting online simulations were frequently plagued by instabilities and biases. Here, coupled online learning is proposed as a way to combat these issues. Coupled learning can be seen as a second training stage in which the pretrained machine learning parameterization, specifically a neural network, is run in parallel with a high-resolution simulation. The high-resolution simulation is kept in sync with the neural network-driven ESM through constant nudging. This enables the neural network to learn from the tendencies that the high-resolution simulation would produce if it experienced the states the neural network creates. The concept is illustrated using the Lorenz 96 model, where coupled learning is able to recover the “true” parameterizations. Further, detailed algorithms for the implementation of coupled learning in 3D cloud-resolving models and the super parameterization framework are presented. Finally, outstanding challenges and issues not resolved by this approach are discussed.
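The nudging mechanism at the core of coupled learning can be illustrated with a toy single-scale Lorenz 96 model: the high-resolution state is relaxed toward the (neural-network-driven) ESM state with a chosen timescale, so it experiences the states the network creates. This is our simplified sketch; the paper uses the two-scale Lorenz 96 system and a learned parameterization:

```python
import numpy as np

def l96_tendency(x, forcing=8.0):
    """Single-scale Lorenz 96 tendency: dx_k/dt = (x_{k+1}-x_{k-2})x_{k-1} - x_k + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def nudged_step(x_hr, x_esm, dt=0.005, tau=0.1):
    """Advance the 'high-resolution' state one Euler step while nudging it
    toward the ESM state with relaxation timescale tau."""
    tend = l96_tendency(x_hr) + (x_esm - x_hr) / tau
    return x_hr + dt * tend
```

During coupled learning, the difference between the nudged high-resolution tendencies and the ESM's own tendencies supplies the training signal for the parameterization.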


2021 ◽  
Vol 13 (5) ◽  
pp. 969
Author(s):  
Ka Lok Chan ◽  
Ehsan Khorsandi ◽  
Song Liu ◽  
Frank Baier ◽  
Pieter Valks

In this paper, we present the estimation of surface NO2 concentrations over Germany using a machine learning approach. TROPOMI satellite observations of tropospheric NO2 vertical column densities (VCDs) and several meteorological parameters are used to train the neural network model for the prediction of surface NO2 concentrations. The neural network model is validated against ground-based in situ air quality monitoring network measurements and regional chemical transport model (CTM) simulations. Neural network estimates of surface NO2 concentrations show good agreement with in situ monitor data, with a Pearson correlation coefficient (R) of 0.80. The results also show that the machine learning approach performs better than regional CTM simulations in predicting surface NO2 concentrations. We also performed a sensitivity analysis for each input parameter of the neural network model. The validated neural network model is then used to estimate surface NO2 concentrations over Germany from 2018 to 2020. Estimated surface NO2 concentrations are used to investigate spatio-temporal characteristics, such as seasonal and weekly variations of NO2 in Germany. The estimated surface NO2 concentrations provide comprehensive information on the NO2 spatial distribution, which is very useful for exposure estimation. The estimated annual average NO2 exposure for 2018, 2019 and 2020 is 15.53, 15.24 and 13.27 µg/m³, respectively, while the annual average NO2 concentration for the same years is only 12.79, 12.60 and 11.15 µg/m³. In addition, we used the surface NO2 data set to investigate the impacts of the coronavirus disease 2019 (COVID-19) pandemic on ambient NO2 levels in Germany. In general, 10–30% lower surface NO2 concentrations are observed in 2020 compared to 2018 and 2019, indicating the significant impacts of the series of restriction measures introduced to reduce the spread of the virus.
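The regression setup described above, meteorological predictors plus satellite VCDs mapped to surface concentrations, can be sketched as a small one-hidden-layer network trained by gradient descent. Everything below (synthetic data, layer sizes, learning rate) is illustrative rather than the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the predictors (e.g. NO2 VCD plus meteorology, 5 features)
# and the surface-concentration target; shapes and values are illustrative.
X = rng.normal(size=(256, 5))
y = (np.sin(X[:, 0]) + 0.5 * X[:, 1]).reshape(-1, 1)

W1 = rng.normal(scale=0.5, size=(5, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

lr, losses = 0.05, []
for _ in range(300):
    h = np.tanh(X @ W1 + b1)            # hidden layer
    pred = h @ W2 + b2                  # linear output (regression)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation through the single hidden layer.
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

A sensitivity analysis like the one mentioned in the abstract can then be done by perturbing one input column at a time and measuring the change in predictions.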


Extracting the sentiment of text using machine learning techniques such as LSTM is our area of concern. Classifying movie reviews using an LSTM is our problem statement. The reviews are taken from the IMDB movie review dataset. Here we classify a review based on the memory held in the cell state of the LSTM. Movie reviews often contain descriptive content from which a human can manually decide whether a movie is good or bad. Using a machine learning approach, we classify the movie reviews so that we can say whether a movie is good or bad. The LSTM is more effective than many other techniques, such as plain RNNs and CNNs.
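The "memory in the cell state" that the passage relies on is the defining feature of an LSTM cell. A minimal numpy sketch of a single LSTM step (our notation; a real classifier would stack these over the token sequence and add an output layer) makes the gating explicit:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W: (4H, D), U: (4H, H), b: (4H,).
    Gate order in the stacked weights: input, forget, output, candidate."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = 1.0 / (1.0 + np.exp(-z[0:H]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))      # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H]))    # output gate
    g = np.tanh(z[3*H:4*H])                  # candidate update
    c_new = f * c + i * g                    # cell state: long-term memory
    h_new = o * np.tanh(c_new)               # hidden state: per-step output
    return h_new, c_new
```

Because the cell state is updated additively (f * c + i * g), gradients survive over long token sequences, which is why the LSTM handles long reviews better than a plain RNN.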


2021 ◽  
Vol 16 (12) ◽  
pp. P12002
Author(s):  
X.Y. Xie ◽  
H.L. Xu ◽  
Q.Y. Li ◽  
Y.J. Sun

Abstract A data-based machine learning approach is proposed to study the time resolution of RPC detectors by measuring the time of flight of cosmic muons. The method utilises a multi-layer perceptron and a type of recurrent neural network called long short-term memory. The neural network is trained with the waveforms of RPC signals digitized by an oscilloscope at a sampling frequency of 10 GHz with a 2 GHz bandwidth. A data augmentation approach is implemented for labelling. Compared to the results from conventional waveform analysis, this approach achieves a better time resolution for 1-mm gap RPCs. Based on the data, the approach generalises to performance studies of other timing detectors.
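The "conventional waveform analysis" that the neural network is benchmarked against typically extracts a signal time by interpolating a threshold crossing at a constant fraction of the pulse peak. A minimal sketch of that baseline on a digitized waveform (our implementation, not the paper's analysis code):

```python
import numpy as np

def crossing_time(wave, t, frac=0.5):
    """Leading-edge arrival time at a constant fraction of the pulse peak,
    refined by linear interpolation between adjacent samples."""
    thr = frac * wave.max()
    k = int(np.argmax(wave >= thr))   # first sample at or above threshold
    if k == 0:
        return t[0]
    w0, w1 = wave[k - 1], wave[k]
    return t[k - 1] + (thr - w0) / (w1 - w0) * (t[k] - t[k - 1])
```

The neural network instead consumes the raw samples directly, which lets it exploit pulse-shape information that a single threshold crossing discards.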


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Peter M. Maloca ◽  
Philipp L. Müller ◽  
Aaron Y. Lee ◽  
Adnan Tufail ◽  
Konstantinos Balaskas ◽  
...  

Abstract Machine learning has greatly facilitated the analysis of medical data, while the internal operations usually remain opaque. To better comprehend these opaque procedures, a convolutional neural network for optical coherence tomography image segmentation was enhanced with a Traceable Relevance Explainability (T-REX) technique. The proposed application was based on three components: ground truth generation by multiple graders, calculation of Hamming distances among graders and the machine learning algorithm, as well as a smart data visualization ('neural recording'). An overall average variability of 1.75% between the human graders and the algorithm was found, slightly lower than the 2.02% variability among the human graders. The ambiguity in the ground truth had a noteworthy impact on machine learning results, which could be visualized. The convolutional neural network balanced between graders and allowed for modifiable predictions dependent on the compartment. Using the proposed T-REX setup, machine learning processes could be rendered more transparent and understandable, possibly leading to optimized applications.
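The variability figures quoted above come from pairwise Hamming distances between segmentation masks. A short sketch of that computation (our helper names; masks are binary arrays of per-pixel labels):

```python
import numpy as np

def hamming_pct(a, b):
    """Relative Hamming distance (%) between two binary masks:
    the fraction of pixels where the labels disagree."""
    a, b = np.asarray(a), np.asarray(b)
    return 100.0 * np.mean(a != b)

def mean_pairwise_variability(masks):
    """Average pairwise relative Hamming distance among a list of masks,
    e.g. one mask per human grader (or per grader plus the algorithm)."""
    d = [hamming_pct(masks[i], masks[j])
         for i in range(len(masks)) for j in range(i + 1, len(masks))]
    return float(np.mean(d))
```

Comparing this average among human graders with the average between each grader and the algorithm yields figures like the 2.02% vs. 1.75% reported in the abstract.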


2020 ◽  
Vol 10 (18) ◽  
pp. 6417 ◽  
Author(s):  
Emanuele Lattanzi ◽  
Giacomo Castellucci ◽  
Valerio Freschi

Most road accidents occur due to human fatigue, inattention, or drowsiness. Recently, machine learning technology has been successfully applied to identifying driving styles and recognizing unsafe behaviors starting from in-vehicle sensor signals such as vehicle and engine speed, throttle position, and engine load. In this work, we investigated the fusion of different external sensors, such as a gyroscope and a magnetometer, with in-vehicle sensors, to improve machine learning identification of unsafe driver behavior. Starting from those signals, we computed a set of features capable of accurately describing the behavior of the driver. A support vector machine and an artificial neural network were then trained and tested using several features calculated over more than 200 km of travel. The ground truth used to evaluate classification performance was obtained by means of an objective methodology based on the relationship between speed and the lateral and longitudinal acceleration of the vehicle. The classification results showed an average accuracy of about 88% using the SVM classifier and of about 90% using the neural network, demonstrating the potential capability of the proposed methodology to identify unsafe driver behaviors.
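An objective labelling rule of the kind described, relating speed to lateral and longitudinal acceleration, can be sketched as a friction-circle-style check: a sample is flagged unsafe when the combined horizontal acceleration exceeds a speed-dependent threshold. The thresholds below (0.40 g at low speed, 0.25 g above 100 km/h) are hypothetical placeholders, not the paper's values:

```python
import numpy as np

def unsafe_mask(a_lat, a_lon, speed_kmh, g=9.81):
    """Flag samples whose combined horizontal acceleration exceeds a
    speed-dependent threshold (hypothetical: 0.40 g at low speed,
    tightening to 0.25 g above 100 km/h)."""
    a_comb = np.hypot(a_lat, a_lon)          # magnitude of horizontal accel.
    thr = np.where(speed_kmh > 100.0, 0.25 * g, 0.40 * g)
    return a_comb > thr
```

Such a rule gives a reproducible ground truth against which the SVM and neural network classifiers can then be scored.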


2013 ◽  
Vol 7 (3) ◽  
pp. 646-653
Author(s):  
Anshul Chaturvedi ◽  
Prof. Vineet Richharia

The Internet, computer networks and information are vital resources of the current information era, and their protection has gained importance. Any attempt, successful or unsuccessful, to compromise the confidentiality, integrity or availability of an information resource, or of the information itself, is considered a security attack or an intrusion. An intrusion compromises information credentials and undermines trust in the security infrastructure. Intrusion detection mechanisms face the problem of newly generated attack schemas and patterns. Various authors and researchers have proposed intrusion detection methods based on machine learning and neural network approaches, but all of these struggle with new patterns and schemas. In this paper, a new intrusion detection model based on the SARSA reinforcement learning scheme and an RBF neural network is proposed. The SARSA method models attack behaviour as states, and the RBF neural network is trained on the patterns of new schemas. Our empirical results show that the proposed model compares favourably with plain SARSA and other machine learning techniques.
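The combination at the heart of the proposed model, a SARSA temporal-difference update over RBF features, can be sketched compactly. The state encoding, feature widths and hyperparameters below are illustrative, not the paper's configuration:

```python
import numpy as np

def rbf_features(s, centers, width=0.5):
    """Gaussian RBF feature vector for a scalar state s."""
    return np.exp(-((s - centers) ** 2) / (2 * width ** 2))

def sarsa_update(w, s, a, r, s2, a2, centers, alpha=0.1, gamma=0.9):
    """One SARSA update of linear action-value weights over RBF features:
    w[a] += alpha * (r + gamma * Q(s', a') - Q(s, a)) * phi(s)."""
    phi, phi2 = rbf_features(s, centers), rbf_features(s2, centers)
    td = r + gamma * (w[a2] @ phi2) - (w[a] @ phi)
    w[a] = w[a] + alpha * td * phi
    return w
```

In an intrusion-detection setting, states would encode observed traffic features and actions the alert/no-alert decision, with rewards reflecting detection correctness.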


2021 ◽  
Author(s):  
Marco Luca Sbodio ◽  
Natasha Mulligan ◽  
Stefanie Speichert ◽  
Vanessa Lopez ◽  
Joao Bettencourt-Silva

There is a growing trend in building deep learning patient representations from health records to obtain a comprehensive view of a patient's data for machine learning tasks. This paper proposes a reproducible approach to generate patient pathways from health records and to transform them into a machine-processable, image-like structure useful for deep learning tasks. Based on this approach, we generated over a million pathways from FAIR synthetic health records and used them to train a convolutional neural network. Our initial experiments show the accuracy of the CNN on a prediction task is comparable to or better than other autoencoders trained on the same data, while requiring significantly fewer computational resources for training. We also assess the impact of the size of the training dataset on autoencoder performance. The source code for generating pathways from health records is provided as open source.
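One common way to turn an ordered sequence of clinical events into an image-like structure a CNN can consume is a one-hot grid with event types as rows and time steps as columns. The sketch below is our illustration of that general idea (vocabulary and event codes are made up), not the paper's released encoding:

```python
import numpy as np

def pathway_to_image(events, vocab, max_len):
    """One-hot encode an ordered list of clinical event codes into a
    (len(vocab), max_len) image-like grid: rows = event types, cols = time."""
    idx = {code: i for i, code in enumerate(vocab)}
    img = np.zeros((len(vocab), max_len), dtype=np.float32)
    for t, code in enumerate(events[:max_len]):
        img[idx[code], t] = 1.0
    return img
```

Fixing the grid dimensions across patients is what lets a standard convolutional network train on pathways of varying length.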

