Localization of Thermal Wellbore Defects Using Machine Learning

2022
pp. 1-13
Author(s):
Kathryn Bruss
Raymond Kim
Taylor A. Myers
Jiann-cherng Su
Anirban Mazumdar

Abstract Defect detection and localization are key to preventing environmentally damaging wellbore leakages in both geothermal and oil/gas applications. In this work, a multi-step machine learning approach is used to localize two types of thermal defects within a wellbore model. The approach combines a COMSOL heat transfer simulation to generate base data, a neural network to classify defect orientations, and a localization algorithm that synthesizes per-sensor estimates into a predicted location. A small-scale physical wellbore test bed was built to provide experimental data for verification, and the classification and localization results were quantified against these data. The classifier predicted all experimental defect orientations correctly, and the localization algorithm predicted the defect location with an average root mean square error of 1.49 in. The core contributions of this work are 1) the overall localization architecture, 2) the use of centroid-guided mean-shift clustering for localization, and 3) the experimental validation and quantification of performance.
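The abstract does not spell out the clustering step in code; the snippet below is a minimal sketch of one plausible reading of "centroid-guided mean-shift clustering", in which per-sensor location estimates are fused with scikit-learn's MeanShift seeded at the centroid of the estimates. The function name, bandwidth, and sensor estimates are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: fuse per-sensor (x, y) defect-location estimates with
# mean-shift clustering, seeding the mode search at the centroid of the
# estimates. Bandwidth and data are illustrative assumptions.
import numpy as np
from sklearn.cluster import MeanShift

def localize_defect(sensor_estimates, bandwidth=3.0):
    """Fuse per-sensor (x, y) estimates into one predicted defect location."""
    estimates = np.asarray(sensor_estimates, dtype=float)  # shape (n_sensors, 2)
    centroid = estimates.mean(axis=0, keepdims=True)       # seed the search at the centroid
    ms = MeanShift(bandwidth=bandwidth, seeds=centroid)
    ms.fit(estimates)
    return ms.cluster_centers_[0]                          # predicted (x, y) location

# Example: three consistent sensor estimates and one outlier (inches)
print(localize_defect([[3.1, 7.9], [2.8, 8.3], [3.4, 8.0], [9.0, 1.0]]))
```

Because the mode search only follows nearby estimates, a single outlying sensor estimate pulls the seed but not the converged cluster center, which is the practical appeal of a mean-shift step over a plain average.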

2018
Vol 35 (3)
pp. 523-540
Author(s):
Conor McNicholas
Clifford F. Mass

Abstract Over half a billion smartphones worldwide are now capable of measuring atmospheric pressure, providing a pressure network of unprecedented density and coverage. This paper describes novel approaches for the collection, quality control, and bias correction of such smartphone pressures. An Android app was developed and distributed to several thousand users, serving as a test bed for onboard pressure collection and quality-control strategies. New methods of pressure collection were evaluated, with a focus on reducing and quantifying sources of observation error and uncertainty. Using a machine learning approach, complex relationships between pressure bias and ancillary sensor data were used to predict and correct future pressure biases over a 4-week period from 10 November to 5 December 2016. This approach, in combination with simple quality-control checks, produced an 82% reduction in the average smartphone pressure bias, substantially improving the quality of smartphone pressures and facilitating their use in numerical weather prediction.
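As a rough illustration of the bias-correction idea, the sketch below trains a regressor to map ancillary phone-sensor readings to the pressure bias and then subtracts the predicted bias from a raw reading. The feature set, model choice, and synthetic data are assumptions made for illustration; they are not the paper's actual pipeline.

```python
# Illustrative sketch: learn pressure bias from hypothetical ancillary
# features, then correct a new raw reading. Features, model, and data are
# assumptions, not the published method.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000
# Hypothetical ancillary features: battery temperature (C), GPS altitude (m),
# altitude uncertainty (m), phone speed (m/s)
X = np.column_stack([
    rng.uniform(15, 45, n),
    rng.uniform(0, 500, n),
    rng.uniform(1, 30, n),
    rng.uniform(0, 30, n),
])
# Synthetic bias (hPa) loosely tied to temperature and altitude uncertainty
bias_hpa = 0.05 * (X[:, 0] - 25) + 0.002 * X[:, 2] + rng.normal(0, 0.2, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, bias_hpa)

raw_pressure = 1013.2  # hPa, a single new smartphone reading
features = np.array([[35.0, 120.0, 8.0, 1.5]])
corrected = raw_pressure - model.predict(features)[0]
print(f"corrected pressure: {corrected:.2f} hPa")
```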


2021
Author(s):
Michael Kilgour
Lena Simine

We have recently demonstrated an effective protocol for the simulation of amorphous molecular configurations using the PixelCNN generative model (J. Phys. Chem. Lett. 2020, 11, 20, 8532). The morphological sampling of amorphous materials via such an autoregressive generation protocol sidesteps the high computational costs associated with simulating amorphous materials at scale, enabling practically unlimited structural sampling based on only small-scale experimental or computational training samples. An important question raised but not rigorously addressed in that report was whether this machine learning approach could be considered a physical simulation in the conventional sense. Here we answer this question by detailing the inner workings of the underlying algorithm that we refer to as the Morphological Autoregression Protocol or MAP.
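For readers unfamiliar with autoregressive generation, the sketch below shows the generic PixelCNN-style sampling loop such a protocol relies on: lattice sites are generated one at a time in raster order, each conditioned on the sites already sampled. The conditional distribution here is a trivial stand-in, not the trained MAP model.

```python
# Schematic PixelCNN-style autoregressive sampling over a 2D lattice.
# The conditional model is a toy stand-in for a trained network.
import numpy as np

def conditional_prob(grid, i, j):
    """Stand-in for P(site occupied | previously sampled neighborhood)."""
    window = grid[max(0, i - 2):i + 1, max(0, j - 2):j + 1]
    sampled = window[window >= 0]                       # ignore not-yet-sampled sites
    density = sampled.mean() if sampled.size else 0.5
    return float(np.clip(0.2 + 0.6 * density, 0.05, 0.95))

def sample_configuration(height=32, width=32, seed=0):
    rng = np.random.default_rng(seed)
    grid = -np.ones((height, width))                    # -1 marks "not yet sampled"
    for i in range(height):
        for j in range(width):                          # raster-scan order
            grid[i, j] = float(rng.random() < conditional_prob(grid, i, j))
    return grid

config = sample_configuration()
print(config.mean())   # fraction of occupied lattice sites
```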


2017
Vol 17 (17)
pp. 10855-10864
Author(s):
Sarvesh Garimella
Daniel A. Rothenberg
Martin J. Wolf
Robert O. David
Zamin A. Kanji
...

Abstract. This study investigates the measurement of ice nucleating particle (INP) concentrations and the sizing of crystals using continuous flow diffusion chambers (CFDCs). CFDCs have been deployed for decades to measure ice formation by INPs under controlled humidity and temperature conditions, both in laboratory studies and in ambient aerosol populations. These measurements have, in turn, been used to construct parameterizations for use in models by relating the formation of ice crystals to state variables such as temperature and humidity as well as aerosol particle properties such as composition and number. We show here that assumptions of ideal instrument behavior are not supported by measurements made with a commercially available CFDC, the SPectrometer for Ice Nucleation (SPIN), and the instrument on which it is based, the Zurich Ice Nucleation Chamber (ZINC). Non-ideal instrument behavior, which is likely inherent to varying degrees in all CFDCs, is caused by exposure of particles to different humidities and/or temperatures than predicted by the instrument's theory of operation. This can result in a systematic, and variable, underestimation of reported INP concentrations. We find variable correction factors ranging from 1.5 to 9.5, consistent with previous literature values. We use a machine learning approach to show that the non-ideality is most likely due to small-scale flow features where the aerosols are combined with the sheath flows. Machine learning is also used to minimize the uncertainty in measured INP concentrations. We suggest that detailed measurements be performed on an instrument-by-instrument basis to characterize this uncertainty.
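As a schematic of how a learned model can characterize instrument non-ideality, the sketch below regresses an empirical correction factor onto instrument state variables. The feature names, synthetic data, and model choice are assumptions for illustration only; the actual analysis rests on instrument-specific measurements rather than generated data.

```python
# Sketch: regress a CFDC correction factor onto instrument state variables.
# Features and data are synthetic placeholders, not SPIN/ZINC measurements.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 2000
lamina_temp_c = rng.uniform(-45, -20, n)      # hypothetical lamina set-point temperature
supersat_pct = rng.uniform(0, 30, n)          # hypothetical supersaturation w.r.t. ice
sheath_to_aerosol = rng.uniform(8, 12, n)     # hypothetical sheath:aerosol flow ratio
X = np.column_stack([lamina_temp_c, supersat_pct, sheath_to_aerosol])

# Synthetic correction factors spanning roughly the 1.5-9.5 range noted above
cf = 1.5 + 8.0 * rng.beta(2, 5, n) + 0.02 * supersat_pct

model = GradientBoostingRegressor().fit(X, cf)
predicted_cf = model.predict([[-30.0, 15.0, 10.0]])[0]
print(f"predicted correction factor: {predicted_cf:.2f}")
```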


2021
Author(s):
Timothy I. Anderson
Yunan Li
Anthony R. Kovscek

Abstract Heavy oil resources are becoming increasingly important for the global oil supply, and consequently there has been renewed interest in techniques for extracting heavy oil. Among these, in-situ combustion (ISC) has tremendous potential for late-stage heavy oil fields, as well as high viscosity, very deep, or other unconventional reservoirs. A critical step in evaluating the use of ISC in a potential project is developing an accurate chemical reaction model to employ for larger-scale simulations. Such models can be difficult to calibrate, however, which in turn can lead to large errors in upscaled simulations. Data-driven models of ISC kinetics overcome these issues by forgoing the calibration step and predicting kinetics directly from laboratory data. In this work, we introduce the Non-Arrhenius Machine Learning Approach (NAMLA). NAMLA is a machine learning-based method for predicting O2 consumption in heavy oil combustion directly from ramped temperature oxidation (RTO) experimental data. Our model treats the O2 consumption as a function of only temperature and total O2 conversion and uses a locally-weighted linear regression model to predict the conversion rate at a query point. We apply this method to simulated and experimental data from heavy oil samples and compare its ability to predict O2 consumption curves with a previously proposed interpolation-based method. Results show that the presented method has better performance than previously proposed interpolation models when the available experimental data is very sparse or the query point lies outside the range of RTO experiments in the dataset. When available data is sufficiently dense or the query point is within the range of RTO curves in the training set, then linear interpolation has comparable or better accuracy than the proposed method. The biggest advantage of the proposed method is that it is able to compute confidence intervals for experimentally measured or estimated O2 consumption curves. We believe that future methods will be able to combine the efficiency and accuracy of interpolation-based methods with the statistical properties of the proposed machine learning approach to better characterize and predict heavy oil combustion.
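The locally weighted regression at the heart of the described approach can be sketched as follows: for a query point (temperature, total O2 conversion), nearby training points are weighted by a kernel and a weighted least-squares fit gives the local rate prediction. The Gaussian kernel, bandwidth, and toy data below are assumptions, not the paper's calibrated choices.

```python
# Minimal locally weighted linear regression: predict the O2 conversion rate
# at a query (temperature, total conversion) point from nearby training data.
import numpy as np

def lwlr_predict(X_train, y_train, x_query, bandwidth=0.1):
    """Weighted least-squares fit around x_query; inputs scaled to [0, 1]."""
    d2 = np.sum((X_train - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))               # Gaussian kernel weights
    A = np.hstack([np.ones((len(X_train), 1)), X_train])   # intercept + linear terms
    W = np.diag(w)
    beta, *_ = np.linalg.lstsq(A.T @ W @ A, A.T @ W @ y_train, rcond=None)
    return np.array([1.0, *x_query]) @ beta

# Toy training data: (scaled temperature, scaled total O2 conversion) -> rate
X_train = np.random.default_rng(2).random((200, 2))
y_train = 0.8 * X_train[:, 0] * (1 - X_train[:, 1])        # synthetic rate surface
print(lwlr_predict(X_train, y_train, np.array([0.6, 0.3])))
```

Because the fit is local and weighted, the spread of the weighted residuals around the query point can also be used to attach an uncertainty estimate to each prediction, which is the statistical property the abstract highlights.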


2021
Author(s):
Sven Degroeve
Ralf Gabriels
Kevin Velghe
Robbin Bouwmeester
Natalia Tichshenko
...

Abstract Mass spectrometry-based proteomics generates vast amounts of signal data that require computational interpretation to obtain peptide identifications. Dozens of algorithms for this task exist, but all exploit only part of the acquired data to judge a peptide-to-spectrum match (PSM), ignoring important information such as the observed retention time and fragment ion peak intensity pattern. Moreover, only a few identification algorithms allow open modification searches that can substantially increase peptide identifications. We therefore introduce ionbot, a novel open modification search engine that is the first to fully merge machine learning with peptide identification. This core innovation brings the ability to include a much larger range of experimental data into PSM scoring, and even to adapt this scoring to the specifics of the data itself. As a result, ionbot substantially increases PSM confidence for open searches, and even enables a further increase in peptide identification rate of up to 30% by also considering highly plausible, lower-ranked, co-eluting matches for a fragmentation spectrum. Moreover, the exclusive use of machine learning for scoring also means that any future improvements to predictive models for peptide behavior will also result in more sensitive and accurate peptide identification.
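Conceptually, machine-learning-based PSM scoring feeds a classifier with features that compare the observed data to predictions for the candidate peptide, such as retention time error and fragment intensity agreement. The sketch below illustrates that idea with hypothetical features and synthetic labels; it is not ionbot's actual feature set or model.

```python
# Conceptual sketch of ML-based PSM scoring with hypothetical features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
n = 4000
retention_time_error = np.abs(rng.normal(0, 1, n))       # |observed - predicted| RT
fragment_intensity_corr = rng.uniform(-1, 1, n)          # observed vs predicted spectrum
precursor_mass_error_ppm = np.abs(rng.normal(0, 5, n))
X = np.column_stack([retention_time_error, fragment_intensity_corr, precursor_mass_error_ppm])

# Synthetic labels: correct PSMs tend to have low RT error and high correlation
is_correct = (fragment_intensity_corr - retention_time_error + rng.normal(0, 0.5, n)) > 0

scorer = GradientBoostingClassifier().fit(X, is_correct)
psm_score = scorer.predict_proba([[0.2, 0.9, 1.5]])[0, 1]   # probability the PSM is correct
print(f"PSM score: {psm_score:.3f}")
```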

