Use of Machine Learning Production Driver Cross-Sections for Regional Geologic Insights in the Bakken-Three Forks Play

Author(s):  
T. Cross ◽  
K. Sathaye ◽  
J. Chaplin
2021 ◽  
Vol 247 ◽  
pp. 02039
Author(s):  
LI Zeguang ◽  
Jun Sun ◽  
Chunlin Wei ◽  
Zhe Sui ◽  
Xiaoye Qian

With the increasing need for accurate simulation, a 3-D diffusion reactor physics module has been implemented in the HTGR engineering simulator to provide better neutron dynamics results than the point kinetics models used in previous nuclear power plant simulators. Because a nuclear power plant simulator must run in real time, the cross-sections used by the 3-D diffusion module must be calculated very efficiently. Normally, each cross-section in a simulator is calculated as a polynomial function of several variables of interest, whose expression is obtained by multivariate regression over a large scattered database generated by previous calculations. Since the polynomial is explicit and prepared in advance, the cross-sections can be computed quickly enough for a running simulator and achieve acceptable accuracy, especially in LWR simulations. However, some of the variables of interest in an HTGR span wide ranges, and their relationships are nonlinear and very complex, so it is very hard for a polynomial to achieve full-range accuracy. In this paper, a cross-section generating method for the HTGR simulator is proposed, based on machine learning methods, in particular deep neural networks and tree regression. The method first uses deep neural networks to capture the nonlinear relationships between the variables and then uses tree regression to achieve accurate cross-section results over the full range; the parameters of the deep neural networks and the tree regression are learned automatically from the scattered database generated by VSOP. Numerical tests show that the proposed method produces more accurate cross-sections, with a calculation time acceptable for the simulator.
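The abstract does not give implementation details; a minimal sketch of the two-stage idea (a neural network for the smooth nonlinear trend, a regression tree to correct the residual over the full range) can be written with scikit-learn. The input variables, the toy cross-section function, and all hyperparameters below are illustrative assumptions, not the authors' model:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy stand-in for a scattered cross-section database; real inputs might be
# fuel temperature, moderator temperature, xenon concentration, etc.
X = rng.uniform(0.0, 1.0, size=(2000, 3))
y = np.exp(-2 * X[:, 0]) + 0.5 * np.sin(4 * X[:, 1]) * X[:, 2]  # fake cross-section

# Stage 1: a small neural network captures the nonlinear variable relationships.
nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
nn.fit(X, y)

# Stage 2: a regression tree corrects the residual, improving full-range accuracy.
tree = DecisionTreeRegressor(max_depth=8, random_state=0)
tree.fit(X, y - nn.predict(X))

def predict_xs(x):
    """Fast two-stage cross-section lookup, explicit enough for real-time use."""
    x = np.atleast_2d(x)
    return nn.predict(x) + tree.predict(x)

err = np.abs(predict_xs(X) - y).max()
print(f"max training error: {err:.4f}")
```

Both fitted models evaluate in milliseconds, which is the property a real-time simulator needs; the accuracy-critical choices (network width, tree depth) would in practice be tuned against the VSOP database.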


2019 ◽  
Vol 40 (Supplement_1) ◽  
Author(s):  
M Kolossvary ◽  
J Karady ◽  
Y Kikuchi ◽  
A Ivanov ◽  
C L Schlett ◽  
...  

Abstract Background Currently used coronary CT angiography (CTA) plaque classification and histogram-based methods have limited accuracy for identifying advanced atherosclerotic lesions. Radiomics-based machine learning (ML) could provide a more robust tool to identify high-risk plaques. Purpose Our objective was to compare the diagnostic performance of radiomics-based ML against histogram-based methods and visual assessment of ex-vivo coronary CTA cross-sections for identifying advanced atherosclerotic lesions as defined by histology. Methods Overall, 21 coronaries of seven hearts were imaged ex vivo with coronary CTA. From 95 coronary plaques, 611 histological cross-sections were obtained and classified based on the modified American Heart Association scheme. Histology cross-sections were considered advanced atherosclerotic lesions if early fibroatheroma, late fibroatheroma, or thin-cap atheroma was present. Corresponding coronary CTA cross-sections were co-registered and classified into homogeneous, heterogeneous, or napkin-ring sign plaques based on plaque attenuation pattern. The area of low attenuation (<30 HU) and the average CT number were quantified. In total, 1919 radiomic parameters describing the spatial complexity and heterogeneity of the lesions were calculated for each coronary CTA cross-section. Eight different radiomics-based ML models were trained on randomly selected cross-sections (training set: 75% of the cross-sections) to identify advanced atherosclerotic lesions. Plaque attenuation pattern, the histogram-based methods, and the best ML model were compared on the remaining 25% of the data (test set) using the area under the receiver operating characteristic curve (AUC), with histology as the reference. Results After excluding sections with heavy calcium (n=32) and no visible atherosclerotic plaque on CTA (n=134), we analyzed 445 cross-sections. Based on visual assessment, 46.5% of the cross-sections were homogeneous (207/445), 44.9% heterogeneous (200/445), and 8.6% showed the napkin-ring sign (38/445). A radiomics-based ML model incorporating 13 parameters significantly outperformed visual assessment, area of low attenuation, and average CT number in identifying advanced lesions (AUC: 0.73 vs. 0.65 vs. 0.55 vs. 0.53, respectively; p<0.05 for all). Conclusions Radiomics-based ML analysis may improve the discriminatory power of CTA to identify high-risk atherosclerotic lesions.
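The evaluation protocol (75%/25% split, AUC comparison of a multi-feature ML model against a single histogram feature) can be sketched as follows. The data, the choice of classifier, and the feature construction are synthetic illustrations, not the study's actual model or results:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: 445 cross-sections, 13 selected radiomic features,
# one histogram feature (area of low attenuation), and a binary histology label.
n = 445
radiomics = rng.normal(size=(n, 13))
label = (radiomics[:, 0] + 0.5 * radiomics[:, 1] + rng.normal(size=n) > 0).astype(int)
low_atten_area = radiomics[:, 0] + rng.normal(scale=2.0, size=n)  # weaker single feature

# 75% training / 25% test split, as in the study design.
tr, te = train_test_split(np.arange(n), test_size=0.25, random_state=0, stratify=label)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(radiomics[tr], label[tr])

auc_ml = roc_auc_score(label[te], clf.predict_proba(radiomics[te])[:, 1])
auc_hist = roc_auc_score(label[te], low_atten_area[te])
print(f"radiomics ML AUC={auc_ml:.2f}  histogram-feature AUC={auc_hist:.2f}")
```

The key methodological point survives the toy setup: all models are compared on the same held-out 25% against the histology labels, so the AUC difference reflects discriminatory power rather than training-set fit.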


2020 ◽  
Author(s):  
Bao-Xin Xue ◽  
Mario Barbatti ◽  
Pavlo O. Dral

We present a machine learning (ML) method to accelerate the nuclear ensemble approach (NEA) for computing absorption cross sections. ML-NEA calculates cross sections on vast ensembles of nuclear geometries to reduce the error caused by insufficient statistical sampling. The electronic properties (excitation energies and oscillator strengths) are calculated with a reference electronic structure method for only a relatively small number of points in the ensemble. Kernel ridge regression combined with the RE descriptor, as implemented in MLatom, is used to predict these properties for the remaining tens of thousands of points in the ensemble without incurring much additional computational cost. We demonstrate for two examples, benzene and a 9-dicyanomethylene derivative of acridine, that ML-NEA can produce statistically converged cross sections even for very challenging cases, and with as few as several hundred training points.
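The ML-NEA workflow can be sketched with scikit-learn's `KernelRidge` standing in for MLatom's implementation: train on a few hundred reference points, predict excitation energy and oscillator strength for the full ensemble, then Gaussian-broaden and average. The one-dimensional "geometry" coordinate, the model spectra, and the broadening width are all illustrative assumptions:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Toy ensemble: a 1-D coordinate standing in for the RE descriptor.
geoms = rng.normal(size=(5000, 1))
true_E = 4.5 + 0.3 * geoms[:, 0] ** 2      # fake excitation energy (eV)
true_f = 0.8 * np.exp(-geoms[:, 0] ** 2)   # fake oscillator strength

# Reference electronic-structure results are "computed" for only 500 points...
train = rng.choice(len(geoms), size=500, replace=False)
krr_E = KernelRidge(kernel="rbf", alpha=1e-6, gamma=1.0).fit(geoms[train], true_E[train])
krr_f = KernelRidge(kernel="rbf", alpha=1e-6, gamma=1.0).fit(geoms[train], true_f[train])

# ...and kernel ridge regression predicts them for the whole ensemble cheaply.
E, f = krr_E.predict(geoms), krr_f.predict(geoms)

# NEA cross-section: Gaussian-broadened stick spectrum averaged over the ensemble.
grid = np.linspace(3.5, 6.5, 300)
width = 0.05  # broadening (eV), an arbitrary illustrative value
sigma = (f[:, None] * np.exp(-((grid[None, :] - E[:, None]) ** 2) / (2 * width**2))).mean(axis=0)
print(f"peak absorption near {grid[np.argmax(sigma)]:.2f} eV")
```

The cost structure mirrors the paper's point: the expensive step scales with the 500 reference calculations, while the remaining thousands of ensemble points cost only kernel evaluations.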


2021 ◽  
Author(s):  
Guang An Ooi ◽  
Mehmet Burak Özakin ◽  
Tarek Mahmoud Mostafa ◽  
Hakan Bagci ◽  
Shehab Ahmed ◽  
...  

Abstract In the wake of today's industrial revolution, many advanced technologies and techniques have been developed to address the complex challenges of well integrity evaluation. One of the most prominent innovations is the integration of physics-based data science for robust downhole measurements. This paper introduces a promising breakthrough in electromagnetism-based corrosion imaging using physics-informed machine learning (PIML), tested and validated on the cross-sections of real metal casings/tubing with defects of various sizes, locations, and spacings. Unlike existing electromagnetism-based inspection tools, which measure only the circumferential average metal thickness, this research investigates artificial intelligence (AI)-assisted interpretation of a unique arrangement of electromagnetic (EM) sensors. This facilitates a novel solution for through-tubing corrosion imaging that enhances defect detection with pixel-level accuracy. The developed framework incorporates a finite-difference time-domain (FDTD) EM forward solver and an artificial neural network (ANN), namely a long short-term memory recurrent neural network (LSTM-RNN). The ANN is trained on results generated by the FDTD solver, which simulates sensor readings for different defect scenarios. Integrating the array of EM-sensor responses with the ANN enabled generalizable and accurate measurements of the metal loss percentage across various experimental defects, as well as precise prediction of the defects' aperture sizes, numbers, and locations with 360-degree coverage. Results were plotted as customized 2D heat maps for any desired cross-section of the test casings. Comparison of different techniques showed that the LSTM-RNN achieves higher precision and robustness than regular dense NNs, especially in the case of multiple defects. The LSTM-RNN was further validated on additional simulated and experimental data.
The results show reliable predictions even with limited training data. The model accurately predicted defects both larger and smaller than those in the training data, which were intentionally excluded to demonstrate generalizability. This highlights a major advance toward corrosion imaging behind tubing, and the technique paves the way for applying similar concepts to other sensors in multi-barrier imaging. Further work includes improving the sensor package and the ANNs by adding a third dimension to the imaging capabilities, producing 3D images of defects on casings.
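To make the LSTM-RNN mapping concrete, the forward pass of a single LSTM layer over a sequence of EM-sensor readings can be written in plain NumPy. Everything here is a hedged illustration: the weights are random (untrained), the sensor count, sequence length, and scalar metal-loss output are assumed dimensions, and the real model was trained on FDTD-simulated sensor readings:

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_forward(x_seq, W, U, b, n_h):
    """Single-layer LSTM forward pass; x_seq is (T, n_in), returns final hidden state."""
    h, c = np.zeros(n_h), np.zeros(n_h)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in x_seq:
        z = W @ x + U @ h + b                 # stacked pre-activations, shape (4*n_h,)
        i, f, g, o = np.split(z, 4)           # input, forget, cell, output gates
        c = sig(f) * c + sig(i) * np.tanh(g)  # update cell state
        h = sig(o) * np.tanh(c)               # update hidden state
    return h

n_in, n_h = 8, 16   # 8 EM sensors in the array (illustrative), 16 hidden units
T = 36              # readings around the circumference, e.g. 10-degree steps (assumed)
W = rng.normal(scale=0.1, size=(4 * n_h, n_in))
U = rng.normal(scale=0.1, size=(4 * n_h, n_h))
b = np.zeros(4 * n_h)
w_out = rng.normal(scale=0.1, size=n_h)

sensor_scan = rng.normal(size=(T, n_in))  # one azimuthal scan of sensor readings
metal_loss = w_out @ lstm_forward(sensor_scan, W, U, b, n_h)
print(f"predicted metal-loss score: {metal_loss:.3f}")
```

The recurrence is what gives the LSTM its edge over a dense network here: each azimuthal position is interpreted in the context of its neighbors, which matters when multiple defects produce overlapping sensor responses.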


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4291
Author(s):  
Homa Arab ◽  
Iman Ghaffari ◽  
Lydia Chioukh ◽  
Serioja Tatu ◽  
Steven Dufour

A target’s movements and radar cross sections are the key parameters to consider when designing a radar sensor for a given application. This paper shows the feasibility and effectiveness of using a 24 GHz radar with built-in low-noise microwave amplifiers for detecting an object. For this purpose, a supervised machine learning model, a support vector machine (SVM), is trained on the recorded data to classify targets into four categories based on their cross sections. The trained classifiers were used to classify objects at varying distances from the receiver. Three multiclass SVM strategies built from binary classifiers were compared: one-against-all classification, one-against-one classification, and a directed acyclic graph SVM. An accuracy of approximately 96.6% and an F1-score of 96.5% are achieved using the one-against-one SVM method with an RBF kernel. The proposed contactless radar, combined with an SVM algorithm, can detect and categorize a target in real time without a signal processing toolbox.
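The winning configuration (one-against-one SVM with an RBF kernel, four cross-section categories) maps directly onto scikit-learn's `SVC`, which uses the one-vs-one decomposition internally for multiclass problems. The feature construction below is a synthetic stand-in for the recorded radar data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)

# Synthetic stand-in for radar features (e.g., received-power and Doppler
# statistics) for four target classes of increasing radar cross-section.
n_per_class, n_feat = 200, 4
X = np.vstack([rng.normal(loc=c, scale=0.8, size=(n_per_class, n_feat)) for c in range(4)])
y = np.repeat(np.arange(4), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

# One-against-one multiclass SVM with an RBF kernel.
clf = SVC(kernel="rbf", C=10.0, gamma="scale", decision_function_shape="ovo")
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
acc = accuracy_score(y_te, pred)
f1 = f1_score(y_te, pred, average="macro")
print(f"accuracy={acc:.3f}  macro-F1={f1:.3f}")
```

One-against-one trains a binary SVM per class pair (six classifiers for four categories) and votes at prediction time; each binary problem is small, which helps keep inference fast enough for the real-time use case the paper targets.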


2020 ◽  
Vol 10 (17) ◽  
pp. 5820
Author(s):  
Pengfei Dong ◽  
Guochang Ye ◽  
Mehmet Kaya ◽  
Linxia Gu

In this work, we integrated the finite element (FE) method with machine learning (ML) to predict stent expansion in a calcified coronary artery. The stenting procedure was captured in a patient-specific artery model reconstructed from optical coherence tomography images. Following the FE simulation, eight geometrical features in each of 120 cross sections of the pre-stenting artery model, together with the corresponding post-stenting lumen areas, were extracted for training and testing the ML models. A linear regression model and a support vector regression (SVR) model with three different kernels (linear, polynomial, and radial basis function) were adopted in this work. Two subgroups of the eight features, i.e., stretch features and calcification features, were further assessed for their predictive capacity. The influence of neighboring cross sections on prediction accuracy was also investigated by averaging each feature over eight neighboring cross sections. Results showed that the SVR models provided better predictions than the linear regression model in terms of bias. In addition, including the mechanistically motivated stretch features provided better predictions than the calcification features alone. However, there were no statistically significant differences between neighboring cross sections and individual ones in terms of prediction bias and error range. The simulation-driven machine learning framework in this work could enhance the mechanistic understanding of stenting in calcified coronary artery lesions and pave the way toward precise prediction of stent expansion.
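The model comparison (linear regression versus SVR with three kernels, plus the neighbor-averaged feature variant) can be sketched with scikit-learn. The feature and target definitions below are synthetic assumptions; only the experimental design follows the paper:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for 120 cross-sections: 8 geometric features each
# (e.g., stretch and calcification measures) and a post-stenting lumen area.
n, p = 120, 8
X = rng.normal(size=(n, p))
area = 6.0 + X[:, :4].sum(axis=1) + 0.3 * np.tanh(X[:, 4]) + rng.normal(scale=0.2, size=n)

def neighbor_average(X, k=8):
    """Average each feature over a sliding window of neighboring cross-sections."""
    kernel = np.ones(k) / k
    return np.column_stack([np.convolve(X[:, j], kernel, mode="same") for j in range(p)])

idx = rng.permutation(n)
tr, te = idx[:90], idx[90:]  # hold out a quarter of the cross-sections

models = {
    "linear": LinearRegression(),
    "SVR-linear": SVR(kernel="linear", C=10.0),
    "SVR-poly": SVR(kernel="poly", degree=2, C=10.0),
    "SVR-rbf": SVR(kernel="rbf", C=10.0, gamma="scale"),
}
biases = {}
for name, model in models.items():
    pred = model.fit(X[tr], area[tr]).predict(X[te])
    biases[name] = float(np.mean(pred - area[te]))  # mean prediction bias
    print(f"{name:10s} mean prediction bias = {biases[name]:+.3f}")

# Variant with features averaged over 8 neighboring cross-sections:
X_avg = neighbor_average(X)
pred_avg = SVR(kernel="rbf", C=10.0, gamma="scale").fit(X_avg[tr], area[tr]).predict(X_avg[te])
print(f"neighbor-averaged SVR-rbf bias = {np.mean(pred_avg - area[te]):+.3f}")
```

Comparing mean prediction bias on the held-out cross-sections mirrors the paper's evaluation; the window handling in `neighbor_average` (symmetric `mode="same"` convolution) is one plausible reading of "averaging over eight neighboring cross sections".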

