Well Interference Detection from Long-Term Pressure Data Using Machine Learning and Multiresolution Analysis

2021 ◽  
Author(s):  
Dante Orta Aleman ◽  
Roland Horne

Abstract: Knowledge of reservoir heterogeneity and connectivity is fundamental for reservoir management. Methods such as interference tests or tracers have been developed to obtain that knowledge from dynamic data. However, detecting well connectivity using interference tests requires long periods of time with a stable reservoir pressure and constant flow-rate conditions. Conversely, the long duration and high frequency of well production data make them highly valuable for detecting connectivity, provided that noise, abrupt changes in flow rate, and missing data are dealt with. In this work, a methodology to detect interference from long-term pressure and flow-rate data was developed using multiresolution analysis in combination with machine learning algorithms. The methodology offers high accuracy and robustness to noise while requiring little to no data preprocessing. It builds on previous work using the Maximal Overlap Discrete Wavelet Transform (MODWT) to analyze long-term pressure data. The new approach uses the ability of the MODWT to capture, synthesize, and discriminate the relevant reservoir response of each individual well at different time scales while still honoring the relevant flow physics. After first applying the MODWT to the flow-rate history, a machine learning algorithm was used to estimate the pressure response of each well as it would be in isolation. Interference can then be detected by comparing the output of the machine learning model with the unprocessed pressure data. A set of machine learning and deep learning algorithms was tested, including Kernel Ridge Regression, Lasso Regression, and Recurrent Neural Networks. The machine learning models were able to detect interference at different distances even in the presence of high noise and missing data. The results were validated by comparing the machine learning output with the theoretical pressure response of wells in isolation. Additionally, it was shown that applying the MODWT multiresolution analysis to pressure and flow-rate data creates a set of "virtual wells" that still follow the diffusion equation and allow for a simplified analysis. By using production data, the proposed methodology allows interference effects to be detected without the need for a stabilized pressure field. This enables a significant cost reduction with no operational overhead, because the detection does not require well shut-ins and can be performed regardless of operation opportunities or project objectives. Additionally, the long-term nature of production data makes it possible to detect connectivity at long distances, even in the presence of noise and incomplete data.
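As an illustration of this kind of pipeline, the sketch below decomposes a flow-rate history into multiresolution series, fits a Kernel Ridge model from those features to the measured pressure, and flags samples where the measured pressure departs from the single-well prediction. This is a minimal sketch under stated assumptions, not the authors' implementation: PyWavelets' stationary wavelet transform (pywt.swt) stands in for the MODWT, the synthetic data and the names multires_features and detect_interference are hypothetical, and the z-score threshold is an arbitrary choice.

```python
# Minimal sketch: multiresolution features of flow rate -> Kernel Ridge pressure
# model -> residual-based interference flag. All names and data are illustrative.
import numpy as np
import pywt
from sklearn.kernel_ridge import KernelRidge

def multires_features(signal, wavelet="haar", level=4):
    """Stationary wavelet transform used here as a stand-in for the MODWT."""
    n = len(signal)
    pad = (-n) % (2 ** level)                  # pywt.swt needs len divisible by 2**level
    padded = np.pad(signal, (0, pad), mode="edge")
    coeffs = pywt.swt(padded, wavelet, level=level, trim_approx=True, norm=True)
    # coeffs = [approx, detail_level, ..., detail_1]; stack as feature columns
    return np.column_stack([c[:n] for c in coeffs])

def detect_interference(flow_rate, pressure, threshold=2.0):
    X = multires_features(flow_rate)
    model = KernelRidge(kernel="rbf", alpha=1.0).fit(X, pressure)
    p_isolated = model.predict(X)              # pressure as if the well were isolated
    residual = pressure - p_isolated
    z = (residual - residual.mean()) / residual.std()
    return np.abs(z) > threshold               # True where interference is suspected

# Toy usage with synthetic data
rng = np.random.default_rng(0)
q = np.cumsum(rng.normal(size=2000))           # synthetic flow-rate history
p = -0.5 * q + rng.normal(scale=0.1, size=2000)
p[1200:] -= 3.0                                # pressure drop mimicking interference
flags = detect_interference(q, p)
print("fraction of samples flagged:", flags.mean())
```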

Author(s):  
Leo Peach ◽  
N. Carthwright ◽  
D. Strauss

Wave monitoring is a time-consuming and costly endeavour which, despite best efforts, can be subject to occasional periods of missing data. This paper investigates the application of machine learning to create "virtual" wave height (Hs), period (Tz) and direction (Dp) parameters. Two supervised machine learning algorithms were applied using long-term wave parameter datasets sourced from four wave monitoring stations in relatively close geographic proximity. The machine learning algorithms demonstrated reasonable performance for some parameters through testing, with Hs performing best overall, followed closely by Tz; Dp was the most challenging to predict and performed the poorest. The creation of such "virtual" wave monitoring stations could be used to hindcast wave conditions, fill observation gaps, or extend data beyond that collected by the physical instrument.
Recorded presentation from the vICCE (YouTube link): https://youtu.be/GM3EG2_SQa0
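A minimal sketch of the "virtual station" idea follows: a regressor is trained to infer a wave parameter at a target station from parameters observed at nearby stations. The station names, the synthetic records, and the choice of a random forest are assumptions for illustration; the paper tests two supervised learners that are not named in this abstract.

```python
# Sketch: predict Hs at a "virtual" station from three neighbouring stations.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for long-term records at neighbouring stations
rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "Hs_A": rng.gamma(2.0, 0.5, n),
    "Hs_B": rng.gamma(2.0, 0.5, n),
    "Hs_C": rng.gamma(2.0, 0.5, n),
})
df["Hs_target"] = 0.4 * df.Hs_A + 0.3 * df.Hs_B + 0.3 * df.Hs_C + rng.normal(0, 0.05, n)

X, y = df[["Hs_A", "Hs_B", "Hs_C"]], df["Hs_target"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out data:", r2_score(y_test, model.predict(X_test)))
# The fitted model can then fill observation gaps or hindcast Hs at the virtual station.
```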


2021 ◽  
Vol 81 ◽  
pp. 102047
Author(s):  
Abouzar Rajabi Behesht Abad ◽  
Pezhman Soltani Tehrani ◽  
Mohammad Naveshki ◽  
Hamzeh Ghorbani ◽  
Nima Mohamadian ◽  
...  

2020 ◽  
Vol 10 (14) ◽  
pp. 5020
Author(s):  
Youngdoo Son ◽  
Wonjoon Kim

Estimating stature is essential in the process of personal identification. Because it is difficult to find human remains intact at crime scenes and disaster sites, methods are needed for estimating stature from different body parts. For instance, the upper and lower limbs may vary depending on ancestry and sex, so it is of great importance to design an adequate methodology for incorporating these factors into stature estimation. In addition, it is necessary to use machine learning rather than simple linear regression to improve the accuracy of stature estimation. In this study, the accuracy of statures estimated from anthropometric data was compared using three imputation methods. In addition, by comparing the accuracy of linear and nonlinear classification methods, the best method for estimating stature from anthropometric data was derived. For both sexes, multiple imputation was superior when the missing-data ratio was low, and mean imputation performed well when the ratio was high. The support vector machine recorded the highest accuracy at all missing-data ratios. The findings of this study identify appropriate imputation methods for estimating stature with missing anthropometric data. In particular, machine learning algorithms can be used effectively for estimating stature in humans.
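The sketch below illustrates the comparison the abstract describes: impute missing anthropometric measurements with a simple mean imputer versus an iterative (multiple-imputation-style) imputer, then estimate stature with a support vector machine. The feature names, the synthetic limb measurements, and the 20% missingness are assumptions for illustration only.

```python
# Sketch: mean vs. iterative imputation feeding an SVM stature estimator.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, IterativeImputer
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 800
limbs = rng.normal([45, 40, 25], [3, 3, 2], size=(n, 3))   # e.g. femur, tibia, humerus (cm)
stature = 60 + 1.2 * limbs[:, 0] + 1.0 * limbs[:, 1] + 0.8 * limbs[:, 2] + rng.normal(0, 2, n)

X = limbs.copy()
mask = rng.random(X.shape) < 0.2                           # 20% missing-data ratio
X[mask] = np.nan

for name, imputer in [("mean", SimpleImputer(strategy="mean")),
                      ("iterative", IterativeImputer(random_state=0))]:
    model = make_pipeline(imputer, StandardScaler(), SVR(kernel="rbf", C=10.0))
    score = cross_val_score(model, X, stature, cv=5, scoring="neg_mean_absolute_error")
    print(f"{name} imputation: MAE = {-score.mean():.2f} cm")
```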


2020 ◽  
Vol 12 (15) ◽  
pp. 5972
Author(s):  
Nicholas Fiorentini ◽  
Massimo Losa

Screening procedures in road blackspot detection are essential tools for road authorities to quickly gather insights on the safety level of each road site they manage. This paper suggests a road blackspot screening procedure for two-lane rural roads, relying on five different machine learning algorithms (MLAs) and real long-term traffic data. The network analyzed is the one managed by the Tuscany Region Road Administration, mainly composed of two-lane rural roads. A total of 995 road sites where at least one accident occurred in 2012–2016 were labeled as "Accident Case". Accordingly, an equal number of sites where no accident occurred in the same period were randomly selected and labeled as "Non-Accident Case". Five different MLAs, namely Logistic Regression, Classification and Regression Tree, Random Forest, K-Nearest Neighbor, and Naïve Bayes, were trained and validated. The output response of the MLAs, i.e., crash occurrence susceptibility, is a binary categorical variable. Therefore, these algorithms aim to classify a road site as likely safe ("Non-Accident Case") or potentially susceptible to an accident occurrence ("Accident Case") over a five-year period. Finally, the algorithms were compared using a set of performance metrics, including precision, recall, F1-score, overall accuracy, the confusion matrix, and the Area Under the Receiver Operating Characteristic curve. Outcomes show that the Random Forest outperforms the other MLAs with an overall accuracy of 73.53%. Furthermore, none of the MLAs show overfitting issues. Road authorities could use MLAs to draw up a priority list of on-site inspections and maintenance interventions.
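A compact sketch of this screening setup is shown below: a balanced binary dataset of road-site features labelled Accident / Non-Accident, with the five algorithm families the paper names compared on held-out accuracy and F1-score. The features are synthetic placeholders; only the class balance (995 + 995 sites) and the algorithm list mirror the abstract.

```python
# Sketch: compare the five MLAs on a balanced Accident / Non-Accident dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

# 1990 sites (995 accident + 995 non-accident), a handful of placeholder features
X, y = make_classification(n_samples=1990, n_features=8, n_informative=5,
                           weights=[0.5, 0.5], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    stratify=y, random_state=0)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "CART": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "K-Nearest Neighbor": KNeighborsClassifier(n_neighbors=7),
    "Naive Bayes": GaussianNB(),
}
for name, clf in models.items():
    y_pred = clf.fit(X_train, y_train).predict(X_test)
    print(f"{name:20s} acc={accuracy_score(y_test, y_pred):.3f} "
          f"F1={f1_score(y_test, y_pred):.3f}")
```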


Cancers ◽  
2019 ◽  
Vol 11 (5) ◽  
pp. 606 ◽  
Author(s):  
Pablo Sala Elarre ◽  
Esther Oyaga-Iriarte ◽  
Kenneth H. Yu ◽  
Vicky Baudin ◽  
Leire Arbea Moreno ◽  
...  

Background: Although surgical resection is the only potentially curative treatment for pancreatic cancer (PC), long-term outcomes of this treatment remain poor. The aim of this study is to describe the feasibility of a neoadjuvant treatment with induction polychemotherapy (IPCT) followed by chemoradiation (CRT) in resectable PC, and to develop a machine-learning algorithm to predict the risk of relapse. Methods: Forty patients with resectable PC treated in our institution with IPCT (based on mFOLFOXIRI, GEMOX or GEMOXEL) followed by CRT (50 Gy and concurrent Capecitabine) were retrospectively analyzed. Additionally, clinical, pathological and analytical data were collected in order to build a 2-year relapse-risk predictive population model using machine-learning techniques. Results: An R0 resection was achieved in 90% of the patients. After a median follow-up of 33.5 months, median progression-free survival (PFS) was 18 months and median overall survival (OS) was 39 months. The 3- and 5-year actuarial PFS rates were 43.8% and 32.3%, respectively. The 3- and 5-year actuarial OS rates were 51.5% and 34.8%, respectively. Grade 3-4 IPCT toxicity was reported in 40% of patients, and grade 3 CRT toxicity in 29.7%. Considering the use of granulocyte colony-stimulating factors, the number of resected lymph nodes, the presence of perineural invasion, and the surgical margin status, a logistic regression algorithm predicted the individual 2-year relapse risk with an accuracy of 0.71 (95% confidence interval [CI] 0.56–0.84, p = 0.005). The model-predicted outcome matched 64% of the observed outcomes in an external dataset. Conclusion: An intensified multimodal neoadjuvant approach (IPCT + CRT) in resectable PC is feasible, with an encouraging long-term outcome. Machine-learning algorithms might be a useful tool to predict individual risk of relapse. A small sample size and therapy heterogeneity remain potential limitations.
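The sketch below shows the shape of such a relapse-risk model: a logistic regression built from the four predictors the abstract lists, evaluated with cross-validated accuracy. The synthetic 40-patient cohort, the feature names, and the coefficients used to generate the outcome are assumptions for illustration; only the feature set and the choice of logistic regression follow the abstract.

```python
# Sketch: 2-year relapse-risk logistic regression from the four listed predictors.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 40  # cohort size reported in the abstract
df = pd.DataFrame({
    "gcsf_used":           rng.integers(0, 2, n),   # granulocyte colony-stimulating factors
    "resected_nodes":      rng.integers(5, 35, n),
    "perineural_invasion": rng.integers(0, 2, n),
    "positive_margin":     rng.integers(0, 2, n),
})
# Synthetic outcome loosely tied to the predictors, for demonstration only
logit = (-1.0 + 0.8 * df.perineural_invasion + 0.9 * df.positive_margin
         - 0.6 * df.gcsf_used + 0.03 * df.resected_nodes)
relapse_2yr = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression()
acc = cross_val_score(model, df, relapse_2yr, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2f} ± {acc.std():.2f}")
```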


Materials ◽  
2020 ◽  
Vol 13 (18) ◽  
pp. 4133
Author(s):  
Seungbum Koo ◽  
Jongkwon Choi ◽  
Changhyuk Kim

Soundproofing materials are widely used within structural components of multi-dwelling residential buildings to alleviate neighborhood noise problems. One of the critical mechanical properties of soundproofing materials, needed to ensure appropriate structural and soundproofing performance, is the long-term compressive deformation under service loading conditions. The test method in the current test specifications only evaluates resilient materials for a limited period (90 days) and then extrapolates the test results using a polynomial function to predict the long-term compressive deformation. However, the extrapolation is applied uniformly to all materials without considering the level of load; thus, the calculated deformation may not accurately represent the actual compressive deformation of the materials. In this regard, long-term compressive deformation tests were performed on selected soundproofing resilient materials (i.e., polystyrene, polyethylene, and ethylene-vinyl acetate). Four load levels were chosen, compressive loads were applied continuously for 350 to 500 days, and the deformations of the test specimens were periodically monitored. Three machine learning algorithms were then used to predict long-term compressive deformations. The predictions from the machine learning models and the ISO 20392 method are compared with the experimental test results, and their accuracy is discussed.
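As a rough illustration of the comparison, the sketch below fits only the first 90 days of a creep-like deformation curve and extrapolates to 500 days, contrasting a polynomial fit (in the spirit of the ISO 20392 extrapolation) with one possible machine learning regressor. The synthetic creep law, the log-time feature, and the choice of Kernel Ridge are assumptions; the paper does not name its three algorithms in this abstract.

```python
# Sketch: 90-day fit, 500-day extrapolation; polynomial vs. one ML regressor.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(4)
t = np.arange(1, 501, dtype=float)                      # days under sustained load
true_def = 0.8 * np.log1p(t) + 0.002 * t                # synthetic long-term deformation (mm)
measured = true_def + rng.normal(0, 0.02, t.size)

train = t <= 90                                         # only the 90-day window is "known"

# Polynomial extrapolation fitted on the 90-day window
poly = np.polynomial.Polynomial.fit(t[train], measured[train], deg=2)
poly_pred = poly(t)

# ML regressor on a log-time feature, fitted on the same window
X = np.log1p(t).reshape(-1, 1)
krr = KernelRidge(kernel="rbf", alpha=0.1).fit(X[train], measured[train])
ml_pred = krr.predict(X)

for name, pred in [("polynomial", poly_pred), ("kernel ridge", ml_pred)]:
    err = np.abs(pred[~train] - true_def[~train]).mean()
    print(f"{name:12s} mean abs error beyond 90 days: {err:.3f} mm")
```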

