Artificial Intelligence Method of Flow Unit Division Based on Waveform Clustering: A Case Study on Zhetybay Oil Field, South Mangyshlak Basin, Kazakhstan

2021 ◽  
Author(s):  
Libing Fu ◽  
Jun Ni ◽  
Yuming Liu ◽  
Xuanran Li ◽  
Anzhu Xu

Abstract The Zhetybay Field, located in the South Mangyshlak Sub-basin onshore western Kazakhstan, is a delta-front sedimentary reservoir. It was discovered in 1961 and first produced by waterflooding in 1967. After more than 50 years of waterflood development, the reservoirs are generally in the mid-to-high waterflooded stage, and the oil-water distribution has become complicated and chaotic. With a high-density well pattern and rich logging information from more than 2,000 wells, the field contains far too much logging data to process and interpret by hand. A waveform clustering method is used to divide the sedimentary rhythms on the logging curves. The sedimentary microfacies form a regressive sequence with four types of composite sand bodies: the composite estuary bar and distributary channel combination, the estuary bar connected to the dam edge and distributary channel combination, the isolated estuary bar and distributary channel combination, and the isolated beach sand. To distinguish the flow units, a support vector machine (SVM), an artificial intelligence algorithm, is trained to learn the non-linear relationship between flow unit categories and reservoir parameters, using the flow zone index and reservoir quality index, the permeability logarithm and porosity summarized within each sedimentary facies, and analysis of production performance. The flow units in the Zhetybay oilfield were classified into four types, A, B1, B2, and B3, of which the latter three are the main types. Type A is distributed in the channels, type B1 in the main body of the dam, type B2 mainly in the main body of the dam and partly along the dam edge, and type B3 in the dam edge, sheet sand, and beach sand. The results show that the accuracy of flow unit division by the support vector machine reaches 91.1%, which clarifies the distribution pattern of flow units for oilfield development. This study is a significant aid for locating new wells and optimizing workovers to increase recoverable reserves, and it provides effective guidance for efficient waterflooding in this oilfield.
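A minimal sketch of the classification step only, assuming a labeled table of log-derived features (porosity, log-permeability, reservoir quality index, flow zone index) per sample; the feature names, labels, and data below are placeholders rather than the authors' actual Zhetybay dataset, and scikit-learn is used purely for illustration.

```python
# Hedged sketch: RBF-kernel SVM classifying flow units A, B1, B2, B3 from
# hypothetical log-derived features (porosity, log10 k, RQI, FZI).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                      # placeholder feature matrix
y = rng.choice(["A", "B1", "B2", "B3"], size=500)  # placeholder flow-unit labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# An RBF kernel captures the non-linear relation between log responses and
# flow-unit class; features are standardized before the SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)
print("flow-unit accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```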

2002 ◽  
Vol 5 (02) ◽  
pp. 135-145 ◽  
Author(s):  
G.R. King ◽  
W. David ◽  
T. Tokar ◽  
W. Pape ◽  
S.K. Newton ◽  
...  

Summary This paper discusses the integration of dynamic reservoir data at the flow-unit scale into the reservoir management and reservoir simulation efforts of the Takula field. The Takula field is currently the most prolific oil field in the Republic of Angola. Introduction The Takula field is the largest producing oil field in the Republic of Angola in terms of cumulative oil production. It is situated in the Block 0 Concession of the Angolan province of Cabinda. It is located approximately 25 miles offshore in water depths ranging from 170 to 215 ft. The field consists of seven stacked, Cretaceous reservoirs. The principal oil-bearing horizon is the Upper Vermelha reservoir. This paper discusses the data acquisition and integration for this reservoir only. The reservoir was discovered in January 1980 with Well 57-02X. Primary production from the reservoir began in December 1982. The reservoir was placed on a peripheral waterflood in December 1990. Currently, the Upper Vermelha reservoir accounts for approximately 75% of the production from the field. Sound management of mature waterfloods has been identified as a key to maximizing the ultimate recovery and delivering the highest value from the Block 0 Asset.1 Therefore, the objective of the simulation effort was to develop a tool for strategic and day-to-day reservoir management with the intent of managing and optimizing production on a flow-unit basis. Typical day-to-day management activities include designing workovers, identifying new well locations, optimizing injection well profiles, and optimizing sweep efficiencies. To perform these activities, decisions must be made at the scale of the individual flow units. In general, fine-grid geostatistical models are developed from static data, such as openhole log data and core data. Recent developments in reservoir characterization have allowed for the incorporation of some dynamic data, such as pressure-transient data and 4D seismic data, into the geostatistical models. Unfortunately, pressure-transient data are acquired at a test-interval scale (there are typically 3 to 4 test intervals per well, depending on the ability to isolate different zones mechanically in the wellbore), while seismic data are acquired at the reservoir scale. The reservoir surveillance program in the Takula field routinely acquires data at the flow-unit scale. These data include openhole log and wireline pressure data from newly drilled wells and cased-hole log and production log (PLT) data from producing/injecting wells. Because of the time-lapse nature of cased-hole log and PLT data, they represent dynamic reservoir data at the flow-unit scale. To achieve the objectives of the modeling effort and optimize production on a flow-unit basis, these dynamic data must be incorporated into the simulation model at the appropriate scale. When these data are incorporated into a simulation model, it is typically done during the history match. There are, however, instances when these data are incorporated during other phases of the study. The objective of this paper, therefore, is to discuss the methods used to integrate the dynamic reservoir data acquired at the flow-unit scale into the Upper Vermelha reservoir simulation model. Reservoir Geology The geology of the Takula field is described in detail in Ref. 2. The aspects of the reservoir geology that are pertinent to this paper are elaborated in this section. Reservoir Stratigraphy. The Takula field consists of seven stacked reservoirs.
The principal oil-bearing horizon is the Upper Vermelha reservoir. This reservoir contains an undersaturated, 33°API crude oil. For reservoir management purposes, 36 marker surfaces have been identified in the reservoir. Flow units were then identified as reservoir units separated by areally pervasive vertical flow barriers (nonreservoir rock). This resulted in the identification of 20 flow units. The thickness of these flow units ranges from 5 to 15 ft. Reservoir Structure. The reservoir structure is a faulted anticline that is interpreted to be the result of regional salt tectonics. Closure to the reservoir is provided by faults on the southwestern and northern flanks of the structure and by an oil/water contact (OWC) on the eastern, western, and southern flanks of the structure. A structure map of the reservoir is presented in Fig. 1. Data Acquisition in the Takula Field Openhole Log Program. Most original development wells were logged with a basic log suite of resistivity/gamma ray and density/neutron logs. In addition, the vertical wells drilled from each well jacket were logged with a sonic log and, occasionally, velocity surveys. All wells drilled after 1993 were logged with long-spacing sonic and spectral gamma ray logs. In many wells drilled after December 1997, carbon/oxygen (C/O) logs have been run in open hole to distinguish between formation and injected water.3 A few recent wells have been logged with nuclear magnetic resonance (NMR) logs. The NMR log data, when integrated with data from other logs, have been of value in distinguishing free water from bound water, formation water from injection water, and reservoir rock from nonreservoir rock.


2013 ◽  
Vol 690-693 ◽  
pp. 3190-3193
Author(s):  
Yong Chao Xue ◽  
Lin Song Cheng

A geological model controlled by sedimentary microfacies cannot accurately reflect the actual seepage characteristics of a reservoir, and how to apply static flow units to dynamic reservoir numerical simulation is at the leading edge of the petroleum industry. A method of reservoir geological modeling controlled by flow units is proposed. First, the 3D flow unit model is built; second, the 3D porosity and permeability models are established under the control of the flow unit model; third, the 3D fluid saturation model is calculated with the Leverett J-function from the porosity and permeability models. Different relative permeability curves are selected for different flow units during history matching, which realizes the combination of static (development geology) and dynamic (reservoir engineering) data. Oilfield examples show that the speed and precision of history matching can be significantly improved by this method. Flow units were proposed by Hearn in 1984. Studies of flow units not only deepen the understanding of reservoir geology, make reservoir evaluation more reasonable, and reduce the impact of heterogeneity on oil development, but are also of great significance for improving oilfield development performance, especially for tertiary oil recovery. Previous studies mainly focused on defining the concept of flow units and methods for dividing them; few studies address how to apply the results of static flow unit studies to dynamic reservoir engineering and reservoir simulation. Our research aims at connecting static flow units and dynamic reservoir simulation closely so as to achieve this "dynamic and static combination".
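A minimal sketch of the third step, assuming a standard oilfield-units Leverett J-function and a hypothetical empirical fit J = a·(Sw*)^(-b); the constants, fluid properties, and cell values are illustrative, not taken from the paper.

```python
# Hedged sketch: water saturation for a grid cell from porosity and permeability
# via a Leverett J-function, as in the workflow's third step. The fit constants
# a, b, irreducible saturation, and fluid properties are hypothetical.
import numpy as np

def leverett_j(pc_psi, k_md, phi, sigma=30.0, cos_theta=1.0):
    """J = 0.2166 * Pc / (sigma * cos(theta)) * sqrt(k / phi), oilfield units."""
    return 0.2166 * pc_psi / (sigma * cos_theta) * np.sqrt(k_md / phi)

def sw_from_j(j, a=0.35, b=0.55, sw_irr=0.15):
    """Invert an assumed empirical fit J = a * (Sw*)^(-b), Sw* normalized."""
    sw_star = np.clip((j / a) ** (-1.0 / b), 0.0, 1.0)
    return sw_irr + (1.0 - sw_irr) * sw_star

# Capillary pressure from height above the free-water level:
# Pc [psi] = 0.433 * (rho_w - rho_o in g/cc) * h [ft]
h_ft = np.array([10.0, 50.0, 150.0])   # height above FWL
pc = 0.433 * (1.05 - 0.80) * h_ft      # hypothetical water/oil densities
k_md, phi = 120.0, 0.22                # cell permeability (mD) and porosity
print(sw_from_j(leverett_j(pc, k_md, phi)))
```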


2011 ◽  
Vol 130-134 ◽  
pp. 2047-2050 ◽  
Author(s):  
Hong Chun Qu ◽  
Xie Bin Ding

The support vector machine (SVM) is an artificial intelligence methodology based on the structural risk minimization principle; it generalizes better than traditional machine learning approaches and shows powerful ability to learn from limited samples. To address the scarcity of engine fault samples, FLS-SVM, an improved SVM, is applied. Ten common engine faults are trained and recognized in the paper. The simulated data are generated from the PW4000-94 engine influence coefficient matrix at cruise, and the results show that the diagnostic accuracy of FLS-SVM is better than that of LS-SVM.
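For context, a minimal sketch of the baseline LS-SVM binary classifier that FLS-SVM extends with fuzzy membership weights; the toy "fault signature" data, kernel, and hyperparameters are placeholders, not the PW4000-94 setup used in the paper.

```python
# Hedged sketch: least-squares SVM (LS-SVM) binary classification by solving
# the standard linear system; data and parameters are hypothetical.
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_train(X, y, C=10.0, gamma=0.5):
    """Solve [[0, y^T], [y, Omega + I/C]] [b, alpha]^T = [0, 1]^T."""
    n = len(y)
    Omega = (y[:, None] * y[None, :]) * rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / C
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]  # alpha, bias b

def lssvm_predict(X_train, y_train, alpha, b, X_new, gamma=0.5):
    K = rbf_kernel(X_new, X_train, gamma)
    return np.sign(K @ (alpha * y_train) + b)

# Toy two-class data standing in for engine fault deviation vectors.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.5, (40, 3)), rng.normal(1, 0.5, (40, 3))])
y = np.concatenate([-np.ones(40), np.ones(40)])
alpha, b = lssvm_train(X, y)
print("training accuracy:", (lssvm_predict(X, y, alpha, b, X) == y).mean())
```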


2021 ◽  
Vol 15 (6) ◽  
pp. 1812-1819
Author(s):  
Azita Yazdani ◽  
Ramin Ravangard ◽  
Roxana Sharifian

The new coronavirus has been spreading since the beginning of 2020, and many efforts have been made to develop vaccines to help patients recover. It is now clear that the world needs rapid, non-clinical approaches, such as data mining, enhanced intelligence, and other artificial intelligence techniques, to help curb the spread of COVID-19 worldwide. These approaches can reduce the burden on the health care system and provide the best possible way to diagnose and predict the COVID-19 epidemic. In this study, data mining models for early detection of COVID-19 were developed using an epidemiological dataset of patients and individuals suspected of having COVID-19 in Iran. C4.5, support vector machine, naive Bayes, logistic regression, random forest, and k-nearest neighbor algorithms were applied directly to the dataset in RapidMiner to develop the models. Given a patient's clinical signs, the resulting model assesses the risk of contracting COVID-19. Evaluation of the models shows that the support vector machine, with 93.41% accuracy, is the most effective of the developed models for diagnosing patients with COVID-19. Keywords: COVID-19, Data mining, Machine Learning, Artificial Intelligence, Classification
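A minimal sketch of that model comparison, using scikit-learn instead of RapidMiner; the clinical-sign features and labels are random placeholders, not the Iranian epidemiological dataset, and a CART decision tree stands in for C4.5.

```python
# Hedged sketch: cross-validated accuracy comparison of the six classifiers
# named above on placeholder clinical-sign data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier      # CART, standing in for C4.5
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(300, 12)).astype(float)  # binary clinical signs
y = rng.integers(0, 2, size=300)                       # positive/negative label

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "naive Bayes": GaussianNB(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=0),
    "k-NN": KNeighborsClassifier(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean accuracy = {acc:.3f}")
```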


2021 ◽  
Author(s):  
Nagaraju Reddicharla ◽  
Subba Ramarao Rachapudi ◽  
Indra Utama ◽  
Furqan Ahmed Khan ◽  
Prabhker Reddy Vanam ◽  
...  

Abstract Well testing is a vital part of reservoir performance monitoring. As a field matures and the well stock grows, testing becomes a tedious job in terms of resources (MPFM and test separators), and this affects delivery of the production quota. In addition, test data validation and approval follow a business process that can take up to 10 days before a well test is accepted or rejected. Almost 10,000 well tests were conducted, and statistically around 10 to 15% of them were rejected per year. The objective of this paper is to develop a methodology to reduce well test rejections and to raise a timely flag for operator intervention to recommence a well test. The case study was applied in a mature field that has been producing for 40 years and has a large volume of historical well test data available. This paper discusses the development of a data-driven well test analyzer and optimizer supported by artificial intelligence (AI) for wells tested with MPFM, using a two-stage approach. The motivating idea is to ingest historical data, real-time data, and well model performance curves, and to score the quality of the well test data so that a flag can be given to the operator in real time. The ML prediction results help testing operations and can drastically reduce the test acceptance turnaround time from 10 days to hours. In the second stage, an unsupervised model built on historical data helps identify the parameters that drive rejection of a well test, for example test duration, choke size, and GOR. The outcomes of the modeling will be incorporated into updates of the well test procedure and testing philosophy. The approach is under evaluation in one of the assets of ADNOC Onshore. The results are expected to reduce well test rejections by at least 5%, which further optimizes the resources required and improves the back-allocation process. Furthermore, real-time flagging of test quality will help reduce the validation cycle from 10 days to hours and improve the well testing cycle. This methodology improves integrated reservoir management compliance with well testing requirements in assets where resources are limited, and it is envisioned to be integrated with a full-field digital oilfield implementation. This is a novel application of machine learning and artificial intelligence to well testing: it maximizes the use of real-time data to create an advisory system that improves test data quality monitoring and enables timely decision-making to reduce well test rejections.
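A minimal sketch of the two-stage idea under stated assumptions: a supervised model flags a live test as likely accept/reject, and clustering of historically rejected tests surfaces the parameters (duration, choke size, GOR, rate) associated with rejection. The features, models, and data are placeholders, not ADNOC Onshore data or the paper's actual workflow.

```python
# Hedged sketch: (1) supervised accept/reject flag for a new well test,
# (2) unsupervised grouping of rejected tests; all data are placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Columns: duration (hr), choke size (1/64 in), GOR (scf/stb), oil rate (stb/d)
X_hist = np.column_stack([
    rng.uniform(2, 24, 2000),
    rng.uniform(16, 64, 2000),
    rng.uniform(200, 2000, 2000),
    rng.uniform(100, 5000, 2000),
])
accepted = rng.integers(0, 2, 2000)    # placeholder accept(1)/reject(0) labels

# Stage 1: real-time quality flag for an incoming test.
flagger = GradientBoostingClassifier().fit(X_hist, accepted)
new_test = np.array([[4.0, 32.0, 850.0, 1200.0]])
print("accept probability:", flagger.predict_proba(new_test)[0, 1])

# Stage 2: cluster rejected tests to highlight recurring parameter patterns.
rejected = X_hist[accepted == 0]
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(rejected)
print("rejected-test cluster centres:\n", km.cluster_centers_.round(1))
```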


2021 ◽  
Vol 23 (Supplement_6) ◽  
pp. vi139-vi139
Author(s):  
Jan Lost ◽  
Tej Verma ◽  
Niklas Tillmanns ◽  
W R Brim ◽  
Harry Subramanian ◽  
...  

Abstract PURPOSE Identifying molecular subtypes in gliomas has prognostic and therapeutic value, traditionally determined after invasive neurosurgical tumor resection or biopsy. Recent advances using artificial intelligence (AI) show promise in using pre-therapy imaging for predicting molecular subtype. We performed a systematic review of recent literature on AI methods used to predict molecular subtypes of gliomas. METHODS A literature review conforming to PRISMA guidelines was performed for publications prior to February 2021 using 4 databases: Ovid Embase, Ovid MEDLINE, Cochrane trials (CENTRAL), and Web of Science core collection. Keywords included: artificial intelligence, machine learning, deep learning, radiomics, magnetic resonance imaging, glioma, and glioblastoma. Non-machine learning and non-human studies were excluded. Screening was performed using Covidence software. Bias analysis was done using TRIPOD guidelines. RESULTS 11,727 abstracts were retrieved. After applying initial screening exclusion criteria, 1,135 full text reviews were performed, with 82 papers remaining for data extraction. 57% used retrospective single center hospital data, 31.6% used TCIA and BRATS, and 11.4% analyzed multicenter hospital data. An average of 146 patients (range 34-462 patients) were included. Algorithms predicting IDH status comprised 51.8% of studies, MGMT 18.1%, and 1p19q 6.0%. Machine learning methods were used in 71.4%, deep learning in 27.4%, and 1.2% directly compared both methods. The most common machine learning algorithm was the support vector machine (43.3%), and the most common deep learning architecture was the convolutional neural network (68.4%). Mean prediction accuracy was 76.6%. CONCLUSION Machine learning is the predominant method for image-based prediction of glioma molecular subtypes. Major limitations include limited datasets (60.2% with under 150 patients) and thus limited generalizability of findings. We recommend using larger annotated datasets for AI network training and testing in order to create more robust AI algorithms, which will provide better prediction accuracy on real-world clinical datasets and provide tools that can be translated to clinical practice.


2018 ◽  
Vol 4 (10) ◽  
pp. 5
Author(s):  
Smriti Singhatiya ◽  
Dr. Shivnath Ghosh

There is now a need to study the nutrient status of the lower horizons of the soil. Soil testing has played a historical role in evaluating soil fertility maintenance and in sustainable agriculture, and it will also play a crucial role in precision agriculture. At present, a basic inventory needs to be developed on a soil-test basis, and the necessary information has to be built into the system to translate soil test results into crop production goals in the new era. To achieve this goal, an artificial intelligence approach is used to predict soil properties. In this paper, support vector regression (SVR), ensemble regression (ER), and a neural network (NN) are used to analyse these properties. Performance is evaluated with respect to MSE and RMSE, and it is observed that ER outperforms both SVR and NN.
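A minimal sketch of the SVR/ER/NN comparison scored by MSE and RMSE; the soil features, synthetic target, and a random-forest regressor standing in for the ensemble are assumptions, not the paper's data or exact models.

```python
# Hedged sketch: compare SVR, an ensemble regressor, and a neural network on
# placeholder soil data, reporting MSE and RMSE for each.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(400, 6))                          # e.g. pH, EC, N, P, K, OC
y = X @ rng.uniform(size=6) + rng.normal(0, 0.1, 400)   # synthetic soil property

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
models = {
    "SVR": SVR(),
    "ER": RandomForestRegressor(random_state=0),
    "NN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mse = mean_squared_error(y_te, pred)
    print(f"{name}: MSE = {mse:.4f}, RMSE = {np.sqrt(mse):.4f}")
```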


2015 ◽  
Vol 8 (1) ◽  
pp. 167-171
Author(s):  
Fangfang Wu ◽  
Jinchuan Zhang ◽  
Liuzhong Li ◽  
Jinlong Wu

Tight sand reservoirs are usually characterized by high heterogeneity and complex pore structure, which makes permeability calculation a major challenge and leads to difficulties in reservoir classification and productivity evaluation. First, five hydraulic flow units and their respective porosity-permeability relations were built from the core dataset of the Kekeya block, Tuha Basin; then, with a BP neural network method, flow units were classified for un-cored intervals using normalized logging data, and permeability was calculated accordingly. This improved the accuracy of the permeability calculation and greatly aided the evaluation of un-cored reservoir intervals. In addition, based on porosity, permeability, and flow unit type, a new reservoir grading chart was set up by incorporating test and production data, which provides important guidance for productivity prediction and reservoir development.
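A minimal sketch of that workflow under stated assumptions: a back-propagation neural network classifies hydraulic flow units from normalized log responses, and permeability is then computed from a per-unit porosity-permeability fit of the form log10(k) = a·φ + b; every coefficient, log, and sample below is a hypothetical placeholder, not the Kekeya data.

```python
# Hedged sketch: BP neural network flow-unit classification for un-cored
# intervals, followed by per-unit porosity-permeability prediction.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
logs_cored = rng.uniform(size=(300, 4))   # normalized GR, AC, DEN, RT (placeholders)
hfu_cored = rng.integers(0, 5, 300)       # flow-unit labels from core analysis

# Back-propagation network trained on cored intervals.
bp_net = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0)
bp_net.fit(logs_cored, hfu_cored)

# Hypothetical per-unit fits: log10(k [mD]) = a * phi + b.
poro_perm_fit = {0: (18.0, -3.5), 1: (20.0, -3.2), 2: (22.0, -3.0),
                 3: (24.0, -2.8), 4: (26.0, -2.6)}

logs_uncored = rng.uniform(size=(5, 4))   # un-cored interval log samples
phi_uncored = rng.uniform(0.04, 0.12, 5)  # porosity from the density log
for unit, phi in zip(bp_net.predict(logs_uncored), phi_uncored):
    a, b = poro_perm_fit[unit]
    print(f"HFU {unit}: phi = {phi:.3f}, k = {10 ** (a * phi + b):.3f} mD")
```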


2019 ◽  
Vol 30 (1) ◽  
pp. 7-8
Author(s):  
Dora Maria Ballesteros

Artificial intelligence (AI) is an interdisciplinary subject in science and engineering that makes it possible for machines to learn from data. Artificial Intelligence applications include prediction, recommendation, classification and recognition, object detection, natural language processing, autonomous systems, among others. The topics of the articles in this special issue include deep learning applied to medicine [1, 3], support vector machine applied to ecosystems [2], human-robot interaction [4], clustering in the identification of anomalous patterns in communication networks [5], expert systems for the simulation of natural disaster scenarios [6], real-time algorithms of artificial intelligence [7] and big data analytics for natural disasters [8].

