Prediction of Dead Oil Viscosity: Machine Learning vs. Classical Correlations

Energies ◽  
2021 ◽  
Vol 14 (4) ◽  
pp. 930
Author(s):  
Fahimeh Hadavimoghaddam ◽  
Mehdi Ostadhassan ◽  
Ehsan Heidaryan ◽  
Mohammad Ali Sadri ◽  
Inna Chapanova ◽  
...  

Dead oil viscosity is a critical parameter in numerous reservoir engineering problems and one of the most unreliable properties to predict with classical black oil correlations. Determining dead oil viscosity experimentally is expensive and time-consuming, so an accurate and fast prediction model is required. This paper implements six machine learning models to predict dead oil viscosity: random forest (RF), LightGBM, XGBoost, a multilayer perceptron (MLP) neural network, stochastic real-valued (SRV) and SuperLearner. More than 2000 pressure–volume–temperature (PVT) data points were used to develop and test these models, covering a wide range of viscosities from light and intermediate to heavy oils. In this study, we give insight into the performance of the different functional forms that have been used in the literature to formulate dead oil viscosity. The results show that the functional form f(γ_API, T) has the best performance and that additional correlating parameters might be unnecessary. Furthermore, SuperLearner outperformed the other machine learning (ML) algorithms as well as common correlations in the metric analysis. The SuperLearner model can potentially replace empirical models for viscosity prediction over a wide range of viscosities (any oil type). Ultimately, the proposed model is capable of simulating the true physical trend of dead oil viscosity with variations of oil API gravity, temperature and shear rate.
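For readers unfamiliar with stacked ensembles, the sketch below shows how a SuperLearner-style stack over the functional form f(γ_API, T) could be assembled with scikit-learn's StackingRegressor. The data file, column names and base learners are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of a SuperLearner-style stacked ensemble for dead oil
# viscosity, mu_od = f(API gravity, temperature). Dataset and columns are
# hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, StackingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

df = pd.read_csv("pvt_dead_oil.csv")                      # hypothetical dataset
X, y = df[["api_gravity", "temperature_F"]], df["dead_oil_viscosity_cp"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Base learners are combined by a cross-validated meta-learner (ridge regression).
stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=300, random_state=0)),
        ("gbt", GradientBoostingRegressor(random_state=0)),
        ("mlp", MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)),
    ],
    final_estimator=RidgeCV(),
    cv=5,
)
stack.fit(X_tr, y_tr)
print("MAPE:", mean_absolute_percentage_error(y_te, stack.predict(X_te)))
```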

2021 ◽  
Vol 143 (11) ◽  
Author(s):  
Zeeshan Tariq ◽  
Mohamed Mahmoud ◽  
Abdulazeez Abdulraheem

Abstract
Pressure–volume–temperature (PVT) properties of crude oil are considered the most important properties in petroleum engineering applications, as they are used in virtually every reservoir and production engineering calculation. Determination of these properties in the laboratory is the most accurate way to obtain representative values, but it is also very expensive. In the absence of such facilities, other approaches such as analytical solutions and empirical correlations are used to estimate the PVT properties. This study demonstrates the combined use of two machine learning (ML) techniques, a functional network (FN) coupled with particle swarm optimization (PSO), to predict black oil PVT properties such as bubble point pressure (Pb), oil formation volume factor at Pb, and oil viscosity at Pb. The study also proposes new mathematical models derived from the coupled FN-PSO model to estimate these properties; using these models does not require any ML engine for execution. A total of 760 data points collected from different sources were preprocessed and used to build and train the machine learning models. The data cover a wide range of values that are representative of petroleum engineering applications. The performance of the developed models was tested against the most commonly used empirical correlations. The results showed that the proposed PVT models outperformed previous models, achieving an error of at most 2%. The proposed FN-PSO models were also compared with other ML techniques such as artificial neural networks, support vector regression, and the adaptive neuro-fuzzy inference system, and they outperformed these techniques as well.
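As an illustration of the optimization half of such a workflow, the toy sketch below uses a plain global-best PSO to fit the coefficients of a Standing-type bubble point correlation to synthetic data. The paper couples PSO with functional networks; here a simple parametric form stands in for that model, and all data are generated for demonstration only.

```python
# Toy particle swarm optimization (PSO) fitting coefficients of a Standing-type
# bubble point correlation by minimizing mean squared error. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical samples: solution GOR, gas gravity, temperature (F), API gravity.
Rs, gg = rng.uniform(200, 1500, 200), rng.uniform(0.6, 1.2, 200)
T, api = rng.uniform(100, 250, 200), rng.uniform(20, 45, 200)
pb_meas = 18.2 * ((Rs / gg) ** 0.83 * 10 ** (0.00091 * T - 0.0125 * api) - 1.4)

def model(c):
    a, b, d, e = c
    return a * ((Rs / gg) ** b * 10 ** (d * T - e * api) - 1.4)

def cost(c):
    return np.mean((model(c) - pb_meas) ** 2)

# Minimal global-best PSO loop.
n, dim, iters = 40, 4, 300
pos = rng.uniform([5, 0.5, 0.0001, 0.005], [30, 1.2, 0.002, 0.03], (n, dim))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()]

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()]

print("fitted coefficients:", gbest, "MSE:", pbest_cost.min())
```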


2017 ◽  
Vol 231 (11-12) ◽  
Author(s):  
Humbul Suleman ◽  
Abdulhalim Shah Maulud ◽  
Zakaria Man

Abstract
A computationally simple thermodynamic framework is presented to correlate the vapour-liquid equilibria of carbon dioxide absorption in five representative types of alkanolamine mixtures. The proposed model is an extension of the modified Kent–Eisenberg model for carbon dioxide loaded aqueous alkanolamine mixtures. The model parameters are regressed on a large experimental data pool of carbon dioxide solubility in aqueous alkanolamine mixtures. The model is applicable over a wide range of temperatures (298–393 K), pressures (0.1–6000 kPa) and alkanolamine concentrations (0.3–5 M). The correlated results are compared with the experimental values and found to be in good agreement, with average deviations ranging between 6% and 20%. The model's results are comparable to those of other thermodynamic models.
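The core of Kent–Eisenberg-type models is a regression step in which adjustable equilibrium "constants" are fitted to measured CO2 partial pressures. The sketch below illustrates that step only; the functional form, data file and column layout are placeholders and not the paper's actual model equations.

```python
# Illustrative regression of a Kent-Eisenberg-style apparent equilibrium
# expression to CO2 solubility data. The model form is a placeholder.
import numpy as np
from scipy.optimize import least_squares

# Hypothetical data: temperature (K), CO2 loading (mol CO2 / mol amine),
# amine molarity, measured CO2 partial pressure (kPa).
T, alpha, conc, p_meas = np.loadtxt("co2_vle_data.csv", delimiter=",", unpack=True)

def predicted_pressure(theta, T, alpha, conc):
    a, b, c, d = theta
    # Placeholder form: ln p = a + b/T + c*ln(alpha/(1 - alpha)) + d*ln(conc)
    return np.exp(a + b / T + c * np.log(alpha / (1.0 - alpha)) + d * np.log(conc))

def residuals(theta):
    return np.log(predicted_pressure(theta, T, alpha, conc)) - np.log(p_meas)

fit = least_squares(residuals, x0=[10.0, -4000.0, 1.0, 0.5])
aad = np.mean(np.abs(predicted_pressure(fit.x, T, alpha, conc) - p_meas) / p_meas) * 100
print("fitted parameters:", fit.x, " average absolute deviation: %.1f %%" % aad)
```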


Algorithms ◽  
2020 ◽  
Vol 13 (1) ◽  
pp. 17 ◽  
Author(s):  
Emmanuel Pintelas ◽  
Ioannis E. Livieris ◽  
Panagiotis Pintelas

Machine learning has emerged as a key factor in many technological and scientific advances and applications. Much research has been devoted to developing high-performance machine learning models that are able to make very accurate predictions and decisions across a wide range of applications. Nevertheless, we still seek to understand and explain how these models work and make decisions. Explainability and interpretability in machine learning are significant issues, since in most real-world problems it is considered essential to understand and explain the model's prediction mechanism in order to trust it and make decisions on critical issues. In this study, we developed a Grey-Box model based on a semi-supervised methodology utilizing a self-training framework. The main objective of this work is the development of a machine learning model that is both interpretable and accurate, which is a complex and challenging task. The proposed model was evaluated on a variety of real-world datasets from the crucial application domains of education, finance and medicine. Our results demonstrate the efficiency of the proposed model, which performs comparably to a Black-Box model and considerably outperforms single White-Box models, while remaining as interpretable as a White-Box model.
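A hedged reading of the Grey-Box scheme is sketched below: a strong black-box learner pseudo-labels unlabeled data through self-training, and an interpretable white-box learner is then fitted on the enlarged labeled set. This is an illustration of the general idea using scikit-learn components, not the authors' exact algorithm.

```python
# Self-training "teacher" (black-box) plus interpretable "student" (white-box),
# shown on a public dataset with most training labels hidden.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Hide 80% of the training labels (-1 marks "unlabeled").
rng = np.random.default_rng(0)
y_semi = y_tr.copy()
y_semi[rng.random(len(y_semi)) < 0.8] = -1

# Black-box teacher: random forest wrapped in a self-training loop.
teacher = SelfTrainingClassifier(RandomForestClassifier(random_state=0), threshold=0.9)
teacher.fit(X_tr, y_semi)

# White-box student: a shallow decision tree trained on the teacher's pseudo-labels.
student = DecisionTreeClassifier(max_depth=4, random_state=0)
student.fit(X_tr, teacher.predict(X_tr))

print("teacher accuracy:", accuracy_score(y_te, teacher.predict(X_te)))
print("student (interpretable) accuracy:", accuracy_score(y_te, student.predict(X_te)))
```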


2020 ◽  
Author(s):  
Moritz Lürig ◽  
Seth Donoughe ◽  
Erik Svensson ◽  
Arthur Porto ◽  
Masahito Tsuboi

For centuries, ecologists and evolutionary biologists have used images such as drawings, paintings, and photographs to record and quantify the shapes and patterns of life. With the advent of digital imaging, biologists continue to collect image data at an ever-increasing rate. This immense body of data provides insight into a wide range of biological phenomena, including phenotypic trait diversity, population dynamics, mechanisms of divergence and adaptation, and evolutionary change. However, the rate of image acquisition frequently outpaces our capacity to manually extract meaningful information from the images. Moreover, manual image analysis is low-throughput, difficult to reproduce, and typically measures only a few traits at a time. This has proven to be an impediment to the growing field of phenomics, the study of many phenotypic dimensions together. Computer vision (CV), the automated extraction and processing of information from digital images, is a way to alleviate this longstanding analytical bottleneck. In this review, we illustrate the capabilities of CV for fast, comprehensive, and reproducible image analysis in ecology and evolution. First, we briefly review phenomics, arguing that ecologists and evolutionary biologists can most effectively capture phenomic-level data by using CV. Next, we describe the primary types of image-based data, and review CV approaches for extracting them (including techniques that entail machine learning and others that do not). We identify common hurdles and pitfalls, and then highlight recent successful implementations of CV in the study of ecology and evolution. Finally, we outline promising future applications for CV in biology. We anticipate that CV will become a basic component of the biologist's toolkit, further enhancing data quality and quantity, and sparking changes in how empirical ecological and evolutionary research is conducted.
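A small example of the kind of automated trait measurement the review describes is sketched below: specimens in an image are segmented by thresholding and simple phenotypic descriptors are extracted per object. The file name is a placeholder, and real pipelines typically add size calibration and quality control.

```python
# Threshold-based segmentation and per-object trait extraction with scikit-image.
from skimage import io, filters, measure, color

img = io.imread("specimens.png")            # hypothetical image of specimens
gray = color.rgb2gray(img)

# Otsu threshold separates specimens from the background; label connected regions.
mask = gray < filters.threshold_otsu(gray)  # assumes dark specimens on a light background
labels = measure.label(mask)

for region in measure.regionprops(labels):
    if region.area < 50:                    # skip small noise blobs
        continue
    elongation = region.major_axis_length / max(region.minor_axis_length, 1e-6)
    print(f"object {region.label}: area={region.area} px, elongation={elongation:.2f}")
```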


Author(s):  
Farid Ali Mousa ◽  
Ibrahim Eldesouky Fattoh

Motor disabilities are a major problem affecting millions of people around the world. These individuals suffer from weakness in day-to-day functioning, which can lead to reduced and incoherent daily routines and impair their quality of life. This research describes a new machine learning-based model intended to help individuals with limb motor disabilities use their brain signals to control assistive devices in their daily life activities. The proposed model uses Empirical Mode Decomposition to remove artifacts from the electroencephalography (EEG) signal, a modified Principal Component Analysis to reduce the input channels, and the wavelet transform to extract features. In this experiment, the discrete wavelet transform was used to decompose the signal at four levels. The approximation coefficient Ca and the detail coefficients at all levels, Cd4, Cd3, Cd2, and Cd1, were used to build the feature vector. All of these coefficients were used as input to Independent Component Analysis for feature reduction. Several amplitude estimators for neurological activities, defined mathematically, were used to obtain the feature vector; finally, the data were classified using an artificial neural network. The proposed model was evaluated on three different benchmark datasets, and its accuracy was 88.067%, which outperforms a wide range of current approaches.
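A condensed sketch of the wavelet feature-extraction and classification steps is given below: a 4-level discrete wavelet decomposition per channel, simple amplitude estimators over each coefficient band, and an MLP classifier. The EMD artifact removal, modified PCA and ICA stages are omitted, and the array names and shapes are illustrative.

```python
# 4-level DWT band features per EEG channel, classified with an MLP.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def band_features(signal):
    # A 4-level DWT returns [cA4, cD4, cD3, cD2, cD1]; summarize each band with
    # mean absolute amplitude and root-mean-square.
    coeffs = pywt.wavedec(signal, "db4", level=4)
    feats = []
    for c in coeffs:
        feats += [np.mean(np.abs(c)), np.sqrt(np.mean(c ** 2))]
    return feats

# Hypothetical epochs: (n_trials, n_channels, n_samples) with one label per trial.
epochs = np.load("eeg_epochs.npy")
labels = np.load("eeg_labels.npy")
X = np.array([[f for ch in trial for f in band_features(ch)] for trial in epochs])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```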


2020 ◽  
Vol 9 (1) ◽  
pp. 1374-1377

Rainfall is one of the major sources of livelihood in the world. Every organism needs water in order to survive in its own living conditions. As rainfall is the main source of water and its importance to agriculture is inevitable, there is a need to analyze rainfall patterns. The main aim of our paper is to predict rainfall considering various factors such as temperature, pressure, cloud cover, wind speed, pollution and precipitation. Various ideas and new methodologies have been proposed to predict rainfall, but our proposed concept is based on machine learning because of its rapid development and current preferability. Among the various techniques built on Machine Learning (ML), the Feed Forward Neural Network (FFNN), which is the simplest form of Artificial Neural Network (ANN), is preferred because this model learns the complex relationships among the various input parameters and makes them easy to model. Rainfall in our proposed model is predicted using the different parameters influencing rainfall along with their combinations and patterns. The experimental results indicate that the proposed FFNN-based model achieves suitable accuracy.
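A bare-bones version of the described setup is sketched below: a feed-forward neural network mapping weather covariates to rainfall. The CSV layout and column names are assumptions for illustration, not the paper's dataset.

```python
# Feed-forward neural network (MLPRegressor) for rainfall prediction.
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

df = pd.read_csv("weather.csv")   # hypothetical dataset
features = ["temperature", "pressure", "cloud_cover", "wind_speed", "pollution_index"]
X_tr, X_te, y_tr, y_te = train_test_split(df[features], df["rainfall_mm"],
                                          test_size=0.2, random_state=0)

ffnn = make_pipeline(StandardScaler(),
                     MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
ffnn.fit(X_tr, y_tr)
print("R^2 on held-out data:", ffnn.score(X_te, y_te))
```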


2016 ◽  
Author(s):  
Yanxin Li ◽  
Joan H. Knoll ◽  
Ruth Wilkins ◽  
Farrah N. Flegal ◽  
Peter K. Rogan

Abstract
Dose from radiation exposure can be estimated from dicentric chromosome (DC) frequencies in metaphase cells of peripheral blood lymphocytes. We automated DC detection by extracting features from Giemsa-stained metaphase chromosome images and classifying objects by machine learning (ML). DC detection involves i) intensity-thresholded segmentation of metaphase objects, ii) chromosome separation by watershed transformation and elimination of inseparable chromosome clusters, fragments and staining debris using a morphological decision tree filter, iii) determination of chromosome width and centreline, iv) derivation of centromere candidates and v) distinction of DCs from monocentric chromosomes (MCs) by ML. Centromere candidates are inferred from 14 image features input to a Support Vector Machine (SVM). Sixteen features derived from these candidates are then supplied to a Boosting classifier and a second SVM, which determines whether a chromosome is a DC or an MC. The SVM was trained with 292 DCs and 3135 MCs, and then tested with cells exposed to either a low (1 Gy) or high (2–4 Gy) radiation dose. Results were then compared with those of three experts. True positive rates (TPR) and positive predictive values (PPV) were determined as functions of the tuning parameter σ. At larger σ, PPV decreases and TPR increases. At high dose, for σ = 1.3, TPR = 0.52 and PPV = 0.83, while at σ = 1.6, TPR = 0.65 and PPV = 0.72. At low dose and σ = 1.3, TPR = 0.67 and PPV = 0.26. The algorithm differentiates DCs from MCs, overlapped chromosomes and other objects with acceptable accuracy over a wide range of radiation exposures.
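The final classification stage can be pictured as below: an RBF-kernel SVM separating dicentric from monocentric chromosomes from precomputed image features, with the precision/recall (PPV/TPR) trade-off inspected across the kernel width. The feature files are placeholders, and the paper's σ is not necessarily scikit-learn's gamma, so the sweep is illustrative only.

```python
# RBF-kernel SVM for DC vs. MC classification with a TPR/PPV sweep.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

X = np.load("chromosome_features.npy")     # hypothetical 16-feature vectors
y = np.load("chromosome_labels.npy")       # 1 = dicentric, 0 = monocentric
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for gamma in [0.05, 0.1, 0.5, 1.0]:
    clf = SVC(kernel="rbf", gamma=gamma, class_weight="balanced").fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(f"gamma={gamma}: TPR={recall_score(y_te, pred):.2f} "
          f"PPV={precision_score(y_te, pred):.2f}")
```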


2012 ◽  
Vol 77 (3) ◽  
pp. 371-380 ◽  
Author(s):  
Radun Jeremic ◽  
Jovica Bogdanov

A simple semi-empirical model for calculating the detonation pressure and velocity of CHNO explosives has been developed, based on experimental values of detonation parameters. The model uses Avakyan's method to determine the chemical composition of the detonation products and is applicable over a wide range of densities. Compared with the well-known Kamlet method and a numerical detonation model based on the BKW equation of state, the values calculated from the proposed model are significantly more accurate.
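For context, the sketch below implements the Kamlet-Jacobs estimates that such models are commonly benchmarked against, with the constants as they are usually quoted (D in km/s, P in GPa, rho0 in g/cm^3, N in mol gas per gram, M in g/mol, Q in cal/g). The example inputs are placeholders, not data from the paper.

```python
# Kamlet-Jacobs detonation velocity and pressure for a CHNO explosive.
import math

def kamlet_jacobs(rho0, N, M, Q):
    """Detonation velocity D (km/s) and pressure P (GPa)."""
    phi = N * math.sqrt(M) * math.sqrt(Q)
    D = 1.01 * math.sqrt(phi) * (1.0 + 1.30 * rho0)
    P = 1.558 * rho0 ** 2 * phi
    return D, P

# Example call with illustrative product parameters.
D, P = kamlet_jacobs(rho0=1.80, N=0.034, M=27.0, Q=1500.0)
print(f"D = {D:.2f} km/s, P = {P:.1f} GPa")
```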


2020 ◽  
Vol 10 (1) ◽  
pp. 1-17
Author(s):  
Doaa Mohammed Salih ◽  
Sameera M. Hamdalla ◽  
Mohammed H. Al-Kabi

The calculation of oil density is complex because of the wide range of pressures and temperatures involved, which are always set by specific conditions. Therefore, calculations that depend on the oil components are more accurate and make such determinations easier. Analyses of twenty live oil samples are utilized. The three-parameter Peng–Robinson equation of state is tuned to obtain a match between measured and calculated oil viscosity. The Lohrenz–Bray–Clark (LBC) viscosity calculation technique is adopted to calculate the oil viscosity from the given composition, pressure and temperature for the 20 samples. The tuned equation of state is used to generate oil viscosity values over a range of temperatures and pressures extending from reservoir to surface conditions. The generated viscosity data are used in a neural network (NN) tool to obtain a fitting model that correlates oil viscosity with composition, pressure and temperature. The resulting error and the correlation coefficient of the constructed model are close to 0 and 1, respectively. The NN model is also tested with data that were not used in setting up the model, and the results prove the validity of the model. Moreover, the model's outcomes demonstrate its superiority to selected empirical correlations.
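For orientation, the sketch below shows the dense-fluid part of the Lohrenz-Bray-Clark correlation named above: the residual viscosity is a fourth-degree polynomial in reduced density. The coefficients are the commonly tabulated LBC values; the dilute-gas viscosity and the mixture parameters (reduced density and the viscosity-reducing parameter ξ) are assumed to be computed separately with the usual mixing rules, and the example inputs are placeholders.

```python
# Dense-fluid term of the Lohrenz-Bray-Clark (LBC) viscosity correlation.
LBC_COEFFS = (0.1023, 0.023364, 0.058533, -0.040758, 0.0093324)

def lbc_viscosity(eta0_cp, rho_r, xi):
    """Oil viscosity (cP) from dilute-gas viscosity, reduced density and xi."""
    poly = sum(a * rho_r ** i for i, a in enumerate(LBC_COEFFS))
    return eta0_cp + (poly ** 4 - 1.0e-4) / xi

# Illustrative call with placeholder mixture values.
print(lbc_viscosity(eta0_cp=0.012, rho_r=2.4, xi=0.035))
```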


2020 ◽  
Vol 12 (2) ◽  
pp. 268-279
Author(s):  
Jozo Ištuk ◽  
Drago Šubarić ◽  
Lidija Jakobek

Polyphenols are secondary metabolites of plants, commonly present in the human diet. Since they exhibit a wide range of bioactivities, polyphenols are extensively studied in the fields of nutrition and human health. Recent studies have shown high interest in determining the bioaccessibility of polyphenols, the amount of polyphenols that becomes available for absorption in the digestive tract. Bioaccessibility can be determined with the help of in vitro static gastrointestinal (GI) digestion models. In this methodology, food samples containing polyphenols are subjected to a series of conditions that mimic the human gastrointestinal tract, with associated parameters. A large number of GI models with slightly different parameters have been published. The purpose of this paper is to review the literature, focusing on the determination of polyphenol bioaccessibility and the parameters used in these GI digestion models, such as the time, temperature, and pH of digestion, as well as enzyme concentrations. Gastrointestinal digestion models consist of oral, gastric and small intestine phases. These models provide a simple and reliable methodology that enables insight into the amount of bioaccessible polyphenols.

