Integrating Pore Geometrical Characteristics for Permeability Prediction of Tight Carbonates Utilizing Artificial Intelligence

2021 ◽  
Author(s):  
Mohammad Rasheed Khan ◽  
Shams Kalam ◽  
Asiya Abbasi

Abstract Accurate permeability estimation in tight carbonates is a key reservoir characterization challenge, made more pronounced by heterogeneous pore structures. Experiments on large volumes of core samples are required to precisely characterize permeability in such reservoirs, which means investing large amounts of time and capital. Therefore, it is imperative to have an integrated model that can predict field-wide permeability for un-cored sections to optimize reservoir strategies. Various studies address this challenge; however, most of them lack universality in application or do not consider important carbonate geometrical features. Accordingly, this work presents a novel correlation to determine the permeability of tight carbonates as a function of carbonate pore geometry, utilizing a combination of machine learning and optimization algorithms. Primarily, a deep learning neural network (NN) is constructed and further optimized to produce a data-driven permeability predictor. Customization of the model to tight, heterogeneous pore-scale features is accomplished by considering key geometrical carbonate topologies: porosity, formation resistivity, pore cementation representation, characteristic pore throat diameter, pore diameter, and grain diameter. Multiple realizations are conducted, spanning from a perceptron-based model to a multi-layered neural net with varying activation and transfer functions. Next, a physical equation is derived from the optimized model to provide a stand-alone equation for permeability estimation. The proposed model is validated by graphical and statistical error analysis of model testing on an unseen dataset. A major outcome of this study is the development of a physical mathematical equation which can be used without diving into the intricacy of artificial intelligence algorithms. To evaluate the performance of the new correlation, an error metric comprising average absolute percentage error (AAPE), root mean squared error (RMSE), and correlation coefficient (CC) was used. The proposed correlation performs with low error values and gives a CC greater than 0.95. A possible reason for this outcome is that machine learning algorithms can construct relationships between various non-linear inputs (e.g., carbonate heterogeneity) and the output (permeability) through their inbuilt complex interaction of transfer and activation functions.
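For illustration, the three reported evaluation metrics (AAPE, RMSE, CC) can be computed as in the following Python sketch. The helper name and sample permeability values are hypothetical, and the paper's exact metric definitions may differ slightly in detail.

```python
import numpy as np

def evaluate_permeability_model(k_true, k_pred):
    """Error metrics of the kind reported in the study: AAPE, RMSE, CC.

    Hypothetical helper for illustration; not the authors' code.
    """
    k_true = np.asarray(k_true, dtype=float)
    k_pred = np.asarray(k_pred, dtype=float)
    aape = np.mean(np.abs((k_pred - k_true) / k_true)) * 100  # average absolute % error
    rmse = np.sqrt(np.mean((k_pred - k_true) ** 2))           # root mean squared error
    cc = np.corrcoef(k_true, k_pred)[0, 1]                    # Pearson correlation coefficient
    return {"AAPE": aape, "RMSE": rmse, "CC": cc}

# Example with made-up permeability values (mD):
print(evaluate_permeability_model([0.05, 0.2, 1.3], [0.06, 0.18, 1.1]))
```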

Author(s):  
M. A. Fesenko ◽  
G. V. Golovaneva ◽  
A. V. Miskevich

A new model, «Prognosis of men's reproductive function disorders», was developed. Machine learning (artificial intelligence) algorithms were used for this purpose, and the model has high prognostic accuracy. The aim of applying the model is to prioritize diagnostic and preventive measures in order to minimize complications of reproductive system diseases and to preserve workers' health and efficiency.


2020 ◽  
Vol 237 (12) ◽  
pp. 1430-1437
Author(s):  
Achim Langenbucher ◽  
Nóra Szentmáry ◽  
Jascha Wendelstein ◽  
Peter Hoffmann

Abstract Background and Purpose In the last decade, artificial intelligence and machine learning algorithms have become increasingly established for the screening and detection of diseases and pathologies, as well as for describing interactions between measures where classical methods are too complex or fail. The purpose of this paper is to model the measured postoperative position of an intraocular lens implant after cataract surgery, based on preoperatively assessed biometric effect sizes, using techniques of machine learning. Patients and Methods In this study, we enrolled 249 eyes of patients who underwent elective cataract surgery at Augenklinik Castrop-Rauxel. Eyes were measured preoperatively with the IOLMaster 700 (Carl Zeiss Meditec), as well as preoperatively and postoperatively with the Casia 2 OCT (Tomey). Based on the preoperative effect sizes (axial length, corneal thickness, internal anterior chamber depth, thickness of the crystalline lens, mean corneal radius, and corneal diameter), a selection of 17 machine learning algorithms was tested for prediction performance in calculating the internal anterior chamber depth (AQD_post) and the axial position of the equatorial plane of the lens in the pseudophakic eye (LEQ_post). Results The 17 machine learning algorithms (from 4 families) varied in root mean squared/mean absolute prediction error between 0.187/0.139 mm and 0.255/0.204 mm (AQD_post) and between 0.183/0.135 mm and 0.253/0.206 mm (LEQ_post), using 5-fold cross-validation. The Gaussian process regression model using an exponential kernel showed the best performance in terms of root mean squared error for prediction of AQD_post and LEQ_post. If the entire dataset is used (without splitting into training and validation data), a simple multivariate linear regression model yields a root mean squared prediction error for AQD_post/LEQ_post of 0.188/0.187 mm, vs. 0.166/0.159 mm for the best-performing Gaussian process regression model. Conclusion In this paper we wanted to show the principles of supervised machine learning applied to the prediction of the measured physical postoperative axial position of intraocular lenses. Based on our limited data pool and the algorithms used in our setting, the benefit of machine learning algorithms seems to be limited compared to a standard multivariate regression model.
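As an illustration of the best-performing approach, a Gaussian process regression with an exponential kernel evaluated under 5-fold cross-validation might look like the following scikit-learn sketch. The data are synthetic stand-ins for the six preoperative effect sizes; the paper's actual implementation and hyperparameters are not specified.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for the six preoperative effect sizes
# (axial length, corneal thickness, internal ACD, lens thickness,
#  mean corneal radius, corneal diameter) and a target such as AQD_post.
rng = np.random.default_rng(0)
X = rng.normal(size=(249, 6))
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=249)

# In scikit-learn, the Matern kernel with nu=0.5 is the (absolute)
# exponential kernel referenced in the abstract.
gpr = GaussianProcessRegressor(kernel=Matern(nu=0.5), normalize_y=True)

# 5-fold cross-validated RMSE, mirroring the paper's validation scheme
rmse = -cross_val_score(gpr, X, y, cv=5,
                        scoring="neg_root_mean_squared_error")
print(f"mean CV RMSE: {rmse.mean():.3f}")
```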


mSphere ◽  
2019 ◽  
Vol 4 (3) ◽  
Author(s):  
Artur Yakimovich

ABSTRACT Artur Yakimovich works in the field of computational virology and applies machine learning algorithms to study host-pathogen interactions. In this mSphere of Influence article, he reflects on two papers “Holographic Deep Learning for Rapid Optical Screening of Anthrax Spores” by Jo et al. (Y. Jo, S. Park, J. Jung, J. Yoon, et al., Sci Adv 3:e1700606, 2017, https://doi.org/10.1126/sciadv.1700606) and “Bacterial Colony Counting with Convolutional Neural Networks in Digital Microbiology Imaging” by Ferrari and colleagues (A. Ferrari, S. Lombardi, and A. Signoroni, Pattern Recognition 61:629–640, 2017, https://doi.org/10.1016/j.patcog.2016.07.016). Here he discusses how these papers made an impact on him by showcasing that artificial intelligence algorithms can be equally applicable to both classical infection biology techniques and cutting-edge label-free imaging of pathogens.


2021 ◽  
Vol 10 (2) ◽  
pp. 205846012199029
Author(s):  
Rani Ahmad

Background The scope and productivity of artificial intelligence applications in health science and medicine, particularly in medical imaging, are rapidly progressing, driven by relatively recent developments in big data and deep learning and by increasingly powerful computer algorithms. Accordingly, there are a number of opportunities and challenges for the radiological community. Purpose To provide a review of the challenges and barriers experienced in diagnostic radiology on the basis of the key clinical applications of machine learning techniques. Material and Methods Studies published in 2010–2019 were selected that report on the efficacy of machine learning models. A single contingency table was selected for each study to report the highest accuracy of radiology professionals and machine learning algorithms, and a meta-analysis of studies was conducted based on these contingency tables. Results The specificity for all the deep learning models ranged from 39% to 100%, whereas sensitivity ranged from 85% to 100%. The pooled sensitivity and specificity were 89% and 85% for the deep learning algorithms for detecting abnormalities, compared to 75% and 91% for radiology experts, respectively. The pooled specificity and sensitivity for the comparison between radiology professionals and deep learning algorithms were 91% and 81% for deep learning models and 85% and 73% for radiology professionals (p < 0.000), respectively. The pooled sensitivity of detection was 82% for health-care professionals and 83% for deep learning algorithms (p < 0.005). Conclusion Radiomic information extracted through machine learning programs from images may not be discernible through visual examination, and thus may improve the prognostic and diagnostic value of data sets.
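A minimal sketch of how sensitivity and specificity follow from a single 2x2 contingency table, together with a naive pooled estimate across studies, is given below. The cell counts are invented, and a published meta-analysis would typically use a weighted or random-effects model rather than simple cell summation.

```python
import numpy as np

def sens_spec(tp, fn, fp, tn):
    """Sensitivity and specificity from one 2x2 contingency table."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical per-study contingency tables, as rows of (TP, FN, FP, TN)
tables = np.array([[90, 10, 12, 88],
                   [42,  8,  5, 45],
                   [70,  5, 20, 80]])

# Naive pooling: sum the cells across studies, then recompute the metrics.
tp, fn, fp, tn = tables.sum(axis=0)
sens, spec = sens_spec(tp, fn, fp, tn)
print(f"pooled sensitivity {sens:.2%}, pooled specificity {spec:.2%}")
```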


Author(s):  
Joel Weijia Lai ◽  
Candice Ke En Ang ◽  
U. Rajendra Acharya ◽  
Kang Hao Cheong

Artificial intelligence in healthcare employs machine learning algorithms to emulate human cognition in the analysis of complicated or large sets of data. Specifically, artificial intelligence taps into the ability of computer algorithms and software, within allowable thresholds, to make deterministic approximate conclusions. In comparison to traditional technologies in healthcare, artificial intelligence enhances the process of data analysis without the need for human input, producing nearly equally reliable, well-defined output. Schizophrenia is a chronic mental health condition that affects millions worldwide, with impairment in thinking and behaviour that may be significantly disabling to daily living. Multiple artificial intelligence and machine learning algorithms have been utilized to analyze the different components of schizophrenia, such as prediction of disease and assessment of current prevention methods. These efforts are carried out in the hope of assisting with diagnosis and providing viable options for affected individuals. In this paper, we review the progress of the use of artificial intelligence in schizophrenia.


2021 ◽  
Vol 10 (4) ◽  
pp. 58-75
Author(s):  
Vivek Sen Saxena ◽  
Prashant Johri ◽  
Avneesh Kumar

Skin lesion melanoma is the deadliest type of cancer. Artificial intelligence provides the power to classify skin lesions as melanoma or non-melanoma. The proposed system for melanoma detection and classification involves four steps: pre-processing (resizing all the images and removing noise and hair from the dermoscopic images); image segmentation (identifying the lesion area); feature extraction (extracting features from the segmented lesion); and classification (categorizing the lesion as malignant, i.e. melanoma, or benign, i.e. non-melanoma). A modified GrabCut algorithm is employed to segment the skin lesion. Segmented lesions are classified using machine learning algorithms such as SVM, k-NN, ANN, and logistic regression, and evaluated on performance metrics such as accuracy, sensitivity, and specificity. Results are compared with existing systems, achieving a higher similarity index and accuracy.
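A hedged sketch of the classification and evaluation step using scikit-learn follows. The features and labels are synthetic placeholders for features extracted from segmented lesions, the ANN classifier is omitted, and the system's actual hyperparameters are not given in the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for features extracted from segmented lesions
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # 1 = melanoma, 0 = benign
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("SVM", SVC()),
                  ("k-NN", KNeighborsClassifier()),
                  ("LogReg", LogisticRegression(max_iter=1000))]:
    y_hat = clf.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
    acc = (tp + tn) / (tp + tn + fp + fn)  # accuracy
    sens = tp / (tp + fn)                  # sensitivity (melanoma recall)
    spec = tn / (tn + fp)                  # specificity
    print(f"{name}: acc={acc:.2f} sens={sens:.2f} spec={spec:.2f}")
```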


2021 ◽  
pp. 1-16
Author(s):  
Kevin Kloos

The use of machine learning algorithms at national statistical institutes has increased significantly over the past few years. Applications range from new imputation schemes to new statistical output based entirely on machine learning. The results are promising, but recent studies have shown that the use of machine learning in official statistics always introduces a bias, known as misclassification bias. Misclassification bias does not occur in traditional applications of machine learning and has therefore received little attention in the academic literature. In earlier work, we collected existing methods that are able to correct misclassification bias and compared their statistical properties, including bias, variance, and mean squared error. In this paper, we present a new generic method to correct misclassification bias for time series and we derive its statistical properties. Moreover, we show numerically that it has a lower mean squared error than the existing alternatives in a wide variety of settings. We believe that our new method may improve machine learning applications in official statistics, and we hope that our work will stimulate further methodological research in this area.
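To make the problem concrete, one simple correction of this general kind is a matrix-inversion ("calibration"-style) estimator: if the classifier's confusion probabilities are known, the observed predicted class proportions can be inverted to estimate the true proportions. The sketch below uses invented numbers and is only a rough illustration; the time-series method proposed in the paper is more elaborate.

```python
import numpy as np

# Confusion probabilities estimated on a labelled test set:
# C[i, j] = P(classifier predicts class j | true class i).
# Hypothetical values for a binary classification task.
C = np.array([[0.90, 0.10],
              [0.15, 0.85]])

# Class proportions of the predicted labels in the unlabelled target data
p_pred = np.array([0.40, 0.60])

# Since E[p_pred] = C.T @ p_true, inverting C.T corrects the
# misclassification bias in the naive predicted proportions.
p_true_hat = np.linalg.solve(C.T, p_pred)
print(p_true_hat)  # corrected class proportions, e.g. [0.333 0.667]
```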


2021 ◽  
Vol 201 (3) ◽  
pp. 507-518
Author(s):  
Łukasz Osuszek ◽  
Stanisław Stanek

The paper outlines recent trends in the evolution of Business Process Management (BPM), especially the application of AI for decision support. AI has great potential to augment human judgement. Indeed, machine learning might be considered a supplementary and complementary solution to enhance and support human productivity throughout all aspects of personal and professional life. The idea of merging technologies for organizational learning and workflow management was first put forward by Wargitsch: completed business cases stored in an organizational memory are used to configure new workflows, while the selection of an appropriate historical case is supported by a case-based reasoning component. This informational environment has been recognized as effective and has become quite common because of the significant increase in the use of artificial intelligence tools. This article also discusses how automated planning techniques (one of the oldest areas in AI) can be used to enable a new level of automation and processing support. The authors analyse this topic and discuss the scientific state of the art and the application of AI in BPM systems for decision-making support. It should be noted that readily available software exists for the development of such systems in the field of artificial intelligence. The paper also includes a unique case study of a production decision-support system that uses supervised machine learning algorithms to build predictive analytical models.


2020 ◽  
Vol 5 (19) ◽  
pp. 32-35
Author(s):  
Anand Vijay ◽  
Kailash Patidar ◽  
Manoj Yadav ◽  
Rishi Kushwah

This paper presents and discusses an analytical survey of the role of machine learning algorithms in intrusion detection. It covers the analytical aspects of developing an efficient intrusion detection system (IDS). Related work on the development of such systems is presented in terms of computational methods: data mining, artificial intelligence, and machine learning. These methods are discussed along with attack parameters and attack types. The paper also elaborates on the impact of different attacks and their handling mechanisms, based on previous papers.


Author(s):  
Kalva Sindhu Priya

Abstract: In the present scenario, it is well known that almost every field is moving into machine-based automation, right from fundamentals to master-level systems. Among these, Machine Learning (ML) is an important tool, closely related to Artificial Intelligence (AI), that uses known data or past experience to improve automatically or to estimate the behavior or status of given data through various algorithms. Modeling a system or data through machine learning is important and advantageous, as it helps in the development of later and newer versions. Today, most information technology giants, such as Facebook, Uber, and Google Maps, have made machine learning a critical part of their ongoing operations for a better user experience. In this paper, the various available algorithms in ML are described briefly, and of these, the Linear Regression algorithm is used to predict a new set of values by taking older data as a reference. A detailed prediction model is discussed and built in code with the help of the Machine Learning and Deep Learning tools in MATLAB/Simulink. Keywords: Machine Learning (ML), Linear Regression algorithm, Curve fitting, Root Mean Squared Error
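Although the paper builds its model in MATLAB/Simulink, the same workflow of fitting a line to older data, checking the fit with RMSE, and predicting new values can be sketched in Python as follows; the data here are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Older data used as the reference (hypothetical 1-D example)
x = np.arange(10, dtype=float).reshape(-1, 1)
y = 3.0 * x.ravel() + 2.0 + np.random.default_rng(0).normal(0, 0.5, 10)

model = LinearRegression().fit(x, y)             # curve-fitting step
y_fit = model.predict(x)
rmse = np.sqrt(mean_squared_error(y, y_fit))     # root mean squared error
print(f"slope={model.coef_[0]:.2f}, intercept={model.intercept_:.2f}, RMSE={rmse:.3f}")

# Predicting a new set of values from the fitted line
print(model.predict(np.array([[10.0], [11.0]])))
```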

