Using artificial intelligence methods for shear travel time prediction: A case study of Facha member, Sirte basin, Libya

2021
Author(s):
Bahia M. Ben Ghawar
Moncef Zairi
Samir Bouaziz
...

Shear wave travel time logs are major acoustic logs used for direct estimation of the mechanical properties of rocks. They are also important for predicting the critical drawdown pressure of a reservoir. However, core samples are sometimes unavailable for direct laboratory measurement, and the time-consuming dipole shear imager tool is generally not run. Hence, there is a need for simple indirect techniques that can be applied reliably. In this study, cross-plots between the available measured shear travel times and compressional travel times from three oil wells were used, and three artificial intelligence tools (fuzzy logic, multiple linear regression and neural networks) were applied to predict the shear travel time of the Facha member (Gir Formation, Lower Eocene) in the Sirte Basin, Libya. The predicted times were compared with those obtained from Brocher's equation. The basic wireline data (gamma ray, neutron porosity, bulk density and compressional travel time) of five oil wells were used. Based on principal component analysis, two wireline data sets were chosen to build intelligent models for the prediction of shear travel time. Limestone, dolomite, dolomitic limestone and anhydrite are the main lithofacies in the Facha member, which has an average thickness of about 66 m. The derived simple equation gave an 87% goodness of fit to the measured shear travel time logs. Brocher's equation yielded adequate results, the most accurate being for the Facha member in the eastern part of the Sirte Basin. In contrast, the three intelligent tools' predictions of shear travel time conformed to the measured log, except in the eastern area of the basin.
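The Brocher baseline used for comparison above maps compressional velocity to shear velocity with a single empirical polynomial (Brocher's 2005 regression fit, valid for roughly 1.5 < Vp < 8 km/s). A minimal sketch of how such a baseline shear slowness (DTS) could be derived from a compressional slowness log; the µs/ft unit convention is an assumption, not taken from the paper:

```python
# Sketch: predicting shear slowness (DTS) from compressional slowness (DTC)
# with Brocher's (2005) empirical Vs(Vp) regression, as a simple baseline
# against which intelligent models can be compared. Unit handling
# (µs/ft log slowness) is an illustrative assumption.

US_FT_TO_KM_S = 304.8  # slowness in µs/ft -> velocity in km/s via 304.8/DT

def brocher_vs(vp_km_s: float) -> float:
    """Brocher (2005) regression: Vs in km/s from Vp in km/s (1.5 < Vp < 8)."""
    return (0.7858 - 1.2344 * vp_km_s + 0.7949 * vp_km_s**2
            - 0.1238 * vp_km_s**3 + 0.0064 * vp_km_s**4)

def dts_from_dtc(dtc_us_ft: float) -> float:
    """Shear slowness (µs/ft) predicted from compressional slowness (µs/ft)."""
    vp = US_FT_TO_KM_S / dtc_us_ft        # slowness -> velocity
    vs = brocher_vs(vp)                   # empirical Vs(Vp)
    return US_FT_TO_KM_S / vs             # velocity -> slowness

print(round(dts_from_dtc(67.0), 1))  # a fast carbonate: Vp ≈ 4.55 km/s
```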

Electronics
2021
Vol 11 (1)
pp. 106
Author(s):
Irfan Ahmed
Indika Kumara
Vahideh Reshadat
A. S. M. Kayes
Willem-Jan van den Heuvel
...

Travel time information is used as input or auxiliary data for tasks such as dynamic navigation, infrastructure planning, congestion control, and accident detection. Various data-driven Travel Time Prediction (TTP) methods have been proposed in recent years. One of the most challenging tasks in TTP is developing and selecting the most appropriate prediction algorithm. Existing studies that empirically compare TTP models use only a few models with specific features. Moreover, there is a lack of research on explaining the predictions made by black-box TTP models, even though such explanations can help to tune and apply TTP methods successfully. To fill these gaps in the TTP literature, we compare, on three data sets, three types of TTP methods (ensemble tree-based learning, deep neural networks, and hybrid models) and ten prediction algorithms overall. Furthermore, we apply Explainable Artificial Intelligence (XAI) methods (SHAP and LIME) to understand and interpret the models' predictions. The prediction accuracy and reliability of all models are evaluated and compared. We observed that the ensemble learning methods, i.e., XGBoost and LightGBM, are the best-performing models over the three data sets, and that XAI methods can adequately explain how various spatial and temporal features influence travel time.
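The XAI step described above asks, model-agnostically, how much each feature drives the travel time prediction. A minimal sketch of that idea, using a synthetic travel time model and permutation importance as a lightweight proxy for SHAP/LIME-style attribution; the feature names and the data-generating model are made-up assumptions:

```python
# Sketch: model-agnostic feature attribution for a travel time predictor.
# Permutation importance stands in for SHAP/LIME: shuffling an influential
# feature should degrade predictions much more than shuffling a weak one.
import numpy as np

rng = np.random.default_rng(0)
features = ["distance_km", "hour_of_day", "rain_mm"]
X = rng.uniform(0, 1, size=(500, 3))
# Assumed ground truth: travel time driven mostly by distance, then hour.
y = 30 * X[:, 0] + 8 * np.sin(2 * np.pi * X[:, 1]) + 2 * X[:, 2]

# A fitted "model": least squares on a nonlinear basis of the features.
basis = lambda Z: np.column_stack([Z[:, 0], np.sin(2 * np.pi * Z[:, 1]), Z[:, 2]])
coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)
predict = lambda Z: basis(Z) @ coef

def permutation_importance(X, y, n_repeats=20):
    """Mean increase in MSE when one feature column is shuffled."""
    base_mse = np.mean((predict(X) - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            deltas.append(np.mean((predict(Xp) - y) ** 2) - base_mse)
        scores.append(float(np.mean(deltas)))
    return scores

for name, s in zip(features, permutation_importance(X, y)):
    print(f"{name}: {s:.2f}")
```

As expected under the assumed model, distance dominates, hour of day matters less, and rain barely registers.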


Author(s):  
Bhaven Naik ◽  
Laurence R. Rilett ◽  
Justice Appiah ◽  
Lubinda F. Walubita

To a large extent, methods of forecasting travel time have placed emphasis on the quality of the forecasted value: how close is the forecast point estimate of the mean travel time to its respective field value? However, understanding the reliability or uncertainty margin around the forecasted point estimate is also important. Uncertainty about travel time is a fundamental factor, as it leads end-users to change their routes and schedules even when the average travel time is low. Statistical resampling methods have previously been used for uncertainty modeling within the travel time prediction environment. This paper applies a recently developed nonparametric resampling method, the gap bootstrap, to the travel time uncertainty estimation problem, especially as it pertains to large (probe) data sets for which common resampling methods may not be practical because of the computational burden and complex patterns of inhomogeneity. The gap bootstrap partitions the original data into smaller groups of approximately uniform data sets and recombines the individual group uncertainty estimates into a single estimate of uncertainty. The gap bootstrap uncertainty estimates are compared with those of two popular resampling methods: the traditional bootstrap and the block bootstrap. The results suggest that, for the data sets used in this research, the gap bootstrap adequately captures the dependence structure when compared with the traditional and block bootstrap methods and may thus yield more credible estimates of uncertainty than either of them.
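The partition-and-recombine scheme described above can be sketched in a few lines: bootstrap each approximately homogeneous group separately, then pool the per-group variances into one uncertainty estimate for the overall mean travel time. The pooling rule below (weighted combination of group variances under independence) is an illustrative assumption, not the authors' exact estimator:

```python
# Sketch of the gap bootstrap idea: per-group bootstrap variances of the
# group means are combined into a standard error for the overall mean.
import numpy as np

rng = np.random.default_rng(42)

def group_bootstrap_var(x, n_boot=500):
    """Bootstrap variance of the sample mean within one group."""
    means = [np.mean(rng.choice(x, size=len(x), replace=True))
             for _ in range(n_boot)]
    return np.var(means)

def gap_bootstrap_se(groups):
    """Std. error of the overall mean from per-group bootstrap variances."""
    n = sum(len(g) for g in groups)
    # Overall mean = sum_k (n_k/n) * mean_k, groups treated as independent.
    var = sum((len(g) / n) ** 2 * group_bootstrap_var(g) for g in groups)
    return np.sqrt(var)

# Synthetic probe travel times: three time-of-day regimes, different means.
groups = [rng.normal(mu, 5, size=400) for mu in (20, 35, 50)]
print(f"overall mean SE ~ {gap_bootstrap_se(groups):.3f}")
```

Partitioning by regime keeps each resampled group approximately homogeneous, which is the point of the gap bootstrap for inhomogeneous probe data.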


2021
Vol 2021
pp. 1-18
Author(s):
Shanshan Liu
Yipeng Zhao
Zhiming Wang

Existing artificial intelligence models use single-point logging data as input features to predict shear wave travel time (DTS), which neglects the longitudinal continuity of logging data along the reservoir and lacks a multiwell data processing method. Low prediction accuracy of shear wave travel time degrades the accuracy of elastic parameters and results in inaccurate sand production prediction. This paper establishes a shear wave prediction model based on the standardization, normalization, and depth correction of conventional logging data with five artificial intelligence methods (linear regression, random forest, support vector regression, XGBoost, and ANN). Data points adjacent in depth are used as machine learning features to improve interwell applicability and single-well prediction accuracy. The results show that the model built with XGBoost using five points outperforms the other models in prediction: R² values of 0.994 and 0.964 are obtained for the training and testing sets, respectively. Every model that considers the vertical geological continuity of the reservoir predicts test-set DTS with higher accuracy than single-point prediction. The developed model provides a tool to determine geomechanical parameters and gives a preliminary indication of the possibility of sand production where shear wave travel times are not available. The implementation of the model provides an economic and reliable alternative for the oil and gas industry.
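The adjacent-depth feature construction described above amounts to a sliding window over the log curves: each training sample for a given depth stacks the readings of the conventional logs from several neighbouring depth points, so the model sees the reservoir's vertical continuity. A minimal sketch, where the window size and toy log values are illustrative assumptions:

```python
# Sketch: build depth-window features from conventional logs so that each
# sample carries the 2*half+1 neighbouring readings, not a single point.
import numpy as np

def window_features(logs: np.ndarray, half: int = 2) -> np.ndarray:
    """logs: (n_depths, n_logs) array, e.g. columns GR, NPHI, RHOB, DTC.
    Returns (n_depths - 2*half, n_logs * (2*half + 1)): one row per interior
    depth, with the window of neighbouring readings concatenated."""
    n, _ = logs.shape
    rows = [logs[i - half:i + half + 1].ravel() for i in range(half, n - half)]
    return np.asarray(rows)

# Toy log: 10 depth samples of 4 conventional logs.
logs = np.arange(40, dtype=float).reshape(10, 4)
X = window_features(logs, half=2)   # five-point window, as in the paper
print(X.shape)  # (6, 20)
```

The edge depths lose coverage (no full window), which is the usual trade-off of window-based features on bounded logs.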


Econometrics
2021
Vol 9 (1)
pp. 10
Author(s):
Šárka Hudecová
Marie Hušková
Simos G. Meintanis

This article considers goodness-of-fit tests for bivariate INAR and bivariate Poisson autoregression models. The test statistics are based on an L2-type distance between two estimators of the probability generating function of the observations: one entirely nonparametric, and the other semiparametric, computed under the corresponding null hypothesis. The asymptotic distribution of the proposed test statistics is derived both under the null hypothesis and under alternatives, and consistency is proved. The case of testing bivariate generalized Poisson autoregression and the extension of the methods to dimensions higher than two are also discussed. The finite-sample performance of a parametric bootstrap version of the tests is illustrated via a series of Monte Carlo experiments. The article concludes with applications to real data sets and discussion.
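The L2-type distance above compares two estimates of the probability generating function (PGF) of bivariate counts. A minimal sketch of that construction, in which independent Poissons with the sample means stand in for the article's semiparametric null estimator (an illustrative assumption, not the authors' model):

```python
# Sketch: L2 distance between the empirical bivariate PGF and the PGF
# implied by a simple fitted null model, on a grid over [0,1]^2.
import numpy as np

def empirical_pgf(x, u1, u2):
    """x: (n, 2) counts. Nonparametric PGF: mean of u1^X1 * u2^X2."""
    return np.mean(u1 ** x[:, 0] * u2 ** x[:, 1])

def poisson_pgf(lam1, lam2, u1, u2):
    """PGF of independent Poisson(lam1) and Poisson(lam2)."""
    return np.exp(lam1 * (u1 - 1) + lam2 * (u2 - 1))

def l2_distance(x, grid=21):
    """Approximate the integral over [0,1]^2 of the squared PGF difference."""
    lam1, lam2 = x.mean(axis=0)
    u = np.linspace(0, 1, grid)
    diffs = [(empirical_pgf(x, a, b) - poisson_pgf(lam1, lam2, a, b)) ** 2
             for a in u for b in u]
    return float(np.mean(diffs))

rng = np.random.default_rng(1)
x = rng.poisson([2.0, 3.0], size=(1000, 2))  # null-like data: distance small
print(f"{l2_distance(x):.5f}")
```

Under the null the two PGF estimates agree up to sampling noise, so the distance shrinks with the sample size; dependence or misspecification inflates it.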


2021
pp. 002203452110138
Author(s):
C.M. Mörch
S. Atsu
W. Cai
X. Li
S.A. Madathil
...

Dentistry increasingly integrates artificial intelligence (AI) to help improve the current state of clinical dental practice. However, this revolutionary technological field raises various complex ethical challenges. The objective of this systematic scoping review is to document the current uses of AI in dentistry and the ethical concerns or challenges they imply. Three health care databases (MEDLINE [PubMed], SciVerse Scopus, and Cochrane Library) and 2 computer science databases (ArXiv, IEEE Xplore) were searched. After identifying 1,553 records, the documents were filtered, and a full-text screening was performed. In total, 178 studies were retained and analyzed by 8 researchers specialized in dentistry, AI, and ethics. The team used Covidence for data extraction and Dedoose for the identification of ethics-related information. PRISMA guidelines were followed. Among the included studies, 130 (73.0%) were published after 2016, and 93 (52.2%) were published in journals specialized in computer science. The technologies used were neural learning techniques in 75 studies (42.1%), traditional learning techniques in 76 (42.7%), or a combination of several technologies in 20 (11.2%). Overall, 7 countries contributed 109 (61.2%) studies. A total of 53 different applications of AI in dentistry were identified, involving most dental specialties. The use of initial data sets for internal validation was reported in 152 (85.4%) studies. Forty-five ethical issues related to the use of AI in dentistry were reported in 22 (12.4%) studies, around 6 principles: prudence (10 times), equity (8), privacy (8), responsibility (6), democratic participation (4), and solidarity (4). The proportion of studies mentioning AI-related ethical issues has remained similar in recent years, indicating no growing attention to this topic within dentistry.
This study confirms the growing presence of AI in dentistry and highlights a current lack of information on the ethical challenges surrounding its use. In addition, the scarcity of studies sharing their code could prevent future replications. The authors formulate recommendations to contribute to a more responsible use of AI technologies in dentistry.


Author(s):
Daniel Overhoff
Peter Kohlmann
Alex Frydrychowicz
Sergios Gatidis
Christian Loewe
...

Purpose The DRG-ÖRG IRP (Deutsche Röntgengesellschaft-Österreichische Röntgengesellschaft international radiomics platform) is a web-/cloud-based radiomics platform based on a public-private partnership. It offers the possibility of data sharing, annotation, validation and certification in the field of artificial intelligence, radiomics analysis, and integrated diagnostics. In a first proof-of-concept study, automated myocardial segmentation and automated detection of myocardial late gadolinium enhancement (LGE) using radiomic image features were evaluated on myocarditis data sets. Materials and Methods The DRG-ÖRG IRP can be used to create quality-assured, structured image data in combination with clinical data and subsequent integrated data analysis, and is characterized by the following performance criteria: the possibility of using multicentric networked data, automatically calculated quality parameters, processing of annotation tasks, contour recognition using conventional and artificial intelligence methods, and the possibility of targeted integration of algorithms. In a first study, a neural network pre-trained on cardiac CINE data sets was evaluated for segmentation of PSIR data sets. In a second step, radiomic features were applied for segmental LGE detection on the same data sets, which were provided multicenter via the IRP. Results First results show the advantages (data transparency, reliability, broad involvement of all members, continuous evolution, as well as validation and certification) of this platform-based approach. In the proof-of-concept study, the neural network achieved a Dice coefficient of 0.813 against the expert's segmentation of the myocardium.
In segment-based myocardial LGE detection, the AUC was 0.73, and 0.79 after exclusion of segments with uncertain annotation. The evaluation and provision of the data take place on the IRP, taking into account the FAT (fairness, accountability, transparency) and FAIR (findable, accessible, interoperable, reusable) criteria. Conclusion It could be shown that the DRG-ÖRG IRP can serve as a crystallization point for the generation of further individual and joint projects. The execution of quantitative analyses with artificial intelligence methods is greatly facilitated by the platform approach of the DRG-ÖRG IRP, since pre-trained neural networks can be integrated and scientific groups can be networked. In a first proof-of-concept study on automated segmentation of the myocardium and automated myocardial LGE detection, these advantages were successfully demonstrated. Our study shows that with the DRG-ÖRG IRP, strategic goals can be implemented in an interdisciplinary way, concrete proof-of-concept examples can be demonstrated, and a large number of individual and joint projects can be realized in a participatory way involving all groups.
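The Dice coefficient reported above measures the overlap between the network's myocardial segmentation and the expert mask: Dice = 2|A∩B| / (|A| + |B|). A minimal sketch on toy binary masks (the masks here are made up, not the study's data):

```python
# Sketch: Dice similarity between two binary segmentation masks.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient of two boolean masks of equal shape."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True    # 16 px
truth = np.zeros((8, 8), dtype=bool); truth[3:7, 3:7] = True  # 16 px, shifted
print(round(dice(pred, truth), 3))  # overlap 3x3 = 9 px -> 2*9/32 = 0.562
```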


2021
Vol 5 (1)
pp. 10
Author(s):
Mark Levene

A bootstrap-based hypothesis test of the goodness-of-fit for the marginal distribution of a time series is presented. Two metrics, the empirical survival Jensen–Shannon divergence (ESJS) and the Kolmogorov–Smirnov two-sample test statistic (KS2), are compared on four data sets—three stablecoin time series and a Bitcoin time series. We demonstrate that, after applying first-order differencing, all the data sets fit heavy-tailed α-stable distributions with 1<α<2 at the 95% confidence level. Moreover, ESJS is more powerful than KS2 on these data sets, since the widths of the derived confidence intervals for KS2 are, proportionately, much larger than those of ESJS.
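The two metrics compared above can be sketched on synthetic data: the two-sample Kolmogorov–Smirnov statistic (KS2) and a Jensen–Shannon-style divergence between empirical survival functions in the spirit of the ESJS. The exact ESJS normalization used by the author is not reproduced here; this rendering is an illustrative assumption:

```python
# Sketch: KS2 and an ESJS-style divergence between two samples.
import numpy as np

def ecdf(sample, t):
    """Empirical CDF of `sample` evaluated at points t."""
    return np.searchsorted(np.sort(sample), t, side="right") / len(sample)

def ks2(x, y):
    """Two-sample KS statistic: max gap between the empirical CDFs."""
    t = np.concatenate([x, y])
    return np.max(np.abs(ecdf(x, t) - ecdf(y, t)))

def esjs(x, y):
    """JS-style divergence between empirical survival functions S = 1 - F."""
    t = np.sort(np.concatenate([x, y]))
    sx, sy = 1 - ecdf(x, t), 1 - ecdf(y, t)
    m = 0.5 * (sx + sy)
    def kl(p, q):
        mask = (p > 0) & (q > 0)
        return np.sum(p[mask] * np.log(p[mask] / q[mask]))
    return 0.5 * kl(sx, m) + 0.5 * kl(sy, m)

rng = np.random.default_rng(7)
x, y = rng.normal(0, 1, 400), rng.normal(0.5, 1, 400)
print(f"KS2 = {ks2(x, y):.3f}, ESJS-style = {esjs(x, y):.3f}")
```

Both metrics are zero for identical samples and grow with distributional difference; the paper's power comparison rests on the relative widths of their bootstrap confidence intervals.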

