Drilling in Slips: Strategies to Measure the Invisible Lost Time, Technical Limits Definition, Using Standard Analytics and Machine Learning Algorithms

2021 ◽  
Author(s):  
Silvia Mora ◽  
Damian Martinez

Abstract Drilling is arguably the most critical, complex, and costly operation in the oil and gas industry, and errors made during the related activities are very expensive. Inefficient drilling activities, such as connections that take longer than the optimal time, can have a considerable financial impact, so there is a constant need to improve drilling efficiency. Measuring the behavior and duration of drilling activities therefore represents a significant opportunity to maximize cost savings per well or campaign. Reducing the cost impact and maximizing drilling efficiency depend on how the perfect well time is calculated from the technical limit, the non-productive time (NPT), and the invisible lost time (ILT) in an operating company's drilling plan. This paper compares different approaches to measuring the invisible lost time that may be present in the in-slips activity of a drilling operation; the results show the differences between multiple techniques applied in real environments on a cloud platform. The methodologies implemented cover the following scenarios. The first combines a custom technical limit based on technical experience, a historical data limit derived from standard statistics (mean, median, quartiles, standard deviation, etc.), and a depth-range (phase) differentiation into initial, intermediate, and final hole sizes. A more complex comparison uses the rig stand and phase footage variables to define a baseline (count and duration) per phase, excludes non-productive time activities, and applies data-replacement techniques combined with detection of out-of-standard in-slips behavior (motor assemblies, bit replacement, bottom hole assembly (BHA) changes, etc.) using both standard and machine learning mechanisms. A final methodology defines the in-slips ILT technical limit using machine learning. The results from the different methods, obtained on the same data set (set of wells), are evaluated according to the total invisible lost time calculated per phase, the percentage of activities flagged with invisible lost time per phase, and the variation of ILT with respect to the activities defining the technical limit. Finally, any operator can assess the potential implementation of these methodologies according to their specific requirements. This analysis provides operating companies with a guideline on multiple techniques to calculate ILT, some using innovative procedures based on machine learning models.
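As a rough illustration of the statistical baseline scenario only (not the authors' implementation), a per-phase in-slips technical limit can be derived from historical quartiles, with ILT taken as the time spent above that limit after excluding NPT activities. The column names, the boolean NPT flag, and the choice of the 25th percentile as the limit are assumptions.

```python
import pandas as pd

def in_slips_ilt_by_phase(df: pd.DataFrame,
                          duration_col: str = "slips_duration_min",
                          phase_col: str = "phase",
                          npt_col: str = "is_npt") -> pd.DataFrame:
    """Hypothetical per-phase ILT estimate: technical limit = 25th percentile
    of historical in-slips durations (an assumed choice); ILT = excess time
    above that limit, computed after excluding NPT-flagged activities."""
    clean = df[~df[npt_col]]                      # exclude non-productive time
    limits = clean.groupby(phase_col)[duration_col].quantile(0.25)
    rows = []
    for phase, group in clean.groupby(phase_col):
        limit = limits[phase]
        excess = (group[duration_col] - limit).clip(lower=0)
        rows.append({
            "phase": phase,
            "technical_limit_min": limit,
            "total_ilt_min": excess.sum(),
            "pct_activities_with_ilt": 100 * (excess > 0).mean(),
        })
    return pd.DataFrame(rows)
```

The same skeleton accepts a machine-learning-derived limit instead of the quantile, which is the difference between the first and the final scenarios described above.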

2020 ◽  
Vol 9 (3) ◽  
pp. 34
Author(s):  
Giovanna Sannino ◽  
Ivanoe De Falco ◽  
Giuseppe De Pietro

One of the most important physiological parameters of the cardiovascular circulatory system is Blood Pressure. Several diseases are related to long-term abnormal blood pressure, i.e., hypertension; therefore, the early detection and assessment of this condition are crucial. The identification of hypertension, and even more the evaluation of its risk stratification, by using wearable monitoring devices is now more realistic thanks to advancements in the Internet of Things, improvements in digital sensors that are becoming more and more miniaturized, and the development of new signal processing and machine learning algorithms. In this scenario, a suitable biomedical signal is the PhotoPlethysmoGraphy (PPG) signal. It can be acquired by using a simple, cheap, and wearable device, and can be used to evaluate several aspects of the cardiovascular system, e.g., the detection of abnormal heart rate, respiration rate, blood pressure, oxygen saturation, and so on. In this paper, we consider the Cuff-Less Blood Pressure Estimation Data Set, which contains, among others, PPG signals coming from a set of subjects, as well as the corresponding Blood Pressure values, i.e., the subjects' hypertension level. Our aim is to investigate whether or not machine learning methods applied to these PPG signals can provide better results for the non-invasive classification and evaluation of subjects' hypertension levels. To this aim, we have availed ourselves of a wide set of machine learning algorithms, based on different learning mechanisms, and have compared their results in terms of the effectiveness of the classification obtained.
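As a minimal sketch of the comparison idea (not the authors' pipeline), several classifiers can be evaluated on feature vectors extracted from PPG windows. The toy feature extraction, the label encoding of hypertension levels, and the particular model choices below are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

def ppg_features(window: np.ndarray) -> np.ndarray:
    """Toy statistical features of one PPG window: mean, std, a skew proxy, range."""
    return np.array([window.mean(), window.std(),
                     np.mean((window - window.mean()) ** 3),
                     window.max() - window.min()])

def compare_classifiers(windows: np.ndarray, labels: np.ndarray) -> dict:
    # labels assumed encoded, e.g. 0 = normotension, 1 = prehypertension, 2 = hypertension
    X = np.vstack([ppg_features(w) for w in windows])
    models = {
        "logreg": LogisticRegression(max_iter=1000),
        "rf": RandomForestClassifier(n_estimators=200, random_state=0),
        "knn": KNeighborsClassifier(n_neighbors=5),
    }
    return {name: cross_val_score(make_pipeline(StandardScaler(), m),
                                  X, labels, cv=5).mean()
            for name, m in models.items()}
```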


2021 ◽  
Vol 1 ◽  
pp. 1183-1192
Author(s):  
Sebastian Bickel ◽  
Benjamin Schleich ◽  
Sandro Wartzack

Abstract Data-driven methods from the field of Artificial Intelligence or Machine Learning are increasingly applied in mechanical engineering. This reflects the development of digital engineering in recent years, which aims to bring these methods into practice in order to realize cost and time savings. However, a necessary step towards the implementation of such methods is the utilization of existing data. This step is essential because the mere availability of data does not automatically imply data usability. Therefore, this paper presents a method to automatically recognize symbols from principle sketches, which allows the generation of training data for machine learning algorithms. In this approach, the symbols are created randomly and their illustration varies with each generation. A deep learning network from the field of computer vision is used to test the generated data set and thus to recognize symbols on principle sketches. This type of drawing is especially interesting because the cost-saving potential is very high due to the application in the early phases of the product development process.
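As an illustrative sketch only (not the authors' generator), randomized renderings of a schematic symbol can be produced so that each training sample looks slightly different. The zig-zag "spring" symbol, the parameter ranges, and the labeling scheme are made-up stand-ins.

```python
import random
import numpy as np
from PIL import Image, ImageDraw

def random_symbol_image(size: int = 64) -> tuple:
    """Return (image array in [0, 1], class label) for one randomized symbol."""
    img = Image.new("L", (size, size), color=255)
    draw = ImageDraw.Draw(img)
    # Zig-zag "spring" with randomized position, amplitude, and line width.
    x0 = random.randint(5, 15)
    y = random.randint(20, 40)
    amp = random.randint(5, 12)
    width = random.choice([1, 2, 3])
    points = [(x0 + i * 6, y + (amp if i % 2 else -amp)) for i in range(8)]
    draw.line(points, fill=0, width=width)
    img = img.rotate(random.uniform(-15, 15), fillcolor=255)  # small random rotation
    label = 0  # class id for this symbol type (assumed labeling scheme)
    return np.asarray(img, dtype=np.float32) / 255.0, label
```

A batch of such images, one class per symbol type, is the kind of synthetic data set a computer-vision network could then be trained and tested on.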


2021 ◽  
Vol 3 (1) ◽  
Author(s):  
Zhikuan Zhao ◽  
Jack K. Fitzsimons ◽  
Patrick Rebentrost ◽  
Vedran Dunjko ◽  
Joseph F. Fitzsimons

Abstract Machine learning has recently emerged as a fruitful area for finding potential quantum computational advantage. Many of the quantum-enhanced machine learning algorithms critically hinge upon the ability to efficiently produce states proportional to high-dimensional data points stored in a quantum accessible memory. Even given query access to exponentially many entries stored in a database, the construction of which is considered a one-off overhead, it has been argued that the cost of preparing such amplitude-encoded states may offset any exponential quantum advantage. Here we prove, using smoothed analysis, that if the data analysis algorithm is robust against small entry-wise input perturbation, state preparation can always be achieved with a constant number of queries. This criterion is typically satisfied in realistic machine learning applications, where input data is subject to moderate noise. Our results are equally applicable to the recent seminal progress in quantum-inspired algorithms, where specially constructed databases suffice for polylogarithmic classical algorithms in the low-rank case. The consequence of our finding is that, for the purpose of practical machine learning, polylogarithmic processing time is possible under a general and flexible input model with quantum algorithms, or with quantum-inspired classical algorithms in the low-rank case.
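For context, the amplitude-encoded state referred to above is the standard encoding of a classical data vector into quantum amplitudes; the notation below is the textbook definition, not an expression taken from the paper.

```latex
% Amplitude encoding of a data vector x \in \mathbb{R}^N with N = 2^n:
% the state whose preparation cost is at issue in the argument above.
\[
  |x\rangle \;=\; \frac{1}{\lVert x \rVert_2} \sum_{i=1}^{N} x_i \, |i\rangle ,
  \qquad
  \lVert x \rVert_2 = \Big( \sum_{i=1}^{N} |x_i|^2 \Big)^{1/2} .
\]
```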


2021 ◽  
Vol 30 (1) ◽  
pp. 460-469
Author(s):  
Yinying Cai ◽  
Amit Sharma

Abstract Efficient machinery and equipment play an important role in the development and growth of agriculture. Numerous research studies and patents aim to support smart agriculture, and machine learning technologies are providing strong support for this growth. To explore machine learning technology and machine learning algorithms, most of the applications studied here are based on swarm intelligence optimization. An optimized V3CFOA-RF model is built through V3CFOA. The algorithm is tested on a data set collected on rice pests and is then analyzed and compared in detail with other existing algorithms. The results show that the proposed model and algorithm are not only more accurate in recognition and prediction, but also alleviate the time-lag problem to a degree. The model and algorithm achieve higher accuracy in crop pest prediction, which supports a more stable and higher output of rice. Thus, they can be employed as an important decision-making instrument in the agricultural production sector.
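The paper's V3CFOA variant is not reproduced here; purely as a rough illustration of the general pattern of swarm-optimized Random Forest hyperparameters, a minimal sketch with randomly scattered candidate "fly" positions might look like the following. All parameter ranges, step sizes, and iteration counts are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def swarm_tuned_rf(X, y, n_flies: int = 10, n_iter: int = 20, seed: int = 0):
    """Toy swarm-style search over (n_estimators, max_depth);
    not the paper's V3CFOA, just the general idea."""
    rng = np.random.default_rng(seed)
    best_params, best_score = None, -np.inf
    center = np.array([100.0, 10.0])                 # initial swarm center
    for _ in range(n_iter):
        for _ in range(n_flies):
            cand = center + rng.normal(scale=[30.0, 3.0])
            n_est = int(np.clip(cand[0], 10, 500))
            depth = int(np.clip(cand[1], 2, 30))
            model = RandomForestClassifier(n_estimators=n_est,
                                           max_depth=depth, random_state=seed)
            score = cross_val_score(model, X, y, cv=3).mean()
            if score > best_score:
                best_score, best_params = score, (n_est, depth)
        center = np.array(best_params, dtype=float)  # flies move toward the best position
    return best_params, best_score
```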


Author(s):  
Aska E. Mehyadin ◽  
Adnan Mohsin Abdulazeez ◽  
Dathar Abas Hasan ◽  
Jwan N. Saeed

The bird classifier is a system equipped with machine learning technology that stores and classifies bird calls. Bird species can be identified by recording only the sound of the bird, which makes the system easier to manage. The system also provides species classification resources to allow automated species detection from observations, teaching a machine how to recognize and classify the species. Undesirable noises are filtered out and the recordings are sorted into data sets, where each sound is passed through a noise suppression filter and a separate classification procedure so that the most useful data set can be processed easily. Mel-frequency cepstral coefficients (MFCC) are used as features and tested with different algorithms, namely Naïve Bayes, J4.8 and Multilayer Perceptron (MLP), to classify bird species. J4.8 achieves the highest accuracy (78.40%), with an elapsed time of 39.4 seconds.
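A minimal sketch of the feature-and-compare step (not the authors' code) is shown below: MFCC features per recording, compared across Naïve Bayes, a C4.5-style decision tree used as a stand-in for Weka's J4.8, and an MLP. File paths, the averaging of MFCC frames, and the model settings are assumptions.

```python
import numpy as np
import librosa
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def mfcc_vector(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Load one recording and average its MFCC frames into a fixed-length vector."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def compare_bird_classifiers(paths, labels) -> dict:
    X = np.vstack([mfcc_vector(p) for p in paths])
    models = {
        "naive_bayes": GaussianNB(),
        "decision_tree": DecisionTreeClassifier(criterion="entropy"),  # C4.5-like stand-in
        "mlp": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000),
    }
    return {name: cross_val_score(m, X, labels, cv=5).mean()
            for name, m in models.items()}
```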


2021 ◽  
Vol 3 (2) ◽  
pp. 43-50
Author(s):  
Safa SEN ◽  
Sara Almeida de Figueiredo

Predicting bank failures has been an essential subject in the literature due to the significance of banks for the economic prosperity of a country. Acting as intermediary players of the economy, banks channel funds between creditors and debtors. In that sense, banks are considered the backbone of economies; hence, it is important to create early warning systems that distinguish insolvent banks from solvent ones. Insolvent banks can then apply for assistance and avoid bankruptcy in financially turbulent times. In this paper, we focus on two different machine learning disciplines, Boosting and Cost-Sensitive methods, to predict bank failures. Boosting methods are widely used in the literature due to their better prediction capability. However, Cost-Sensitive Forest is relatively new to the literature and was originally invented to solve imbalance problems in software defect detection. Our results show that, compared to the boosting methods, Cost-Sensitive Forest classifies failed banks more accurately in particular. Thus, we suggest using the Cost-Sensitive Forest when predicting bank failures with imbalanced datasets.
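A sketch of the comparison idea only: a boosting baseline versus a cost-sensitive forest approximated here with class weights (the original Cost-Sensitive Forest algorithm differs; the cost ratio and the label convention are assumptions).

```python
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_validate

def compare_failure_models(X, y):
    """Assumes binary labels with 1 = failed bank, 0 = solvent bank."""
    models = {
        "boosting": GradientBoostingClassifier(),
        "cost_sensitive_forest": RandomForestClassifier(
            n_estimators=300,
            class_weight={0: 1, 1: 10},   # assumed: misclassifying a failed bank is 10x costlier
            random_state=0),
    }
    scoring = ["recall", "roc_auc"]        # recall on the failed class matters most here
    return {name: cross_validate(m, X, y, cv=5, scoring=scoring)
            for name, m in models.items()}
```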


Author(s):  
Virendra Tiwari ◽  
Balendra Garg ◽  
Uday Prakash Sharma

Machine learning algorithms are capable of managing multi-dimensional data in dynamic environments. Despite their many valuable features, there are challenges to overcome. Machine learning algorithms still require additional mechanisms or procedures for predicting a large number of new classes while preserving privacy. These deficiencies show that the reliable use of a machine learning algorithm relies on human experts, because raw data may complicate the learning process and generate inaccurate results. The interpretation of outcomes therefore requires expertise in machine learning mechanisms, which is a significant challenge. Machine learning techniques suffer from issues of high dimensionality, adaptability, distributed computing, scalability, streaming data, and duplicity. The main issue of machine learning algorithms is their vulnerability in managing errors; furthermore, machine learning techniques are also found to lack variability. This paper studies how the computational complexity of machine learning algorithms can be reduced by investigating how to make predictions using an improved algorithm.
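The paper's improved algorithm is not specified in the abstract; as a generic illustration of one common way to reduce the computational burden of a learner on high-dimensional data, the features can be projected first and the model trained on the reduced space. The component count and classifier choice below are assumptions.

```python
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import SGDClassifier

def reduced_complexity_pipeline(n_components: int = 50):
    """Train on an n_components-dimensional PCA projection instead of the raw
    feature space; the component count is an assumed, tunable choice."""
    return make_pipeline(PCA(n_components=n_components),
                         SGDClassifier(loss="log_loss", max_iter=1000))
```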


Author(s):  
Jakub Gęca

The consequences of failures and unscheduled maintenance are the reasons why engineers have been trying to increase the reliability of industrial equipment for years. In modern solutions, predictive maintenance is a frequently used method: it makes it possible to forecast failures and to alert about their likelihood. This paper presents a summary of the machine learning algorithms that can be used in predictive maintenance and a comparison of their performance. The analysis was made on the basis of a data set from the Microsoft Azure AI Gallery. The paper presents a comprehensive approach to the issue, including feature engineering, preprocessing, dimensionality reduction techniques, as well as tuning of model parameters in order to obtain the highest possible performance. The conducted research leads to the conclusion that, in the analysed case, the best algorithm achieved 99.92% accuracy on over 122 thousand test data records. In conclusion, predictive maintenance based on machine learning represents the future of machine reliability in industry.
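A condensed sketch of the kind of pipeline described, not the author's code: rolling-window telemetry features, scaling, dimensionality reduction, and a tuned classifier. The sensor column names, machine identifier, window length, and parameter grid are assumptions.

```python
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

def engineer_features(telemetry: pd.DataFrame) -> pd.DataFrame:
    """Add 24-sample rolling means/stds per machine for each assumed sensor column."""
    sensors = ["volt", "rotate", "pressure", "vibration"]   # assumed column names
    rolled = (telemetry.groupby("machineID")[sensors]
              .rolling(24, min_periods=1).agg(["mean", "std"])
              .reset_index(level=0, drop=True))
    rolled.columns = [f"{col}_{stat}" for col, stat in rolled.columns]
    return telemetry.join(rolled)

def tuned_model() -> GridSearchCV:
    """Scale, reduce to 95% explained variance, then grid-search a forest."""
    pipe = Pipeline([("scale", StandardScaler()),
                     ("pca", PCA(n_components=0.95)),
                     ("clf", RandomForestClassifier(random_state=0))])
    grid = {"clf__n_estimators": [100, 300],
            "clf__max_depth": [None, 20]}
    return GridSearchCV(pipe, grid, cv=3, scoring="accuracy")
```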


2021 ◽  
Author(s):  
Vallet Laurent ◽  
Gutarov Pavel ◽  
Chevallier Bertrand ◽  
Converset Julien ◽  
Paterson Graeme ◽  
...  

Abstract In the current economic environment, delivering wells on time and on budget is paramount. Well construction is a significant cost of any field development, and it is more important than ever to minimize these costs and to avoid unnecessary lost time and non-productive time. Invisible lost time and non-productive time can represent as much as 40% of the cost of well construction and can lead to more severe issues such as delayed first oil, loss of the well, or environmental impact. Much work has gone into developing systems to optimize well construction, but the industry still fails to routinely detect and avoid problematic events such as stuck pipe, kicks, losses and washouts. Standardizing drilling practice can also help to improve efficiency: such standardization has shown a 30% cost reduction through repetitive and systematic practices, automation is the key process to realize it, and machine learning introduced by new technologies is the key to achieving it. Drilling data analysis is key to understanding the reasons for poor performance and to detecting potential downhole events at an early stage. It can be done efficiently by giving the user tools to look at the well construction process as a whole, instead of only the last few hours as is done at the rig site. In order to analyze the drilling data, it is necessary to have access to reliable data in real time and to compare it with a data model that considers the context (BHA, fluids, well geometry). Well planning, including multi-well offset analysis of risks, drilling processes and geology, enables a user to look at the full well construction process and define levels of automation. This paper applies machine learning to a post multi-well analysis of a deepwater field development known for its drilling challenges. Minimizing the human input through automation allowed us to compare offset wells and to define the root cause of non-productive time. In our case study, an increase of the pressure while drilling should have led to immediate mitigation measures to avoid a wiper trip. This paper presents techniques used to systematize surface data analysis and a workflow to identify a near pack-off at an early stage, which was spotted in an automatic way. The application of this process during operations could have achieved a 10% time reduction for the 12 ¼'' section.
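As a hedged sketch of the kind of automatic surface-data check described, and not the paper's actual workflow, a sustained anomalous rise in standpipe pressure while drilling can be flagged by comparing the signal against its own recent baseline. All column names, window sizes, and thresholds below are assumptions.

```python
import pandas as pd

def flag_pressure_rise(df: pd.DataFrame,
                       pressure_col: str = "standpipe_pressure",
                       rig_state_col: str = "rig_state",
                       window: int = 120,          # samples, e.g. 2 min at 1 Hz (assumed)
                       z_threshold: float = 3.0) -> pd.Series:
    """Return a boolean series marking samples where pressure is anomalously
    high versus its recent rolling baseline and still trending upward,
    restricted to periods when the rig state indicates drilling."""
    drilling = df[rig_state_col] == "drilling"
    baseline = df[pressure_col].rolling(window).mean()
    spread = df[pressure_col].rolling(window).std()
    zscore = (df[pressure_col] - baseline) / spread
    trend = df[pressure_col].diff().rolling(window // 4).mean()
    return drilling & (zscore > z_threshold) & (trend > 0)
```

In practice such a flag would be one input among several (flow, torque, hookload) before raising a pack-off warning; the single-channel check here only illustrates the early-detection idea.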

