Satellite Fields Digitalization & ALS Optimization with EDGE & Advanced Analytics Application

2021 ◽  
Author(s):  
Nitin Johri ◽  
Nimish Pandey ◽  
Sanket Kadam ◽  
Sanjeev Vermani ◽  
Shubham Agarwal ◽  
...  

Abstract Data monitoring in remote satellite fields without any DOF (Digital Oil Field) platform is challenging but critical for ALS (artificial lift system) monitoring and optimization. In SRP (sucker rod pump) wells, VFD data collection is important for analyzing downhole pump behavior and system health. The SRP maintenance crew collects data from VFDs daily, but this is time consuming and can cover only a few wells per day. The steps from requesting a dynacard to the final ALS-optimization decision are: mobilizing the team, permit approvals, downloading the data, e-mailing the dynacards, dynacard visualization, and the final decision. The problems with this process were insufficient and discrete data for any post-failure analysis or ALS optimization, and minimal data for investigating pre-failure events. The lack of real-time monitoring resulted in well downtime and associated production loss. A combination of IoT, cloud computing, and machine learning was implemented to shift from a reactive to a proactive approach, which helped in ALS optimization and reduced production loss. The data was transmitted to a cloud server and from there to a web-based app. Since thousands of dynacards are generated each day, automated classification using computer-driven pattern-recognition techniques is required. The real-time data is used for analysis involving basic statistics and machine learning algorithms. Critical pump signatures were identified using machine learning libraries, and an e-mail is generated for immediate action. Several informative dashboards were developed to provide quick analysis of ALS performance: Well Operational Status, Dynacard Interpretation module, SRP parameter visualization, Machine Learning model calibration module, and Pump Performance Statistics. After collecting enough data and creating analytical dashboards for the three wells, the insights gained using domain knowledge were applied to ALS optimization.
To keep the model in an evergreen, high-confidence prediction state, input from domain experts is often required. After regular fine-tuning, the prediction accuracy of the ML model increased to 80-85%. In addition, the system was made flexible so that a new algorithm can be deployed when required. Smart alarms combining statistics and machine learning were generated by the system, which sends alerts by e-mail if abnormal behavior or erratic dynacards are identified. This reduced well downtime in events that were previously handled on instinct. The integration of domain knowledge and digitalization enables an engineer to make informed and effective decisions. The techniques discussed above can be implemented in marginal fields where DOF implementation is logistically and economically challenging. EDGE along with advanced analytics will see further technological advances and can be applied to other potential domains in the near future.
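As a rough illustration of the automated dynacard classification step described above, the sketch below extracts a single shape feature from synthetic dynamometer cards (the enclosed card area, whose shrinkage is a classic incomplete-fillage signature) and trains a small classifier on it. The synthetic data, the area-only feature, and the k-NN choice are all assumptions for illustration; the paper does not publish its model or features.

```python
# Minimal sketch of automated dynacard classification (hypothetical data
# and features; the paper's actual model is not published).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def card_area(position, load):
    """Enclosed area of a dynamometer card via the shoelace formula.
    A shrinking area is a classic signature of incomplete pump fillage."""
    x, y = np.asarray(position), np.asarray(load)
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def make_card(fillage):
    """Synthetic elliptical card whose area scales with pump fillage."""
    t = np.linspace(0, 2 * np.pi, 200)
    pos = 1 + np.cos(t)             # normalized stroke position
    load = 1 + fillage * np.sin(t)  # load band narrows as fillage drops
    return pos, load

# Label synthetic cards: 1 = full pump, 0 = fluid pound (low fillage)
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(100):
    f = rng.uniform(0.1, 1.0)
    pos, load = make_card(f)
    X.append([card_area(pos, load)])
    y.append(int(f > 0.6))

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
full_pos, full_load = make_card(0.95)
print(clf.predict([[card_area(full_pos, full_load)]])[0])  # expect 1 (full pump)
```

A production pipeline would use richer shape descriptors (e.g. Fourier coefficients of the load curve) rather than a single area feature, but the classify-then-alert flow is the same.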

2021 ◽  
Author(s):  
Rodrigo Chamusca Machado ◽  
Fabbio Leite ◽  
Cristiano Xavier ◽  
Alberto Albuquerque ◽  
Samuel Lima ◽  
...  

Objectives/Scope This paper presents how a Brazilian drilling contractor and a startup built a partnership to optimize the maintenance window of subsea blowout preventers (BOPs) using condition-based maintenance (CBM). It showcases examples of insights into the operational condition of BOP components, obtained by applying machine learning techniques to real-time and historical data, both structured and unstructured. Methods, Procedures, Process From the unstructured and structured historical data generated daily by BOP operations, a knowledge bank was built and used to develop normal-functioning models. This has been possible even without real-time data, as it has been tested with large sets of operational data collected from event-log text files. Software retrieves the data from the event loggers and creates a structured database comprising analog variables, warnings, alarms, and system information. Using machine learning algorithms, the historical data is then used to develop normal-behavior models for the target components. Thereby, it is possible to use event-logger or real-time data to identify abnormal operating moments and detect failure patterns. Critical situations are immediately transmitted to the RTOC (Real-Time Operations Center) and the management team, while less critical alerts are recorded in the system for further investigation. Results, Observations, Conclusions During the implementation period, the drilling contractor was able to identify a BOP failure using the detection algorithms and used 100% of the information generated by the system and its reports to plan equipment maintenance efficiently. The system has also been used intensively for incident investigation, helping to identify root causes through data analytics and feeding results back into the machine learning algorithms for future automated failure predictions.
This development is expected to significantly reduce the risk of BOP retrieval during operations for corrective maintenance, increase staff efficiency in maintenance activities, reduce the risk of downtime, improve the scope of maintenance during operational windows, and reduce the cost of spare-parts replacement during maintenance, all without impact on operational safety. Novel/Additive Information For the near future, the plan is to integrate the system with the Computerized Maintenance Management System (CMMS), checking for historical maintenance, overdue maintenance, and certifications at the same place and time that we are getting real-time operational data and insights. Using real-time data as input, we expect to expand the failure-prediction application to other BOP parts (such as regulators, shuttle valves, SPMs (Submounted Plate valves), etc.) and increase its applicability to other critical equipment on the rig.


2021 ◽  
Author(s):  
Jasleen Kaur ◽  
Shruti Kapoor ◽  
Maninder Singh ◽  
Parvinderjit Singh Kohli ◽  
Urvinder Singh ◽  
...  

BACKGROUND Infectious diseases are a major cause of mortality across the globe. Tuberculosis is one such infectious disease, among the top 10 causes of death in developing as well as developed countries. Biosensors have emerged as a promising approach to early detection of pathogenic infection with accuracy and precision. However, the main challenge with biosensors is real-time data monitoring, preferably with reversible and label-free measurement of certain analytes. Integrating biosensors with an Artificial Intelligence (AI) approach would enable better acquisition of patient data in real time, allowing automatic detection and monitoring of Mycobacterium tuberculosis (M.tb) at an early stage. Here we propose a biosensor-based smart handheld device designed for automatic detection and real-time monitoring of M.tb from varied analytic sources, including DNA, proteins, and biochemical metabolites. The collected data would be continuously transferred to a connected cloud integrated with an AI-based clinical decision support system (CDSS), which may consist of machine-learning-based analysis models useful for studying patterns of disease infestation and progression, early detection, and treatment. The proposed system may be deployed in different collaborating centres for validation and for collecting real-time data. OBJECTIVE To propose a biosensor-based smart handheld device designed for automatic detection and real-time monitoring of M.tb from varied analytic sources, including DNA, proteins, and biochemical metabolites. METHODS The major challenges in the control and early detection of Mycobacterium tuberculosis were studied through a literature survey. Based on the observed challenges, the biosensor-based smart handheld device was proposed for automatic detection and real-time monitoring of M.tb from varied analytic sources, including DNA, proteins, and biochemical metabolites.
RESULTS In this viewpoint, we propose a novel application-based approach that combines AI-based machine learning algorithms with real-time data collected using biosensor technology. Such a system can serve as a point-of-care solution for early diagnosis of the disease: low cost, simple, responsive, measurable, able to diagnose and distinguish between active and latent cases, requiring only a single patient visit, causing minimal inconvenience, able to evaluate cough samples, requiring minimal material aid and experienced staff, and user-friendly. CONCLUSIONS A biosensor-based smart handheld device coupled with an AI-driven CDSS could enable automatic detection and real-time monitoring of M.tb at an early stage; deployment across collaborating centres is proposed for validating the system on real-time data.
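As a toy illustration of the machine-learning analysis model the proposed CDSS might host, the sketch below trains a classifier on hypothetical biosensor readouts. The feature names, data, and model choice are all assumptions for illustration, not a clinical model.

```python
# Illustrative sketch only: a toy CDSS model that flags likely M.tb
# infection from biosensor readouts. All features and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
# Hypothetical analyte signals: [DNA probe, antigen level, metabolite index]
healthy = rng.normal([0.2, 0.3, 0.1], 0.05, (n, 3))
infected = rng.normal([0.8, 0.7, 0.6], 0.05, (n, 3))
X = np.vstack([healthy, infected])
y = np.array([0] * n + [1] * n)

clf = LogisticRegression().fit(X, y)
reading = [[0.75, 0.65, 0.55]]       # a new handheld-device measurement
print(clf.predict(reading)[0])       # 1 = flag as likely infected
```

In the proposed architecture, this inference step would run cloud-side, with the handheld device streaming readings and the CDSS returning the flag to the clinician.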


2021 ◽  
Author(s):  
Ardiansyah Negara ◽  
Arturo Magana-Mora ◽  
Khaqan Khan ◽  
Johannes Vossen ◽  
Guodong David Zhan ◽  
...  

Abstract This study presents a data-driven approach using machine learning algorithms to provide predicted analogues in the absence of acoustic logs, especially while drilling. Acoustic logs are commonly used to derive rock mechanical properties; however, these data are not always available. Well logging data (wireline/logging-while-drilling, LWD), such as gamma ray, density, neutron porosity, and resistivity, are used as input parameters to develop the data-driven rock mechanical models. In addition to the logging data, real-time drilling data (i.e., weight on bit, rotation speed, torque, rate of penetration, flow rate, and standpipe pressure) are used to derive the model. In the data-preprocessing stage, we labeled drilling and well logging data based on formation tops in the drilling plan and performed data cleansing to remove outliers. A set of field data from different wells across the same formation is used to build and train the predictive models. We computed feature importance to rank the data by relevance for predicting acoustic logs and applied feature-selection techniques to remove redundant features that may unnecessarily require a more complex model. An additional feature, mechanical specific energy, is also generated from the real-time drilling data to improve prediction accuracy. A number of scenarios comparing different predictive models were studied, and the results demonstrated that adding drilling data and/or feature engineering into the model could improve its accuracy.
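The mechanical specific energy feature mentioned above can be computed directly from the listed surface drilling parameters; a common formulation is Teale's formula, sketched below with assumed field units and bit size (the paper does not state which MSE variant it uses).

```python
# Sketch of the engineered MSE feature using Teale's classic formulation.
# Units (lbf, ft-lb, ft/hr, inches) and the example values are assumptions.
import numpy as np

def mse_psi(wob_lbf, rpm, torque_ftlb, rop_fthr, bit_diam_in):
    """Teale (1965): MSE = WOB/A + 120*pi*RPM*T / (A*ROP), in psi,
    where A is the bit face area in square inches."""
    area = np.pi * (bit_diam_in / 2.0) ** 2
    return wob_lbf / area + (120.0 * np.pi * rpm * torque_ftlb) / (area * rop_fthr)

# Example surface measurements for one drilling interval (illustrative)
val = mse_psi(wob_lbf=30000, rpm=120, torque_ftlb=8000,
              rop_fthr=60, bit_diam_in=8.5)
print(round(val), "psi")
```

Computed per depth interval and aligned with the labeled formation tops, this value becomes one more column alongside the LWD inputs when training the acoustic-log predictor.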


2020 ◽  
Author(s):  
Sohini Sengupta ◽  
Sareeta Mugde

BACKGROUND India reported its first Covid-19 case on 30th January 2020, with practically no significant rise in the number of cases during February; from March 2020 onwards, however, there has been a huge escalation, as in many other countries the world over. This research paper analyses COVID-19 data first at a global level and then drills down to the scenario in India. Data is gathered from multiple authentic government websites. Variables such as gender, geographical location, and age have been represented using Python and data-visualization techniques. Time-series analysis and other pattern-recognition techniques are deployed to bring more clarity to the current scenario, as the analysis is based entirely on real-time data (up to 19th June 2020). Finally, we use machine learning algorithms to perform predictive analytics for the near-future scenario. We use a sigmoid model to estimate the day on which the number of active cases can be expected to peak and when the curve will start to flatten; the strength of the sigmoid model lies in providing a date estimate, which is a unique feature of the analysis in this paper. Certain feature-engineering techniques are used to transform the data onto a logarithmic scale, as this affords better comparison by removing data extremities and outliers. Based on predictions over the short-term interval, our model can be tuned to forecast longer time intervals. Needless to say, many factors will influence the case counts in the coming days.
One factor is the extent of citizens' adherence to the rules and restrictions imposed by the Government. OBJECTIVE Prediction of the number of positive Covid-19 cases in the next few months. METHODS Machine learning models: clustering and a sigmoid model. RESULTS The model predicts a maximum of 258,846 active cases. The curve flattens by day 154, i.e. 25th September, after which the number of active cases eventually decreases. CONCLUSIONS A great deal of research is ongoing with respect to vaccines, economic dealings, precautions, and reduction of Covid-19 cases. However, we are currently in a mid-Covid situation. India, along with many other countries, is still witnessing an upsurge in the number of cases at alarming daily rates. We have not yet reached the peak; therefore, curve flattening and downward growth are also yet to happen. Each day brings fresh information and large amounts of data. There are also many other machine-learning-based predictive models that are beyond the scope of this paper. At the end of the day, however, it is only the precautionary measures we take as responsible citizens that will help to flatten the curve. We can all join hands and maintain all rules and regulations strictly; maintaining social distancing and taking the lockdown seriously is the only key. This study is based on real-time data and will be useful for key stakeholders such as government officials and healthcare workers to prepare a combat plan with stringent measures. The study will also help mathematicians and statisticians to predict outbreak numbers more accurately.
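The sigmoid-model idea can be sketched as a logistic-curve fit to cumulative case counts, with the fitted inflection day giving the estimated peak of new infections. The data below are synthetic; the paper fits real counts up to 19th June 2020.

```python
# Sketch of the sigmoid (logistic) model: fit cumulative cases, then read
# off the inflection day t0 (peak of daily new cases) and the plateau K.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Cumulative cases: plateau K, growth rate r, inflection day t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic cumulative case counts with the inflection placed at day 80
days = np.arange(0, 160)
true = logistic(days, K=250_000, r=0.08, t0=80)
rng = np.random.default_rng(7)
observed = true + rng.normal(0, 500, days.size)

(K, r, t0), _ = curve_fit(logistic, days, observed, p0=[1e5, 0.1, 60])
print(f"peak of new cases near day {t0:.0f}, plateau ~{K:.0f} cases")
```

Mapping the fitted `t0` back to a calendar date is exactly the "count of date" output the paper highlights; on real data the plateau estimate is far less certain than this noise-free toy suggests.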


Telecom IT ◽  
2019 ◽  
Vol 7 (3) ◽  
pp. 50-55
Author(s):  
D. Saharov ◽  
D. Kozlov

The article deals with the CoAP protocol, which regulates the transmission and reception of information traffic by terminal devices in IoT networks. The article describes a model for detecting abnormal traffic in 5G/IoT networks using machine learning algorithms, as well as the main methods for solving this problem. The relevance of the article is due to the wide spread of the Internet of Things and the upcoming upgrade of mobile networks to the 5G generation.
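One way such an abnormal-traffic detector could look is sketched below, assuming simple per-window traffic features and a local-outlier-factor model; the features, data, and algorithm are illustrative assumptions, as the article surveys methods rather than fixing one.

```python
# Hedged illustration: learn normal CoAP traffic behaviour per device
# window, then flag deviations. Features and data are assumptions.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(3)
# Per-device traffic windows: [CoAP messages/sec, mean payload bytes]
normal_traffic = np.column_stack([
    rng.normal(20, 2, 300),     # typical request rate
    rng.normal(64, 5, 300),     # typical payload size
])

# novelty=True enables predict() on unseen windows after fitting
detector = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(normal_traffic)

windows = np.array([[21.0, 66.0],     # ordinary telemetry window
                    [400.0, 8.0]])    # flood-like burst of tiny messages
print(detector.predict(windows))      # +1 = normal, -1 = abnormal
```

The flood-like window mimics the kind of amplification or DoS burst that CoAP's UDP transport makes cheap to generate, which is why rate-and-size features alone already separate it here.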


2020 ◽  
Vol 98 (Supplement_4) ◽  
pp. 126-127
Author(s):  
Lucas S Lopes ◽  
Christine F Baes ◽  
Dan Tulpan ◽  
Luis Artur Loyola Chardulo ◽  
Otavio Machado Neto ◽  
...  

Abstract The aim of this project is to compare some of the state-of-the-art machine learning algorithms on the classification of steers finished in feedlots based on performance, carcass, and meat quality traits. The precise classification of animals allows for fast, real-time decision making in the animal food industry, such as culling or retention of herd animals. Beef production presents high variability in its numerous carcass and beef quality traits. Machine learning algorithms and software provide an opportunity to evaluate the interactions between traits to better classify animals. Four different treatment levels of wet distiller's grain were applied to 97 Angus-Nellore animals and used as the classification targets. The C4.5 decision tree, Naïve Bayes (NB), Random Forest (RF), and Multilayer Perceptron (MLP) artificial neural network algorithms were used to predict and classify the animals based on recorded trait measurements, which include initial and final weights, shear force, and meat color. The top-performing classifier was the C4.5 decision tree algorithm with a classification accuracy of 96.90%, while the RF, MLP, and NB classifiers had accuracies of 55.67%, 39.17%, and 29.89%, respectively. We observed that the final decision tree model constructed with C4.5 selected only the dry matter intake (DMI) feature as a differentiator. When DMI was removed, no other feature or combination of features was sufficiently strong to provide good prediction accuracies for any of the classifiers. In a follow-up study on a significantly larger sample size, we plan to investigate the reasons behind DMI being a more relevant parameter than the other measurements.
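A sketch paralleling the study's finding that DMI dominates the learned tree: synthetic treatment groups separated mainly by DMI, classified with scikit-learn's DecisionTreeClassifier (which implements CART, a close relative of the C4.5 algorithm used in the paper; the data and effect sizes below are invented).

```python
# Toy reconstruction of the DMI-dominance result on synthetic data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
groups, X, y = 4, [], []
for g in range(groups):                           # 4 wet distiller's grain levels
    dmi = rng.normal(9.0 + 0.8 * g, 0.15, 25)     # DMI shifts with treatment
    shear = rng.normal(4.0, 0.5, 25)              # shear force: uninformative here
    X.append(np.column_stack([dmi, shear]))
    y.extend([g] * 25)
X = np.vstack(X)

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
# As in the study, the DMI column should dominate the learned splits
print(tree.feature_importances_)   # [dmi_importance, shear_importance]
```

The study's caveat applies equally to this toy: when the dominant feature is removed, the remaining features carry little signal, so the headline accuracy says more about DMI than about the classifiers.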


Author(s):  
Atheer Alahmed ◽  
Amal Alrasheedi ◽  
Maha Alharbi ◽  
Norah Alrebdi ◽  
Marwan Aleasa ◽  
...  
