Development of Software for Calibrating Measuring Tapes Using Computer Vision (Desenvolvimento de um software para calibração de trenas utilizando visão computacional)

Author(s):  
Matheus Santana Carvalho ◽  
Benjamin Grando Moreira ◽  
Sueli Fischer Beckert

Metrology is responsible for studying the aspects involved in the application of measurements, an area shared with engineering in the search for continuous improvement and quality in processes and products. To bring together machine processing, the factory floor, and the development of new applications, the industrial sector demands continuous technological development and the modernization of its processes. In this scenario, Metrology 4.0 applies these technologies to traditional processes, ensuring data quality and reliability and supporting decisions in real time. To innovate on traditional calibration models, this paper introduces the development of software that compares a measuring tape with a standard using computer vision, and compares the results of this process with traditional calibration methods.
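To illustrate the core idea, the following is a minimal sketch (not the authors' implementation) of how graduation marks on an imaged tape might be located and their spacing compared against a standard. The file name, row band, and pixel scale are hypothetical, and a real setup would first correct lens distortion and perspective:

```python
import cv2
import numpy as np

MM_PER_PIXEL = 0.05  # hypothetical scale, obtained by imaging a calibrated standard

# Grayscale image of a tape segment, assumed fronto-parallel and distortion-corrected.
img = cv2.imread("tape_segment.png", cv2.IMREAD_GRAYSCALE)

# Average rows in a horizontal band crossing the graduations to get a 1-D profile.
profile = img[100:140, :].mean(axis=0)

# Graduation marks are dark: flag pixels well below the local brightness.
dark = profile < profile.mean() - profile.std()

# Centre of each contiguous dark run = centre of one graduation mark.
transitions = np.flatnonzero(np.diff(dark.astype(int)))
starts, ends = transitions[0::2] + 1, transitions[1::2] + 1
n = min(len(starts), len(ends))
centres = (starts[:n] + ends[:n]) / 2.0

# Deviation of each measured interval from the nominal 1 mm graduation pitch.
spacing_mm = np.diff(centres) * MM_PER_PIXEL
errors_mm = spacing_mm - 1.0
print(f"mean error per graduation: {errors_mm.mean():+.4f} mm")
```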

2012 ◽  
Vol 27 (03) ◽  
pp. 383-392 ◽  
Author(s):  
Andreas Hartmann ◽  
Oleg Akimov ◽  
Stephen Morris ◽  
Christian Fulda

2021 ◽  
Author(s):  
Temirlan Zhekenov ◽  
Artem Nechaev ◽  
Kamilla Chettykbayeva ◽  
Alexey Zinovyev ◽  
German Sardarov ◽  
...  

SUMMARY Researchers base their analyses on basic drilling parameters obtained during mud logging and demonstrate impressive results. However, because of the data-quality limitations often present during drilling, those solutions tend to lose their stability and predictive power. In this work, the concept of hybrid modeling is introduced, which allows analytical correlations to be integrated with machine-learning algorithms to obtain stable solutions that remain consistent from one data set to another.
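As a rough illustration of the hybrid idea, the sketch below (with a hypothetical correlation and synthetic data, not the authors' model) trains a machine-learning model only on the residuals of an analytical correlation, so the combined predictor stays anchored to the physics when data quality degrades:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def analytical_estimate(X):
    # Hypothetical stand-in for a published drilling correlation.
    return 2.0 * X[:, 0] + 0.5 * X[:, 1]

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 2))  # e.g. weight-on-bit, RPM (synthetic)
y = analytical_estimate(X) + np.sin(X[:, 0]) + rng.normal(0, 0.1, 500)

# Learn only the part the correlation misses.
residual_model = GradientBoostingRegressor().fit(X, y - analytical_estimate(X))

def hybrid_predict(X_new):
    # Physics baseline plus learned correction.
    return analytical_estimate(X_new) + residual_model.predict(X_new)
```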


2018 ◽  
Vol 210 ◽  
pp. 01006
Author(s):  
Miguel G. Molina ◽  
Priscila E. Garzón ◽  
Carolina J. Molina ◽  
Juan X. Nicola

With the rise of Internet of Things (IoT) networks, new applications have taken advantage of this concept. Having all devices and all people connected 24/7 offers advantages in a wide variety of disciplines, one of which is medicine and the e-health concept: a real-time reading of a person's vital signs can prevent a life-threatening situation. This paper describes the development of a device capable of measuring a person's heart rate and checking for abnormalities that may negatively affect the patient's well-being. The project uses microcontrollers from the Arduino family to capture the data and, with the help of a network card and an RJ-45 cable, transfers it to a PC, where the heart rate is visualized in real time over the device's assigned IP address.
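A minimal sketch of the PC side of such a setup, assuming (hypothetically; the paper does not specify the protocol) that the Arduino serves one heart-rate reading per line over TCP at its assigned IP address; the address, port, and alert thresholds are illustrative only:

```python
import socket

DEVICE_ADDR = ("192.168.1.50", 5000)   # hypothetical device IP and port
BPM_ALERT_LOW, BPM_ALERT_HIGH = 50, 120  # illustrative abnormality bounds

with socket.create_connection(DEVICE_ADDR, timeout=5) as sock:
    stream = sock.makefile("r")
    for line in stream:
        bpm = int(line.strip())
        flag = "" if BPM_ALERT_LOW <= bpm <= BPM_ALERT_HIGH else "  <-- abnormal"
        print(f"heart rate: {bpm} bpm{flag}")
```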


2014 ◽  
Vol 971-973 ◽  
pp. 1481-1484
Author(s):  
Ke He Wu ◽  
Long Chen ◽  
Yi Li

To ensure the safe and stable running of applications, this paper analyzes the limitations of traditional process-monitoring methods and then designs a new real-time process-monitoring method based on Mandatory Running Control (MRC) technology. This method can not only monitor processes but also control them from the system-kernel level, improving the reliability and safety of applications and thereby ensuring the security and stability of the information system.
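For orientation only, here is a user-space illustration of the monitoring half of this idea; the paper's MRC method operates at the kernel level, which a short Python sketch cannot reproduce, and the application name is hypothetical:

```python
import time
import psutil

WHITELIST = {"critical_app"}  # hypothetical name of the protected application

while True:
    running = {p.info["name"] for p in psutil.process_iter(["name"])}
    for name in WHITELIST - running:
        # A kernel-level MRC controller would block or restart here;
        # this user-space loop can only raise an alert.
        print(f"ALERT: monitored process '{name}' is not running")
    time.sleep(1.0)
```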


Energies ◽  
2020 ◽  
Vol 13 (21) ◽  
pp. 5576
Author(s):  
Zhenhua Li ◽  
Yangang Zheng ◽  
Ahmed Abu-Siada ◽  
Mengyao Lu ◽  
Hongbin Li ◽  
...  

The electronic voltage transformer (EVT) has received much attention with the recent global trend to establish smart grids and digital substations. One of the main issues of the EVT is the deterioration of its performance over long-term operation, which affects the control and protection systems it serves and hence the overall reliability of the power grid. This creates an essential need for a reliable technique to regularly assess the accuracy of an operating EVT in real time. Unfortunately, traditional calibration methods cannot detect incipient changes in EVT performance in real time. This paper therefore presents a new online method to evaluate the accuracy of the EVT. The Q-statistic is calculated based on recursive principal component analysis (RPCA), using the output data of the EVT to map changes in metering error onto the electric-physical relationship. By employing the output data of the EVT along with the power-grid characteristics, the performance of the EVT is evaluated without the need for the standard transformer required by current industry practice. Results show that the proposed method can assess an EVT of the 0.2 accuracy class.
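A minimal sketch of the Q-statistic idea, using a fixed batch PCA on synthetic data rather than the paper's recursive PCA (which would update the model online); the data shape and control limit are hypothetical:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for healthy three-phase EVT output channels.
X_ref = np.random.randn(1000, 6)
pca = PCA(n_components=3).fit(X_ref)

def q_statistic(X):
    # Reconstruct from the retained components; Q (squared prediction
    # error) is the energy left in the residual subspace.
    X_hat = pca.inverse_transform(pca.transform(X))
    return np.sum((X - X_hat) ** 2, axis=1)

# Empirical control limit from the reference data; a growing metering
# error shifts new samples above this limit.
Q_LIMIT = np.percentile(q_statistic(X_ref), 99)
alarm = q_statistic(X_ref[:10]) > Q_LIMIT
```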


2021 ◽  
Author(s):  
Aurore Lafond ◽  
Maurice Ringer ◽  
Florian Le Blay ◽  
Jiaxu Liu ◽  
Ekaterina Millan ◽  
...  

Abstract Abnormal surface pressure is typically the first indicator of a number of problematic events, including kicks, losses, washouts, and stuck pipe. These events account for 60-70% of all drilling-related nonproductive time, so their early and accurate detection has the potential to save the industry billions of dollars. Detecting these events today requires an expert user watching multiple curves, which can be costly and subject to human error. The solution presented in this paper aims to augment traditional models with new machine-learning techniques that detect these events automatically and support monitoring of the drilling well.

Today's real-time monitoring systems employ complex physical models to estimate surface standpipe pressure while drilling. These require many inputs and are difficult to calibrate. Machine learning is an alternative method for predicting pump pressure, but on its own it needs significant labelled training data, which is often lacking in the drilling world. The new system combines these approaches: a machine-learning framework enables automated learning, while the physical models compensate for any gaps in the training data. The system uses only standard surface measurements, is fully automated, and is continuously retrained while drilling to ensure the most accurate pressure prediction. In addition, a stochastic (Bayesian) machine-learning technique is used, which yields not only a prediction of the pressure but also the uncertainty and confidence of that prediction. Finally, the new system includes a data quality control workflow: it discards periods of low data quality for the pressure anomaly detection and enables smarter real-time event analysis.

The new system has been tested on historical wells using a new test and validation framework. The framework runs the system automatically on large volumes of both historical and simulated data, enabling the results to be cross-referenced with observations. In this paper, we show the results of the automated test framework as well as the capabilities of the new system in two specific case studies, one on land and one offshore. Moreover, large-scale statistics highlight the reliability and efficiency of this new detection workflow. The new system builds on the trend in our industry to better capture and utilize digital data for optimizing drilling.
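A minimal sketch of the physics-plus-stochastic-ML combination, with a hypothetical hydraulic correlation and synthetic data (the paper's actual models and inputs are not public here): a Gaussian process learns the residual of the physics estimate, so predictions come with an uncertainty band that anomaly detection can use.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def physics_pressure(flow_rate):
    # Hypothetical stand-in for the hydraulic model: p ~ k * Q^1.8.
    return 0.4 * flow_rate ** 1.8

rng = np.random.default_rng(1)
q = rng.uniform(100, 600, size=(200, 1))  # pump flow rate (synthetic)
p_meas = physics_pressure(q[:, 0]) + 25 * np.log(q[:, 0]) + rng.normal(0, 5, 200)

# Bayesian model of the physics residual.
gp = GaussianProcessRegressor(normalize_y=True).fit(q, p_meas - physics_pressure(q[:, 0]))

def predict_with_uncertainty(q_new):
    mean, std = gp.predict(q_new, return_std=True)
    return physics_pressure(q_new[:, 0]) + mean, std

# A measurement far outside mean +/- 3*std would be flagged as anomalous.
```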


2021 ◽  
Author(s):  
S. H. Al Gharbi ◽  
A. A. Al-Majed ◽  
A. Abdulraheem ◽  
S. Patil ◽  
S. M. Elkatatny

Abstract Due to the high demand for energy, oil and gas companies have started to drill wells in remote areas and unconventional environments. This has raised the complexity of drilling operations, which were already challenging and complex. To adapt, drilling companies expanded their use of the real-time operation center (RTOC) concept, in which real-time drilling data are transmitted from remote sites to company headquarters. In an RTOC, groups of subject-matter experts monitor the drilling live and provide real-time advice to improve operations. With the increase in drilling operations, processing the volume of generated data is beyond human capability, limiting the RTOC's impact on certain components of drilling operations. To overcome this limitation, artificial intelligence and machine learning (AI/ML) technologies were introduced to monitor and analyze the real-time drilling data, discover hidden patterns, and provide fast decision-support responses. AI/ML technologies are data-driven, and their quality relies on the quality of the input data: if the input data is good, the generated output will be good; if not, it will be bad. Unfortunately, due to the harsh environments of drilling sites and the transmission setups, not all drilling data is good, which negatively affects AI/ML results.

The objective of this paper is to utilize AI/ML technologies to improve the quality of real-time drilling data. A large real-time drilling dataset, consisting of over 150,000 raw data points, was fed into Artificial Neural Network (ANN), Support Vector Machine (SVM), and Decision Tree (DT) models. The models were trained to classify valid and invalid data points, and a confusion matrix was used to evaluate the different AI/ML models, including different internal architectures. Despite its slowness, the ANN achieved the best result, with an accuracy of 78%, compared to 73% and 41% for DT and SVM, respectively.

The paper concludes by presenting a process for using AI technology to improve real-time drilling data quality. To the authors' knowledge, based on literature in the public domain, this paper is one of the first to compare multiple AI/ML techniques for quality improvement of real-time drilling data, and it provides a guide for improving that quality.
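A minimal sketch of this three-model comparison on synthetic stand-in data (the paper's 150,000-point dataset and exact architectures are not reproduced here), evaluated with a confusion matrix as the paper describes:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, accuracy_score

# Synthetic stand-in for sensor channels and a valid/invalid label.
X = np.random.randn(5000, 8)
y = (X[:, 0] + X[:, 3] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "ANN": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500),
    "SVM": SVC(),
    "DT": DecisionTreeClassifier(),
}
for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, f"accuracy = {accuracy_score(y_te, y_pred):.2f}")
    print(confusion_matrix(y_te, y_pred))
```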


2016 ◽  
Author(s):  
Alfred Enyekwe ◽  
Osahon Urubusi ◽  
Raufu Yekini ◽  
Iorkam Azoom ◽  
Oloruntoba Isehunwa

ABSTRACT Significant emphasis on data quality is placed on real-time drilling data for the optimization of drilling operations, and on logging data for quality lithological and petrophysical description of a field. This is evidenced by the huge sums spent on real-time MWD/LWD tools, broadband services, wireline logging tools, and the like. However, much more needs to be done to harness quality data for future workover and abandonment operations, where the data being relied on may have been entered decades earlier and where costs and time are critically linked to already known and certified information. In some cases, the data relied on has been migrated across different data-management platforms, during which relevant data may have been lost, misinterpreted, or misplaced. Another common cause of wrong data is poorly documented well-intervention operations performed in such a short time that there was no pressure to document them properly. This leads to confusion over simple issues such as the depth at which a plug was set, or what junk was left in the hole. The relative lack of emphasis on this type of data quality has led to high costs for workover and abandonment operations; in some cases, well-control and process-safety incidents have arisen.

This paper looks at over 20 workover operations carried out over a span of 10 years. The wells' original timelines of operation are analyzed, the data-management system is reviewed, and the issues experienced during the workover operations are categorized. Bottlenecks in data management are defined, and the solutions currently being implemented to manage these problems are listed as recommended good practices.


Author(s):  
David J. Yates ◽  
Jennifer Xu

This research is motivated by data mining for wireless sensor network applications. The authors consider applications where data is acquired in real time, so data mining is performed on live streams of data rather than on stored databases. One challenge in supporting such applications is that sensor-node power is a precious resource that needs to be managed as such. To conserve energy in the sensor field, the authors propose and evaluate several approaches to acquiring, and then caching, data in a sensor field data server. They show that for true real-time applications, in which response time dictates data quality, policies that emulate cache hits by computing and returning approximate values for sensor data yield a simultaneous quality improvement and cost saving. This "win-win" arises because, when data-acquisition response time is sufficiently important, the decrease in resource consumption and increase in data quality achieved by using approximate values outweigh the negative impact on data accuracy due to the approximation. In contrast, when data accuracy drives quality, a linear trade-off between resource consumption and data accuracy emerges. The authors then identify caching and lookup policies for which the sensor field query rate is bounded when servicing an arbitrary workload of user queries; this upper bound is achieved by having multiple user queries share the cost of a single sensor field query. Finally, the authors discuss the challenges facing sensor-network data-mining applications in terms of data collection, warehousing, and mining techniques.
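A minimal sketch of the emulated-cache-hit idea, with a hypothetical TTL and policy flag (not the authors' specific policies): when a cached reading is stale and response time dominates quality, the cache returns the last value as an approximation instead of spending sensor energy and latency on a fresh field query.

```python
import time

class ApproximatingCache:
    """Cache of sensor readings that can emulate hits with approximate values."""

    def __init__(self, ttl_s=5.0):
        self.ttl_s = ttl_s
        self.store = {}  # sensor_id -> (value, timestamp)

    def get(self, sensor_id, acquire, response_time_critical=True):
        now = time.monotonic()
        entry = self.store.get(sensor_id)
        if entry is not None:
            value, ts = entry
            if now - ts <= self.ttl_s:
                return value  # true cache hit: value is still fresh
            if response_time_critical:
                # Emulated hit: approximate with the last known value.
                return value
        # Cache miss, or accuracy-driven quality: query the sensor field.
        value = acquire(sensor_id)
        self.store[sensor_id] = (value, now)
        return value

# Usage: cache.get(7, read_sensor) where read_sensor(id) queries the field.
```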

