Automated Painting Survey, Degree of Rusting Classification, and Mapping with Machine Learning

2021 ◽  
Author(s):  
Eric Ferguson ◽  
Toby Dunne ◽  
Lloyd Windrim ◽  
Suchet Bargoti ◽  
Nasir Ahsan ◽  
...  

Abstract

Objective: Continuous fabric maintenance (FM) is crucial for uninterrupted operations on offshore oil and gas platforms. A primary FM goal is managing the onset of coating degradation across the surfaces of offshore platforms. Physical field inspection programs are required for timely detection and grading of coating conditions. These processes are costly, time-consuming, labour-intensive, and must be conducted on-site. Moreover, the inspection findings are subjective and provide incomplete asset coverage, leading to increased risk of unplanned shutdowns. Risk reduction and increased FM efficiency are achieved by using machine learning and computer vision algorithms to analyze full-facility imagery for coating degradation and for subsequent 'degree-of-rusting' classification of equipment to industry inspection standards.

Methods, Procedures, Process: Inspection data is collected for the entirety of an offshore facility using a terrestrial scanner. Coating degradation is detected across the facility using machine learning and computer vision algorithms. Additionally, the inspection data is tagged with unique piping line numbers per design, fixed equipment tags, or unique asset identification numbers. Computer vision algorithms and the detected coating degradation are subsequently used as input to determine the 'degree-of-rusting' throughout the facility, and coating condition status is tagged to specific piping or equipment. The degree-of-rusting condition rating follows common industry standards used by inspection engineers (e.g., ISO 4628-3, ASTM D610-01, or the European Rust Scale).

Results, Observations, Conclusions: Atmospheric corrosion is the number-one asset integrity threat to offshore platforms. Using this automatic coating condition technology, a comprehensive and objective analysis of a facility's health is provided. Coating condition results are overlaid on inspection imagery for rapid visualisation. Coating condition is associated with individual instances of equipment, which allows rapid filtering of equipment by coating condition severity, process type, equipment type, etc. Fabric maintenance efficiencies are realised by targeting decks, blocks, or areas with the highest aggregate coating degradation (on process equipment or structurally, as selected by the user) and concentrating remediation efforts on at-risk equipment. With the automated classification of degree-of-rusting, mitigation strategies that extend the life of the asset can be optimised, resulting in efficiency gains and cost savings for the facility. Conventional manual inspection and reporting of coating conditions have lower objectivity and higher risk and cost than the proposed method.

Novel/Additive Information: Drawing on machine learning and computer vision techniques, this work proposes a novel workflow for automatically identifying the degree-of-rusting on assets using industry inspection standards. This contributes directly to greater risk awareness, targeted remediation strategies, improved overall efficiency of the asset management process, and reduced downtime of offshore facilities.
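The mapping from detected rust coverage to a degree-of-rusting grade can be sketched as a simple threshold lookup. The sketch below follows the published ISO 4628-3 Ri scale area thresholds; the function name, boundary handling, and use of a plain percentage input are illustrative assumptions, not the authors' pipeline.

```python
def ri_grade(rust_area_percent: float) -> str:
    """Map the percentage of rusted surface area to an ISO 4628-3 grade.

    Thresholds follow the Ri scale (Ri 0 = no visible rust, up to
    Ri 5 = roughly 40-50% of the area rusted); treating each threshold
    as an inclusive upper bound is an assumption of this sketch.
    """
    thresholds = [
        (0.0,  "Ri 0"),   # no visible rust
        (0.05, "Ri 1"),
        (0.5,  "Ri 2"),
        (1.0,  "Ri 3"),
        (8.0,  "Ri 4"),
    ]
    grade = "Ri 5"  # anything above 8% rusted area
    for upper, label in thresholds:
        if rust_area_percent <= upper:
            grade = label
            break
    return grade
```

For example, a pipe segment with 0.3% of its surface classified as rusted would receive `ri_grade(0.3)` → `"Ri 2"`, which can then be attached to that segment's line number or equipment tag for filtering.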

Measurement ◽  
2015 ◽  
Vol 60 ◽  
pp. 222-230 ◽  
Author(s):  
Rajalingappaa Shanmugamani ◽  
Mohammad Sadique ◽  
B. Ramamoorthy

Author(s):  
Denis Sato ◽  
Adroaldo José Zanella ◽  
Ernane Xavier Costa

Vehicle-animal collisions represent a serious problem in roadway infrastructure. To avoid these roadway collisions, different mitigation systems have been applied in various regions of the world. In this article, a system for detecting animals on highways is presented using computer vision and machine learning algorithms. The models were trained to classify two groups of animals: capybaras and donkeys. Two variants of the convolutional neural network YOLO (You Only Look Once) were used: YOLOv4 and YOLOv4-tiny (a lighter version of the network). The training was carried out using pre-trained models. Detection tests were performed on 147 images. The accuracy results obtained were 84.87% and 79.87% for YOLOv4 and YOLOv4-tiny, respectively. The proposed system has the potential to improve road safety by reducing or preventing accidents with animals.
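Detection accuracies like those reported above are typically computed by matching each predicted box against ground truth using intersection-over-union (IoU). The abstract does not give the authors' exact scoring rule, so the following is a minimal sketch of the standard IoU-matching convention, with an assumed 0.5 threshold.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def detection_correct(pred, truth, iou_thresh=0.5):
    """Count a detection as correct if the class matches and IoU >= threshold."""
    pred_cls, pred_box = pred
    true_cls, true_box = truth
    return pred_cls == true_cls and iou(pred_box, true_box) >= iou_thresh
```

Accuracy over a test set is then the fraction of images whose detections all satisfy this check; the per-class split (capybara vs. donkey) follows directly from the class-match condition.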


Sensors ◽  
2019 ◽  
Vol 19 (13) ◽  
pp. 2953 ◽  
Author(s):  
Jessica Fernandes Lopes ◽  
Leniza Ludwig ◽  
Douglas Fernandes Barbin ◽  
Maria Victória Eiras Grossmann ◽  
Sylvio Barbon

Imaging sensors are widely employed in the food processing industry for quality control. Flour from malting barley varieties is a valuable ingredient in the food industry, but its use is restricted due to quality aspects such as color variations and the presence of husk fragments. On the other hand, naked varieties present superior quality, with better visual appearance and nutritional composition for human consumption. Computer Vision Systems (CVS) can provide an automatic and precise classification of samples, but identification of grain and flour characteristics requires more specialized methods. In this paper, we propose a CVS combined with the Spatial Pyramid Partition ensemble (SPPe) technique to distinguish between naked and malting types of twenty-two flour varieties using image features and machine learning. SPPe leverages the analysis of patterns from different spatial regions, providing more reliable classification. Support Vector Machine (SVM), k-Nearest Neighbors (k-NN), J48 decision tree, and Random Forest (RF) were compared for sample classification. Machine learning algorithms embedded in the CVS were induced based on 55 image features. The results ranged from 75.00% (k-NN) to 100.00% (J48) accuracy, showing that sample assessment by CVS with SPPe was highly accurate, representing a potential technique for automatic barley flour classification.
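The core idea of a spatial pyramid partition is to extract features not just from the whole image but from progressively finer grids of sub-regions. This minimal NumPy sketch uses per-region mean intensity as a stand-in descriptor; the actual 55 image features and ensemble step of the paper's SPPe are not reproduced here.

```python
import numpy as np

def pyramid_features(image: np.ndarray, levels: int = 2) -> np.ndarray:
    """Concatenate per-region mean intensities over a spatial pyramid.

    Level l splits the image into a 2**l x 2**l grid, so levels=2 yields
    1 + 4 + 16 = 21 regional features. Mean intensity is a placeholder
    descriptor; any texture or color feature could be computed per region.
    """
    h, w = image.shape[:2]
    feats = []
    for level in range(levels + 1):
        n = 2 ** level
        for i in range(n):
            for j in range(n):
                region = image[i * h // n:(i + 1) * h // n,
                               j * w // n:(j + 1) * w // n]
                feats.append(region.mean())
    return np.array(feats)
```

The resulting fixed-length vector can be fed to any of the compared classifiers (SVM, k-NN, decision tree, Random Forest); the regional breakdown is what lets the ensemble pick up on localized cues such as husk fragments.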


Author(s):  
Seyed Omid Mohammadi ◽  
Ahmad Kalhor

The rapid progress of computer vision, machine learning, and artificial intelligence, combined with the growing demand for online shopping systems, has opened an excellent opportunity for the fashion industry. As a result, many studies worldwide are dedicated to modern fashion-related applications such as virtual try-on and fashion synthesis. However, the field's accelerated pace of evolution makes it hard to track its many research branches in a structured framework. This paper presents an overview of the matter, categorizing 110 relevant articles into multiple sub-categories and varieties of these tasks. An easy-to-use yet informative tabular format is used for this purpose. Such hierarchical, application-based, multi-label classification of studies increases the visibility of current research, promotes the field, provides research directions, and facilitates access to related studies.


Author(s):  
O. Teslenko ◽  
A. Pashko

The article discusses approaches to the problem of determining driver activity from cameras installed in the car, given the active development of intelligent driver assistance systems in recent years. The article provides an overview of the main problems that arise for the driver while driving. The main advances in autonomous driving are presented, and a classification of the types of autonomous vehicles is provided. Next, methods of solving the identified problems are described. The main part of the article focuses on the problem of determining the state of the driver while driving. The reasons for using computer vision and machine learning approaches to solve this task are described. The basic paradigms for solving this problem are investigated: classification of images, classification of a video stream, and detection of the key points of the driver's body in the image from a camera installed inside the car. The main ideas of each method are described, and the approaches are evaluated, identifying the main advantages and drawbacks of the presented methods.
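In the key-point paradigm, driver state is often reduced to geometric measures over detected landmarks. A common example from the drowsiness-detection literature (not specific to this article) is the eye aspect ratio (EAR) of Soukupova and Cech; the landmark ordering and the ~0.2 closed-eye threshold below are conventions from that literature.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR from six eye landmarks ordered p1..p6 around the eye contour:
    (|p2-p6| + |p3-p5|) / (2 * |p1-p4|). Low values indicate a closed eye.
    """
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

# Illustrative landmark sets: an open-eye shape vs. a nearly closed one.
open_eye = np.array([[0, 0], [2, 2], [4, 2], [6, 0], [4, -2], [2, -2]], float)
closed_eye = np.array([[0, 0], [2, 0.2], [4, 0.2], [6, 0], [4, -0.2], [2, -0.2]], float)
```

A video-stream variant of the same idea tracks how long the EAR stays below the threshold across consecutive frames before flagging drowsiness, which illustrates why the image and video paradigms differ mainly in their temporal handling.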


2021 ◽  
pp. 1143-1146
Author(s):  
A.V. Lysenko ◽  
M.S. Oznobikhin ◽  
E.A. Kireev ◽  
...  

Abstract. This study discusses the problem of phytoplankton classification using computer vision methods and convolutional neural networks. We created a system for automatic object recognition consisting of two parts: analysis and primary processing of phytoplankton images, and development of a neural network based on the obtained information about the images. We developed software that can detect particular objects in images from a light microscope. We trained a convolutional neural network using transfer learning and determined the optimal parameters of this neural network and the optimal size of the training dataset. To increase accuracy for these groups of classes, we created three neural networks with the same structure. The accuracy obtained in the classification of Baikal phytoplankton by these neural networks was up to 80%.

