Visual inspection for transformer insulation defects by a patrol robot fish based on deep learning

Author(s):  
Hongxin Ji ◽  
Xiwang Cui ◽  
Weiyan Ren ◽  
Liqing Liu ◽  
Wei Wang
Author(s):  
Xuefeng Zhao ◽  
Shengyuan Li ◽  
Hongguo Su ◽  
Lei Zhou ◽  
Kenneth J. Loh

Bridge management and maintenance are an important part of assessing the health state of bridges. Conventional management and maintenance work relies mainly on experienced engineering staff performing visual inspections and filling in survey forms. However, human-based visual inspection is a difficult and time-consuming task, and its detection results depend significantly on the subjective judgement of human inspectors. To address these drawbacks, this paper proposes an image-based comprehensive maintenance and inspection method for bridges using deep learning. To classify bridge types, a convolutional neural network (CNN) classifier established by fine-tuning AlexNet is trained, validated and tested using 3832 images of three bridge types (arch, suspension and cable-stayed). For the recognition of bridge components (towers and decks), a Faster Region-based Convolutional Neural Network (Faster R-CNN) based on a modified ZF-net is trained, validated and tested using 600 bridge images. To implement a sliding-window strategy for crack detection, another CNN, obtained by fine-tuning GoogLeNet, is trained, validated and tested on a database built by cropping 1455 raw concrete images into 60,000 intact and cracked image patches. The performance of the trained CNNs and Faster R-CNN is tested on new images that were not used in the training and validation processes. The test results substantiate that the proposed method can indeed recognize bridge types and components and detect cracks.
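The sliding-window crack detection strategy described above can be sketched as follows. Here `classify_patch` is a stand-in for the fine-tuned GoogLeNet classifier (replaced by a trivial darkness heuristic purely for illustration), and the window size, stride and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def classify_patch(patch):
    # Stand-in for the fine-tuned GoogLeNet crack classifier: flag a
    # patch as cracked when more than 10% of its pixels are dark.
    return (patch < 0.5).mean() > 0.1

def sliding_window_crack_map(image, win=32, stride=16):
    """Scan a grayscale image and return top-left corners of crack windows."""
    hits = []
    h, w = image.shape
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            if classify_patch(image[y:y + win, x:x + win]):
                hits.append((y, x))
    return hits

# Toy example: bright background with a dark vertical "crack".
img = np.ones((64, 64))
img[:, 30:34] = 0.0
print(sliding_window_crack_map(img))  # → [(0, 16), (16, 16), (32, 16)]
```

In the paper's setting, each window would be resized to the CNN's input resolution and passed through the fine-tuned network instead of the heuristic above.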


2019 ◽  
Vol 2019 ◽  
pp. 1-14 ◽  
Author(s):  
Balakrishnan Ramalingam ◽  
Vega-Heredia Manuel ◽  
Mohan Rajesh Elara ◽  
Ayyalusami Vengadesh ◽  
Anirudh Krishna Lakshmanan ◽  
...  

Aircraft surface inspection includes detecting surface defects caused by corrosion and cracks, as well as stains from oil spills, grease, dirt sediments, etc. In the conventional aircraft surface inspection process, human visual inspection is performed, which is time-consuming and inefficient, whereas robots with onboard vision systems can inspect the aircraft skin safely, quickly, and accurately. This work proposes an aircraft surface defect and stain detection model using a reconfigurable climbing robot and an enhanced deep learning algorithm. A reconfigurable, teleoperated robot, named “Kiropter,” is designed to capture aircraft surface images with an onboard RGB camera. An enhanced SSD MobileNet framework is proposed for stain and defect detection from these images. A self-filtering-based periodic pattern detection filter is included in the SSD MobileNet deep learning framework to achieve enhanced detection of stains and defects in aircraft skin images. The model has been tested with real aircraft surface images acquired from a Boeing 737 and a compact aircraft’s surface using the teleoperated robot. The experimental results show that the enhanced SSD MobileNet framework achieves improved detection accuracy of aircraft surface defects and stains compared to conventional models.
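The self-filtering-based periodic pattern detection step is not specified in detail in the abstract; one common way to separate repeating surface textures (such as riveted panel rows) from irregular stains and cracks is to look for sharp peaks in the frequency domain. The sketch below is a generic FFT heuristic under that assumption, not the authors' exact filter.

```python
import numpy as np

def periodic_pattern_strength(image):
    """Estimate how strongly an image is dominated by a periodic pattern.

    Sharp off-center peaks in the magnitude spectrum, measured as the
    ratio of peak energy to median energy, suggest a repeating texture
    rather than an irregular stain or defect.
    """
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = spec.shape
    spec[h // 2, w // 2] = 0.0          # drop the DC component
    return spec.max() / (np.median(spec) + 1e-9)

rng = np.random.default_rng(0)
# A striped (periodic) patch versus an irregular noisy patch.
periodic = np.sin(np.linspace(0, 40 * np.pi, 64))[None, :] * np.ones((64, 1))
noisy = rng.standard_normal((64, 64))
print(periodic_pattern_strength(periodic) > periodic_pattern_strength(noisy))  # → True
```

Regions scoring high on such a measure could be filtered out before the SSD MobileNet detector runs, reducing false positives on regular airframe structure.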


2020 ◽  
Vol 10 (21) ◽  
pp. 7755 ◽  
Author(s):  
Liangliang Chen ◽  
Ning Yan ◽  
Hongmai Yang ◽  
Linlin Zhu ◽  
Zongwei Zheng ◽  
...  

Deep learning technology is outstanding in visual inspection. However, in actual industrial production, using deep learning for visual inspection requires a large amount of training data covering different acquisition scenarios. At present, acquiring such datasets is very time-consuming and labor-intensive, which limits the further development of deep learning in industrial production. To solve the difficulty of image data acquisition for deep learning in industrial production, this paper proposes a data augmentation method based on multi-degree-of-freedom (DOF) automatic image acquisition and designs a multi-DOF automatic image acquisition system for deep learning. By designing random acquisition angles and random illumination conditions, different acquisition scenes in actual production are simulated. By optimizing the image acquisition path, a large amount of accurate data can be obtained in a short time. To verify the performance of the dataset collected by the system, fabric is selected as the research object after the system is built, and a dataset comparison experiment is carried out. The experiment confirms that the dataset obtained by the system is rich and close to the real application environment, which to a certain extent solves the problem of insufficient datasets in deep learning applications.
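The idea of simulating different acquisition scenes with random angles and illumination can be sketched in a few lines. Real multi-DOF pose changes would warp the image with a camera homography, so the coarse rotations, flips and gain/offset ranges below are simplified, illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_acquisition(image):
    """Apply one random acquisition scene: viewpoint + illumination.

    Viewpoint changes are approximated by 90-degree rotations and flips;
    illumination is modeled as a random gain (lamp brightness) plus a
    random offset (ambient light). All parameter ranges are illustrative.
    """
    out = np.rot90(image, rng.integers(0, 4))   # random viewing angle (coarse)
    if rng.random() < 0.5:
        out = np.fliplr(out)                    # mirrored camera position
    gain = rng.uniform(0.7, 1.3)
    offset = rng.uniform(-0.1, 0.1)
    return np.clip(out * gain + offset, 0.0, 1.0)

# Generate several simulated acquisition scenes from one fabric image.
fabric = rng.random((32, 32))
augmented = [simulate_acquisition(fabric) for _ in range(8)]
print(len(augmented), augmented[0].shape)  # → 8 (32, 32)
```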


2016 ◽  
Vol 12 (S325) ◽  
pp. 205-208
Author(s):  
Fernando Caro ◽  
Marc Huertas-Company ◽  
Guillermo Cabrera

In order to understand how galaxies form and evolve, measuring the parameters related to their morphologies and to the way they interact is one of the most relevant requirements. Due to the huge amount of data generated by surveys, the morphological and interaction analysis of galaxies can no longer rely on visual inspection. To deal with this issue, new approaches based on machine learning techniques have been proposed in recent years with the aim of automating the classification process. We tested deep learning using images of galaxies obtained from CANDELS to study the accuracy achieved by this tool in two different frameworks. In the first, galaxies were classified in terms of their shapes into five morphological categories, while in the second, the way in which galaxies interact was used to define another five categories. The results achieved in both cases are compared and discussed.


2021 ◽  
Vol 50 (1) ◽  
pp. E13
Author(s):  
Victor E. Staartjes ◽  
Peter R. Seevinck ◽  
W. Peter Vandertop ◽  
Marijn van Stralen ◽  
Marc L. Schröder

OBJECTIVE
Computed tomography scanning of the lumbar spine incurs a radiation dose ranging from 3.5 mSv to 19.5 mSv, as well as relevant costs, and is commonly necessary for spinal neuronavigation. Mitigating the need for treatment-planning CT scans when MRI is available, by means of MRI-based synthetic CT (sCT), would revolutionize navigated lumbar spine surgery. The authors aim to demonstrate, as a proof of concept, the capability of deep learning–based generation of sCT scans from MRI of the lumbar spine in 3 cases and to evaluate the potential of sCT for surgical planning.
METHODS
Synthetic CT reconstructions were made using a prototype version of the “BoneMRI” software. This deep learning–based image synthesis method relies on a convolutional neural network trained on paired MRI-CT data. A specific but generally available 4-minute 3D radiofrequency-spoiled T1-weighted multiple gradient echo MRI sequence was added to a 1.5T lumbar spine MRI acquisition protocol.
RESULTS
In the 3 presented cases, the prototype sCT method allowed voxel-wise radiodensity estimation from MRI, resulting in qualitatively adequate CT images of the lumbar spine based on visual inspection. Normal as well as pathological structures were reliably visualized. In the first case, in which a spiral CT scan was available as a control, a volume CT dose index (CTDIvol) of 12.9 mGy could thus have been avoided. Pedicle screw trajectories and screw thickness were estimable from the sCT findings.
CONCLUSIONS
The evaluated prototype BoneMRI method enables generation of sCT scans from MRI images with only minor changes in the acquisition protocol, with the potential to reduce workflow complexity, radiation exposure, and costs. The quality of the generated CT scans was adequate based on visual inspection and could potentially be used for surgical planning, intraoperative neuronavigation, or diagnostic purposes in an adjunctive manner.


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7287
Author(s):  
Povendhan Palanisamy ◽  
Rajesh Elara Mohan ◽  
Archana Semwal ◽  
Lee Ming Jun Melivin ◽  
Braulio Félix Gómez ◽  
...  

Human visual inspection of drains is laborious, time-consuming, and prone to accidents. This work presents an AI-enabled, robot-assisted remote drain inspection and mapping framework using our in-house developed reconfigurable robot Raptor. A four-layer Internet of Robotic Things (IoRT) architecture serves as a bridge between the users and the robots, through which seamless information sharing takes place. The Faster R-CNN ResNet50, Faster R-CNN ResNet101, and Faster R-CNN Inception-ResNet-v2 deep learning frameworks were trained using a transfer learning scheme with six typical concrete defect classes and deployed in the IoRT framework for the remote defect detection task. The efficiency of the trained CNN algorithm and the drain inspection robot Raptor was evaluated through various real-time drain inspection field trials using the SLAM technique. The experimental results indicate that the robot's maneuverability was stable, and its mapping and localization were accurate in different drain types. Finally, for effective drain maintenance, a SLAM-based defect map was generated by fusing the defect detection results into the lidar-SLAM map.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Eslam Mohammed Abdelkader

Purpose
Surface cracks are often identified as one of the early indications of damage and possible future catastrophic structural failure. Thus, crack detection is vital for the timely inspection, health diagnosis and maintenance of infrastructure. However, conventional visual inspection-based methods are criticized for being subjective, greatly affected by the inspector's expertise, labor-intensive and time-consuming.
Design/methodology/approach
This paper proposes a novel self-adaptive method for automated and semantic crack detection and recognition in various infrastructures using computer vision technologies. The developed method is built on three main models structured to circumvent the shortcomings of visual inspection in detecting cracks in walls, pavement and decks. The first model deploys a modified visual geometry group network (VGG19) to extract global contextual and local deep learning features, in an attempt to alleviate the drawbacks of hand-crafted features. The second model integrates K-nearest neighbors (KNN) with the differential evolution (DE) algorithm for the automated optimization of its structure. The third model validates the developed method through an extensive four-layer performance evaluation and statistical comparisons.
Findings
The developed method significantly outperformed other crack detection models. For instance, the developed wall crack detection method accomplished overall accuracy, F-measure, Kappa coefficient, area under the curve, balanced accuracy, Matthew's correlation coefficient and Youden's index of 99.62%, 99.16%, 0.998, 0.998, 99.17%, 0.989 and 0.983, respectively.
Originality/value
The literature lacks an efficient method that addresses crack detection and recognition across an ensemble of infrastructures. Furthermore, there is an absence of systematic and detailed comparisons between crack detection and recognition models.
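The KNN-plus-DE structure optimization in the second model can be sketched as follows, assuming the DE search tunes the number of neighbors k by cross-validated accuracy (the exact search space and objective are not stated in the abstract). The synthetic data stands in for the VGG19 deep features.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Toy stand-in for the extracted VGG19 feature vectors.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

def neg_cv_accuracy(params):
    # DE operates on continuous variables; round to get an integer k.
    k = int(round(params[0]))
    clf = KNeighborsClassifier(n_neighbors=k)
    return -cross_val_score(clf, X, y, cv=5).mean()

# Search k in [1, 30]; bounds and DE settings are illustrative assumptions.
result = differential_evolution(neg_cv_accuracy, bounds=[(1, 30)], seed=0, maxiter=20)
best_k = int(round(result.x[0]))
print(best_k, -result.fun)
```

The same pattern extends to jointly optimizing several KNN hyperparameters (e.g. distance metric weighting) by adding more bounds to the DE search.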


2020 ◽  
Vol 34 (07) ◽  
pp. 11957-11965 ◽  
Author(s):  
Aniruddha Saha ◽  
Akshayvarun Subramanya ◽  
Hamed Pirsiavash

With the success of deep learning algorithms in various domains, studying adversarial attacks to secure deep models in real-world applications has become an important research topic. Backdoor attacks are a form of adversarial attack on deep networks in which the attacker provides poisoned data for the victim to train the model with, and then activates the attack by showing a specific small trigger pattern at test time. Most state-of-the-art backdoor attacks either provide mislabeled poisoning data that can be identified by visual inspection, reveal the trigger in the poisoned data, or use noise to hide the trigger. We propose a novel form of backdoor attack where the poisoned data look natural with correct labels and, more importantly, the attacker hides the trigger in the poisoned data and keeps the trigger secret until test time. We perform an extensive study on various image classification settings and show that our attack can fool the model by pasting the trigger at random locations on unseen images, although the model performs well on clean data. We also show that our proposed attack cannot be easily defended against using a state-of-the-art defense algorithm for backdoor attacks.
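The test-time trigger-pasting step can be sketched as follows; the trigger size, content and image shape are illustrative assumptions, not the patterns used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def paste_trigger(image, trigger):
    """Paste a small trigger patch at a uniformly random location."""
    h, w = image.shape[:2]
    th, tw = trigger.shape[:2]
    y = rng.integers(0, h - th + 1)
    x = rng.integers(0, w - tw + 1)
    patched = image.copy()
    patched[y:y + th, x:x + tw] = trigger
    return patched, (y, x)

# Toy example: a 4x4 white square pasted into a black 32x32 RGB image.
image = np.zeros((32, 32, 3))
trigger = np.ones((4, 4, 3))
patched, (y, x) = paste_trigger(image, trigger)
print(patched.sum())  # → 48.0  (4 * 4 * 3 ones)
```

A backdoored model would flip its prediction to the attacker's target class whenever such a patch appears, while behaving normally on clean inputs.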

