Augmented reality for enhanced visual inspection through knowledge-based deep learning

2020 ◽  
pp. 147592172097698
Author(s):  
Shaohan Wang ◽  
Sakib Ashraf Zargar ◽  
Fuh-Gwo Yuan

A two-stage knowledge-based deep learning algorithm is presented for enabling automated damage detection in real time using augmented reality smart glasses. The first stage of the algorithm identifies damage-prone zones within the region of interest, which requires domain knowledge about both the damage and the structure being inspected. In the second stage, automated damage detection is performed independently within each of the identified zones, starting with the most damage-prone one. For real-time visual inspection enhancement on augmented reality smart glasses, this two-stage approach not only ensures computational feasibility and efficiency but also significantly improves the probability of detection when dealing with structures with complex geometric features. A pilot study is conducted using hands-free Epson BT-300 smart glasses, during which two distinct tasks are performed: first, using a single deep learning model deployed on the smart glasses, automatic detection and classification of corrosion/fatigue, the most common cause of failure in high-strength materials, is performed; then, to highlight the efficacy of the proposed two-stage approach, the more challenging task of defect detection in a multi-joint bolted region is addressed. The pilot study is conducted without any artificial control of external conditions such as acquisition angles and lighting. While automating the visual inspection process is not a new concept for large-scale structures, in most cases the assessment of the collected data is performed offline, and the algorithms used therein cannot be implemented directly on computationally limited devices such as hands-free augmented reality glasses, which could then be used by inspectors in the field for real-time assistance. The proposed approach overcomes this bottleneck.
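The zone-then-detect flow described in this abstract can be summarized in a short sketch. This is a minimal illustration under stated assumptions only: the zone priors, the `detector` callable, and all names below are hypothetical placeholders, not the paper's actual models or ranking logic.

```python
def rank_damage_prone_zones(zone_priors):
    """Stage 1 (sketch): use domain knowledge, encoded here as a prior
    damage likelihood per zone, to order candidate zones from most to
    least damage prone."""
    return [bbox for bbox, prior in
            sorted(zone_priors, key=lambda z: z[1], reverse=True)]

def inspect(frame, zone_priors, detector, threshold=0.5):
    """Stage 2 (sketch): run a lightweight damage detector independently
    on each zone, starting with the most damage-prone one, so a
    computationally limited device never processes the full frame at once.
    `frame` is assumed to be a NumPy image array; `detector` is any
    callable returning P(damage) for a cropped region."""
    findings = []
    for x, y, w, h in rank_damage_prone_zones(zone_priors):
        crop = frame[y:y + h, x:x + w]
        score = detector(crop)
        if score >= threshold:
            findings.append(((x, y, w, h), score))
    return findings
```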

2021 ◽  
Vol 11 (11) ◽  
pp. 4758
Author(s):  
Ana Malta ◽  
Mateus Mendes ◽  
Torres Farinha

Maintenance professionals and other technical staff regularly need to learn to identify new parts in car engines and other equipment. The present work proposes a model of a task assistant based on a deep learning neural network. A YOLOv5 network is used to recognize some of the constituent parts of an automobile. A dataset of car engine images was created, eight car parts were annotated in the images, and the neural network was then trained to detect each part. The results show that YOLOv5s successfully detects the parts in real-time video streams with high accuracy, making it a useful aid for training professionals to work with new equipment through augmented reality. The architecture of an object recognition system using augmented reality glasses is also designed.
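Real-time YOLOv5 inference of the kind described here can be reproduced with the public ultralytics/yolov5 torch.hub interface. A sketch follows; the weights file `engine_parts.pt` is a placeholder for the authors' custom-trained model, which is not published with the abstract.

```python
import cv2
import torch

# Load a custom-trained YOLOv5 model via torch.hub (public ultralytics repo);
# 'engine_parts.pt' stands in for the authors' eight-class car-part weights.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='engine_parts.pt')

cap = cv2.VideoCapture(0)  # live video stream, e.g. an AR-glasses camera feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame[..., ::-1])  # OpenCV gives BGR; the model expects RGB
    for *box, conf, cls in results.xyxy[0].tolist():
        x1, y1, x2, y2 = map(int, box)
        label = f"{model.names[int(cls)]} {conf:.2f}"
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, label, (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow('parts', frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```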


2020 ◽  
Vol 13 (1) ◽  
pp. 89
Author(s):  
Manuel Carranza-García ◽  
Jesús Torres-Mateo ◽  
Pedro Lara-Benítez ◽  
Jorge García-Gutiérrez

Object detection using remote sensing data is a key task of the perception systems of self-driving vehicles. While many generic deep learning architectures have been proposed for this problem, there is little guidance on their suitability when using them in a particular scenario such as autonomous driving. In this work, we aim to assess the performance of existing 2D detection systems on a multi-class problem (vehicles, pedestrians, and cyclists) with images obtained from the on-board camera sensors of a car. We evaluate several one-stage (RetinaNet, FCOS, and YOLOv3) and two-stage (Faster R-CNN) deep learning meta-architectures under different image resolutions and feature extractors (ResNet, ResNeXt, Res2Net, DarkNet, and MobileNet). These models are trained using transfer learning and compared in terms of both precision and efficiency, with special attention to the real-time requirements of this context. For the experimental study, we use the Waymo Open Dataset, currently the largest such benchmark. Despite the rising popularity of one-stage detectors, our findings show that two-stage detectors still provide the most robust performance. Faster R-CNN models outperform one-stage detectors in accuracy, being also more reliable in the detection of minority classes. Faster R-CNN Res2Net-101 achieves the best speed/accuracy trade-off but needs lower-resolution images to reach real-time speed. Furthermore, the anchor-free FCOS detector is a slightly faster alternative to RetinaNet, with similar precision and lower memory usage.
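The transfer-learning setup the study relies on follows a standard pattern: start from a COCO-pretrained detector and replace its classification head for the three target classes. A minimal torchvision sketch is shown below; note that torchvision only bundles the ResNet-50 FPN backbone, so the ResNeXt/Res2Net variants tested in the paper would require custom backbones.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# COCO-pretrained Faster R-CNN with a ResNet-50 FPN backbone.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box-prediction head: 3 classes (vehicle, pedestrian, cyclist)
# plus the implicit background class.
num_classes = 4
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
# The model can now be fine-tuned on Waymo-style annotations as usual.
```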


Author(s):  
Karen A. Moore ◽  
Robert Carrington ◽  
John Richardson

The U.S. Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has developed and successfully tested a real-time pipeline damage detection and location system. The system uses porous metal resistive traces, sprayed along the length of the pipeline, to detect and locate damage. The unique nature and arrangement of the traces allows damage to be located in real time along miles of pipe. Pipeline operators can thus detect damage when and where it occurs, and the decision to shut down a transmission pipeline can be made with actual real-time data instead of conservative estimates from visual inspection above the area.
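The abstract does not disclose the exact trace geometry, so the following sketch assumes one simple possibility: staggered traces in which trace i spans the pipe from mile 0 to `end_miles[i]`, so that a break opens every trace spanning the damage point and the set of open traces brackets its location. This is an illustration of the localization idea only, not INEEL's actual arrangement.

```python
def locate_damage(end_miles, is_open):
    """Bracket a damage location from trace continuity readings.
    end_miles: trace end positions in miles, sorted ascending.
    is_open: parallel list, True if that trace reads open-circuit."""
    intact = [end for end, broken in zip(end_miles, is_open) if not broken]
    broken = [end for end, broken in zip(end_miles, is_open) if broken]
    if not broken:
        return None  # no damage indicated
    lower = max(intact, default=0.0)  # damage lies beyond every intact trace
    upper = min(broken)               # ...and within the shortest open trace
    return lower, upper

# Example: traces ending at miles 1..5; damage near mile 3.4 opens the two
# traces that span it, bracketing the damage between miles 3 and 4.
print(locate_damage([1, 2, 3, 4, 5], [False, False, False, True, True]))
# -> (3, 4)
```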


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Prashant Kumar ◽  
Batchu Supraja ◽  
S. Narasimha Swamy ◽  
Solomon Raju Kota

Author(s):  
Rita Francese ◽  
Maria Frasca ◽  
Michele Risi ◽  
Genoveffa Tortora

Melanoma is considered the deadliest skin cancer, and once it reaches an advanced stage it is difficult to treat. Diagnoses are performed visually by dermatologists through naked-eye observation. This paper proposes an augmented reality smartphone application for supporting the dermatologist in the real-time analysis of a skin lesion. The app augments the camera view with information related to the lesion features generally measured by the dermatologist when formulating the diagnosis. The lesion is also classified by a deep learning approach to identify melanoma. The real-time process adopted for generating the augmented content is described, its performance is evaluated, and a user study is conducted. Results reveal that the real-time process can be executed entirely on the smartphone and that the support provided is judged favorably by the target users.
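The abstract does not specify which lesion features the app measures, so the sketch below assumes simple geometric ones of the kind dermatologists commonly assess (area, border irregularity, diameter) and shows how they could be extracted per camera frame with OpenCV. The function name and thresholding scheme are illustrative assumptions, not the authors' pipeline.

```python
import cv2
import numpy as np

def lesion_features(bgr_frame):
    """Segment the largest dark region (assumed lesion) and measure it."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    # Otsu threshold, inverted so the darker lesion becomes foreground.
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    lesion = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(lesion)
    perimeter = cv2.arcLength(lesion, True)
    # Border irregularity: 1.0 for a perfect circle, larger when jagged.
    irregularity = perimeter ** 2 / (4 * np.pi * area) if area else float('inf')
    (_, _), radius = cv2.minEnclosingCircle(lesion)
    return {'area_px': area,
            'border_irregularity': irregularity,
            'diameter_px': 2 * radius}
```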


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7687
Author(s):  
Karolina Nurzynska ◽  
Przemysław Skurowski ◽  
Magdalena Pawlyta ◽  
Krzysztof Cyran

The goal of the WrightBroS project is to design a system supporting the training of pilots in a flight simulator. The desired software should run on smart glasses, supplementing the visual information with augmented reality data and displaying, for instance, additional training information or descriptions of visible devices in real time. Rapid recognition of observed objects and their exact positioning is therefore crucial for successful deployment. The keypoint descriptor approach is a natural framework for this purpose, but applying it first requires a thorough examination of specific keypoint location methods and types of keypoint descriptors, as these are the essential factors affecting the overall accuracy of the approach. In the presented research, we prepared a dedicated database of 27 different devices from a flight simulator. We then used it to compare existing state-of-the-art techniques and verify their applicability. We investigated the time necessary to compute a keypoint position, the time needed to prepare a descriptor, and the classification accuracy of the considered approaches. In total, we compared the outcomes of 12 keypoint location methods and 10 keypoint descriptors. The best score recorded for our database, almost 96%, was achieved by combining the ORB method for keypoint localization with the BRISK descriptor.
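The best-scoring combination reported here, ORB keypoint localization followed by BRISK description, is straightforward to assemble in OpenCV. A minimal matching sketch follows; the image file names are placeholders.

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)   # keypoint localization
brisk = cv2.BRISK_create()            # descriptor computation

img_query = cv2.imread('simulator_device.png', cv2.IMREAD_GRAYSCALE)
img_scene = cv2.imread('cockpit_view.png', cv2.IMREAD_GRAYSCALE)

kp_q = orb.detect(img_query, None)            # locate keypoints with ORB
kp_q, des_q = brisk.compute(img_query, kp_q)  # describe them with BRISK
kp_s = orb.detect(img_scene, None)
kp_s, des_s = brisk.compute(img_scene, kp_s)

# Hamming-distance brute-force matching suits binary descriptors like BRISK.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_q, des_s), key=lambda m: m.distance)
print(f'{len(matches)} matches; best distance {matches[0].distance:.0f}')
```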

