Deep Learning Approaches on Defect Detection in High Resolution Aerial Images of Insulators

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1033
Author(s):  
Qiaodi Wen ◽  
Ziqi Luo ◽  
Ruitao Chen ◽  
Yifan Yang ◽  
Guofa Li

By detecting defect locations in high-resolution insulator images collected by unmanned aerial vehicles (UAVs) in various environments, power failures can be detected in a timely manner and the resulting economic losses can be reduced. However, the accuracy of existing detection methods is greatly limited by complex background interference and small target detection. To solve this problem, two deep learning methods based on Faster R-CNN (faster region-based convolutional neural network) are proposed in this paper, namely Exact R-CNN (exact region-based convolutional neural network) and CME-CNN (cascade of mask extraction and exact region-based convolutional neural network). First, we propose Exact R-CNN, which builds on a series of advanced techniques including FPN (feature pyramid network), cascade regression, and GIoU (generalized intersection over union). RoI Align (region of interest align) is introduced to replace RoI pooling (region of interest pooling) to address the misalignment problem, and depthwise separable convolution and the linear bottleneck are introduced to reduce the computational burden. Second, a new pipeline, CME-CNN, is proposed to improve the performance of insulator defect detection. In CME-CNN, an insulator mask image is first generated to eliminate the complex background using an encoder-decoder mask extraction network, and then Exact R-CNN is used to detect the insulator defects. The experimental results show that the proposed method can effectively detect insulator defects, and its accuracy is better than that of the examined mainstream target detection algorithms.
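
As an illustration of the depthwise separable convolution and linear bottleneck mentioned above, the following PyTorch sketch shows a MobileNetV2-style block that could stand in for a standard convolution in a detector backbone; the layer sizes are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Depthwise separable convolution with a linear bottleneck
    (MobileNetV2-style block) used to lighten a backbone."""

    def __init__(self, in_ch, out_ch, stride=1, expand_ratio=6):
        super().__init__()
        hidden = in_ch * expand_ratio
        self.use_residual = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            # 1x1 pointwise expansion
            nn.Conv2d(in_ch, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # 3x3 depthwise convolution (one filter per channel)
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # 1x1 linear bottleneck projection (no activation)
            nn.Conv2d(hidden, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out

# Example: run the lightweight block on a feature map of the same size
x = torch.randn(1, 64, 56, 56)
y = InvertedResidual(64, 64)(x)   # same spatial size, far fewer multiply-adds
```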

2020 ◽  
pp. 004051752092860 ◽  
Author(s):  
Junfeng Jing ◽  
Zhen Wang ◽  
Matthias Rätsch ◽  
Huanhuan Zhang

Deep learning–based fabric defect detection methods have been widely investigated to improve production efficiency and product quality. Although deep learning–based methods have proved to be powerful tools for classification and segmentation, some key issues remain to be addressed in real applications. First, the actual production conditions of fabric factories demand high real-time performance. Moreover, fabric defects are abnormal samples that are very rare compared with normal samples, which results in data imbalance and makes deep learning model training challenging. To solve these problems, an extremely efficient convolutional neural network, Mobile-Unet, is proposed to achieve end-to-end defect segmentation. A median frequency balancing loss function is used to overcome the challenge of sample imbalance. Additionally, Mobile-Unet introduces depthwise separable convolution, which dramatically reduces the computational cost and model size of the network. It comprises two parts: an encoder and a decoder. The MobileNetV2 feature extractor is used as the encoder, and five deconvolution layers are added as the decoder. Finally, a softmax layer generates the segmentation mask. The performance of the proposed model has been evaluated on public and self-built fabric datasets. The experimental results demonstrate that the segmentation accuracy and detection speed of the proposed method achieve state-of-the-art performance in comparison with other methods.
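
A minimal sketch of median frequency balancing, assuming integer label masks over two classes (background and defect); the resulting class weights are passed to a standard pixel-wise cross-entropy loss. The mask data here are placeholders, not the fabric datasets used in the paper.

```python
import numpy as np
import torch
import torch.nn as nn

def median_frequency_weights(masks, num_classes=2):
    """Median frequency balancing: weight_c = median(freq) / freq_c,
    where freq_c is the pixel frequency of class c over the training masks."""
    counts = np.zeros(num_classes, dtype=np.float64)
    for m in masks:                          # each mask is an HxW integer label map
        for c in range(num_classes):
            counts[c] += np.sum(m == c)
    freq = counts / counts.sum()
    weights = np.median(freq) / freq
    return torch.tensor(weights, dtype=torch.float32)

# Usage with a pixel-wise cross-entropy loss over {background, defect}
train_masks = [np.random.randint(0, 2, (256, 256)) for _ in range(8)]  # placeholder masks
class_weights = median_frequency_weights(train_masks, num_classes=2)
criterion = nn.CrossEntropyLoss(weight=class_weights)
```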


2019 ◽  
Vol 11 (6) ◽  
pp. 631 ◽  
Author(s):  
Shaoming Zhang ◽  
Ruize Wu ◽  
Kunyuan Xu ◽  
Jianmei Wang ◽  
Weiwei Sun

Offshore and inland river ship detection has been studied on both synthetic aperture radar (SAR) and optical remote sensing imagery. However, classic ship detection methods based on SAR images can produce a high false alarm ratio and are influenced by the sea surface model, especially on inland rivers and in offshore areas, while classic detection methods based on optical images do not perform well on small and gathering ships. This paper adopts the idea of deep networks and presents a fast region-based convolutional neural network (R-CNN) method to detect ships from high-resolution remote sensing imagery. First, we choose GaoFen-2 optical remote sensing images with a resolution of 1 m and preprocess them with a support vector machine (SVM) to divide the large detection area into small regions of interest (ROI) that may contain ships. Then, we apply R-CNN-based ship detection algorithms to the ROI images. To improve the detection of small and gathering ships, we adopt an effective target detection framework, Faster-RCNN, and improve the structure of its original convolutional neural network (CNN), VGG16, by using multiresolution convolutional features and performing ROI pooling on a larger feature map in the region proposal network (RPN). Finally, we compare our improved Faster-RCNN with the most effective classic ship detection method, the deformable part model (DPM); two other widely used target detection frameworks, the single shot multibox detector (SSD) and YOLOv2; and the original VGG16-based Faster-RCNN. Experimental results show that our improved Faster-RCNN achieves higher recall and accuracy for small and gathering ships, providing a very effective method for offshore and inland river ship detection in high-resolution remote sensing imagery.
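
As a starting point, the torchvision sketch below builds a baseline Faster R-CNN with a VGG16 backbone and comparatively small anchor sizes; it does not reproduce the paper's multiresolution-feature or enlarged-RPN-feature-map modifications, and the anchor sizes are assumptions.

```python
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# VGG16 convolutional layers as the detector backbone
# (load ImageNet weights in practice instead of weights=None)
backbone = torchvision.models.vgg16(weights=None).features
backbone.out_channels = 512  # channels of the final VGG16 feature map

# Smaller anchor sizes to better cover small and gathering ships (assumption)
anchor_generator = AnchorGenerator(
    sizes=((16, 32, 64, 128, 256),),
    aspect_ratios=((0.5, 1.0, 2.0),),
)

# RoI pooling performed on the single VGG16 feature map
roi_pooler = torchvision.ops.MultiScaleRoIAlign(
    featmap_names=["0"], output_size=7, sampling_ratio=2
)

model = FasterRCNN(
    backbone,
    num_classes=2,  # ship + background
    rpn_anchor_generator=anchor_generator,
    box_roi_pool=roi_pooler,
)

model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 800, 800)])  # list of dicts: boxes, labels, scores
```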


2021 ◽  
Vol 11 (13) ◽  
pp. 6085
Author(s):  
Jesus Salido ◽  
Vanesa Lomas ◽  
Jesus Ruiz-Santaquiteria ◽  
Oscar Deniz

There is a great need to implement preventive mechanisms against shootings and terrorist acts in public spaces with a large influx of people. While surveillance cameras have become common, the need for 24/7 monitoring and real-time response requires automatic detection methods. This paper presents a study based on three convolutional neural network (CNN) models applied to the automatic detection of handguns in video surveillance images. It investigates the reduction of false positives achieved by including pose information, associated with the way the handguns are held, in the images of the training dataset. The results highlight the best average precision (96.36%) and recall (97.23%), obtained by RetinaNet fine-tuned with the unfrozen ResNet-50 backbone, and the best precision (96.23%) and F1 score (93.36%), obtained by YOLOv3 when trained on the dataset including pose information. The latter architecture was the only one that showed a consistent improvement (around 2%) when pose information was expressly considered during training.
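
A hedged sketch of fine-tuning a torchvision RetinaNet with the ResNet-50 FPN backbone left unfrozen, roughly in the spirit of the configuration described above; the two-class head (background and handgun) and the optimizer settings are assumptions, not the authors' training recipe.

```python
import torch
import torchvision
from torchvision.models.detection.retinanet import RetinaNetClassificationHead

# RetinaNet with a ResNet-50 FPN backbone; trainable_backbone_layers=5 leaves
# the whole backbone unfrozen for fine-tuning.
model = torchvision.models.detection.retinanet_resnet50_fpn(
    weights="DEFAULT", trainable_backbone_layers=5
)

# Replace the classification head for two classes: background and handgun (assumption).
num_classes = 2
num_anchors = model.head.classification_head.num_anchors
in_channels = model.backbone.out_channels
model.head.classification_head = RetinaNetClassificationHead(
    in_channels, num_anchors, num_classes
)

# Illustrative optimizer; the actual hyperparameters are not given here.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
```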


2020 ◽  
Vol 12 (22) ◽  
pp. 9785
Author(s):  
Kisu Lee ◽  
Goopyo Hong ◽  
Lee Sael ◽  
Sanghyo Lee ◽  
Ha Young Kim

Defects in residential building façades affect the structural integrity of buildings and degrade their external appearance. Façade defects are typically managed manually during maintenance, an approach that is time-consuming, yields subjective results, and can lead to accidents or casualties. To address this, we propose a building façade monitoring system that uses a deep-learning-based object detection method to manage defects efficiently while minimizing the involvement of manpower. The dataset used for training the network contains actual residential building façade images. The varied building designs in these raw images make defect detection difficult because of the many defect types and complex backgrounds. We employed the Faster Region-based Convolutional Neural Network (Faster R-CNN) structure for more accurate defect detection in such environments, achieving an average precision (intersection over union (IoU) = 0.5) of 62.7% across all trained defect types. Although this performance still needs to be improved given the difficulty of detecting defects in such environments, the object detection network employed in this study performs well on complex real-world images, indicating the possibility of developing a system that detects defects in more types of building façades.
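
As a sketch of how a pre-trained Faster R-CNN can be adapted to façade defect classes with torchvision, the snippet below swaps the box predictor for a custom class count; the defect taxonomy in the comment is hypothetical and not taken from the paper.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Pre-trained Faster R-CNN with a ResNet-50 FPN backbone
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box predictor so it outputs the façade defect classes
# (class list is hypothetical; the paper's exact taxonomy is not listed here).
num_classes = 1 + 4  # background + e.g. crack, spalling, stain, exposed rebar
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
```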


Author(s):  
Dima M. Alalharith ◽  
Hajar M. Alharthi ◽  
Wejdan M. Alghamdi ◽  
Yasmine M. Alsenbel ◽  
Nida Aslam ◽  
...  

Computer-based technologies play a central role in dentistry, as they provide many methods for diagnosing and detecting various diseases, such as periodontitis. The current study aimed to develop and evaluate state-of-the-art object detection and recognition techniques and deep learning algorithms for the automatic detection of periodontal disease in orthodontic patients using intraoral images. A total of 134 intraoral images were divided into a training dataset (n = 107 [80%]) and a test dataset (n = 27 [20%]). Two Faster Region-based Convolutional Neural Network (Faster R-CNN) models using a ResNet-50 Convolutional Neural Network (CNN) were developed. The first model detects the teeth to locate the region of interest (ROI), while the second model detects gingival inflammation. The detection accuracy, precision, recall, and mean average precision (mAP) were calculated to verify the significance of the proposed models. The teeth detection model achieved an accuracy, precision, recall, and mAP of 100%, 100%, 51.85%, and 100%, respectively. The inflammation detection model achieved an accuracy, precision, recall, and mAP of 77.12%, 88.02%, 41.75%, and 68.19%, respectively. This study demonstrated the viability of deep learning models for the detection and diagnosis of gingivitis in intraoral images, highlighting their potential usability in dentistry and in reducing the severity of periodontal disease globally through preemptive, non-invasive diagnosis.
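
The two-model pipeline can be sketched as a simple cascade at inference time: the teeth detector proposes ROIs and the inflammation detector runs on each crop. Both `teeth_model` and `inflammation_model` are assumed to be torchvision-style detectors; the function and score threshold are illustrative, not the authors' code.

```python
import torch

def cascade_detect(image, teeth_model, inflammation_model, score_thresh=0.5):
    """Two-stage inference sketch: detect teeth first, then run the
    inflammation detector on each tooth ROI. Both models are assumed to
    return dicts with 'boxes' and 'scores' for a CHW image tensor."""
    teeth_out = teeth_model([image])[0]
    results = []
    for box, score in zip(teeth_out["boxes"], teeth_out["scores"]):
        if score < score_thresh:
            continue
        x1, y1, x2, y2 = box.int().tolist()
        roi = image[:, y1:y2, x1:x2]                 # crop the tooth region
        inflamed = inflammation_model([roi])[0]      # second-stage detection
        results.append({"tooth_box": box, "inflammation": inflamed})
    return results
```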


2020 ◽  
Vol 12 (12) ◽  
pp. 1924 ◽  
Author(s):  
Hiroyuki Miura ◽  
Tomohiro Aridome ◽  
Masashi Matsuoka

A methodology for the automated identification of building damage from post-disaster aerial images was developed based on a convolutional neural network (CNN) and building damage inventories. The aerial images and building damage data obtained in the 2016 Kumamoto and 1995 Kobe, Japan, earthquakes were analyzed. Since the roofs of many moderately damaged houses are covered with blue tarps immediately after disasters, the proposed method identifies not only collapsed and non-collapsed buildings but also buildings covered with blue tarps. The CNN architecture developed in this study correctly classifies building damage with an accuracy of approximately 95% for both earthquake datasets. We applied the developed CNN model to aerial images of Chiba, Japan, damaged by the typhoon in September 2019. The results show that more than 90% of the building damage is correctly classified by the CNN model.
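
A minimal three-class CNN classifier sketch (collapsed, blue-tarp-covered, non-collapsed) in PyTorch; the layer sizes and input resolution are illustrative assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

classes = ["collapsed", "blue_tarp", "non_collapsed"]

# Small convolutional classifier over roof patches cropped from aerial images
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(128, len(classes)),
)

logits = model(torch.randn(4, 3, 64, 64))   # batch of 64x64 roof patches (placeholder)
predicted = logits.argmax(dim=1)            # predicted damage class per patch
```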


2021 ◽  
Vol 936 (1) ◽  
pp. 012021
Author(s):  
Novi Anita ◽  
Bangun Muljo Sukojo ◽  
Sondy Hardian Meisajiwa ◽  
Muhammad Alfian Romadhon

Abstract Petroleum mining activities are scattered across developing countries such as Indonesia, one of the largest oil-producing countries in Southeast Asia, ranked 23rd globally. Indonesia has produced very large amounts of petroleum since the Dutch colonial era. One of the oil-producing areas is "A" Village, where an oil well more than 100 years old is still active and is used by the local community as its main source of livelihood. This activity creates an oil pattern around the old oil refinery that seeps into the ground over time. This study aims to analyze and identify the oil pattern around the old refinery in the "A" area. The data used are high-resolution satellite imagery (CSRT), namely Pleiades-1B with a spatial resolution of 1.5 m, analyzed with a deep learning semantic segmentation method based on a convolutional neural network (CNN). The study area is limited to the administrative boundary of XX Regency at a scale of 1:25,000, used as supporting data when clipping the image. The study produced a land cover map classified into three categories: oil-pattern areas, areas not affected by oil, and vegetation. Classification accuracy was assessed using a confusion matrix, supported by thermal data collected in the field in the form of temperature values for each land cover type. It is hoped that this research will help agencies identify a suitable method for detecting oil in onshore areas.
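
A short sketch of a confusion-matrix accuracy assessment for the three land cover categories, using scikit-learn; the reference and predicted labels below are placeholders, not field data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

classes = ["oil_pattern", "unaffected", "vegetation"]

# Placeholder reference (y_true) and classified (y_pred) labels for sampled segments
y_true = np.array([0, 0, 1, 2, 2, 1, 0, 2])
y_pred = np.array([0, 1, 1, 2, 2, 1, 0, 0])

cm = confusion_matrix(y_true, y_pred, labels=list(range(len(classes))))
overall_accuracy = accuracy_score(y_true, y_pred)

# Producer's accuracy (recall per class) and user's accuracy (precision per class)
producers = cm.diagonal() / cm.sum(axis=1)
users = cm.diagonal() / cm.sum(axis=0)
print(cm, overall_accuracy, producers, users)
```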


Processes ◽  
2021 ◽  
Vol 9 (9) ◽  
pp. 1678
Author(s):  
Yo-Ping Huang ◽  
Chun-Ming Su ◽  
Haobijam Basanta ◽  
Yau-Liang Tsai

The complexity of defect detection in ceramic substrates causes interclass and intraclass imbalance problems. Identifying flaws in ceramic substrates has traditionally relied on abnormal material occurrences and characteristic quantities. However, defects in ceramic substrates are typically small and exhibit a wide variety of distributions, making detection more challenging and difficult. Thus, we propose a defect detection method based on unsupervised learning and deep learning. First, the proposed method applies K-means clustering to group instances according to their inherent complex characteristics. Second, the distribution of rarely occurring instances is balanced by using augmentation filters. Finally, a convolutional neural network is trained on the balanced dataset. The effectiveness of the proposed method was validated by comparing its results with those of other methods, and the experimental results show that the proposed method outperforms them.
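
A sketch of the grouping-and-balancing idea, assuming feature vectors extracted from defect patches: K-means groups the instances, and rare clusters are oversampled before CNN training (in practice the duplicated samples would be passed through augmentation filters such as flips and rotations). The feature dimension and cluster count are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical feature vectors extracted from defect image patches
features = np.random.rand(500, 64)

# Step 1: K-means groups instances by their inherent characteristics
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)
labels = kmeans.labels_

# Step 2: balance rare clusters by oversampling up to the largest cluster size
counts = np.bincount(labels)
target = counts.max()
balanced_indices = []
for c in range(len(counts)):
    idx = np.where(labels == c)[0]
    extra = np.random.choice(idx, target - len(idx), replace=True)
    balanced_indices.extend(idx.tolist() + extra.tolist())

# Step 3: the balanced index set would then feed CNN training
print(np.bincount(labels[balanced_indices]))  # all clusters roughly equal
```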


Vehicle detection from aerial images currently plays an important role and remains highly challenging; video understanding and border security are among its applications. Different detection methods have been introduced to improve system performance, but these methods take more time in the detection process. To overcome this, convolutional neural networks are introduced to produce a successful detection system. The main intent of this paper is to present a recognition system for aerial images using a convolutional neural network. The proposed method improves accuracy and speed in the detection process. Finally, the aerial image is obtained by matching the image with the textual description of classes.


2020 ◽  
Vol 64 (2) ◽  
pp. 20507-1-20507-10 ◽  
Author(s):  
Hee-Jin Yu ◽  
Chang-Hwan Son ◽  
Dong Hyuk Lee

Abstract Traditional approaches for the identification of leaf diseases rely on handcrafted features such as colors and textures for feature extraction and may therefore be limited in extracting abundant and discriminative features. Although deep learning approaches have recently been introduced to overcome these shortcomings, they rely on existing models such as VGG and ResNet, which can be further improved in discriminative power because they do not consider a spatial attention mechanism to predict the background and spot areas (i.e., local areas with leaf diseases). Therefore, a new deep learning architecture, hereafter referred to as the region-of-interest-aware deep convolutional neural network (ROI-aware DCNN), is proposed to make deep features more discriminative and increase classification performance. The primary idea is that leaf disease symptoms appear in the leaf area, whereas the background region does not contain useful information regarding leaf diseases. To realize this, two subnetworks are designed. One is the ROI subnetwork, which provides more discriminative features by separating the background, leaf areas, and spot areas in the feature map; the other is the classification subnetwork, which increases classification accuracy. To train the ROI-aware DCNN, the ROI subnetwork is first learned with a new image set containing ground truth images in which the background, leaf area, and spot area are delineated. Subsequently, the entire network is trained end-to-end, connecting the ROI subnetwork with the classification subnetwork through a concatenation layer. The experimental results confirm that the proposed ROI-aware DCNN increases discriminative power by predicting the areas in the feature map that are more important for leaf disease identification, and that it surpasses conventional state-of-the-art methods such as VGG, ResNet, SqueezeNet, the bilinear model, and multiscale-based deep feature extraction and pooling.
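
The two-subnetwork design with a concatenation layer can be sketched as follows in PyTorch; the layer widths, number of disease classes, and fusion head are illustrative assumptions rather than the published architecture.

```python
import torch
import torch.nn as nn

class ROIAwareDCNN(nn.Module):
    """Sketch of the two-subnetwork idea: an ROI subnetwork predicts
    background / leaf / spot region maps, which are concatenated with the
    classification stream before the final classifier."""

    def __init__(self, num_diseases=10):
        super().__init__()
        self.roi_subnet = nn.Sequential(          # predicts 3 region maps
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 1),
        )
        self.cls_subnet = nn.Sequential(          # classification features
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Conv2d(32 + 3, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_diseases),
        )

    def forward(self, x):
        roi_maps = self.roi_subnet(x)                # trained first with ROI ground truth
        feats = self.cls_subnet(x)
        fused = torch.cat([feats, roi_maps], dim=1)  # concatenation layer
        return self.head(fused), roi_maps

logits, roi_maps = ROIAwareDCNN()(torch.randn(2, 3, 128, 128))
```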

