Asphalt Pothole Detection in UAV Images Using Convolutional Neural Networks

Author(s):  
Yuri V. Furusho Becker ◽  
Henrique Lopes Siqueira ◽  
Edson Takashi Matsubara ◽  
Wesley Nunes Goncalves ◽  
Jose Marcato Junior
2021 ◽  
Vol 13 (13) ◽  
pp. 2627
Author(s):  
Marks Melo Moura ◽  
Luiz Eduardo Soares de Oliveira ◽  
Carlos Roberto Sanquetta ◽  
Alexis Bastos ◽  
Midhun Mohan ◽  
...  

Precise assessments of forest species’ composition help analyze biodiversity patterns, estimate wood stocks, and improve carbon stock estimates. Therefore, the objective of this work was to evaluate the use of high-resolution images obtained from an Unmanned Aerial Vehicle (UAV) for the identification of forest species in areas of forest regeneration in the Amazon. For this purpose, convolutional neural networks (CNNs) were trained using the Keras–TensorFlow package with the faster_rcnn_inception_v2_pets model. Samples of six forest species were used to train the CNN. Different values of the detection threshold were then tested; the threshold is the cutoff applied to the network’s output score, so that detections scoring above it are accepted as identified species and those scoring below it are discarded. The results showed that reducing the threshold decreases the identification accuracy, as well as the overlap of the species-identification polygons. However, in comparison with the data collected in the field, a high correlation was observed between the trees identified by the CNN and those observed in the plots. The statistical metrics used to validate the classification results showed that CNNs are able to identify species with accuracy above 90%. Based on our results, which demonstrate good accuracy and precision in species identification, we conclude that convolutional neural networks are an effective tool for classifying objects in UAV images.
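
The thresholding step described above can be illustrated with a short, hypothetical sketch: detections whose confidence score exceeds the chosen cutoff are accepted as identified species, and the rest are discarded. The function and array names below are illustrative assumptions, not taken from the authors’ code.

```python
import numpy as np

def filter_detections(boxes, scores, class_ids, threshold=0.5):
    """Keep only detections whose confidence score exceeds `threshold`.

    boxes     : (N, 4) array of bounding boxes
    scores    : (N,)   array of confidence scores in [0, 1]
    class_ids : (N,)   array of predicted species labels
    """
    keep = scores > threshold            # boolean mask of accepted detections
    return boxes[keep], scores[keep], class_ids[keep]

# Illustrative values only: lowering the threshold keeps more, less certain,
# detections; raising it keeps fewer but more confident ones.
boxes = np.array([[0.10, 0.10, 0.30, 0.30], [0.50, 0.50, 0.70, 0.70]])
scores = np.array([0.92, 0.41])
class_ids = np.array([2, 5])
print(filter_detections(boxes, scores, class_ids, threshold=0.5))
```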


2019 ◽  
Vol 11 (12) ◽  
pp. 1461 ◽  
Author(s):  
Husam A. H. Al-Najjar ◽  
Bahareh Kalantar ◽  
Biswajeet Pradhan ◽  
Vahideh Saeidi ◽  
Alfian Abdul Halin ◽  
...  

In recent years, remote sensing researchers have investigated the use of different modalities (or combinations of modalities) for classification tasks. Such modalities can be extracted via a diverse range of sensors and images. To date, few (if any) studies have attempted to increase land cover classification accuracy using fused unmanned aerial vehicle (UAV)–digital surface model (DSM) datasets. Therefore, this study looks at improving the accuracy of these datasets by exploiting convolutional neural networks (CNNs). In this work, we focus on the fusion of DSM and UAV images for land use/land cover mapping via classification into seven classes: bare land, buildings, dense vegetation/trees, grassland, paved roads, shadows, and water bodies. Specifically, we investigated the effectiveness of two datasets, with the aim of inspecting whether the fused DSM yields remarkable outcomes for land cover classification. The datasets were: (i) orthomosaic image data only (Red, Green and Blue channels), and (ii) a fusion of the orthomosaic image and DSM data, where the final classification was performed using a CNN. As a classification method, the CNN is promising due to its hierarchical learning structure, weight sharing and regularization with respect to the training data, good generalization, parameter reduction through optimization, automatic feature extraction, and robust, high-performance discrimination. The experimental results show that a CNN trained on the fused dataset obtains better results, with a Kappa index of ~0.98, an average accuracy of 0.97 and a final overall accuracy of 0.98. Comparing the CNN results with and without the DSM revealed improvements of 1.2%, 1.8% and 1.5% in overall accuracy, average accuracy and Kappa index, respectively. Accordingly, adding the heights of features such as buildings and trees improved the differentiation of vegetation, specifically where plants were dense.
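
As a rough illustration of the fusion idea, the sketch below stacks the DSM as a fourth channel onto the RGB orthomosaic and feeds the result to a small patch-based CNN predicting one of the seven classes. The patch size, layer widths and simple channel-stacking scheme are assumptions for illustration, not the study’s exact architecture.

```python
import numpy as np
import tensorflow as tf

PATCH = 64        # assumed patch size in pixels
N_CLASSES = 7     # bare land, buildings, dense vegetation/trees, grassland,
                  # paved roads, shadows, water bodies

def fuse_rgb_dsm(rgb, dsm):
    """Stack a (H, W, 3) RGB patch and a (H, W) DSM patch into (H, W, 4)."""
    dsm = (dsm - dsm.min()) / (dsm.max() - dsm.min() + 1e-6)  # normalise heights
    return np.concatenate([rgb, dsm[..., None]], axis=-1)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(PATCH, PATCH, 4)),   # 4 channels: R, G, B, DSM
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```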


2019 ◽  
Vol 164 ◽  
pp. 104932 ◽  
Author(s):  
Willian Paraguassu Amorim ◽  
Everton Castelão Tetila ◽  
Hemerson Pistori ◽  
João Paulo Papa

2021 ◽  
Vol 6 (2) ◽  
pp. 887-893
Author(s):  
Kenza Aitelkadi ◽  
Hicham Outmghoust ◽  
Salahddine Laarab ◽  
Kaltoum Moumayiz ◽  
Imane Sebari

2019 ◽  
Vol 41 (1) ◽  
pp. 31-52 ◽  
Author(s):  
Wen Shao ◽  
Rei Kawakami ◽  
Ryota Yoshihashi ◽  
Shaodi You ◽  
Hidemichi Kawase ◽  
...  

Author(s):  
Jayme Barbedo ◽  
Luciano Koenigkan ◽  
Patrícia Santos

The evolution of imaging technologies and artificial intelligence algorithms, coupled with improvements in UAV technology, has enabled the use of unmanned aircraft in a wide range of applications. The feasibility of this kind of approach for cattle monitoring has been demonstrated by several studies, but practical use is still challenging due to the particular characteristics of this application, such as the need to track mobile targets and the extensive areas that need to be covered in most cases. The objective of this study was to investigate the feasibility of using a tilted (oblique) camera angle to increase the area covered by each image. Deep convolutional neural networks (Xception architecture) were used to generate the models for the experiments, which covered aspects such as ideal input dimensions, the effect of the distance between animals and sensor, the effect of classification error on the overall detection process, and the impact of physical obstacles on the accuracy of the model. Experimental results indicate that oblique images can be used successfully under certain conditions, but some practical limitations need to be addressed in order to make this approach appealing.
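
A minimal sketch of the classification setup described above, assuming a patch-based formulation: an ImageNet-pretrained Xception backbone with a small binary head that labels image patches as containing cattle or not. The input size, head layers and binary framing are illustrative assumptions, not the study’s exact pipeline.

```python
import tensorflow as tf

# ImageNet-pretrained Xception backbone; only the new head is trained at first.
backbone = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
backbone.trainable = False

inputs = tf.keras.Input(shape=(299, 299, 3))
x = tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0)(inputs)  # Xception expects [-1, 1]
x = backbone(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)      # cattle vs. no cattle

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```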

