Fast Automatic Vehicle Detection in UAV Images Using Convolutional Neural Networks

2020 ◽  
Vol 12 (12) ◽  
pp. 1994 ◽  
Author(s):  
Xin Luo ◽  
Xiaoyue Tian ◽  
Huijie Zhang ◽  
Weimin Hou ◽  
Geng Leng ◽  
...  

Vehicle targets in unmanned aerial vehicle (UAV) images are generally small, so a significant amount of detailed information on targets may be lost after neural computing, which leads to the poor performance of existing recognition algorithms. Building on convolutional neural networks and the YOLOv3 algorithm, this article develops a fast automatic vehicle detection method for UAV images. First, a vehicle dataset for target recognition is constructed. Then, a novel YOLOv3 vehicle detection framework is proposed, tailored to the characteristic that vehicle targets in UAV images are relatively small and dense. The average precision (AP) increased by 5.48%, from 92.01% to 97.49%, while the rather high processing speed of the YOLO network is retained. Finally, the proposed framework is tested on three datasets: COWC, VEDAI, and CAR. The experimental results demonstrate that our method has better detection capability.
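As a rough illustration of the kind of inference pipeline described above, the following is a minimal sketch of running a Darknet-format YOLOv3 detector on a UAV image with OpenCV's DNN module; the config/weight file names, input size, and thresholds are hypothetical placeholders, not the authors' actual settings.

import cv2
import numpy as np

# Hypothetical paths to a YOLOv3 config and weights fine-tuned on a vehicle dataset.
CFG, WEIGHTS = "yolov3-vehicle.cfg", "yolov3-vehicle.weights"
CONF_THRESH, NMS_THRESH = 0.25, 0.45

net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
out_layers = net.getUnconnectedOutLayersNames()

def detect_vehicles(image_bgr):
    """Run one forward pass and return (box, confidence) pairs after NMS."""
    h, w = image_bgr.shape[:2]
    blob = cv2.dnn.blobFromImage(image_bgr, 1 / 255.0, (608, 608), swapRB=True, crop=False)
    net.setInput(blob)
    boxes, scores = [], []
    for out in net.forward(out_layers):
        for det in out:  # det = [cx, cy, bw, bh, objectness, class scores...]
            conf = float(det[4]) * float(det[5:].max())
            if conf < CONF_THRESH:
                continue
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            scores.append(conf)
    keep = cv2.dnn.NMSBoxes(boxes, scores, CONF_THRESH, NMS_THRESH)
    return [(boxes[i], scores[i]) for i in np.array(keep).flatten()]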

2021 ◽  
Vol 13 (13) ◽  
pp. 2627
Author(s):  
Marks Melo Moura ◽  
Luiz Eduardo Soares de Oliveira ◽  
Carlos Roberto Sanquetta ◽  
Alexis Bastos ◽  
Midhun Mohan ◽  
...  

Precise assessments of forest species composition help analyze biodiversity patterns, estimate wood stocks, and improve carbon stock estimates. Therefore, the objective of this work was to evaluate the use of high-resolution images obtained from an Unmanned Aerial Vehicle (UAV) for the identification of forest species in areas of forest regeneration in the Amazon. For this purpose, convolutional neural networks (CNNs) were trained using the Keras–TensorFlow package with the faster_rcnn_inception_v2_pets model. Samples of six forest species were used to train the CNN. Experiments were then carried out with different threshold values, i.e., the cutoff applied to the network output: any output below the threshold is treated as 0, and any output above it as 1, meaning the species is considered identified. The results showed that reducing the threshold decreases the identification accuracy and increases the overlap of the species-identification polygons. However, in comparison with the data collected in the field, a high correlation was observed between the trees identified by the CNN and those observed in the plots. The statistical metrics used to validate the classification results showed that the CNN is able to identify species with accuracy above 90%. Based on our results, which demonstrate good accuracy and precision in species identification, we conclude that convolutional neural networks are an effective tool for classifying objects in UAV images.
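The thresholding rule described above can be illustrated with a minimal, framework-agnostic sketch; the detection tuples and the species label below are hypothetical placeholders for the detector's output, not data from the study.

def identify_species(detections, threshold=0.9):
    """Keep only detections whose confidence is at or above the cutoff.
    Scores below the threshold are treated as 0 (not identified), scores
    above it as 1 (species identified), mirroring the rule in the abstract."""
    return [(box, name, score) for box, name, score in detections if score >= threshold]

# Lowering the threshold admits more (possibly overlapping) detection polygons.
example = [((0, 0, 10, 10), "species_A", 0.95),
           ((2, 1, 11, 12), "species_A", 0.62)]
print(len(identify_species(example, threshold=0.9)))  # 1
print(len(identify_species(example, threshold=0.5)))  # 2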


2021 ◽  
Vol 13 (3) ◽  
pp. 809-820
Author(s):  
V. Sowmya ◽  
R. Radha

Vehicle detection and recognition demand advanced computational intelligence and resources in a real-time traffic surveillance system for effective traffic management of all possible contingencies. One focus area of deep intelligent systems is to facilitate vehicle detection and recognition techniques for robust traffic management of heavy vehicles. Sophisticated mechanisms of this kind include the Support Vector Machine (SVM), Convolutional Neural Networks (CNN), Region-based Convolutional Neural Networks (R-CNN), and the You Only Look Once (YOLO) model, among others. Accordingly, it is pivotal to choose a precise algorithm for vehicle detection and recognition that also suits the real-time environment. In this study, deep learning algorithms, namely Faster R-CNN, YOLOv2, YOLOv3, and YOLOv4, are compared with a focus on diverse aspects of their features. Two classes of heavy transport vehicles, buses and trucks, constitute the detection and recognition targets in this work. Data augmentation and transfer learning are implemented in the models as they are built, trained, and tested for detection and recognition, in order to avoid over-fitting and improve speed and accuracy. Extensive empirical evaluation is conducted on two standard datasets, COCO and PASCAL VOC 2007. Finally, comparative results and analyses are presented based on real-time performance.
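As a hedged sketch of the transfer-learning setup the abstract describes (the exact framework and hyperparameters are not stated), the snippet below fine-tunes a COCO-pretrained Faster R-CNN from torchvision for the two heavy-vehicle classes; the class count and variable names are assumptions.

import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 3  # background + bus + truck

# Load a COCO-pretrained detector and replace its classification head so only the
# new head (and, optionally, later backbone layers) is fine-tuned on bus/truck data.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# During training, augmentations (flips, colour jitter, scaling) would be applied
# jointly to images and bounding boxes to reduce over-fitting, as noted above.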


2021 ◽  
Vol 5 (4) ◽  
pp. 50
Author(s):  
Rafik Gouiaa ◽  
Moulay A. Akhloufi ◽  
Mozhdeh Shahbazi

Automatically estimating the number of people in unconstrained scenes is a crucial yet challenging task in different real-world applications, including video surveillance, public safety, urban planning, and traffic monitoring. In addition, methods developed to estimate the number of people can be adapted and applied to related tasks in various fields, such as plant counting, vehicle counting, and cell microscopy. Many challenges face crowd counting, including cluttered scenes, extreme occlusions, scale variation, and changes in camera perspective. Therefore, in the past few years, tremendous research efforts have been devoted to crowd counting, and numerous excellent techniques have been proposed. The significant progress in crowd counting methods in recent years is mostly attributed to advances in deep convolutional neural networks (CNNs) as well as to public crowd counting datasets. In this work, we review the papers published in the last decade and provide a comprehensive survey of recent CNN-based crowd counting techniques. We briefly review detection-based, regression-based, and traditional density-estimation-based approaches. Then, we discuss in detail the deep learning based density estimation approaches and recently published datasets. In addition, we discuss the potential applications of crowd counting, in particular its applications using unmanned aerial vehicle (UAV) images.
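To make the density-estimation idea concrete, here is a minimal, illustrative density-map regressor in PyTorch; real counting networks (e.g., multi-column or dilated architectures) are far deeper, and the layer sizes here are arbitrary assumptions rather than any surveyed model.

import torch
import torch.nn as nn

class TinyDensityNet(nn.Module):
    """A toy fully convolutional regressor that outputs a one-channel density map."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, x):
        return self.features(x)

model = TinyDensityNet()
image = torch.rand(1, 3, 256, 256)   # dummy RGB crop
density = model(image)
count = density.sum().item()         # the crowd count is the integral of the density map

The key property shown is that the predicted count is simply the sum over the density map, which is what makes density-based methods attractive under heavy occlusion compared with detection-based counting.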


2019 ◽  
Vol 11 (12) ◽  
pp. 1461 ◽  
Author(s):  
Husam A. H. Al-Najjar ◽  
Bahareh Kalantar ◽  
Biswajeet Pradhan ◽  
Vahideh Saeidi ◽  
Alfian Abdul Halin ◽  
...  

In recent years, remote sensing researchers have investigated the use of different modalities (or combinations of modalities) for classification tasks. Such modalities can be extracted via a diverse range of sensors and images. Currently, few, if any, studies have attempted to increase land cover classification accuracy using fused unmanned aerial vehicle (UAV)–digital surface model (DSM) datasets. Therefore, this study looks at improving the accuracy of these datasets by exploiting convolutional neural networks (CNNs). In this work, we focus on the fusion of DSM and UAV images for land use/land cover mapping via classification into seven classes: bare land, buildings, dense vegetation/trees, grassland, paved roads, shadows, and water bodies. Specifically, we investigated the effectiveness of the two datasets with the aim of inspecting whether the fused DSM yields remarkable outcomes for land cover classification. The datasets were: (i) orthomosaic image data only (Red, Green and Blue channels), and (ii) a fusion of the orthomosaic image and DSM data, where the final classification was performed using a CNN. As a classification method, the CNN is promising due to its hierarchical learning structure, regularization and weight sharing with respect to the training data, generalization, optimization and parameter reduction, automatic feature extraction, and robust discrimination ability with high performance. The experimental results show that a CNN trained on the fused dataset obtains better results, with a Kappa index of ~0.98, an average accuracy of 0.97 and a final overall accuracy of 0.98. Comparing the CNN with DSM against the CNN without DSM revealed improvements of 1.2%, 1.8% and 1.5% in overall accuracy, average accuracy and Kappa index, respectively. Accordingly, adding the heights of features such as buildings and trees improved the differentiation between vegetation classes, specifically where plants were dense.
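A minimal sketch of the input-level fusion described above, assuming the orthomosaic and DSM are already co-registered on the same grid (array shapes and variable names are illustrative assumptions):

import numpy as np

rgb = np.random.rand(256, 256, 3).astype(np.float32)   # orthomosaic patch (R, G, B)
dsm = np.random.rand(256, 256, 1).astype(np.float32)   # digital surface model heights

# Normalise heights so the extra band has a range comparable to the image bands.
dsm = (dsm - dsm.min()) / (dsm.max() - dsm.min() + 1e-8)

# Stack into a single 4-band array; a CNN for the seven land-cover classes would
# take this (H, W, 4) tensor as input instead of the 3-band RGB alone.
fused = np.concatenate([rgb, dsm], axis=-1)
print(fused.shape)  # (256, 256, 4)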


2019 ◽  
Vol 164 ◽  
pp. 104932 ◽  
Author(s):  
Willian Paraguassu Amorim ◽  
Everton Castelão Tetila ◽  
Hemerson Pistori ◽  
João Paulo Papa
