An Adaptive Deblurring Vehicle Detection Method for High-Speed Moving Drones: Resistance to Shake

Entropy ◽  
2021 ◽  
Vol 23 (10) ◽  
pp. 1358
Author(s):  
Yan Liu ◽  
Jingwen Wang ◽  
Tiantian Qiu ◽  
Wenting Qi

Vehicle detection is an essential part of an intelligent traffic system and an important research field in drone applications. Because unmanned aerial vehicles (UAVs) are rarely equipped with stable camera platforms, aerial images are easily blurred, which makes it challenging for detectors to accurately locate vehicles in blurred images. To improve detection performance on blurred images, an end-to-end adaptive vehicle detection algorithm (DCNet) for drones is proposed in this article. First, a clarity evaluation module based on improved information entropy adaptively determines whether the input image is blurred. An improved GAN, called Drone-GAN, is then proposed to enhance the vehicle features of blurred images. Extensive experiments show that the proposed method detects vehicles well in both blurred and clear images under poor conditions (complex illumination and occlusion), and the proposed detector achieves larger gains than state-of-the-art (SOTA) detectors. The method effectively enhances vehicle feature details in blurred images and improves the detection accuracy of blurred aerial images, demonstrating good resistance to shake.
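The abstract does not spell out the improved information-entropy measure, but as a rough illustration, a plain histogram-entropy clarity check along these lines could act as the gate before the deblurring branch; `image_entropy`, `is_blurred`, and the threshold value are hypothetical names and settings, not the paper's.

```python
import cv2
import numpy as np

def image_entropy(gray: np.ndarray) -> float:
    """Shannon entropy of the grayscale histogram, in bits."""
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def is_blurred(image_bgr: np.ndarray, threshold: float = 6.5) -> bool:
    """Flag an image as blurred when its entropy falls below a threshold.

    The threshold is illustrative and would be tuned on a validation set.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return image_entropy(gray) < threshold
```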

2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Hai Wang ◽  
Xinyu Lou ◽  
Yingfeng Cai ◽  
Yicheng Li ◽  
Long Chen

Vehicle detection is one of the most important environment perception tasks for autonomous vehicles. Traditional vision-based vehicle detection methods are not accurate enough, especially for small and occluded targets, while light detection and ranging (lidar)-based methods detect obstacles well but are time-consuming and have a low classification rate for different target types. To address these shortcomings and make full use of the depth information of lidar and the obstacle classification ability of vision, this work proposes a real-time vehicle detection algorithm that fuses vision and lidar point cloud information. First, obstacles are detected by a grid projection method using the lidar point cloud. Then, the obstacles are mapped onto the image to obtain several separate regions of interest (ROIs). The ROIs are expanded based on a dynamic threshold and merged to generate the final ROI. Finally, a deep learning method named You Only Look Once (YOLO) is applied to the ROI to detect vehicles. Experimental results on the KITTI dataset demonstrate that the proposed algorithm has high detection accuracy and good real-time performance. Compared with detection based on YOLO alone, the mean average precision (mAP) is increased by 17%.
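As a hedged sketch of the fusion front end described above (not the authors' code), lidar points belonging to one detected obstacle could be projected into the image with known extrinsics and intrinsics and turned into an expanded ROI roughly as follows; `T_cam_lidar`, `K`, and the `margin` factor are assumed inputs.

```python
import numpy as np

def project_points(points_lidar: np.ndarray, T_cam_lidar: np.ndarray,
                   K: np.ndarray) -> np.ndarray:
    """Project Nx3 lidar points into pixel coordinates (Nx2).

    T_cam_lidar is a 4x4 lidar-to-camera transform, K the 3x3 camera matrix.
    Points behind the camera are not filtered here, for brevity.
    """
    pts_h = np.c_[points_lidar, np.ones(len(points_lidar))]   # N x 4
    pts_cam = (T_cam_lidar @ pts_h.T)[:3]                      # 3 x N
    uv = K @ pts_cam
    return (uv[:2] / uv[2]).T                                  # N x 2

def cluster_roi(cluster_points, T_cam_lidar, K, margin: float = 1.3):
    """Image-space ROI of one lidar obstacle cluster, expanded by a margin."""
    uv = project_points(np.asarray(cluster_points), T_cam_lidar, K)
    x0, y0 = uv.min(axis=0)
    x1, y1 = uv.max(axis=0)
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    w, h = (x1 - x0) * margin, (y1 - y0) * margin
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
```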


2010 ◽  
Vol 439-440 ◽  
pp. 493-498
Author(s):  
Shao Jie Sun ◽  
Qiong Wu ◽  
Guo Hui Li

Overlap blur is caused by high-speed relative movement between the camera and the object during the exposure process and is one of the most common forms of image degradation encountered in criminal forensics work. Based on an analysis of the characteristics of overlap-blurred images, a coded-shutter model is proposed to approximate the nature of overlap blur. As a first attempt, an image deblurring algorithm using the coded-shutter model is designed for the restoration of overlap-blurred images. The experimental results show the validity and rationality of the coded-shutter model for deblurring overlap-blurred images. When tested on real overlap-blurred photographs, the proposed algorithm restores the information of interest in the blurred images well, demonstrating the practical value of the algorithm.
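For context, overlap (motion) blur is conventionally described by a linear degradation model in which the observed image is the latent sharp image convolved with a blur kernel plus noise; under a coded shutter, the kernel is shaped by the shutter's open/close code rather than by a plain box exposure. A standard statement of this model (not the paper's exact formulation) is:

```latex
% B: observed blurred image, I: latent sharp image,
% k: blur kernel (PSF induced by the exposure/shutter code), n: additive noise
B(x, y) = (I * k)(x, y) + n(x, y)
```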


Author(s):  
Imran Shafi ◽  
Imtiaz Hussain ◽  
Jamil Ahmad ◽  
Pyoung Won Kim ◽  
Gyu Sang Choi ◽  
...  

Non-standard license plates are part of current traffic trends in Pakistan. Private number plates should be recognized and monitored for several purposes, including security and a well-developed traffic system, and it is a challenging task for the authorities to recognize a given number plate and trace the corresponding vehicle. In a developing country like Pakistan, it is difficult to impose strict constraints on the efficiency of any license plate identification and recognition algorithm; character recognition efficiency should therefore be the route map for achieving the desired results within the specified constraints. The main goal of this study is to devise a robust detection and recognition mechanism for the non-standard, transitional vehicle license plates generally found in developing countries, and to improve the character recognition efficiency for plates drawn and printed in different styles and fonts using multiple state-of-the-art technologies, including machine-learning (ML) models. For this study, a 53-layer deep convolutional neural network (CNN) architecture based on the latest variant of the You Only Look Once object detection algorithm (YOLOv3) is employed. The proposed approach can learn rich feature representations from data covering diverse license plates. The input image is first pre-processed for quality improvement and then divided into suitably sized grid cells to find the correct location of the license plate. For training the CNN, license plate characters are segmented. Lastly, the results are post-processed and the accuracy of the proposed model is determined against standard benchmarks. The proposed method was successfully tested on a large image dataset consisting of eight different types of license plates from different provinces of Pakistan. The proposed system is expected to play an important role in vehicle tracking, payment of parking fees, detection of vehicles exceeding speed limits, reduction of road accidents, and identification of unauthorized vehicles. The results show that the proposed approach achieves a plate detection accuracy of 97.82% and a character recognition accuracy of 96%.
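The segmentation step is only outlined in the abstract; a minimal, hypothetical character-segmentation routine for a cropped plate, assuming OpenCV 4.x, might look like the following (the threshold settings and `min_area` filter are illustrative):

```python
import cv2

def segment_characters(plate_bgr, min_area: int = 80):
    """Rudimentary character segmentation of a cropped plate image.

    A stand-in sketch for the paper's segmentation step: binarise the plate,
    find external contours, and return their bounding boxes left to right.
    """
    gray = cv2.cvtColor(plate_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours
             if cv2.contourArea(c) >= min_area]
    return sorted(boxes, key=lambda b: b[0])   # (x, y, w, h) tuples
```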


2011 ◽  
Vol 130-134 ◽  
pp. 2429-2432
Author(s):  
Liang Xiu Zhang ◽  
Xu Yun Qiu ◽  
Zhu Lin Zhang ◽  
Yu Lin Wang

Real-time on-road vehicle detection is a key technology in many transportation applications, such as driver assistance, autonomous driving, and active safety. A vehicle detection algorithm based on a cascaded structure is introduced. Haar-like features are used to build the model, and the GAB algorithm is chosen to train the strong classifiers. The real-time on-road vehicle classifier with a cascaded structure is then constructed by combining the strong classifiers. Experimental results show that the cascaded classifier is excellent in both detection accuracy and computational efficiency, which makes it suitable for collision warning systems.
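As a usage-level sketch only, a cascade trained with Haar-like features and boosted strong classifiers can be applied through OpenCV's standard cascade interface; the cascade file name and detection parameters below are placeholders, not the authors' trained model.

```python
import cv2

# Illustrative use of OpenCV's cascade detector; "cars.xml" is a placeholder
# for a cascade trained with Haar-like features and boosted classifiers.
cascade = cv2.CascadeClassifier("cars.xml")
frame = cv2.imread("road_scene.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Scan the image at multiple scales; parameters are typical defaults.
vehicles = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3,
                                    minSize=(32, 32))
for (x, y, w, h) in vehicles:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```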


2021 ◽  
Vol 12 ◽  
Author(s):  
Zhenwang Qin ◽  
Wensheng Wang ◽  
Karl-Heinz Dammer ◽  
Leifeng Guo ◽  
Zhen Cao

To date, unmanned aerial vehicles (UAVs), commonly known as drones, have been widely used in precision agriculture (PA) for crop monitoring and crop spraying, allowing farmers to increase the efficiency of the farming process while reducing environmental impact. However, how to spray pesticides effectively and safely onto trees in small fields or rugged environments, such as mountainous areas, remains an open question. To bridge this gap, this study develops an onboard computer vision (CV) component for UAVs that is low-cost, flexible, and energy-efficient. It consists of two parts: the hardware is an Intel Neural Compute Stick 2 (NCS2), and the software is an object detection algorithm named Ag-YOLO. The NCS2 weighs 18 grams, consumes 1.5 watts, and costs about $66. The proposed Ag-YOLO model is inspired by You Only Look Once (YOLO), trained and tested on aerial images of areca plantations, and shows high accuracy (F1 score = 0.9205) and high speed (36.5 frames per second (fps)) on the target hardware. Compared to YOLOv3-Tiny, Ag-YOLO is 2× faster while using 12× fewer parameters. Based on this study, crop monitoring and crop spraying can be synchronized into one process so that smart and precise spraying can be performed.
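A hypothetical deployment sketch for running a converted detection model on the NCS2 with the OpenVINO Python runtime (2022.x API assumed) is shown below; the model file name, input size, and device string are placeholders, and the Ag-YOLO weights themselves are not reproduced here.

```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("ag_yolo.xml")           # IR files produced offline (placeholder name)
compiled = core.compile_model(model, "MYRIAD")   # "MYRIAD" targets the NCS2

# Dummy NCHW input standing in for a preprocessed aerial frame.
frame = np.random.rand(1, 3, 416, 416).astype(np.float32)
result = compiled([frame])[compiled.output(0)]
print(result.shape)
```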


Author(s):  
Guohua Liu ◽  
Qintao Zhang

The new coronavirus spreads widely through droplets, aerosols, and other carriers, and wearing a mask can effectively reduce the probability of infection. It is therefore necessary to monitor whether people wear masks in public to prevent the virus from spreading further, yet there is no mature, general mask-wearing detection algorithm. Based on the tiny YOLOv3 algorithm, this paper detects faces with and without masks and proposes improvements to the algorithm. First, the bounding box regression loss is optimized by replacing the original loss with the Generalized Intersection over Union (GIoU) loss. Second, the network structure is improved: residual units are introduced into the backbone to increase the depth of the network, and detection is extended from two scales to three. Finally, the sizes of the anchor boxes are clustered with the k-means algorithm. Experimental results on the constructed dataset show that, compared with the tiny YOLOv3 algorithm, the proposed algorithm improves detection accuracy while maintaining high-speed inference.
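For reference, the GIoU loss mentioned above penalises boxes by their IoU adjusted for the smallest enclosing box; a minimal NumPy sketch (operating on single boxes rather than training tensors) is:

```python
def giou_loss(box_a, box_b):
    """Generalized IoU loss for two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest enclosing box C of the two boxes.
    c_area = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    giou = iou - (c_area - union) / c_area
    return 1.0 - giou
```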


2021 ◽  
Vol 13 (5) ◽  
pp. 965
Author(s):  
Marek Kraft ◽  
Mateusz Piechocki ◽  
Bartosz Ptak ◽  
Krzysztof Walas

Public littering and discarded trash remain, despite efforts to limit them, a serious ecological, aesthetic, and social problem. Problematic waste is usually localised and picked up by designated personnel, which is a tiresome, time-consuming task. This paper proposes a low-cost solution for localising trash and litter objects in low-altitude imagery collected by an unmanned aerial vehicle (UAV) during an autonomous patrol mission. The objects of interest are detected in the acquired images and placed on a global map using a set of onboard sensors commonly found in typical UAV autopilots. The core object detection algorithm is based on deep convolutional neural networks. Since the task is domain-specific, a dedicated dataset of images containing the objects of interest was collected and annotated; the dataset is made publicly available and described in the paper. The dataset was used to test a range of embedded devices capable of running deep neural network inference onboard the UAV. Measurements of detection accuracy and processing speed are reported, and recommendations for the neural network model and hardware platform are given based on the obtained values. The complete system can be put together from inexpensive, off-the-shelf components and performs autonomous localisation of discarded trash, relieving human personnel of this burdensome task and enabling automated pickup planning.
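The geolocation step is described only at a high level; a simplified, assumption-laden sketch of mapping a detection's pixel coordinates to a ground position, assuming a nadir-pointing camera, flat terrain, and known UAV pose, could look like this (all parameter names are illustrative):

```python
import math

def pixel_to_ground(u, v, img_w, img_h, fov_h_deg, fov_v_deg,
                    altitude_m, lat, lon, heading_deg):
    """Approximate ground position of a detection from a downward-facing camera.

    Assumes flat terrain and ignores gimbal angles and geoid corrections,
    which real autopilot data would supply.
    """
    # Offset from the image centre, in metres on the ground plane.
    dx = (u - img_w / 2) / (img_w / 2) * altitude_m * math.tan(math.radians(fov_h_deg / 2))
    dy = (img_h / 2 - v) / (img_h / 2) * altitude_m * math.tan(math.radians(fov_v_deg / 2))
    # Rotate the offset by the UAV heading (clockwise from north).
    hdg = math.radians(heading_deg)
    north = dy * math.cos(hdg) - dx * math.sin(hdg)
    east = dy * math.sin(hdg) + dx * math.cos(hdg)
    # Convert metres to degrees (small-offset approximation).
    dlat = north / 111_320.0
    dlon = east / (111_320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon
```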


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Rui Wang ◽  
Ziyue Wang ◽  
Zhengwei Xu ◽  
Chi Wang ◽  
Qiang Li ◽  
...  

Object detection is an important part of autonomous driving technology. To ensure the safe running of vehicles at high speed, real-time and accurate detection of all objects on the road is required, and how to balance detection speed and accuracy has been a hot research topic in recent years. This paper puts forward a one-stage object detection algorithm based on YOLOv4 that improves detection accuracy and supports real-time operation. The backbone doubles the stacking of the last residual block of CSPDarkNet53. The neck replaces the SPP with an RFB structure and improves the PAN structure of the feature fusion module; the CBAM and CA attention mechanisms are added to the backbone and neck, and the overall width of the network is reduced to 3/4 of the original to reduce the model parameters and improve inference speed. Compared with YOLOv4, the proposed algorithm improves the average precision by 2.06% on the KITTI dataset and by 2.95% on the BDD dataset. With almost unchanged detection accuracy, its inference speed is increased by 9.14%, allowing real-time detection at more than 58.47 FPS.
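The exact placement of the attention blocks is not given in the abstract, but a standard CBAM block (channel attention followed by spatial attention), as commonly implemented in PyTorch, is sketched below for orientation; the reduction ratio and kernel size are conventional defaults, not the paper's settings.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention branch of CBAM (shared MLP over pooled descriptors)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        return x * torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    """Spatial attention branch of CBAM (conv over pooled channel maps)."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """Channel attention followed by spatial attention on a feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))
```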


Entropy ◽  
2021 ◽  
Vol 23 (11) ◽  
pp. 1490
Author(s):  
Yan Liu ◽  
Tiantian Qiu ◽  
Jingwen Wang ◽  
Wenting Qi

Vehicle detection plays a vital role in the design of Automatic Driving Systems (ADS) and has achieved remarkable improvements in recent years. However, vehicle detection in night scenes remains considerably challenging because vehicle features are not obvious and are easily affected by complex road lighting or by lights from other vehicles. In this paper, a high-accuracy vehicle detection algorithm is proposed for night scenes. First, an improved Generative Adversarial Network (GAN), named Attentive GAN, is used to enhance the vehicle features of nighttime images. Then, to achieve higher detection accuracy, multiple local regression is employed in the regression branch, predicting multiple bounding box offsets. An improved Region of Interest (RoI) pooling method is used to obtain distinguishing features in the classification branch, which is based on the Faster Region-based Convolutional Neural Network (Faster R-CNN). Cross-entropy loss is introduced to improve the accuracy of the classification branch. The proposed method is evaluated on a dataset composed of nighttime images selected from the BDD-100k dataset (Berkeley Diverse Driving Database, 100,000 images). Compared with a series of state-of-the-art detectors, the experiments demonstrate that the proposed algorithm effectively improves vehicle detection accuracy at night.
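For reference, the cross-entropy loss added to the classification branch takes its standard form, with $p_c$ the predicted probability for class $c$ and $y_c$ the one-hot ground-truth label over $C$ classes:

```latex
\mathcal{L}_{\mathrm{CE}} = -\sum_{c=1}^{C} y_c \log p_c
```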

