Proposed Concept for Specifying Vehicle Detection Performance

2009 ◽  
Vol 2128 (1) ◽  
pp. 161-172 ◽  
Author(s):  
Dan Middleton ◽  
Ryan Longmire ◽  
Darcy M. Bullock ◽  
James R. Sturdevant


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Zishuo Han ◽  
Chunping Wang ◽  
Qiang Fu

Purpose: The purpose of this paper is to use the most popular deep learning algorithms to perform vehicle detection in urban areas of MiniSAR images and to provide a reliable means of ground monitoring.

Design/methodology/approach: An accurate detector called the rotation region-based convolutional neural network (CNN) with multilayer fusion and multidimensional attention (M2R-Net) is proposed in this paper. Specifically, M2R-Net adopts a multilayer feature fusion strategy to extract feature maps with more extensive information. Next, the authors implement a multidimensional attention network to highlight target areas. Furthermore, a novel balanced sampling strategy for hard and easy positive-negative samples and a global balanced loss function are applied to deal with spatial imbalance and objective imbalance. Finally, rotation anchors are used to predict and calibrate the minimum circumscribed rectangle of vehicles.

Findings: The validity and universality of the proposed model are verified through many groups of experiments. More importantly, comparisons with SSD, LRTDet, RFCN, DFPN, CMF-RCNN, R3Det and SCRDet demonstrate that M2R-Net achieves state-of-the-art detection performance.

Research limitations/implications: Progress in the field of MiniSAR applications has been slow owing to strong speckle noise, phase error, complex environments and a low signal-to-noise ratio. In addition, four kinds of imbalance in CNN-based object detection, i.e. spatial imbalance, scale imbalance, class imbalance and objective imbalance, greatly inhibit the optimization of detection performance.

Originality/value: This research can not only enrich the means of daily traffic monitoring but also be used for enemy intelligence reconnaissance in wartime.
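
The key geometric idea in this abstract, predicting the minimum circumscribed rectangle of each vehicle with rotation anchors, comes down to a rotated-box parameterisation. The sketch below is not the authors' M2R-Net code; it only illustrates the (cx, cy, w, h, angle) representation, assuming OpenCV is available and using hypothetical vehicle corner points.

```python
# Minimal sketch of the rotated-box parameterisation that rotation anchors
# predict: the minimum circumscribed (minimum-area) rectangle of a vehicle
# footprint. Not the authors' M2R-Net code; OpenCV is assumed here.
import cv2
import numpy as np

# Hypothetical corner points of a vehicle mask in image coordinates (x, y).
vehicle_points = np.array(
    [[120, 80], [180, 95], [172, 130], [112, 115]], dtype=np.float32
)

# Minimum-area rotated rectangle: ((cx, cy), (w, h), angle_in_degrees).
(cx, cy), (w, h), angle = cv2.minAreaRect(vehicle_points)
print(f"centre=({cx:.1f}, {cy:.1f}), size=({w:.1f}, {h:.1f}), angle={angle:.1f} deg")

# Recover the four rectangle corners, e.g. for drawing or rotated-IoU computation.
corners = cv2.boxPoints(((cx, cy), (w, h), angle))
print(corners)
```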


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7267
Author(s):  
Luiz G. Galvao ◽  
Maysam Abbod ◽  
Tatiana Kalganova ◽  
Vasile Palade ◽  
Md Nazmul Huda

Autonomous Vehicles (AVs) have the potential to solve many traffic problems, such as accidents, congestion and pollution. However, there are still challenges to overcome; for instance, AVs need to accurately perceive their environment to navigate safely in busy urban scenarios. The aim of this paper is to review recent articles on computer vision techniques that can be used to build an AV perception system. AV perception systems need to accurately detect non-static objects and predict their behaviour, as well as detect static objects and recognise the information they provide. This paper, in particular, focuses on the computer vision techniques used to detect pedestrians and vehicles. There have been many papers and reviews on pedestrian and vehicle detection so far; however, most past papers reviewed pedestrian or vehicle detection only separately. This review aims to present an overview of AV systems in general and then review and investigate several computer vision detection techniques for pedestrians and vehicles. The review concludes that both traditional and Deep Learning (DL) techniques have been used for pedestrian and vehicle detection; however, DL techniques have shown the best results. Although good detection results have been achieved for pedestrians and vehicles, current algorithms still struggle to detect small, occluded and truncated objects. In addition, there is limited research on how to improve detection performance in difficult lighting and weather conditions. Most of the algorithms have been tested on well-recognised datasets such as Caltech and KITTI; however, these datasets have their own limitations. Therefore, this paper recommends that future work be carried out on newer, more challenging datasets, such as PIE and BDD100K.
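
As a concrete illustration of the DL-based detectors this review favours, the sketch below runs a COCO-pretrained Faster R-CNN from torchvision and keeps only pedestrian and vehicle classes. It is a generic example, assuming a recent torchvision (0.13+) and its standard 91-class COCO label map; it is not tied to any specific paper in the review.

```python
# Generic pedestrian/vehicle detection sketch with a COCO-pretrained detector.
# Class indices assume torchvision's COCO label map (1=person, 3=car, 6=bus, 8=truck).
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

KEEP_CLASSES = {1: "pedestrian", 3: "car", 6: "bus", 8: "truck"}

# Dummy RGB image tensor in [0, 1]; replace with a real camera frame in practice.
image = torch.rand(3, 375, 500)

with torch.no_grad():
    output = model([image])[0]

# Keep confident detections of the classes of interest.
for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
    if score > 0.5 and int(label) in KEEP_CLASSES:
        print(KEEP_CLASSES[int(label)], round(score.item(), 3), box.tolist())
```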


2019 ◽  
Vol 9 (22) ◽  
pp. 4769 ◽  
Author(s):  
Ho Kwan Leung ◽  
Xiu-Zhi Chen ◽  
Chao-Wei Yu ◽  
Hong-Yi Liang ◽  
Jian-Yi Wu ◽  
...  

Most object detection models cannot achieve satisfactory performance under nighttime and other insufficient-illumination conditions, which may be due to how data sets are collected and to typical labeling conventions. Public data sets collected for object detection are usually photographed with sufficient ambient lighting, and their labeling conventions typically focus on clear objects while ignoring blurry and occluded ones. Consequently, the detection performance of traditional vehicle detection techniques is limited in nighttime environments without sufficient illumination. When objects occupy a small number of pixels and crucial features appear infrequently, traditional convolutional neural networks (CNNs) may suffer from serious information loss due to the fixed number of convolutional operations. This study presents solutions for data collection and the labeling convention of nighttime data to handle various types of situations, including in-vehicle detection. Moreover, the study proposes a specifically optimized system based on the Faster region-based CNN model. The system has a processing speed of 16 frames per second for 500 × 375-pixel images, and it achieved a mean average precision (mAP) of 0.8497 on our validation segment involving urban nighttime and extremely inadequate lighting conditions. The experimental results demonstrate that the proposed methods can achieve high detection performance in various nighttime environments, such as urban nighttime conditions with insufficient illumination and extremely dark conditions with nearly no lighting. The proposed system clearly outperforms the original methods, which reach an mAP of only approximately 0.2.
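
For readers unfamiliar with the reported mAP figures (0.8497 versus roughly 0.2), the sketch below shows how a single-class average precision is typically computed: greedy IoU matching at a 0.5 threshold followed by the area under the precision-recall curve. It is a simplified, generic calculation, not the authors' evaluation code, and the example boxes are hypothetical.

```python
# Simplified single-class average precision (AP) at IoU 0.5.
# VOC/COCO evaluations use interpolated variants; this is only illustrative.
import numpy as np

def iou(a, b):
    """IoU of two axis-aligned boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def average_precision(detections, gt_boxes, iou_thr=0.5):
    """detections: list of (score, box); gt_boxes: list of boxes."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    matched, tp, fp = set(), [], []
    for score, box in detections:
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(gt_boxes):
            if j not in matched and iou(box, gt) > best_iou:
                best_iou, best_j = iou(box, gt), j
        if best_iou >= iou_thr:
            matched.add(best_j)
            tp.append(1); fp.append(0)
        else:
            tp.append(0); fp.append(1)
    tp, fp = np.cumsum(tp), np.cumsum(fp)
    recall = tp / max(len(gt_boxes), 1)
    precision = tp / np.maximum(tp + fp, 1e-9)
    # Area under the raw precision-recall curve.
    return float(np.trapz(precision, recall))

# Hypothetical detections (score, box) and ground-truth boxes for one image.
dets = [(0.9, [10, 10, 50, 50]), (0.6, [60, 60, 100, 100]), (0.3, [0, 0, 20, 20])]
gts = [[12, 8, 52, 48], [58, 62, 98, 102]]
print(f"AP@0.5 = {average_precision(dets, gts):.3f}")
```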


2019 ◽  
Vol 12 (4) ◽  
pp. 257-273
Author(s):  
Nastaran Yaghoobi Ershadi ◽  
José Manuel Menéndez ◽  
David Jiménez Bermejo

2020 ◽  
Vol 12 (3) ◽  
pp. 575 ◽  
Author(s):  
Yohei Koga ◽  
Hiroyuki Miyazaki ◽  
Ryosuke Shibasaki

Recently, object detectors based on deep learning have become widely used for vehicle detection and have contributed to drastic improvements in performance. However, deep learning requires a large amount of training data, and detection performance degrades notably when the area targeted for vehicle detection (the target domain) differs from the training data (the source domain). To address this problem, we propose an unsupervised domain adaptation (DA) method that does not require labeled training data and thus can maintain detection performance in the target domain at low cost. We applied Correlation Alignment (CORAL) DA and adversarial DA to our region-based vehicle detector and improved the detection accuracy by over 10% in the target domain. We further improved adversarial DA by utilizing a reconstruction loss to facilitate learning semantic features. Our proposed method achieved slightly better performance than the accuracy obtained with labeled training data from the target domain. We demonstrated that our improved DA method can achieve almost the same level of accuracy at a lower cost than non-DA methods trained with a sufficient amount of labeled target-domain data.
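
The CORAL component mentioned above aligns the second-order statistics of source and target features. The PyTorch sketch below is a generic implementation of that loss in its usual formulation (squared Frobenius norm of the covariance difference, scaled by 4d²); it is not the authors' detector code, and the feature tensors are placeholders.

```python
# Generic Correlation Alignment (CORAL) loss for unsupervised domain adaptation:
# penalises the distance between the feature covariances of source and target batches.
import torch

def coral_loss(source_feats: torch.Tensor, target_feats: torch.Tensor) -> torch.Tensor:
    """source_feats, target_feats: (batch, feature_dim) activations."""
    d = source_feats.size(1)

    def covariance(x):
        x = x - x.mean(dim=0, keepdim=True)
        return (x.t() @ x) / (x.size(0) - 1)

    c_s = covariance(source_feats)
    c_t = covariance(target_feats)
    # Squared Frobenius norm of the covariance difference, scaled as in CORAL.
    return ((c_s - c_t) ** 2).sum() / (4.0 * d * d)

# Hypothetical usage: add the CORAL term to the detection loss so that
# source-domain and target-domain feature distributions are aligned.
src = torch.randn(32, 256)   # features from labeled source-domain images
tgt = torch.randn(32, 256)   # features from unlabeled target-domain images
print(coral_loss(src, tgt).item())
```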


CICTP 2018 ◽  
2018 ◽  
Author(s):  
Xuejin Wan ◽  
Shangfo Huang ◽  
Bowen Du ◽  
Rui Sun ◽  
Jiong Wang ◽  
...  
