Object Detection in Deep Surveillance

Author(s):  
Narina Thakur ◽  
Preeti Nagrath ◽  
Rachna Jain ◽  
Dharmender Saini ◽  
Nitika Sharma ◽  
...  

Abstract Object detection is a key ability required by most computer vision and surveillance applications. Pedestrian detection is a key problem in surveillance, with several applications such as person identification, person counting, and tracking. The number of techniques for identifying pedestrians in images has increased steadily in recent years, alongside significant advances in state-of-the-art deep neural network-based object detection frameworks. Research in object detection and image classification has made strides, reaching accuracy levels greater than 99% at increasingly fine granularity. A powerful object detector, specifically designed for high-end surveillance applications, is needed that will not only position the bounding box and label it but will also return the relative positions of the detected objects. The size of these bounding boxes can vary depending on the object and on how it interacts with the physical world. To address these requirements, an extensive evaluation of state-of-the-art algorithms has been performed in this paper. The work presented here performs detection on the MOT20 dataset using various algorithms and tests them on a custom dataset recorded on our organization's premises using an Unmanned Aerial Vehicle (UAV). The experimental analysis has been performed on Faster R-CNN, SSD, and YOLO models. The YOLOv5 model is found to outperform all the other models, with 61% precision and an F-measure of 44%.
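
For reference, the precision and F-measure figures quoted above follow from true-positive, false-positive, and false-negative counts; a minimal sketch in Python, where the counts themselves are hypothetical placeholders chosen only to reproduce numbers of the reported magnitude:

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute detection precision, recall, and F-measure from raw counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical counts for illustration only (not reported by the paper).
p, r, f1 = precision_recall_f1(tp=610, fp=390, fn=1163)
print(f"precision={p:.2f} recall={r:.2f} F-measure={f1:.2f}")
```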

2020 ◽  
Vol 34 (07) ◽  
pp. 10478-10485 ◽  
Author(s):  
Yingjie Cai ◽  
Buyu Li ◽  
Zeyu Jiao ◽  
Hongsheng Li ◽  
Xingyu Zeng ◽  
...  

Monocular 3D object detection aims to predict the 3D bounding boxes of objects from monocular RGB images. Since location recovery in 3D space is quite difficult on account of the absence of depth information, this paper proposes a novel unified framework which decomposes the detection problem into a structured polygon prediction task and a depth recovery task. Different from the widely studied 2D bounding boxes, the proposed structured polygon in the 2D image consists of several projected surfaces of the target object. Compared to the widely used 3D bounding box proposals, it is shown to be a better representation for 3D detection. In order to inversely project the predicted 2D structured polygon to a cuboid in the 3D physical world, the subsequent depth recovery task uses a prior on the object height to complete the inverse projection transformation with the given camera projection matrix. Moreover, a fine-grained 3D box refinement scheme is proposed to further rectify the 3D detection results. Experiments are conducted on the challenging KITTI benchmark, on which our method achieves state-of-the-art detection accuracy.
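
The depth recovery step rests on standard pinhole geometry: an object of known physical height H that projects to h pixels lies at depth roughly z = f_y * H / h. A minimal sketch of this inverse projection under a simple pinhole model (the intrinsics and measurements below are hypothetical, not the paper's exact formulation):

```python
import numpy as np

def recover_3d_point(u: float, v: float, h_px: float, H_obj: float, K: np.ndarray) -> np.ndarray:
    """Back-project a 2D point (u, v) to 3D camera coordinates using a known
    object height prior H_obj and its projected pixel height h_px."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    z = fy * H_obj / h_px            # depth from the height prior
    x = (u - cx) * z / fx            # back-project along the viewing ray
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Hypothetical KITTI-like intrinsics; a 1.5 m tall object spanning 90 px.
K = np.array([[721.5, 0.0, 609.6],
              [0.0, 721.5, 172.9],
              [0.0, 0.0, 1.0]])
print(recover_3d_point(u=650.0, v=180.0, h_px=90.0, H_obj=1.5, K=K))
```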


2020 ◽  
Vol 34 (4) ◽  
pp. 571-584
Author(s):  
Rajarshi Biswas ◽  
Michael Barz ◽  
Daniel Sonntag

Abstract Image captioning is a challenging multimodal task. Significant improvements have been obtained through deep learning. Yet, captions generated by humans are still considered better, which makes it an interesting application for interactive machine learning and explainable artificial intelligence methods. In this work, we aim at improving the performance and explainability of the state-of-the-art method Show, Attend and Tell by augmenting its attention mechanism with additional bottom-up features. We compute visual attention on the joint embedding space formed by the union of high-level features and low-level features obtained from the object-specific salient regions of the input image. We embed the content of bounding boxes from a pre-trained Mask R-CNN model. This delivers state-of-the-art performance while providing explanatory features. Further, we discuss how interactive model improvement can be realized through re-ranking caption candidates using beam search decoders and explanatory features. We show that interactive re-ranking of beam search candidates has the potential to outperform the state of the art in image captioning.
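
As an illustration of the re-ranking idea, beam search candidates carrying a decoder log-probability can be re-scored with an additional explanatory-feature score. A minimal sketch, where the scoring function, the mixing weight, and the toy object-overlap scorer are all assumptions for illustration, not the authors' method:

```python
from typing import Callable

def rerank(candidates: list[tuple[str, float]],
           explain_score: Callable[[str], float],
           alpha: float = 0.5) -> list[tuple[str, float]]:
    """Re-rank beam search candidates (caption, log_prob) by mixing the
    decoder log-probability with an explanatory-feature score."""
    rescored = [(cap, (1 - alpha) * lp + alpha * explain_score(cap))
                for cap, lp in candidates]
    return sorted(rescored, key=lambda t: t[1], reverse=True)

# Hypothetical beams and a toy scorer rewarding mention of detected objects.
beams = [("a dog on a couch", -1.2), ("a brown animal indoors", -1.0)]
detected = {"dog", "couch"}
score = lambda cap: float(sum(w in detected for w in cap.split()))
print(rerank(beams, score))
```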


2020 ◽  
Vol 11 ◽  
Author(s):  
Hao Lu ◽  
Zhiguo Cao

Plant counting runs through almost every stage of agricultural production, from seed breeding, germination, cultivation, fertilization, and pollination to yield estimation and harvesting. With the prevalence of digital cameras, graphics processing units, and deep learning-based computer vision technology, plant counting has gradually shifted from traditional manual observation to vision-based automated solutions. One popular solution is a state-of-the-art object detection technique called Faster R-CNN, where plant counts can be estimated from the number of bounding boxes detected. It has become a standard configuration for many plant counting systems in plant phenotyping. Faster R-CNN, however, is expensive in computation, particularly when dealing with high-resolution images. Unfortunately, high-resolution imagery is frequently used in modern plant phenotyping platforms such as unmanned aerial vehicles, engendering inefficient image analysis. Such inefficiency largely limits the throughput of a phenotyping system. The goal of this work hence is to provide an effective and efficient tool for high-throughput plant counting from high-resolution RGB imagery. In contrast to conventional object detection, we advocate another promising paradigm termed object counting, where plant counts are directly regressed from images without detecting bounding boxes. In this work, by profiling the computational bottleneck, we implement a fast version of the state-of-the-art plant counting model TasselNetV2 with several minor yet effective modifications, and we provide insights into why these modifications make sense. This fast version, TasselNetV2+, runs an order of magnitude faster than TasselNetV2, achieving around 30 fps at an image resolution of 1980 × 1080, while retaining the same level of counting accuracy. We validate its effectiveness on three plant counting tasks: wheat ear counting, maize tassel counting, and sorghum head counting. To encourage the use of this tool, our implementation has been made available online at https://tinyurl.com/TasselNetV2plus.
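
In the counting-by-regression paradigm, a network predicts redundant local counts over overlapping windows, and the image-level count is their normalized sum. A schematic sketch of that aggregation step (the normalization below is a simplifying assumption for interior windows, not the TasselNetV2+ implementation):

```python
import numpy as np

def image_count_from_local_counts(local_counts: np.ndarray, window: int, stride: int) -> float:
    """Aggregate a map of overlapping local window counts into one image-level
    count. With stride s and window w, an interior plant is covered by roughly
    (w / s)**2 windows, so the redundant sum is normalized by that factor."""
    redundancy = (window / stride) ** 2
    return float(local_counts.sum() / redundancy)

# Hypothetical local count map, e.g. the output of a counting CNN head.
rng = np.random.default_rng(0)
local_counts = rng.uniform(0.0, 2.0, size=(60, 34))   # one value per window
print(image_count_from_local_counts(local_counts, window=64, stride=8))
```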


2021 ◽  
Vol 11 (13) ◽  
pp. 6025
Author(s):  
Han Xie ◽  
Wenqi Zheng ◽  
Hyunchul Shin

Although many deep-learning-based methods have achieved considerable detection performance for pedestrians with high visibility, their overall performance is still far from satisfactory, especially when heavily occluded instances are included. In this research, we have developed a novel pedestrian detector using a deformable attention-guided network (DAGN). Considering that pedestrians may be deformed by occlusions or diverse poses, we have designed a deformable convolution with an attention module (DCAM) to sample from non-rigid locations and obtain the attention feature map by aggregating global context information. Furthermore, the loss function was optimized to obtain accurate detection bounding boxes by adopting the complete-IoU (CIoU) loss for regression, and distance-IoU NMS (DIoU-NMS) was used to refine the predicted boxes. Finally, a preprocessing technique based on tone mapping was applied to cope with low-visibility cases due to poor illumination. Extensive evaluations were conducted on three popular traffic datasets. Our method decreases the log-average miss rate (MR⁻²) by 12.44% and 7.8% for the heavy-occlusion and overall cases, respectively, when compared to the published state-of-the-art results on the Caltech pedestrian dataset. On the CityPersons and EuroCity Persons datasets, our proposed method outperformed the current best results by about 5% in MR⁻² for the heavy-occlusion cases.
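
For context, the complete-IoU loss extends plain IoU with a normalized center-distance term and an aspect-ratio consistency term: L_CIoU = 1 - IoU + rho^2(b, b_gt)/c^2 + alpha*v. A minimal NumPy sketch for axis-aligned boxes in (x1, y1, x2, y2) form, illustrating the published CIoU formula rather than the authors' code:

```python
import numpy as np

def ciou_loss(box: np.ndarray, gt: np.ndarray) -> float:
    """Complete-IoU loss for two axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection and union for plain IoU.
    xi1, yi1 = max(box[0], gt[0]), max(box[1], gt[1])
    xi2, yi2 = min(box[2], gt[2]), min(box[3], gt[3])
    inter = max(0.0, xi2 - xi1) * max(0.0, yi2 - yi1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    iou = inter / (area(box) + area(gt) - inter)

    # Squared center distance, normalized by the squared diagonal of the
    # smallest enclosing box.
    cb = ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)
    cg = ((gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2)
    rho2 = (cb[0] - cg[0]) ** 2 + (cb[1] - cg[1]) ** 2
    c2 = (max(box[2], gt[2]) - min(box[0], gt[0])) ** 2 \
       + (max(box[3], gt[3]) - min(box[1], gt[1])) ** 2

    # Aspect-ratio consistency term with its trade-off weight alpha.
    v = 4 / np.pi ** 2 * (np.arctan((gt[2] - gt[0]) / (gt[3] - gt[1]))
                          - np.arctan((box[2] - box[0]) / (box[3] - box[1]))) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v

print(ciou_loss(np.array([0, 0, 4, 6.0]), np.array([1, 1, 5, 7.0])))
```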


Author(s):  
Xiao Ling ◽  
Sameer Singh ◽  
Daniel S. Weld

Recent research on entity linking (EL) has introduced a plethora of promising techniques, ranging from deep neural networks to joint inference. But despite the numerous papers, there is surprisingly little understanding of the state of the art in EL. We attack this confusion by analyzing differences between several versions of the EL problem and presenting a simple yet effective, modular, unsupervised system, called Vinculum, for entity linking. We conduct an extensive evaluation on nine data sets, comparing Vinculum with two state-of-the-art systems, and elucidate key aspects of the system, including mention extraction, candidate generation, entity type prediction, entity coreference, and coherence.
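
The modular decomposition named above maps naturally onto a staged pipeline. A schematic sketch of such stages, where every function, type, and the toy knowledge base are illustrative assumptions rather than Vinculum's actual interfaces:

```python
from dataclasses import dataclass

@dataclass
class Mention:
    text: str
    span: tuple[int, int]

def extract_mentions(doc: str) -> list[Mention]:
    """Stage 1: find entity mentions (here: naive capitalized-token spans)."""
    mentions, pos = [], 0
    for tok in doc.split():
        start = doc.find(tok, pos)
        pos = start + len(tok)
        if tok[0].isupper():
            mentions.append(Mention(tok, (start, pos)))
    return mentions

def generate_candidates(m: Mention, kb: dict[str, list[str]]) -> list[str]:
    """Stage 2: look up candidate entities for a mention in a toy KB."""
    return kb.get(m.text, [])

def link(doc: str, kb: dict[str, list[str]]) -> dict[str, str]:
    """Stages 3-5 (type prediction, coreference, coherence) are collapsed
    here into taking the first candidate, purely for illustration."""
    return {m.text: cands[0] for m in extract_mentions(doc)
            if (cands := generate_candidates(m, kb))}

kb = {"Seattle": ["Seattle_(city)", "Seattle_Seahawks"]}
print(link("Rain fell on Seattle today", kb))
```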


2018 ◽  
Vol 232 ◽  
pp. 04036
Author(s):  
Jun Yin ◽  
Huadong Pan ◽  
Hui Su ◽  
Zhonggeng Liu ◽  
Zhirong Peng

We propose an object detection method that predicts oriented bounding boxes (OBB) to estimate object locations, scales, and orientations, based on YOLO (You Only Look Once), one of the top detection algorithms performing well in both accuracy and speed. Horizontal bounding boxes (HBB), which are not robust to orientation variance, are used in existing object detection methods to detect targets. The proposed orientation-invariant YOLO (OIYOLO) detector can effectively deal with bird's-eye-view images, where the orientation angles of the objects are arbitrary. In order to estimate the rotation angle of objects, we design a new angle loss function. The training of OIYOLO therefore forces the network to learn the annotated orientation angle of objects, making OIYOLO orientation invariant. The proposed approach of predicting OBB can be applied in other detection frameworks. In addition, to evaluate the proposed OIYOLO detector, we create a UAV-DAHUA dataset accurately annotated with object locations, scales, and orientation angles. Extensive experiments conducted on the UAV-DAHUA and DOTA datasets demonstrate that OIYOLO achieves state-of-the-art detection performance with high efficiency compared with the baseline YOLO algorithms.
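
The abstract does not spell the angle loss out, but a common way to make angle regression robust to periodicity is to penalize the (sin, cos) encoding of the angle difference. A minimal sketch of such a periodic loss, offered as an assumption for illustration rather than the paper's exact formulation:

```python
import numpy as np

def angle_loss(pred: np.ndarray, target: np.ndarray) -> float:
    """Penalize the L2 distance between (sin, cos) encodings of the angles,
    which equals 2 * (1 - cos(pred - target)) and is periodic in 360 deg."""
    return float(np.mean((np.sin(pred) - np.sin(target)) ** 2
                         + (np.cos(pred) - np.cos(target)) ** 2))

pred = np.deg2rad(np.array([181.0, 30.0]))
target = np.deg2rad(np.array([-179.0, 25.0]))
print(angle_loss(pred, target))   # small: both pairs nearly agree modulo 360 deg
```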


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4184
Author(s):  
Zhiwei Cao ◽  
Huihua Yang ◽  
Juan Zhao ◽  
Shuhong Guo ◽  
Lingqiao Li

Multispectral pedestrian detection, which combines a color stream and a thermal stream, is essential under conditions of insufficient illumination, because the fusion of the two streams can provide complementary information for detecting pedestrians with deep convolutional neural networks (CNNs). In this paper, we introduced and adapted the simple and efficient one-stage YOLOv4 to replace the current state-of-the-art two-stage Faster R-CNN for multispectral pedestrian detection and to directly predict bounding boxes with confidence scores. To further improve the detection performance, we analyzed the existing multispectral fusion methods and proposed a novel multispectral channel feature fusion (MCFF) module for integrating the features from the color and thermal streams according to the illumination conditions. Moreover, several fusion architectures, such as Early Fusion, Halfway Fusion, Late Fusion, and Direct Fusion, were carefully designed based on the MCFF to transfer the feature information from the bottom to the top at different stages. Finally, the experimental results on the KAIST and Utokyo pedestrian benchmarks showed that Halfway Fusion obtained the best performance of all architectures and that the MCFF could adapt the fused features in the two modalities. The log-average miss rates (MR) on the two benchmarks under reasonable settings were 4.91% and 23.14%, respectively.
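
One plausible reading of illumination-conditioned channel fusion is a gate that mixes color and thermal feature maps channel-wise according to an illumination score. A schematic NumPy sketch, where the gating form and the per-channel offsets are assumptions for illustration, not the published MCFF design:

```python
import numpy as np

def illumination_gated_fusion(color_feat: np.ndarray,
                              thermal_feat: np.ndarray,
                              illumination: float) -> np.ndarray:
    """Blend (C, H, W) feature maps: bright scenes lean on the color stream,
    dark scenes on the thermal stream, via a sigmoid gate per channel."""
    C = color_feat.shape[0]
    # Per-channel gate centered on the scalar illumination score; in a real
    # network these offsets would be learned parameters.
    offsets = np.linspace(-1.0, 1.0, C)
    gate = 1.0 / (1.0 + np.exp(-(illumination + offsets)))   # shape (C,)
    return gate[:, None, None] * color_feat + (1 - gate[:, None, None]) * thermal_feat

color = np.random.rand(8, 32, 32)
thermal = np.random.rand(8, 32, 32)
fused_day = illumination_gated_fusion(color, thermal, illumination=2.0)
fused_night = illumination_gated_fusion(color, thermal, illumination=-2.0)
```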


2021 ◽  
Author(s):  
Lingfei Ma ◽  
Ying Li ◽  
Jonathan Li ◽  
Cheng Wang ◽  
Ruisheng Wang ◽  
...  

The mobile laser scanning (MLS) technique has attracted considerable attention for providing high-density, high-accuracy, unstructured, three-dimensional (3D) geo-referenced point-cloud coverage of the road environment. Recently, there has been an increasing number of applications of MLS in the detection and extraction of urban objects. This paper presents a systematic review of the existing MLS-related literature in three parts. Part 1 presents a brief overview of state-of-the-art commercial MLS systems. Part 2 provides a detailed analysis of on-road and off-road information inventory methods, including the detection and extraction of on-road objects (e.g., road surfaces, road markings, driving lines, and road cracks) and off-road objects (e.g., pole-like objects and power lines). Part 3 presents a refined integrated analysis of challenges and future trends. Our review shows that MLS technology is well proven in urban object detection and extraction, since improvements in hardware and software have accelerated the efficiency and accuracy of data collection and processing. Compared to other review papers focusing on MLS applications, we review the state-of-the-art road object detection and extraction methods using MLS data and discuss their performance and applicability. The main contribution of this review is to demonstrate that MLS systems are suitable for supporting road asset inventory, ITS-related applications, high-definition maps, and other highly accurate localization services.


2020 ◽  
Vol 34 (07) ◽  
pp. 12959-12966
Author(s):  
Pengyu Zhao ◽  
Ansheng You ◽  
Yuanxing Zhang ◽  
Jiaying Liu ◽  
Kaigui Bian ◽  
...  

With the advance of omnidirectional panoramic technology, 360° imagery has become increasingly popular in the past few years. To better understand 360° content, many works resort to 360° object detection, and various criteria have been proposed to bound the objects and compute the intersection-over-union (IoU) between bounding boxes based on the common equirectangular projection (ERP) or perspective projection (PSP). However, the existing 360° criteria are either inaccurate or inefficient for real-world scenarios. In this paper, we introduce novel spherical criteria for fast and accurate 360° object detection, including both spherical bounding boxes and a spherical IoU (SphIoU). Based on the spherical criteria, we propose a novel two-stage 360° detector, Reprojection R-CNN, which combines the advantages of both ERP and PSP, yielding efficient and accurate 360° object detection. To validate the design of the spherical criteria and Reprojection R-CNN, we construct two unbiased synthetic datasets for training and evaluation. Experimental results reveal that, compared with the existing criteria, the two-stage detector with spherical criteria achieves the best mAP results at the same inference speed, demonstrating that the spherical criteria are more suitable for 360° object detection. Moreover, Reprojection R-CNN outperforms the previous state-of-the-art methods by over 30% on mAP with competitive speed, which confirms the efficiency and accuracy of the design.
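
To make the spherical-IoU idea concrete, consider the simplified case of boxes aligned to latitude/longitude: a spherical rectangle spanning longitudes [lon1, lon2] and latitudes [lat1, lat2] has area (lon2 - lon1)(sin lat2 - sin lat1) on the unit sphere, so IoU reduces to interval intersection. A hedged sketch of this axis-aligned special case only; SphIoU as defined in the paper handles general spherical boxes:

```python
import numpy as np

def sph_area(lon1, lon2, lat1, lat2) -> float:
    """Area of a lat/long-aligned patch on the unit sphere (radians)."""
    return (lon2 - lon1) * (np.sin(lat2) - np.sin(lat1))

def sph_iou(a, b) -> float:
    """IoU of two lat/long-aligned spherical rectangles (lon1, lon2, lat1, lat2).
    Wrap-around at the +-180 deg meridian is ignored for brevity."""
    ilon1, ilon2 = max(a[0], b[0]), min(a[1], b[1])
    ilat1, ilat2 = max(a[2], b[2]), min(a[3], b[3])
    if ilon1 >= ilon2 or ilat1 >= ilat2:
        return 0.0
    inter = sph_area(ilon1, ilon2, ilat1, ilat2)
    union = sph_area(*a) + sph_area(*b) - inter
    return inter / union

a = np.deg2rad([0.0, 40.0, 10.0, 50.0])
b = np.deg2rad([20.0, 60.0, 20.0, 60.0])
print(sph_iou(a, b))
```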


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Joel Gustafsson ◽  
Peter Norberg ◽  
Jan R. Qvick-Wester ◽  
Alexander Schliep

Abstract

Background: Alignment-free methods are a popular approach for comparing biological sequences, including complete genomes. The methods range from probability distributions of sequence composition to first- and higher-order Markov chains, where a k-th order Markov chain over DNA has 4^k formal parameters. To circumvent this exponential growth in parameters, variable-length Markov chains (VLMCs) have gained popularity for applications in molecular biology and other areas. VLMCs adapt the depth depending on sequence context and thus curtail excesses in the number of parameters. The scarcity of available fast, or even parallel, software tools prompted the development of a parallel implementation using lazy suffix trees and a hash-based alternative.

Results: An extensive evaluation was performed on genomes ranging from 12 Mbp to 22 Gbp. Relevant learning parameters were chosen guided by the Bayesian Information Criterion (BIC) to avoid over-fitting. Our implementation greatly improves upon the state-of-the-art even in serial execution. It exhibits very good parallel scaling, with speed-ups for long sequences close to the optimum indicated by Amdahl's law: 3 for 4 threads and about 6 for 16 threads, respectively.

Conclusions: Our parallel implementation, released as open source under the GPLv3 license, provides a practically useful alternative to the state-of-the-art which allows the construction of VLMCs even for very large genomes significantly faster than previously possible. Additionally, our parameter selection based on BIC gives guidance to end-users comparing genomes.
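
The exponential blow-up and the BIC-guided choice of depth can be made concrete: a k-th order DNA Markov chain has on the order of 4^k parameters, and BIC trades fit against that count via BIC = p ln(n) - 2 ln(L). A toy sketch of selecting the order k of a plain fixed-depth Markov chain by BIC, as a simplification of VLMC selection for illustration only:

```python
import math
from collections import Counter

def log_likelihood(seq: str, k: int) -> float:
    """Log-likelihood of seq under a k-th order Markov chain with MLE
    transition probabilities (the first k symbols act as fixed context)."""
    ctx = Counter(seq[i:i + k] for i in range(len(seq) - k))
    trans = Counter(seq[i:i + k + 1] for i in range(len(seq) - k))
    return sum(n * math.log(n / ctx[w[:k]]) for w, n in trans.items())

def bic(seq: str, k: int) -> float:
    """BIC = p * ln(n) - 2 * ln(L), with p = 3 * 4**k free parameters
    (4**k contexts, 3 free probabilities each)."""
    p = 3 * 4 ** k
    return p * math.log(len(seq)) - 2 * log_likelihood(seq, k)

# Toy sequence; real use would target genomes at the Mbp-Gbp scale.
seq = "ACGT" * 2500 + "AACCGGTT" * 1250
best_k = min(range(4), key=lambda k: bic(seq, k))
print({k: round(bic(seq, k), 1) for k in range(4)}, "-> best k:", best_k)
```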

