A Deep Pedestrian Tracking SSD-Based Model in the Sudden Emergency or Violent Environment

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Zhihong Li ◽  
Yang Dong ◽  
Yanjie Wen ◽  
Han Xu ◽  
Jiahao Wu

Public security monitoring is a hot issue that governments and citizens pay close attention to. Multiobject tracking plays an important role in solving many public security problems. Under crowded scenarios and in emergency places, prediction and warning are challenging owing to the complexity of crowd interaction. There are still many deficiencies in research on multiobject trajectory prediction, which mostly employs object detection and data association. Compared with the tremendous progress in object detection, data association still relies on hand-crafted constraints such as group, motion, and spatial proximity. Emergencies usually have the characteristics of sudden change, target diversification, and low illumination or resolution, which makes multitarget tracking more difficult. In this paper, we harness advances in deep learning frameworks for data association in object tracking by jointly modeling pedestrian features. The proposed deep pedestrian tracking SSD-based model can pair and link pedestrian features in any two frames. The model was trained on an open dataset, and its accuracy and speed were compared between normal and emergency or violent environments. The experimental results show that the tracking accuracy (mAP) is higher than 95% on both the normal and abnormal datasets and higher than that of traditional detection algorithms. The detection speed on the normal dataset is slightly higher than on the abnormal dataset. In general, the model delivers good tracking results and credibility for multitarget tracking in emergency environments. The research provides technical support for safety assurance and behavior monitoring in emergency environments.
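
For illustration only, here is a minimal sketch of the frame-to-frame linking such a model implies: given appearance embeddings for the detections in two frames, pair them by similarity and keep confident matches. The embedding source, the threshold, and all names are assumptions; the paper's network learns the pairing end-to-end, so the explicit Hungarian step below is a stand-in for its final linking stage, not the authors' method.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_detections(feats_t, feats_t1, sim_threshold=0.5):
    """Pair detections in frame t with detections in frame t+1 by
    appearance-embedding similarity (hypothetical feature vectors)."""
    # Cosine similarity matrix between L2-normalized embeddings.
    a = feats_t / np.linalg.norm(feats_t, axis=1, keepdims=True)
    b = feats_t1 / np.linalg.norm(feats_t1, axis=1, keepdims=True)
    sim = a @ b.T
    # Hungarian algorithm maximizes total similarity (minimize negative).
    rows, cols = linear_sum_assignment(-sim)
    # Keep only sufficiently similar pairs; the rest start new tracks.
    return [(r, c) for r, c in zip(rows, cols) if sim[r, c] >= sim_threshold]
```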


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 2894
Author(s):  
Minh-Quan Dao ◽  
Vincent Frémont

Multi-Object Tracking (MOT) is an integral part of any autonomous driving pipeline because it produces trajectories of other moving objects in the scene and predicts their future motion. Thanks to recent advances in 3D object detection enabled by deep learning, tracking-by-detection has become the dominant paradigm in 3D MOT. In this paradigm, a MOT system is essentially made of an object detector and a data association algorithm which establishes track-to-detection correspondence. While 3D object detection has been actively researched, association algorithms for 3D MOT have settled on bipartite matching formulated as a Linear Assignment Problem (LAP) and solved by the Hungarian algorithm. In this paper, we adapt a two-stage data association method, which was successfully applied to image-based tracking, to the 3D setting, thus providing an alternative data association method for 3D MOT. Our method outperforms the baseline using one-stage bipartite matching for data association, achieving 0.587 Average Multi-Object Tracking Accuracy (AMOTA) on the NuScenes validation set and 0.365 AMOTA (at level 2) on the Waymo test set.
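
As a rough sketch of the ideas named here, the snippet below solves one bipartite matching stage as a LAP with the Hungarian algorithm (via scipy.optimize.linear_sum_assignment), then cascades a second stage over the leftover tracks and low-score detections. The score split, gating cost, and function names are assumptions; the paper's actual two-stage method may partition detections and define costs differently.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def lap_match(cost, max_cost):
    """Solve one bipartite matching stage; gate out implausible pairs."""
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]

def two_stage_associate(cost, det_scores, high=0.5, max_cost=2.0):
    """Two-stage association sketch: match confident detections first,
    then give unmatched tracks a second chance on low-score detections."""
    det_scores = np.asarray(det_scores)
    hi = np.where(det_scores >= high)[0]   # confident detections
    lo = np.where(det_scores < high)[0]    # low-score detections
    matches = [(r, hi[c]) for r, c in lap_match(cost[:, hi], max_cost)]
    matched = {r for r, _ in matches}
    rest = [t for t in range(cost.shape[0]) if t not in matched]
    if rest and lo.size:
        sub = cost[np.ix_(rest, lo)]       # second-stage cost submatrix
        matches += [(rest[r], lo[c]) for r, c in lap_match(sub, max_cost)]
    return matches
```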



2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Lee Ming Jun Melvin ◽  
Rajesh Elara Mohan ◽  
Archana Semwal ◽  
Povendhan Palanisamy ◽  
Karthikeyan Elangovan ◽  
...  

Drain blockage is a crucial problem in the urban environment. It heavily affects the ecosystem and human health. Hence, routine drain inspection is essential for the urban environment. Manual drain inspection is a tedious task and prone to accidents and water-borne diseases. This work presents a drain inspection framework using a convolutional neural network (CNN) based object detection algorithm and an in-house developed reconfigurable teleoperated robot called 'Raptor'. The CNN-based object detection model was trained using a transfer learning scheme with our custom drain-blocking object dataset. The efficiency of the trained CNN algorithm and the drain inspection robot Raptor was evaluated through various real-time drain inspection field trials. The experimental results indicate that our trained object detection algorithm detected and classified the drain-blocking objects with 91.42% accuracy for both offline and online test images and is able to process 18 frames per second (FPS). Further, the maneuverability of the robot was evaluated in various open and closed drain environments. The field trial results confirm that the robot's maneuverability was stable, and its mapping and localization are accurate in a complex drain environment.
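
The abstract does not name the detector, so as a hedged illustration of the transfer learning scheme, the sketch below fine-tunes a COCO-pretrained torchvision Faster R-CNN by reusing the backbone and replacing only the classification head for the custom classes. The detector choice and class count are assumptions, not the authors' exact setup.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_drain_detector(num_classes):
    """Transfer learning sketch: keep the pretrained backbone,
    swap the box predictor for the custom drain-blocking classes."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # num_classes must include the background class in torchvision detectors.
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

# Hypothetical: background + 4 drain-blocking object categories.
model = build_drain_detector(num_classes=1 + 4)
```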



2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Sheng Ren ◽  
Jianqi Li ◽  
Tianyi Tu ◽  
Yibo Peng ◽  
Jian Jiang

Video surveillance plays an increasingly important role in public security and is a technical foundation for constructing safe and smart cities. Traditional video surveillance systems can only provide real-time monitoring or support manual case analysis by reviewing the surveillance video, so it is difficult to use the data sampled from the surveillance video effectively. In this paper, we propose an efficient super-resolution method for video detection objects with a deep fusion network for public security. Firstly, we designed a super-resolution framework for video detection objects. By fusing object detection algorithms, video keyframe selection algorithms, and super-resolution reconstruction algorithms, we propose a deep learning-based intelligent video detection object super-resolution (SR) method. Secondly, we designed a regression-based object detection algorithm and a key video frame selection algorithm. The object detection algorithm is used to assist police and security personnel in tracking suspicious objects in real time. The keyframe selection algorithm selects key information from a large amount of redundant information, which helps to improve the efficiency of video content analysis and reduce labor costs. Finally, we designed an asymmetric deep recursive back-projection network for super-resolution reconstruction. By combining the advantages of pixel-based and feature-space-based super-resolution algorithms, we improved the resolution and the visual perceptual clarity of the key objects. Extensive experimental evaluations show the efficiency and effectiveness of our method.
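
A minimal sketch of how the three fused components could be chained is shown below, assuming frames are numpy arrays and that detector, select_keyframes, and sr_model are placeholders for the paper's regression-based detector, keyframe selector, and back-projection SR network; none of these names or interfaces come from the paper.

```python
def process_surveillance_stream(frames, detector, select_keyframes, sr_model):
    """Pipeline sketch: detect objects per frame, keep only keyframes,
    then super-resolve the cropped detections for later analysis."""
    results = []
    # Pair each frame with its detected bounding boxes.
    detections = [(frame, detector(frame)) for frame in frames]
    # Drop redundant frames, keeping only informative keyframes.
    for frame, boxes in select_keyframes(detections):
        for (x1, y1, x2, y2) in boxes:
            crop = frame[y1:y2, x1:x2]        # region of the key object
            results.append(sr_model(crop))    # SR reconstruction of the crop
    return results
```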



2019 ◽  
Vol 16 (3) ◽  
pp. 172988141984299 ◽  
Author(s):  
Dongfang Yang ◽  
Xing Liu ◽  
Hao He ◽  
Yongfei Li

Detecting objects from unmanned aerial vehicles is a hard task due to the long viewing distance and the resulting small object size and limited field of view. Besides, traditional ground observation methods based on visible-light cameras are sensitive to illumination. This article aims to improve target detection accuracy in various weather conditions by using both a visible-light camera and an infrared camera simultaneously. In this article, an association network of multimodal feature maps of the same scene is used to design an object detection algorithm, the so-called feature association learning method. In addition, this article collects a new cross-modal detection dataset and proposes a cross-modal object detection algorithm based on visible-light and infrared observations. The experimental results show that the algorithm improves the detection accuracy of small objects in the air-to-ground view. The multimodal joint detection network can overcome the influence of illumination in different weather conditions, which provides new detection means and ideas for small object detection tasks on space-based unmanned platforms.
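
The article's association network is not specified here, so the following is only one plausible reading: a two-stream module where each modality has its own backbone and the aligned feature maps are fused by channel concatenation and a 1x1 convolution before a shared detection head. All module names and the fusion choice are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Sketch of associating visible-light and infrared feature maps."""
    def __init__(self, rgb_backbone, ir_backbone, fused_channels, head):
        super().__init__()
        self.rgb_backbone = rgb_backbone
        self.ir_backbone = ir_backbone
        # 1x1 conv mixes the concatenated channels of both modalities.
        self.fuse = nn.Conv2d(fused_channels, fused_channels // 2, kernel_size=1)
        self.head = head

    def forward(self, rgb, ir):
        f_rgb = self.rgb_backbone(rgb)   # features from the visible image
        f_ir = self.ir_backbone(ir)      # features from the aligned IR image
        fused = self.fuse(torch.cat([f_rgb, f_ir], dim=1))
        return self.head(fused)
```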



Author(s):  
Samuel Humphries ◽  
Trevor Parker ◽  
Bryan Jonas ◽  
Bryan Adams ◽  
Nicholas J Clark

Quick identification of buildings and roads is critical for the execution of tactical US military operations in an urban environment. To this end, a gridded, referenced satellite image of an objective, often referred to as a gridded reference graphic or GRG, has become a standard product developed during intelligence preparation of the environment. At present, operational units identify key infrastructure by hand through the work of individual intelligence officers. Recent advances in Convolutional Neural Networks, however, allow this process to be streamlined through the use of object detection algorithms. In this paper, we describe an object detection algorithm designed to quickly identify and label both buildings and road intersections present in an image. Our work leverages both the U-Net architecture and the SpaceNet data corpus to produce an algorithm that accurately identifies a large breadth of buildings and different types of roads. In addition to predicting buildings and roads, our model numerically labels each building by means of a contour-finding algorithm. Most importantly, the dual U-Net model is capable of predicting buildings and roads on a diverse set of test images and using these predictions to produce clean GRGs.
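
The contour-finding step for numbering buildings can be illustrated with OpenCV, assuming the building branch of the dual U-Net outputs a binary mask; the centroid-based label placement below is illustrative, not the authors' exact procedure.

```python
import cv2
import numpy as np

def label_buildings(building_mask):
    """Numerically label buildings in a binary segmentation mask."""
    mask = (building_mask > 0).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    annotated = cv2.cvtColor(mask * 255, cv2.COLOR_GRAY2BGR)
    for i, cnt in enumerate(contours, start=1):
        M = cv2.moments(cnt)
        if M["m00"] == 0:
            continue  # skip degenerate contours with zero area
        # Draw the building's number at its centroid.
        cx, cy = int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"])
        cv2.putText(annotated, str(i), (cx, cy),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    return annotated
```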



Author(s):  
Louis Lecrosnier ◽  
Redouane Khemmar ◽  
Nicolas Ragot ◽  
Benoit Decoux ◽  
Romain Rossi ◽  
...  

This paper deals with the development of an Advanced Driver Assistance System (ADAS) for a smart electric wheelchair in order to improve the autonomy of disabled people. Our use case, built from a formal clinical study, is based on the detection, depth estimation, localization, and tracking of objects in the wheelchair's indoor environment, namely doors and door handles. The aim of this work is to provide a perception layer to the wheelchair, enabling the detection of these keypoints in its immediate surroundings and the construction of a short-lifespan semantic map. Firstly, we present an adaptation of the YOLOv3 object detection algorithm to our use case. Then, we present our depth estimation approach using an Intel RealSense camera. Finally, as the third and last step of our approach, we present our 3D object tracking approach based on the SORT algorithm. In order to validate all the developments, we carried out different experiments in a controlled indoor environment. Detection, distance estimation, and object tracking are evaluated using our own dataset, which includes doors and door handles.
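
Assuming the RealSense SDK (pyrealsense2) is used, the depth estimation step could look like the sketch below, which deprojects the center pixel of a YOLOv3 bounding box into 3D camera coordinates; taking the single center pixel is an assumption for illustration, not necessarily the authors' method.

```python
import pyrealsense2 as rs

def box_center_to_3d(depth_frame, intrinsics, box):
    """Estimate the 3D position of a detected object (e.g. a door handle)
    from the depth at the center of its bounding box."""
    x1, y1, x2, y2 = box
    u, v = (x1 + x2) // 2, (y1 + y2) // 2      # pixel at the box center
    depth_m = depth_frame.get_distance(u, v)   # depth in meters at that pixel
    # Back-project the pixel into camera coordinates (X, Y, Z in meters).
    return rs.rs2_deproject_pixel_to_point(intrinsics, [u, v], depth_m)
```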




