Moving Object Detection and Tracking with Doppler LiDAR

2019
Vol 11 (10)
pp. 1154
Author(s):
Yuchi Ma
John Anderson
Stephen Crouch
Jie Shan

In this paper, we present a model-free, detection-based tracking approach for detecting and tracking moving objects in street scenes from point clouds obtained with a Doppler LiDAR, which collects not only spatial information (point clouds) but also Doppler images derived from Doppler-shifted frequencies. In our approach, Doppler images are used to detect moving points and determine the number of moving objects, followed by complete segmentation via a region growing technique. The tracking approach is based on Multiple Hypothesis Tracking (MHT) with two extensions. The first is a point cloud descriptor, the Oriented Ensemble of Shape Function (OESF), proposed to evaluate structural similarity during object-to-track association. The second uses Doppler images to improve the estimation of the dynamic state of moving objects. Quantitative evaluation of detection and tracking results on different datasets shows the advantages of Doppler LiDAR and the effectiveness of our approach.
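The core detection step described above, Doppler-based seeding followed by region growing, can be illustrated with a short sketch. This is only a minimal illustration under assumed inputs, not the authors' implementation: `points` is an (N, 3) array, `doppler` an (N,) array of per-point radial velocities, the thresholds are placeholders, and the OESF descriptor and MHT association are not shown.

```python
# Minimal sketch (not the authors' implementation): seed on per-point Doppler
# velocity, then grow each seed into a complete object segment.
import numpy as np
from scipy.spatial import cKDTree

def segment_moving_objects(points, doppler, v_thresh=0.5, radius=0.3):
    """points: (N, 3) array; doppler: (N,) radial velocities from the Doppler LiDAR."""
    tree = cKDTree(points)
    moving = np.abs(doppler) > v_thresh            # seeds with significant radial velocity
    labels = np.full(len(points), -1, dtype=int)   # -1 = static / unassigned
    next_label = 0
    for seed in np.flatnonzero(moving):
        if labels[seed] != -1:
            continue
        stack, labels[seed] = [seed], next_label
        while stack:                               # region growing over spatial neighbors
            idx = stack.pop()
            for nb in tree.query_ball_point(points[idx], r=radius):
                if labels[nb] == -1:
                    labels[nb] = next_label
                    stack.append(nb)
        next_label += 1
    return labels                                  # one label per detected moving object
```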

Author(s):  
Y. Cong
C. Chen
J. Li
W. Wu
S. Li
...  

Abstract. Detection And Tracking of Moving Objects (DATMO) is essential for a mobile mapping system to generate clean and accurate point cloud maps, since dynamic targets in real-world scenarios deteriorate the performance of the whole system. In this research, a robust LiDAR-SLAM system incorporating a real-time dynamic object removal module is presented to improve the accuracy of 6-DOF pose estimation and the precision of the maps. The key idea of the proposed method is to efficiently cluster the sparse point clouds of moving objects and then track them independently, so as to relieve their influence on the odometry and mapping results. In the back-end, to further refine the point cloud maps, a probabilistic map fusion method based on free-space theory is applied. We have evaluated our system on datasets collected in crowded everyday environments full of moving objects, and it provides results competitive with a state-of-the-art system in both pose estimation and point cloud mapping.
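The free-space refinement in the back-end can be sketched with a simple log-odds occupancy filter; the voxel size, increments and function names below are assumptions for illustration, not the paper's exact formulation.

```python
# Log-odds free-space filter (illustrative parameters): rays that pass through a
# voxel add "miss" evidence, returns inside it add "hit" evidence; map points in
# voxels dominated by misses are treated as traces of moving objects and removed.
import numpy as np

HIT, MISS, VOXEL = 0.85, -0.4, 0.2
log_odds = {}                                       # voxel index -> accumulated log-odds

def voxel_of(p):
    return tuple(np.floor(p / VOXEL).astype(int))

def integrate_scan(origin, scan, steps=50):
    for p in scan:
        for t in np.linspace(0.0, 1.0, steps, endpoint=False):
            v = voxel_of(origin + t * (p - origin))             # traversed -> free evidence
            log_odds[v] = log_odds.get(v, 0.0) + MISS
        v = voxel_of(p)                                         # endpoint -> occupied evidence
        log_odds[v] = log_odds.get(v, 0.0) + HIT

def refine_map(map_points, occ_thresh=0.0):
    keep = [log_odds.get(voxel_of(p), 0.0) > occ_thresh for p in map_points]
    return map_points[np.asarray(keep)]
```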


Sensors
2021
Vol 21 (1)
pp. 230
Author(s):  
Xiangwei Dang
Zheng Rong
Xingdong Liang

Accurate localization and reliable mapping are essential for the autonomous navigation of robots. As one of the core technologies for autonomous navigation, Simultaneous Localization and Mapping (SLAM) has attracted widespread attention in recent decades. Based on vision or LiDAR sensors, great efforts have been devoted to achieving real-time SLAM that can support a robot's state estimation. However, most mature SLAM methods work under the assumption that the environment is static, and in dynamic environments they yield degraded performance or even fail. In this paper, we first quantitatively evaluate the performance of state-of-the-art LiDAR-based SLAM systems, taking into account different patterns of moving objects in the environment. Through semi-physical simulation, we observed that the shape, size, and distribution of moving objects can all significantly impact SLAM performance, and we obtained instructive results from a quantitative comparison between LOAM and LeGO-LOAM. Secondly, based on this investigation, a novel approach named EMO is proposed to eliminate moving objects for SLAM by fusing LiDAR and mmW-radar, towards improving the accuracy and robustness of state estimation. The method exploits the complementary characteristics of the two sensors to fuse information at two different resolutions. Moving objects are efficiently detected by radar based on the Doppler effect, accurately segmented and localized by LiDAR, and then filtered out of the point clouds through data association and accurate synchronization in time and space. Finally, the point clouds representing the static environment are used as the input to SLAM. The proposed approach is evaluated through experiments using both semi-physical simulation and real-world datasets. The results demonstrate the effectiveness of the method at improving SLAM accuracy (at least a 30% decrease in absolute position error) and robustness in dynamic environments.
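The radar/LiDAR fusion step can be sketched as a simple gating operation; the thresholds, array layouts and function name below are assumptions, and the paper's data association and time/space synchronization are not reproduced here.

```python
# Sketch of the fusion idea: radar targets flagged as moving by their Doppler
# velocity define gating regions, and LiDAR points inside them are removed so
# that only the static environment is passed to the SLAM front-end.
import numpy as np

def remove_moving_points(lidar_points, radar_targets, v_thresh=0.5, gate=1.5):
    """lidar_points: (N, 3); radar_targets: (M, 3) rows of [x, y, doppler_velocity]."""
    moving = radar_targets[np.abs(radar_targets[:, 2]) > v_thresh]
    if len(moving) == 0:
        return lidar_points
    # association by horizontal distance between LiDAR points and moving radar targets
    dx = lidar_points[:, 0:1] - moving[None, :, 0]
    dy = lidar_points[:, 1:2] - moving[None, :, 1]
    dynamic = (np.sqrt(dx ** 2 + dy ** 2) < gate).any(axis=1)
    return lidar_points[~dynamic]                   # static points only
```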


2014
Vol 533
pp. 218-225
Author(s):  
Rapee Krerngkamjornkit
Milan Simic

This paper describes computer vision algorithms for the detection, identification, and tracking of moving objects in a video file. The problem of multiple object tracking can be divided into two parts: detecting moving objects in each frame, and associating the detections corresponding to the same object over time. The detection of moving objects uses a background subtraction algorithm based on Gaussian mixture models. The motion of each track is estimated by a Kalman filter. The video tracking algorithm was successfully tested using the BIWI walking pedestrians datasets. The experimental results show that the system can operate in real time and successfully detect, track and identify multiple targets in the presence of partial occlusion.
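A minimal OpenCV sketch of the same pipeline (Gaussian mixture background subtraction plus a constant-velocity Kalman filter) is given below; the video path and filter parameters are assumptions, and only a single track is kept for brevity, whereas the paper handles multiple targets.

```python
# OpenCV sketch (assumed video path and parameters): mixture-of-Gaussians
# background subtraction for detection and a constant-velocity Kalman filter
# for a single track.
import cv2
import numpy as np

cap = cv2.VideoCapture("pedestrians.avi")
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

kf = cv2.KalmanFilter(4, 2)                        # state: x, y, vx, vy; measurement: x, y
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                         # foreground from the mixture model
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    predicted = kf.predict()                       # position estimate between detections
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        kf.correct(np.array([[x + w / 2], [y + h / 2]], np.float32))
```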


2015
Vol 734
pp. 203-206
Author(s):  
En Zeng Dong
Sheng Xu Yan
Kui Xiang Wei

To enhance the speed and accuracy of moving target detection and tracking, and to improve the performance of the algorithm on a DSP (digital signal processor), an active visual tracking system based on the Gaussian mixture background model and the Meanshift algorithm was designed on the DM6437. The system uses the VLIB library developed by TI, detects moving objects with the Gaussian mixture background model, and tracks the target in RGB space with a color-feature-based Meanshift tracking algorithm. Finally, the system is tested on the hardware platform and verified to be fast and accurate.
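For illustration, the following PC-side sketch reproduces the same color-based Meanshift idea with OpenCV; it is not the DM6437/VLIB implementation, and the video path, initial window and histogram bins are assumptions.

```python
# PC-side OpenCV sketch of color-based Meanshift tracking (not DM6437/VLIB code;
# the video path, initial window and histogram bins are assumptions).
import cv2
import numpy as np

cap = cv2.VideoCapture("target.avi")
ok, frame = cap.read()
x, y, w, h = 200, 150, 60, 80                      # initial window from the detection stage
roi = frame[y:y + h, x:x + w]
hist = cv2.calcHist([roi], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)   # RGB color model
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    backproj = cv2.calcBackProject([frame], [0, 1, 2], hist, [0, 256] * 3, 1)
    _, (x, y, w, h) = cv2.meanShift(backproj, (x, y, w, h), criteria)  # shift window to mode
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```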


Sensors
2018
Vol 18 (10)
pp. 3347
Author(s):  
Zhishuang Yang
Bo Tan
Huikun Pei
Wanshou Jiang

The classification of point clouds is a basic task in airborne laser scanning (ALS) point cloud processing. It is quite challenging when facing complex observed scenes and irregular point distributions. In order to reduce the computational burden of point-based classification methods and improve classification accuracy, we present a segmentation and multi-scale convolutional neural network-based classification method. Firstly, a three-step region-growing segmentation method is proposed to reduce both under-segmentation and over-segmentation. Then, a feature image generation method is used to transform the 3D neighborhood features of a point into a 2D image. Finally, the feature images are treated as the input of a multi-scale convolutional neural network for training and testing. To obtain performance comparisons with existing approaches, we evaluated our framework using the International Society for Photogrammetry and Remote Sensing Working Groups II/4 (ISPRS WG II/4) 3D labeling benchmark tests. The experimental results, with 84.9% overall accuracy and a 69.2% average F1 score, show satisfactory performance relative to all participating approaches analyzed.
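The feature-image idea can be sketched as follows; the grid size, channels and scales are assumptions rather than the paper's exact design, and the multi-scale CNN itself is omitted.

```python
# Sketch of turning a point's 3D neighborhood into a small 2D feature image
# (grid size, channels and scales are assumptions; the multi-scale CNN is omitted).
import numpy as np
from scipy.spatial import cKDTree

def feature_image(points, center_idx, radius, grid=16):
    tree = cKDTree(points[:, :2])
    nbrs = tree.query_ball_point(points[center_idx, :2], r=radius)
    img = np.zeros((grid, grid, 2), np.float32)    # channels: max relative height, point count
    for i in nbrs:
        dx, dy = points[i, :2] - points[center_idx, :2]
        u = int((dx / radius + 1) / 2 * (grid - 1))
        v = int((dy / radius + 1) / 2 * (grid - 1))
        img[v, u, 0] = max(img[v, u, 0], points[i, 2] - points[center_idx, 2])
        img[v, u, 1] += 1.0
    return img

pts = np.random.rand(1000, 3) * 10.0               # synthetic ALS points for illustration
# multi-scale input: one feature image per neighborhood radius, stacked channel-wise
multi_scale = np.concatenate([feature_image(pts, 0, r) for r in (1.0, 2.5, 5.0)], axis=-1)
```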


Author(s):  
M. Corongiu
A. Masiero
G. Tucci

Abstract. Nowadays, mobile mapping systems are widely used to quickly collect reliable geospatial information over relatively large areas: thanks to these characteristics, the number of applications and fields exploiting them is continuously increasing. Among the possible applications, mobile mapping systems have recently been considered by railway system managers to quickly produce and update a database of the geospatial features, or assets, of the railway system. Although several vehicles, devices and acquisition methods can be considered for data collection on the railway system, the predominant one is probably a mobile mapping system mounted on a train, which moves along the railway tracks and enables a 3D reproduction of the entire railway track area. Given the large amount of data collected by such mobile mapping, automatic procedures have to be used to speed up the extraction of the spatial information of interest, i.e. asset positions and characteristics. This paper considers the problem of extracting such information for cantilever and portal masts by exploiting a mixed approach. First, a set of candidate areas is extracted and pre-processed by considering some of their geometric characteristics, mainly obtained from the eigenvalues of the covariance matrix of a point neighborhood. Then, a 3D modified Fisher vector deep learning neural network is used to classify the candidates. Tests of this approach are conducted in two areas of the Italian railway system.
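The eigenvalue-based geometric characteristics mentioned above are commonly expressed as linearity, planarity and sphericity of a local neighborhood; the sketch below uses this standard formulation as an assumption, since the paper's exact feature set is not reproduced here.

```python
# Standard eigenvalue features of a local covariance matrix (assumed formulation):
# elongated structures such as masts score high on linearity, surfaces on planarity.
import numpy as np

def covariance_features(neighborhood):
    """neighborhood: (K, 3) array of points around a candidate location, K >= 3."""
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l1 >= l2 >= l3
    return {
        "linearity":  (l1 - l2) / l1,
        "planarity":  (l2 - l3) / l1,
        "sphericity": l3 / l1,
    }
```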


Informatics
2021
Vol 18 (1)
pp. 43-60
Author(s):  
R. P. Bohush
S. V. Ablameyko

One of the promising areas for the development and implementation of artificial intelligence is the automatic detection and tracking of moving objects in video sequences. The paper presents a formalization of the detection and tracking of single and multiple objects in video. The following metrics are considered: the quality of detection of tracked objects, the accuracy of determining the location of an object in a frame, the trajectory of movement, and the accuracy of tracking multiple objects. Based on this generalization, an algorithm for tracking people has been developed that uses the tracking-by-detection method and convolutional neural networks to detect people and form features. The neural network features are included in a composite descriptor that also contains geometric and color features to describe each detected person in the frame. The results of experiments based on the considered criteria are presented, and it is experimentally confirmed that improving the detector makes it possible to increase the accuracy of object tracking. Examples of frames of processed video sequences with visualization of human movement trajectories are presented.
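A composite descriptor of the kind described above can be sketched as a concatenation of an appearance embedding, a color histogram and simple geometric cues; the embedding function, histogram bins and distance-based association below are assumptions for illustration.

```python
# Sketch of a composite descriptor and nearest-descriptor association
# (the CNN embedding function and the histogram bins are assumptions).
import numpy as np
import cv2

def composite_descriptor(person_crop, bbox, cnn_embed):
    """bbox = (x, y, w, h); cnn_embed: callable mapping an image crop to a 1-D vector."""
    hist = cv2.calcHist([person_crop], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3).flatten()
    hist /= hist.sum() + 1e-6                          # color features
    x, y, w, h = bbox
    geometry = np.array([w, h, w / h], np.float32)     # geometric features
    return np.concatenate([cnn_embed(person_crop), hist, geometry])

def associate(track_descriptor, detection_descriptors):
    """Return the index of the detection whose descriptor is closest to the track."""
    dists = [np.linalg.norm(track_descriptor - d) for d in detection_descriptors]
    return int(np.argmin(dists))
```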


With advances in technology, security and authentication have become central concerns in computer vision. Moving object detection is an efficient means of isolating the principal, perceptible sources of motion in a scene. Surveillance is one of the most crucial requirements and is carried out to monitor various kinds of activities. The detection and tracking of moving objects are fundamental concepts in surveillance systems. Moving object recognition is a challenging problem in the field of digital image processing. Moving object detection underpins applications such as Human Machine Interaction (HMI), safety and video surveillance, augmented reality, road transportation monitoring, and medical imaging. The main goal of this research is the detection and tracking of moving objects. The proposed approach begins with a pre-processing step in which frames are extracted and their dimensionality reduced. Morphological methods are applied to clean the foreground image of the moving objects, and texture-based features are extracted using a component analysis method. After that, a novel method is designed: an optimized multilayer perceptron neural network. The layers are optimized based on the Pbest and Gbest particle positions of the objects, and the fitness values are computed as binary values (x_update, y_update) of the swarm or object positions. The final frames of the moving objects in the video are created using a blob analyser. In this research, an application is designed using MATLAB version 2016a; an activation function re-filters the given input, and the final output is calculated with a pre-defined sigmoid. The proposed method is evaluated for detection and tracking on the MOT, FOOTBALL, INDOOR and OUTDOOR datasets, with the aim of improving the detection accuracy and recall rates, reducing the error, false positive and false negative rates, and comparing the results with classifiers such as KNN, MLPNN and the J48 decision tree.
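The Pbest/Gbest particle update referred to above follows the standard particle swarm optimization scheme; the sketch below uses generic coefficients and a placeholder fitness function, not the paper's settings.

```python
# Generic particle swarm update (coefficients and fitness are placeholders, not
# the paper's settings): each particle is a candidate MLP weight vector.
import numpy as np

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """positions, velocities, pbest: (n_particles, dim); gbest: (dim,)."""
    r1, r2 = np.random.rand(*positions.shape), np.random.rand(*positions.shape)
    velocities = (w * velocities
                  + c1 * r1 * (pbest - positions)      # pull toward each particle's best
                  + c2 * r2 * (gbest - positions))     # pull toward the swarm's best
    return positions + velocities, velocities

def update_bests(positions, fitness, pbest, pbest_fit):
    """fitness: callable scoring a candidate weight vector (lower is better)."""
    scores = np.array([fitness(p) for p in positions])
    improved = scores < pbest_fit
    pbest[improved], pbest_fit[improved] = positions[improved], scores[improved]
    gbest = pbest[np.argmin(pbest_fit)]
    return pbest, pbest_fit, gbest
```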

