Performance data and limiting conditions of object tracking algorithms in the presence of camera noise

1988 ◽  
Vol 135 (1) ◽  
pp. 111 ◽  
Author(s):  
H. Siebert ◽  
W. Engelhardt

2000 ◽  
Author(s):  
Todd Schoepflin ◽  
Christopher Lau ◽  
Rohit Garg ◽  
Donglok Kim ◽  
Yongmin Kim

Author(s):  
Lipeng Gu ◽  
Shaoyuan Sun ◽  
Xunhua Liu ◽  
Xiang Li

Abstract Compared with 2D multi-object tracking algorithms, 3D multi-object tracking algorithms carry greater research significance and broader application prospects in the field of unmanned vehicles. To address 3D multi-object detection and tracking, this paper improves the multi-object tracker CenterTrack, which focuses on the 2D tracking task and ignores object 3D information, in two respects, detection and tracking; the improved network is called CenterTrack3D. For detection, CenterTrack3D uses an attention mechanism to refine how the previous-frame image and the heatmap of previous-frame tracklets are combined with the current-frame image as input, and the second convolutional layer of the output head is replaced by a dynamic convolution layer, which further improves the ability to detect occluded objects. For tracking, a cascaded data association algorithm based on a 3D Kalman filter is proposed to make full use of the objects' 3D information and to increase the robustness of the 3D multi-object tracker. Experimental results show that, compared with the original CenterTrack and existing 3D multi-object tracking methods, CenterTrack3D achieves 88.75% MOTA for cars and 59.40% MOTA for pedestrians and is highly competitive on the KITTI tracking benchmark test set.
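The abstract's cascaded data association builds on a 3D Kalman filter, but the exact parameterization is not given here. The following is a minimal sketch, assuming a constant-velocity state over the 3D box center, of the kind of per-track filter such a pipeline typically relies on; the noise values and class name are illustrative, not the authors' implementation.

# Illustrative sketch (not the paper's code): a constant-velocity 3D Kalman
# filter as commonly used for motion prediction in 3D multi-object trackers.
# State = [x, y, z, vx, vy, vz]; measurements are detected 3D box centers.
import numpy as np

class KalmanTrack3D:
    def __init__(self, center_xyz, dt=0.1):
        self.x = np.zeros(6)                  # state: position + velocity
        self.x[:3] = center_xyz
        self.P = np.eye(6) * 10.0             # state covariance
        self.F = np.eye(6)                    # constant-velocity transition
        self.F[:3, 3:] = np.eye(3) * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
        self.Q = np.eye(6) * 0.01             # process noise (assumed)
        self.R = np.eye(3) * 0.1              # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]                     # predicted 3D center

    def update(self, z):
        y = np.asarray(z) - self.H @ self.x           # innovation
        S = self.H @ self.P @ self.H.T + self.R       # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P

# Usage: predict every live track, associate detections to predictions
# (e.g. by 3D distance or 3D IoU, cascaded over detection confidence),
# then update each matched track with its detection center.
track = KalmanTrack3D(center_xyz=[2.0, 0.5, 15.0])
print(track.predict())
track.update([2.1, 0.5, 14.8])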


2019 ◽  
Vol 9 (16) ◽  
pp. 3336 ◽  
Author(s):  
Tzu-Wei Mi ◽  
Mau-Tsuen Yang

With the availability of 360-degree cameras, 360-degree videos have recently become popular. To attach a virtual tag to a physical object in a 360-degree video for augmented reality applications, automatic object tracking is required so the virtual tag can follow its corresponding physical object. Relative to ordinary videos, 360-degree videos in an equirectangular format have special characteristics such as viewpoint change, occlusion, deformation, lighting change, scale change, and camera shake. Tracking algorithms designed for ordinary videos may not work well on 360-degree videos. Therefore, we thoroughly evaluate the performance of eight modern trackers in terms of accuracy and speed on 360-degree videos. The pros and cons of these trackers on 360-degree videos are discussed, and possible improvements to adapt them to 360-degree videos are suggested. Finally, we provide a dataset containing nine 360-degree videos with ground-truth target positions as a benchmark for future research.
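To make the equirectangular distortion concrete, the sketch below (conventions assumed, not taken from the paper) maps equirectangular pixel coordinates to spherical angles and a unit viewing vector. The longitude/latitude mapping stretches objects horizontally near the poles and wraps them across the left/right image border, which a tracker built for ordinary perspective video does not model.

# Illustrative sketch (assumed conventions): equirectangular pixel -> sphere.
import math

def equirect_to_sphere(u, v, width, height):
    """Pixel (u, v) in an equirectangular frame -> (longitude, latitude) in radians."""
    lon = (u / width - 0.5) * 2.0 * math.pi   # -pi .. pi across the image width
    lat = (0.5 - v / height) * math.pi        # -pi/2 .. pi/2 from bottom to top
    return lon, lat

def sphere_to_unit_vector(lon, lat):
    """Spherical angles -> 3D unit vector on the viewing sphere (y-up convention)."""
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return x, y, z

# Example: a point near the top of a 3840x1920 frame sits close to the pole,
# where a fixed angular extent spans many more horizontal pixels than at the
# equator, so rectangular pixel-space search windows and appearance models
# degrade as the target moves toward the top or bottom of the frame.
lon, lat = equirect_to_sphere(1920, 100, 3840, 1920)
print(sphere_to_unit_vector(lon, lat))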


Author(s):  
Abdelhai Lati ◽  
Haitham Mekhermeche ◽  
Belkheir Benhellal ◽  
Mahmoud Belhocine ◽  
Nouara Achour

2018 ◽  
Vol 2018 ◽  
pp. 1-14 ◽  
Author(s):  
Jianming Zhang ◽  
You Wu ◽  
Xiaokang Jin ◽  
Feng Li ◽  
Jin Wang

Object tracking is a vital topic in computer vision. Although tracking algorithms have developed considerably in recent years, their robustness and accuracy still need improvement. In this paper, to overcome the poor representation ability of a single feature in complex image sequences, we put forward a multifeature integration framework combining gray features, Histogram of Oriented Gradients (HOG), color-naming (CN), and Illumination Invariant Features (IIF), which effectively improves the robustness of object tracking. In addition, we propose a model updating strategy that uses skewness to measure the confidence of the tracking result. Unlike previous tracking algorithms, we compare the skewness values of two adjacent frames to decide whether to update the target appearance model and to set a dynamic learning rate. This further improves tracking robustness and effectively prevents target drift caused by occlusion and deformation. Extensive experiments on a large-scale benchmark of 50 image sequences show that our tracker outperforms most existing state-of-the-art trackers and runs at an average speed of over 43 fps.
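The skewness-based confidence check lends itself to a short sketch. The snippet below is an assumed illustration, not the authors' code: it computes the skewness of a correlation response map, compares it across adjacent frames, and scales or freezes an interpolation-style model update; the threshold and base learning rate are invented for the example.

# Illustrative sketch (assumed thresholds, not the paper's exact scheme):
# skewness of the response map as tracking confidence, driving a dynamic
# learning rate for the appearance-model update.
import numpy as np

def skewness(response_map):
    """Skewness of the flattened response map; a sharp, reliable peak gives a strongly skewed distribution."""
    x = np.asarray(response_map, dtype=float).ravel()
    m, s = x.mean(), x.std()
    return ((x - m) ** 3).mean() / (s ** 3 + 1e-12)

def dynamic_learning_rate(prev_skew, curr_skew, base_lr=0.02, drop_thresh=0.5):
    """Shrink or freeze the update when confidence drops sharply between adjacent frames (e.g. occlusion)."""
    if prev_skew <= 0:
        return base_lr
    ratio = curr_skew / prev_skew
    if ratio < drop_thresh:        # confidence collapsed: skip the update to avoid drift
        return 0.0
    return base_lr * min(ratio, 1.0)

def update_appearance_model(model, new_sample, lr):
    """Linear-interpolation update commonly used by correlation-filter trackers."""
    return (1.0 - lr) * model + lr * new_sample

# Usage with two synthetic response maps: a peaked (confident) previous
# response followed by a flat (occluded) current response freezes the update.
prev = np.random.rand(64, 64); prev[32, 32] = 5.0
curr = np.random.rand(64, 64)
print(dynamic_learning_rate(skewness(prev), skewness(curr)))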

