Visual Object Tracking Using Fast Three-Dimensional Discrete Cosine Transform

2014, Vol 602-605, pp. 1689-1692
Author(s): Cong Lin, Chi Man Pun

A novel visual object tracking method for color video streams based on the traditional particle filter is proposed in this paper. Feature vectors are extracted from the coefficient matrices of a fast three-dimensional Discrete Cosine Transform (fast 3-D DCT). As the experiments show, this feature is robust to occlusion and rotation and is not sensitive to scale changes. The proposed method is efficient enough to be used in real-time applications. The experiments were carried out on several datasets commonly used in the literature. The results are satisfactory and show that the estimated trajectory follows the target object closely.
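A minimal sketch of the feature idea described above, not the authors' implementation: a colour patch is transformed with a fast 3-D DCT, the low-frequency corner of the coefficient volume is kept as a feature vector, and a Gaussian similarity to the template feature serves as the particle weight. The patch size, the number of retained coefficients and the bandwidth are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn


def dct3d_feature(patch, keep=8):
    """3-D DCT over an (H, W, 3) colour patch; keep the low-frequency
    keep x keep x 3 corner of the coefficient volume as the feature."""
    coeffs = dctn(patch.astype(np.float64), norm="ortho")   # fast 3-D DCT
    vec = coeffs[:keep, :keep, :].ravel()
    return vec / (np.linalg.norm(vec) + 1e-12)              # normalise


def particle_likelihood(candidate_patch, reference_feature, sigma=0.2):
    """Gaussian likelihood of a candidate patch given the reference feature,
    usable as the particle weight in a standard particle filter."""
    f = dct3d_feature(candidate_patch)
    dist = np.linalg.norm(f - reference_feature)
    return np.exp(-dist ** 2 / (2.0 * sigma ** 2))


# Usage: weight one particle's candidate region against the initial template.
template = np.random.rand(32, 32, 3)        # stand-in for the target patch
reference = dct3d_feature(template)
candidate = np.random.rand(32, 32, 3)       # stand-in for a particle's patch
weight = particle_likelihood(candidate, reference)
```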

Sensors, 2019, Vol 19 (2), pp. 387
Author(s): Ming Du, Yan Ding, Xiuyun Meng, Hua-Liang Wei, Yifan Zhao

In recent years, regression trackers have drawn increasing attention in the visual object tracking community due to their favorable performance and easy implementation. These trackers directly learn a mapping from dense samples around the target object to Gaussian-like soft labels. However, in many real applications, the extremely imbalanced distribution of training samples usually hinders the robustness and accuracy of regression trackers when they are applied to test data. In this paper, we propose a novel and effective distractor-aware loss function that mitigates this issue by highlighting the significant domain and severely penalizing the pure background. In addition, we introduce a fully differentiable hierarchy-normalized concatenation connection to exploit abstractions across multiple convolutional layers. Extensive experiments were conducted on five challenging benchmark tracking datasets, namely OTB-13, OTB-15, TC-128, UAV-123, and VOT17. The experimental results are promising and show that the proposed tracker performs much better than nearly all the compared state-of-the-art approaches.
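A minimal sketch, assuming a PyTorch training loop, of how a distractor-aware weighting could be attached to a regression loss: the support of the Gaussian soft label (the "significant domain") is up-weighted and any response fired on pure-background samples is penalized heavily. The thresholds and weights are illustrative; this is not the paper's exact formulation.

```python
import torch


def distractor_aware_loss(pred, label, fg_thresh=0.05,
                          fg_weight=2.0, bg_penalty=5.0):
    """pred, label: response maps of shape (B, 1, H, W); label holds
    Gaussian-like soft labels in [0, 1]."""
    err = (pred - label) ** 2
    is_foreground = label >= fg_thresh                 # the "significant domain"
    weight = torch.where(is_foreground,
                         torch.full_like(label, fg_weight),
                         torch.ones_like(label))
    # severely penalise responses fired on pure-background samples
    bg_response = pred.clamp(min=0.0) * (~is_foreground).float()
    return (weight * err).mean() + bg_penalty * (bg_response ** 2).mean()


# Usage with dummy tensors standing in for a regression tracker's output.
pred = torch.rand(2, 1, 61, 61, requires_grad=True)
label = torch.rand(2, 1, 61, 61)
loss = distractor_aware_loss(pred, label)
loss.backward()
```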


2017, Vol 2017, pp. 1-9
Author(s): Suryo Adhi Wibowo, Hansoo Lee, Eun Kyeong Kim, Sungshin Kim

Histogram of oriented gradients (HOG) is a feature descriptor typically used for object detection. For object tracking, this feature has certain drawbacks when the target object is affected by changes in motion or size. In this paper, the use of convolutional shallow features is proposed to improve the performance of HOG feature-based object tracking. Because the proposed method works on the basis of a correlation filter, the response maps for each feature are summed to obtain the final response map, and the location of the target object is then predicted from the maximum of the optimized final response map. Further, a model update is used to handle changes in the appearance of the target object during tracking. The performance of the proposed method is evaluated using the Visual Object Tracking 2015 (VOT2015) benchmark dataset and its protocols, and the results are reported in terms of accuracy-robustness (AR) rank. In a comparison with several state-of-the-art tracking algorithms, the proposed method achieves the highest rank in terms of accuracy and the third rank in terms of robustness. In addition, the proposed method significantly improves the robustness of HOG-based features.
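A minimal sketch, with assumed array shapes, of the fusion and localization step described above: per-feature correlation-filter response maps are summed into a final response map, the target position is taken at its maximum, and the appearance model is refreshed by linear interpolation. The learning rate is an illustrative value.

```python
import numpy as np


def fuse_and_locate(response_maps):
    """response_maps: list of (H, W) correlation responses, e.g. one from the
    HOG channels and one from convolutional shallow features."""
    final = np.sum(response_maps, axis=0)            # element-wise sum
    row, col = np.unravel_index(np.argmax(final), final.shape)
    return final, (row, col)


def update_model(old_model, new_model, lr=0.02):
    """Linear-interpolation model update to follow appearance changes."""
    return (1.0 - lr) * old_model + lr * new_model


# Usage with dummy responses standing in for the HOG and shallow-CNN branches.
hog_response = np.random.rand(50, 50)
cnn_response = np.random.rand(50, 50)
final_map, peak = fuse_and_locate([hog_response, cnn_response])
```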


Sensors, 2018, Vol 18 (10), pp. 3513
Author(s): Gang-Joon Yoon, Hyeong Hwang, Sang Yoon

Visual object tracking is a fundamental research area in the field of computer vision and pattern recognition because it can be utilized by various intelligent systems. However, visual object tracking faces many challenges because tracking is affected by illumination change, pose change, partial occlusion and background clutter. Sparse representation-based appearance modeling and dictionary learning that optimize over the tracking history have been proposed as one possible way to overcome these problems. However, the standard sparse representation approach has limitations in representing high-dimensional descriptors. Therefore, this study proposes a structured sparse principal component analysis to represent the complex appearance descriptors of the target object effectively as a linear combination of a small number of elementary atoms chosen from an over-complete dictionary. Learning and updating an online dictionary by selecting similar dictionaries with high probability makes it possible to track the target object in a variety of environments. Qualitative and quantitative experimental results, including comparisons with current state-of-the-art visual object tracking algorithms, validate that the proposed tracking algorithm performs favorably under changes in the target object and environment on benchmark video sequences.
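A minimal sketch of the general idea, not the paper's structured sparse PCA formulation: an appearance descriptor is represented as a sparse linear combination of atoms from an over-complete dictionary that is updated online. Standard sparse coding from scikit-learn is used as a stand-in; the descriptor size, atom count and sparsity level are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

rng = np.random.default_rng(0)
descriptors = rng.standard_normal((200, 64))   # stand-in appearance descriptors

# Learn an over-complete dictionary online (more atoms than descriptor dims).
dict_learner = MiniBatchDictionaryLearning(n_components=128, batch_size=20,
                                           random_state=0)
dict_learner.partial_fit(descriptors)          # incremental update per frame batch

# Encode a new target descriptor with a small number of active atoms.
coder = SparseCoder(dictionary=dict_learner.components_,
                    transform_algorithm="omp", transform_n_nonzero_coefs=8)
code = coder.transform(rng.standard_normal((1, 64)))      # sparse coefficients
reconstruction = code @ dict_learner.components_           # approximated descriptor
```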


2021, pp. 59-65
Author(s): Mykola Moroz, Denys Berestov, Oleg Kurchenko

The article analyzes recent advances and solutions for visual tracking of a target object in the field of computer vision, considers approaches to selecting an algorithm for tracking objects in video sequences, and highlights the main visual features on which tracking can be based. The criteria that influence the choice of a real-time target-object tracking algorithm are defined. For real-time tracking with limited computing resources, the choice of an appropriate algorithm is crucial. The choice of a visual tracking algorithm is also influenced by the requirements and constraints on the tracked objects and by prior knowledge or assumptions about them. As a result of the analysis, the Staple tracking algorithm was preferred according to the criterion of speed, which is a decisive factor in the design and development of software and hardware for automated visual tracking of an object in a real-time video stream for surveillance and security systems, traffic monitoring, activity recognition, and other embedded systems.
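A minimal sketch of the speed criterion discussed above: time several off-the-shelf OpenCV trackers on the same clip and compare their frames per second. Staple itself is not bundled with OpenCV, so the candidates below (KCF, CSRT, MIL) are stand-ins; the video path and initial bounding box are assumptions.

```python
import time
import cv2

CANDIDATES = {
    "KCF": cv2.TrackerKCF_create,    # requires opencv-contrib-python
    "CSRT": cv2.TrackerCSRT_create,  # requires opencv-contrib-python
    "MIL": cv2.TrackerMIL_create,
}


def measure_fps(make_tracker, video_path, init_bbox):
    """Run one tracker over the whole clip and return the achieved FPS."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    tracker = make_tracker()
    tracker.init(frame, init_bbox)
    frames, start = 0, time.perf_counter()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        tracker.update(frame)
        frames += 1
    cap.release()
    return frames / (time.perf_counter() - start)


for name, factory in CANDIDATES.items():
    fps = measure_fps(factory, "sequence.mp4", (10, 10, 80, 80))
    print(f"{name}: {fps:.1f} FPS")
```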


Author(s): Tianyang Xu, Zhenhua Feng, Xiao-Jun Wu, Josef Kittler

Discriminative Correlation Filters (DCF) have been shown to achieve impressive performance in visual object tracking. However, existing DCF-based trackers rely heavily on learning regularised appearance models from invariant image feature representations. To further improve the accuracy of DCF and provide a parsimonious model from the attribute perspective, we propose to gauge the relevance of multi-channel features for the purpose of channel selection. This is achieved by assessing the information conveyed by the features of each channel as a group, using an adaptive group elastic net that induces independent sparsity and temporal smoothness on the DCF solution. The robustness and stability of the learned appearance model are significantly enhanced by the proposed method, as the channel-selection process performs implicit spatial regularisation. We use the augmented Lagrangian method to optimise the discriminative filters efficiently. The experimental results obtained on a number of well-known benchmarking datasets demonstrate the effectiveness and stability of the proposed method. A superior performance over the state-of-the-art trackers is achieved using less than 10% of the deep feature channels.
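A minimal sketch of the channel-selection idea, not the paper's full augmented-Lagrangian solver: each feature channel of a correlation filter is treated as a group, a proximal step of a group elastic net (group soft-thresholding plus ridge shrinkage) is applied, and only channels whose group norm survives are kept. The regularisation weights and filter shapes are illustrative assumptions.

```python
import numpy as np


def group_elastic_net_prox(filters, lam_group=0.1, lam_ridge=0.01):
    """filters: (C, H, W) correlation-filter weights, one (H, W) slice per
    feature channel. Returns shrunk filters and the indices of kept channels."""
    out = np.zeros_like(filters)
    kept = []
    for c in range(filters.shape[0]):
        w = filters[c]
        norm = np.linalg.norm(w)
        scale = max(0.0, 1.0 - lam_group / (norm + 1e-12))  # group soft-threshold
        out[c] = scale * w / (1.0 + lam_ridge)              # ridge (elastic net) part
        if scale > 0.0:
            kept.append(c)
    return out, kept


# Usage: prune the channels of a random multi-channel filter.
filters = np.random.randn(512, 31, 31) * 0.01
pruned, selected = group_elastic_net_prox(filters)
print(f"kept {len(selected)} of {filters.shape[0]} channels")
```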


2021, Vol 434, pp. 268-284
Author(s): Muxi Jiang, Rui Li, Qisheng Liu, Yingjing Shi, Esteban Tlelo-Cuautle

IEEE Access, 2020, pp. 1-1
Author(s): Ershen Wang, Donglei Wang, Yufeng Huang, Gang Tong, Song Xu, ...
