Analysis of visual object tracking algorithms for real-time systems

2021 ◽  
pp. 59-65
Author(s):  
Mykola Moroz ◽  
Denys Berestov ◽  
Oleg Kurchenko

The article analyzes recent achievements and solutions in visual tracking of a target object in the field of computer vision, considers approaches to selecting a tracking algorithm for objects in video sequences, and highlights the main visual features on which tracking can be based. The criteria that influence the choice of a real-time target-object tracking algorithm are defined: for real-time tracking with limited computing resources, the choice of an appropriate algorithm is crucial. The choice of a visual tracking algorithm is also influenced by the requirements and constraints on the tracked objects and by prior knowledge or assumptions about them. As a result of the analysis, the Staple tracking algorithm was preferred according to the speed criterion, which is a decisive indicator in the design and development of software and hardware for automated real-time visual tracking of an object in a video stream for surveillance and security systems, traffic monitoring, activity recognition, and other embedded systems.

2020 ◽  
Author(s):  
Dominika Przewlocka ◽  
Mateusz Wasala ◽  
Hubert Szolc ◽  
Krzysztof Blachut ◽  
Tomasz Kryjak

In this paper, research on the optimisation of visual object tracking using a Siamese neural network for embedded vision systems is presented. It was assumed that the solution should operate in real time, preferably on a high-resolution video stream, with the lowest possible energy consumption. To meet these requirements, techniques such as reduced computational precision and pruning were considered. Brevitas, a tool dedicated to the optimisation and quantisation of neural networks for FPGA implementation, was used. A number of training scenarios were tested with varying levels of optimisation, from 16-bit integer uniform quantisation down to ternary and binary networks. Next, the influence of these optimisations on tracking performance was evaluated. It was possible to reduce the size of the convolutional filters up to 10 times relative to the original network. The obtained results indicate that quantisation can significantly reduce the memory footprint and computational complexity of the proposed network while still enabling precise tracking, thus allowing its use in embedded vision systems. Moreover, quantisation of the weights positively affects network training by reducing overfitting.
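The paper's quantisation range, from 16-bit integers down to ternary and binary codes, can be illustrated with a minimal sketch of symmetric uniform quantisation. This is not the authors' code (they use Brevitas for quantisation-aware training on FPGA targets); the function name and the toy weights are illustrative only.

```python
# Minimal sketch of symmetric uniform weight quantisation to n bits.
# Illustrative only: the paper itself uses Brevitas, which additionally
# performs quantisation-aware training rather than post-hoc rounding.

def quantise_weights(weights, bits):
    """Map float weights to integers in [-(2**(bits-1)-1), 2**(bits-1)-1] and back."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8 bits, 1 for ternary
    scale = max(abs(w) for w in weights) / qmax or 1.0
    ints = [round(w / scale) for w in weights]
    dequantised = [q * scale for q in ints]
    return dequantised, ints

weights = [0.9, -0.45, 0.1, -0.02]
dequant, ints = quantise_weights(weights, bits=8)     # small rounding error
_, ternary_ints = quantise_weights(weights, bits=2)   # codes collapse to {-1, 0, 1}
```

At 8 bits the dequantised weights stay close to the originals, while at 2 bits the integer codes are confined to {-1, 0, 1}, which is the ternary regime the abstract refers to.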


2014 ◽  
Vol 602-605 ◽  
pp. 1689-1692
Author(s):  
Cong Lin ◽  
Chi Man Pun

A novel visual object tracking method for colour video streams, based on the traditional particle filter, is proposed in this paper. Feature vectors are extracted from the coefficient matrices of a fast three-dimensional Discrete Cosine Transform (fast 3-D DCT). As experiments showed, the feature is very robust to occlusion and rotation and is not sensitive to scale changes. The proposed method is efficient enough to be used in real-time applications. The experiments were carried out on datasets commonly used in the literature. The results are satisfactory and show that the estimated trace follows the target object very closely.
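The descriptor described above can be sketched as a separable 3-D DCT over a small colour patch, keeping only the low-frequency coefficients as the feature vector that a particle filter would score candidates against. This is an illustrative reconstruction, not the authors' implementation, and it omits the "fast" transform they use in favour of a direct one.

```python
import math

def dct_1d(x):
    """Unnormalised type-II DCT of a 1-D sequence."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

def dct_3d(patch):
    """Separable 3-D DCT of patch[h][w][c]: one 1-D pass per axis."""
    H, W, C = len(patch), len(patch[0]), len(patch[0][0])
    # pass 1: channel axis
    out = [[dct_1d(patch[h][w]) for w in range(W)] for h in range(H)]
    # pass 2: width axis
    out = [[[dct_1d([out[h][j][c] for j in range(W)])[w] for c in range(C)]
            for w in range(W)] for h in range(H)]
    # pass 3: height axis
    out = [[[dct_1d([out[i][w][c] for i in range(H)])[h] for c in range(C)]
            for w in range(W)] for h in range(H)]
    return out

def low_freq_feature(patch, k=2):
    """Keep the k lowest frequencies per axis as the feature vector."""
    coeffs = dct_3d(patch)
    C = len(patch[0][0])
    return [coeffs[i][j][c] for i in range(k) for j in range(k)
            for c in range(min(k, C))]
```

Low-frequency DCT coefficients summarise the patch's coarse colour structure, which is one plausible reason the feature tolerates occlusion and rotation better than raw pixels.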


2014 ◽  
Vol 666 ◽  
pp. 240-244
Author(s):  
M.C. Ang ◽  
Elankovan Sundararajan ◽  
K.W. Ng ◽  
Amirhossein Aghamohammadi ◽  
T.L. Lim

Object tracking plays an important role in various applications such as surveillance, search and rescue, augmented reality, and robotics. This paper presents an investigation of a multi-threading framework for colour-based object tracking applications. A multi-threading framework based on Threading Building Blocks (TBB) was implemented on a multi-core system to enhance the image-processing performance of a real-time visual object tracking algorithm. Intel Parallel Studio was used to implement this parallel framework. The performance of the sequential and multi-threaded frameworks was evaluated and compared. In our experiments, the multi-threaded framework was approximately two times faster than the sequential one.
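The core idea, partitioning per-frame pixel work across worker threads, can be sketched as follows. The original is C++ with TBB; this Python analogue uses a thread pool to split a frame's rows into chunks, run a colour threshold on each chunk, and merge the results. All names here are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative analogue of TBB-style row partitioning (the paper's code
# is C++). Each chunk of rows is thresholded against a target colour box
# in parallel, then the per-chunk masks are concatenated in order.

def mask_rows(rows, lo, hi):
    """Mark pixels whose (r, g, b) lies inside the target colour box."""
    return [[1 if all(lo[c] <= px[c] <= hi[c] for c in range(3)) else 0
             for px in row] for row in rows]

def parallel_colour_mask(frame, lo, hi, workers=4):
    chunk = max(1, len(frame) // workers)
    chunks = [frame[i:i + chunk] for i in range(0, len(frame), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda rows: mask_rows(rows, lo, hi), chunks)
    return [row for part in parts for row in part]
```

Note that pure-Python threads do not gain from multiple cores because of the GIL; in practice the per-chunk kernel would be native code (as TBB's is), which is what makes the roughly two-fold speed-up in the paper attainable.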


2021 ◽  
Vol 434 ◽  
pp. 268-284
Author(s):  
Muxi Jiang ◽  
Rui Li ◽  
Qisheng Liu ◽  
Yingjing Shi ◽  
Esteban Tlelo-Cuautle

2018 ◽  
Vol 77 (17) ◽  
pp. 22131-22143 ◽  
Author(s):  
Longchao Yang ◽  
Peilin Jiang ◽  
Fei Wang ◽  
Xuan Wang

2014 ◽  
Vol 75 (4) ◽  
pp. 2393-2409 ◽  
Author(s):  
Zebin Cai ◽  
Zhenghui Gu ◽  
Zhu Liang Yu ◽  
Hao Liu ◽  
Ke Zhang

Proceedings ◽  
2020 ◽  
Vol 39 (1) ◽  
pp. 18
Author(s):  
Nenchoo ◽  
Tantrairatn

This paper presents real-time estimation of a 3D UAV position using an Intel RealSense D435i depth camera with a visual object detection technique as a local positioning system for indoor environments. The global positioning system (GPS) can determine a UAV's position outdoors, but it cannot do so indoors. Therefore, the D435i depth stereo camera, observing from the ground, is proposed to determine the UAV's position indoors instead of GPS. Deep-learning-based object detection identifies the target object, and the depth camera specifies its 2D position; the depth coordinate is then estimated from the stereo camera and the known target size. In the experiment, a Parrot Bebop 2 serving as the target object is detected using YOLOv3 as a real-time object detection system. Since the trained Fully Convolutional Neural Network (FCNN) model is decisive for detection quality, the model was trained on the Bebop 2 only. In conclusion, the proposed system can estimate the 3D position of the Bebop 2 in indoor environments. In future work, this research will be extended toward visual navigation control of a drone swarm.
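The size-based depth estimate mentioned in the abstract follows the pinhole camera model: if the target's physical width and the camera's focal length (in pixels) are known, range is Z = f · W_real / w_pixels, and the 2D detection centre can be back-projected to camera coordinates. A minimal sketch, with focal length, principal point, and target width chosen for illustration rather than taken from the paper:

```python
# Pinhole-model sketch of depth-from-known-size and back-projection.
# focal_px, principal, and real_width_m are illustrative values, not
# the paper's calibration; the D435i can also report stereo depth directly.

def depth_from_size(focal_px, real_width_m, bbox_width_px):
    """Range estimate: Z = f * W_real / w_pixels."""
    return focal_px * real_width_m / bbox_width_px

def position_3d(cx, cy, bbox_width_px, focal_px, real_width_m,
                principal=(320.0, 240.0)):
    """Back-project the detection centre (cx, cy) to camera coordinates."""
    z = depth_from_size(focal_px, real_width_m, bbox_width_px)
    x = (cx - principal[0]) * z / focal_px
    y = (cy - principal[1]) * z / focal_px
    return x, y, z
```

For example, a roughly 0.33 m wide target imaged at 66 px by a 600 px focal-length camera would be estimated at 3 m range; a detection centred on the principal point back-projects to (0, 0, 3).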

