Object Tracking in Hyperspectral-Oriented Video with Fast Spatial-Spectral Features

2021 ◽  
Vol 13 (10) ◽  
pp. 1922
Author(s):  
Lulu Chen ◽  
Yongqiang Zhao ◽  
Jiaxin Yao ◽  
Jiaxin Chen ◽  
Ning Li ◽  
...  

This paper presents a correlation filter object tracker based on fast spatial-spectral features (FSSF) to realize robust, real-time object tracking in hyperspectral surveillance video. Traditional object tracking in surveillance video based only on appearance information often fails in the presence of background clutter, low resolution, and appearance changes. Hyperspectral imaging uses unique spectral properties as well as spatial information to improve tracking accuracy in such challenging environments. However, the high dimensionality of hyperspectral images causes high computational costs and difficulties for discriminative feature extraction. In FSSF, the real-time spatial-spectral convolution (RSSC) kernel is updated in real time in the Fourier transform domain without offline training to quickly extract discriminative spatial-spectral features. The spatial-spectral features are integrated into correlation filters to complete the hyperspectral tracking. To validate the proposed scheme, we collected a hyperspectral surveillance video (HSSV) dataset consisting of 70 sequences in 25 bands. Extensive experiments confirm the advantages and efficiency of the proposed FSSF for object tracking in hyperspectral video under challenging conditions of background clutter, low resolution, and appearance changes.
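The abstract does not give the exact form of the RSSC kernel, but the underlying idea it builds on — a correlation filter solved and updated online in the Fourier domain, with no offline training — can be sketched as a MOSSE-style filter. This is a minimal sketch, not the paper's method: the function names, regularization `lam`, and learning rate `lr` are assumptions.

```python
import numpy as np

def train_filter(patch, target_response, lam=1e-2):
    """Solve for a correlation filter in the Fourier domain (closed form)."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(target_response)
    # Per-element solution: H = G . conj(F) / (F . conj(F) + lam)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def update_filter(H, patch, target_response, lr=0.125, lam=1e-2):
    """Running-average update so the filter adapts online, frame by frame."""
    return (1 - lr) * H + lr * train_filter(patch, target_response, lam)

def respond(H, patch):
    """Apply the filter to a new patch; the response peak locates the target."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
```

The Fourier-domain closed form is what makes this family of trackers fast enough for real-time use, which is the same motivation the abstract gives for RSSC.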

Author(s):  
Yuanping Zhang ◽  
Yuanyan Tang ◽  
Bin Fang ◽  
Zhaowei Shang

Many multi-object tracking methods have been proposed for this computer vision problem, which has attracted significant attention because of the large appearance changes caused by deformation, abrupt motion, background clutter, and occlusion. In this paper, a hybrid deformable convolutional neural network with frame-pair input and deformable layers for multi-object tracking is presented. The object tracking method, trained on two successive frames, predicts the centers of the search windows as the locations of the tracked targets to improve the accuracy and robustness of object tracking. Histogram of Oriented Gradients (HOG) and CNN features are extracted as appearance features to measure similarities between objects. A Kalman filter and the Hungarian algorithm are used to create tracklet associations that indicate the locations and trajectories of the tracked targets. To address object transformation, we construct a novel sampling strategy for offline training that augments the spatial sampling locations in the convolution and pooling layers with additional offsets. Experiments on popular challenging datasets show that the proposed tracking system performs on par with recently developed generic multi-object tracking methods, is more effective for objects with dense geometric transformations, and uses much less memory. In addition, the proposed tracking system can run at a speed of over 75 (24) fps with a GPU (CPU), much faster than most deep network-based trackers.
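The tracklet-association step the abstract describes — matching detections to Kalman-predicted track positions with the Hungarian algorithm — can be sketched as below. This is a minimal sketch, not the paper's implementation: the center-distance cost and the `max_dist` gate are assumptions, and the paper additionally folds HOG/CNN appearance similarities into the association.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_boxes, det_boxes, max_dist=50.0):
    """Optimal assignment of detections to existing tracks.

    Boxes are (x, y, w, h). Cost is the distance between box centers; the
    Hungarian algorithm (linear_sum_assignment) finds the minimum-cost
    matching, and pairs whose cost exceeds max_dist are rejected, leaving
    those detections/tracks to start new tracklets or be marked lost.
    """
    tc = np.array([[x + w / 2, y + h / 2] for x, y, w, h in track_boxes])
    dc = np.array([[x + w / 2, y + h / 2] for x, y, w, h in det_boxes])
    cost = np.linalg.norm(tc[:, None, :] - dc[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
```

In a full tracker, each matched pair would then update the corresponding Kalman filter state before the next frame is processed.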


Sensors ◽  
2019 ◽  
Vol 19 (10) ◽  
pp. 2362 ◽  
Author(s):  
Yijin Yang ◽  
Yihong Zhang ◽  
Demin Li ◽  
Zhijie Wang

Correlation filter-based methods have recently performed remarkably well in terms of accuracy and speed in the visual object tracking research field. However, most existing correlation filter-based methods are not robust to significant appearance changes in the target, especially when the target undergoes deformation, illumination variation, and rotation. In this paper, a novel parallel correlation filters (PCF) framework is proposed for real-time visual object tracking. Firstly, the proposed method constructs two parallel correlation filters, one for tracking the appearance changes in the target and the other for tracking the translation of the target. Secondly, by weighted merging of the response maps of these two parallel correlation filters, the proposed method accurately locates the center position of the target. Finally, in the training stage, a new, more reasonable distribution of the correlation output is proposed to replace the original Gaussian distribution and train more accurate correlation filters, which prevents the model from drifting and achieves excellent tracking performance. Extensive qualitative and quantitative experiments on the common object tracking benchmarks OTB-2013 and OTB-2015 demonstrate that the proposed PCF tracker outperforms most state-of-the-art trackers and achieves high real-time tracking performance.
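The weighted merging of the two parallel response maps reduces to a convex combination followed by a peak search. A minimal sketch, with the caveat that the fixed weight `w` is an assumption here (the paper determines its own weighting):

```python
import numpy as np

def fuse_responses(resp_appearance, resp_translation, w=0.5):
    """Weighted merge of the two parallel filters' response maps.

    w balances the appearance filter against the translation filter;
    the peak of the fused map gives the estimated target center.
    """
    fused = w * resp_appearance + (1 - w) * resp_translation
    return np.unravel_index(np.argmax(fused), fused.shape)
```

Because both filters vote on the same fused map, a spurious peak in one response map is damped unless the other filter agrees with it, which is the robustness argument behind the parallel design.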


Author(s):  
Daqun Li ◽  
Yi Yu ◽  
Xiaolin Chen

Abstract: To improve the deficient tracking ability of fully-convolutional Siamese networks (SiamFC) in complex scenes, an object tracking framework with a Siamese network and a re-detection mechanism (Siam-RM) is proposed. The mechanism adopts the Siamese instance search tracker (SINT) as the re-detection network. When multiple peaks appear on the response map of SiamFC, the more accurate re-detection network can re-determine the location of the object. Meanwhile, to adapt to various changes in the object's appearance, this paper employs a generative model to construct the templates of SiamFC. Furthermore, a high-confidence template-updating method is used to prevent the template from being contaminated. Objective evaluation on the popular online tracking benchmark (OTB) shows that the tracking accuracy and success rate of the proposed framework reach 79.8% and 63.8%, respectively. Compared to SiamFC, results on several representative video sequences demonstrate that our framework has higher accuracy and robustness in scenes with fast motion, occlusion, background clutter, and illumination variations.
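The trigger for re-detection is the appearance of multiple peaks on SiamFC's response map. One simple way to detect that condition — a sketch, with the suppression radius and `peak_ratio` threshold being assumptions, not values from the paper — is to suppress a neighbourhood around the global maximum and compare the best remaining peak against it:

```python
import numpy as np

def needs_redetection(response, peak_ratio=0.7):
    """Flag frames where the response map has competing peaks.

    Blank a small window around the global maximum, then check whether
    the highest remaining value is close in height to it. If so, the
    frame is ambiguous and would be handed to the re-detection network.
    """
    r = response.copy()
    y, x = np.unravel_index(np.argmax(r), r.shape)
    top = r[y, x]
    r[max(0, y - 2):y + 3, max(0, x - 2):x + 3] = -np.inf
    second = r.max()
    return bool(second / top > peak_ratio)
```

A clean single-peak map passes through the fast SiamFC path; only ambiguous frames pay the cost of running the slower SINT re-detector.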


2020 ◽  
Vol 10 (2) ◽  
pp. 713 ◽  
Author(s):  
Jungsup Shin ◽  
Heegwang Kim ◽  
Dohun Kim ◽  
Joonki Paik

Object tracking has long been an active research topic in image processing and computer vision, with various application areas. For practical applications, an object tracking technique should be not only accurate but also fast under real-time streaming conditions. Recently, deep feature-based trackers have been proposed to achieve higher accuracy, but they are not suitable for real-time tracking because of extremely slow processing speeds. Slow speed is a major factor degrading tracking accuracy under a real-time streaming condition, since the processing delay forces frames to be skipped. To increase tracking accuracy while preserving processing speed, this paper presents an improved kernelized correlation filter (KCF)-based tracking method that integrates three functional modules: (i) tracking failure detection, (ii) re-tracking using multiple search windows, and (iii) motion vector analysis to decide a preferred search window. Under a real-time streaming condition, the proposed method yields better results than the original KCF in terms of tracking accuracy, and when a target has a very large movement, the proposed method outperforms deep learning-based trackers such as the multi-domain convolutional neural network (MDNet).
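A common way to implement the first two modules — failure detection, then re-tracking over multiple search windows — is to score each window's correlation response with the peak-to-sidelobe ratio (PSR), a standard confidence measure for correlation-filter trackers. This is a sketch under that assumption; the sidelobe exclusion size and failure threshold are illustrative values, not the paper's.

```python
import numpy as np

def psr(response, exclude=5):
    """Peak-to-sidelobe ratio: how sharply the peak stands out from the
    rest of the response map. A low PSR suggests tracking failure."""
    peak = response.max()
    y, x = np.unravel_index(np.argmax(response), response.shape)
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, y - exclude):y + exclude + 1,
         max(0, x - exclude):x + exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-8)

def best_window(responses, fail_threshold=8.0):
    """Pick the candidate search window with the highest PSR; report
    failure when even the best window falls below the threshold."""
    scores = [psr(r) for r in responses]
    i = int(np.argmax(scores))
    return i, scores[i] >= fail_threshold
```

The third module (motion vector analysis) would then bias which candidate windows are generated in the first place, e.g. placing them along the target's recent direction of motion.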


Author(s):  
Sheikh Summerah

Abstract: This study presents a strategy to automate the process of recognizing and tracking objects using color and motion. Video tracking is the approach of detecting a moving item with a camera across a long distance. The basic goal of video tracking is to link target objects across successive video frames. The association can be particularly difficult when objects move quickly relative to the frame rate. This work develops a method to follow moving objects in real time across distinct video frames using HSV color space values and OpenCV. We start by deriving the HSV value of the object to be tracked and then, in the testing stage, track the object. The objects were tracked with 90% accuracy. Keywords: HSV, OpenCV, Object tracking


Author(s):  
JIANGJIAN XIAO ◽  
HUI CHENG ◽  
FENG HAN ◽  
HARPREET SAWHNEY

This paper presents an approach to extract semantic layers from aerial surveillance videos for scene understanding and object tracking. The input videos are captured by low-flying aerial platforms and typically contain strong parallax from non-ground-plane structures as well as moving objects. Our approach leverages the geo-registration between video frames and reference images (such as those available from Terraserver and Google satellite imagery) to establish a unique geo-spatial coordinate system for pixels in the video. The geo-registration process enables Euclidean 3D reconstruction with absolute scale, unlike traditional monocular structure from motion, where continuous scale estimation over long periods of time is an issue. Geo-registration also enables correlation of video data with other stored information sources such as GIS (Geo-spatial Information System) databases. Beyond the geo-registration and 3D reconstruction aspects, the other key contributions of this paper are: (1) a reliable geo-based solution to estimate camera pose for 3D reconstruction, (2) exploiting appearance and 3D shape constraints derived from geo-registered videos to label structures such as buildings, foliage, and roads for scene understanding, and (3) eliminating moving object detection and tracking errors using 3D parallax constraints and semantic labels derived from geo-registered videos. Experimental results on extended-time aerial video data demonstrate the qualitative and quantitative aspects of our work.
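Once geo-registration has aligned a video frame to the reference imagery, assigning geo-spatial coordinates to pixels amounts to applying the estimated plane-to-plane mapping. A minimal sketch under the assumption that the registration step produces a 3x3 homography `H` from frame pixels to reference (geo) coordinates:

```python
import numpy as np

def pixel_to_geo(H, points):
    """Map video-frame pixel coordinates into the reference image's
    geo-spatial coordinate system via a 3x3 homography H.

    Points are lifted to homogeneous coordinates, transformed, and
    divided by the projective scale to return (x, y) geo coordinates.
    """
    pts = np.hstack([np.asarray(points, float), np.ones((len(points), 1))])
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

This per-pixel mapping is what lets the video be correlated with GIS databases, since every tracked object inherits absolute coordinates rather than frame-relative ones.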


Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 3945
Author(s):  
Kaiheng Dai ◽  
Yuehuan Wang ◽  
Qiong Song

In this paper, we propose a fast and accurate deep network-based object tracking method that combines feature representation, template tracking, and foreground detection into a single framework for robust tracking. The proposed framework consists of a backbone network that feeds into two parallel networks: TmpNet for template tracking and FgNet for foreground detection. The backbone network is a pre-trained modified VGG network in which a few parameters need to be fine-tuned to adapt to the tracked object. FgNet is a fully convolutional network that distinguishes foreground from background in a pixel-to-pixel manner. The parameter in TmpNet is the learned channel-wise target template, which is initialized in the first frame and performs fast template tracking in the test frames. To enable the components to work closely with each other, we use a multi-task loss to train the proposed framework end-to-end. In online tracking, we combine the score maps from TmpNet and FgNet to find the optimal tracking result. Experimental results on object tracking benchmarks demonstrate that our approach achieves favorable tracking accuracy against state-of-the-art trackers while running at a real-time speed of 38 fps.
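One way to read "channel-wise target template" performing "fast template tracking" is a per-channel cross-correlation of backbone features against the template, computed in the Fourier domain and summed over channels. This is a sketch under that interpretation, not the paper's verified architecture:

```python
import numpy as np

def template_response(features, template):
    """Channel-wise template matching for (C, H, W) feature maps.

    Each feature channel is cross-correlated with its template channel
    in the Fourier domain, and the per-channel responses are summed into
    a single score map whose peak locates the target.
    """
    F = np.fft.fft2(features, axes=(1, 2))
    T = np.fft.fft2(template, axes=(1, 2))
    return np.real(np.fft.ifft2((F * np.conj(T)).sum(axis=0)))
```

In the full framework this score map would then be fused with FgNet's foreground map before the final target location is chosen.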


2021 ◽  
Vol 13 (18) ◽  
pp. 3601
Author(s):  
Jin Wu ◽  
Changqing Cao ◽  
Yuedong Zhou ◽  
Xiaodong Zeng ◽  
Zhejun Feng ◽  
...  

In remote sensing images, small target sizes and diverse backgrounds make it difficult to locate targets accurately and quickly. To address the lack of accuracy and poor real-time performance of existing tracking algorithms, a multi-object tracking (MOT) algorithm for ships using deep learning is proposed in this study. The feature extraction capability of the target detector determines the performance of an MOT algorithm; therefore, the you-only-look-once (YOLO)-v3 model, which offers better accuracy and speed than other algorithms, was selected as the target detection framework. Because the high similarity of ship targets leads to poor tracking results, we used the multiple granularity network (MGN) to extract richer target appearance information and improve generalization across similar images. We compared the proposed algorithm with other state-of-the-art multi-object tracking algorithms. Results show that tracking accuracy is improved by 2.23%, while the average running speed is close to 21 frames per second, meeting the needs of real-time tracking.
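The MGN appearance features are there to tell near-identical ships apart during association; a common way to use such re-ID embeddings is a cosine-affinity matrix between track and detection embeddings. A minimal sketch (the embedding source and any gating threshold are assumptions, not details from the abstract):

```python
import numpy as np

def cosine_affinity(track_embs, det_embs):
    """Appearance affinity between tracks and detections.

    Rows are L2-normalized so the dot product becomes cosine similarity:
    visually identical targets score near 1, dissimilar ones near 0.
    """
    t = track_embs / np.linalg.norm(track_embs, axis=1, keepdims=True)
    d = det_embs / np.linalg.norm(det_embs, axis=1, keepdims=True)
    return t @ d.T
```

This affinity matrix would typically be combined with a motion cost before the final track-detection assignment is solved.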


Author(s):  
Gowher Shafi

Abstract: This research shows how to use colour and movement to automate the process of recognising and tracking objects. Video tracking is a technique for detecting a moving object over a long distance using a camera. The main purpose of video tracking is to connect target objects in subsequent video frames. The association may be particularly troublesome when objects move quickly relative to the frame rate. Using HSV colour space values and OpenCV on different video frames, this study proposes a way to track moving objects in real time. We begin by calculating the HSV value of an item to be monitored, and then we track the object throughout the testing step. The objects were tracked with 90 percent accuracy. Keywords: HSV, OpenCV, Object tracking, Video frames, GUI

