moving camera
Recently Published Documents

TOTAL DOCUMENTS: 339 (FIVE YEARS: 37)
H-INDEX: 25 (FIVE YEARS: 2)
Author(s): Ashutosh Kumar, Takehiro Kashiyama, Hiroya Maeda, Yoshihide Sekimoto

Author(s): Sohee Son, Jeongin Kwon, Hui-Yong Kim, Haechul Choi

Unmanned aerial vehicles such as drones are a key emerging technology with many beneficial applications. As they have advanced, however, security and privacy concerns have grown with them. Drone tracking with a moving camera is one important way to address these concerns, but it poses several challenges. First, drones move quickly and are usually tiny. Second, images captured by a moving camera suffer from illumination changes. Moreover, tracking must run in real time for surveillance applications. For fast and accurate drone tracking, this paper proposes a framework combining two trackers, a predictor, and a refinement process. One tracker finds the moving target based on motion flow, and the other locates the region of interest (ROI) using histogram features. The predictor estimates the target's trajectory with a Kalman filter, which keeps track of the target even when the trackers fail. Finally, the refinement process decides the target's location by combining the ROIs from the trackers and the predictor. In experiments on our dataset of tiny flying drones, the proposed method achieved an average success rate 1.134 times higher than that of conventional tracking methods, at an average run time of 21.08 frames per second.
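As a rough illustration of the predictor idea (not the authors' exact implementation), a constant-velocity Kalman filter over the target's image-plane position can be written in plain NumPy; the state layout and noise levels below are assumptions:

```python
import numpy as np

# Minimal constant-velocity Kalman filter for a 2D image-plane target.
# State x = [px, py, vx, vy]; the process/measurement noise values are
# illustrative assumptions, not values from the paper.
class KalmanPredictor:
    def __init__(self, dt=1.0):
        self.x = np.zeros(4)                      # state estimate
        self.P = np.eye(4) * 100.0                # state covariance
        self.F = np.eye(4)                        # constant-velocity motion model
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))                 # we observe position only
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * 0.01                 # process noise (assumed)
        self.R = np.eye(2) * 1.0                  # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                         # predicted target position

    def update(self, z):
        # z: measured [px, py] from a tracker; skipped when trackers fail
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

When both trackers lose the target, calling predict() alone keeps extrapolating the trajectory, which is how a predictor of this kind can bridge short tracking failures.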


2021, Vol 2021, pp. 1-15
Author(s): Tao Liu, Yong Liu

Moving-camera object tracking for the intelligent transportation system (ITS) has drawn increasing attention. The unpredictability of driving environments and noise in the camera calibration, however, make conventional ground plane estimation unreliable, which degrades tracking results. In this paper, we propose an object tracking system that combines an adaptive ground plane estimation algorithm with constrained multiple kernel (CMK) tracking and Kalman filtering to continuously update the locations of moving objects. The proposed algorithm uses structure from motion (SfM) to estimate the pose of the moving camera, and the estimated yaw angle is fed back to improve the accuracy of the ground plane estimation. To track objects robustly and efficiently under occlusion, the system adopts CMK tracking to follow moving objects in 3D space (depth). Evaluated on several challenging datasets, the proposed system performs favorably: it efficiently tracks on-road objects from a dashcam mounted on a freely moving vehicle and handles occlusion well during tracking.
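A minimal sketch of the yaw-estimation step, using OpenCV two-view geometry rather than the authors' full SfM pipeline; the feature matcher, the intrinsic matrix K, and the Euler-angle convention are all assumptions:

```python
import cv2
import numpy as np

def estimate_yaw(prev_gray, cur_gray, K):
    """Estimate the camera's yaw change between two frames (degrees).

    Illustrative two-view sketch: ORB matches -> essential matrix ->
    relative rotation. A real SfM pipeline would track poses over many
    frames and refine them jointly.
    """
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(cur_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    # Yaw = rotation about the camera's vertical axis; this Euler
    # extraction assumes the usual y-down camera convention.
    return np.degrees(np.arctan2(R[0, 2], R[2, 2]))
```

A yaw signal recovered this way could then gate or correct the ground plane fit, which is the feedback role the abstract describes.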


Author(s): J. Wei, J. Jiang, A. Yilmaz

Abstract. Background subtraction aims at detecting the salient background, which in turn yields the regions of moving objects, referred to as the foreground. Background subtraction inherently exploits temporal relations by including the time dimension in its formulation. Techniques that learn the background require stationary cameras, which provide a semi-constant background image that makes learning the salient background easier. Such techniques, however, are not applicable to moving-camera scenarios, such as a vehicle-mounted camera for autonomous driving. For moving cameras, due to the complexity of modelling a changing background, recent approaches instead detect the foreground objects in each frame independently. This treatment, however, requires learning every possible object that can appear in the field of view. In this paper, we achieve background subtraction for moving cameras with a specialized deep learning approach, the Moving-camera Background Subtraction Network (MBS-Net). Our approach robustly detects the changing background in various scenarios and does not require training on foreground objects. It exploits temporal cues from past frames by applying Conditional Random Fields as part of the neural network. Our method performs well on the ApolloScape dataset (Huang et al., 2018), whose videos have a resolution of 3384 × 2710. To the best of our knowledge, this paper is the first to propose background subtraction for moving cameras using deep learning.
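For contrast with the moving-camera setting, the classical stationary-camera formulation the abstract alludes to takes only a few lines with OpenCV's built-in MOG2 model (a standard baseline, not the paper's MBS-Net); the video path is a placeholder:

```python
import cv2

# Classical background subtraction: learns a per-pixel background model,
# which only works when the camera is (nearly) stationary.
cap = cv2.VideoCapture("surveillance.mp4")   # placeholder path
subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)        # 255 = foreground, 127 = shadow
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(1) == 27:                 # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```

On a moving camera this per-pixel model breaks down because every pixel's background changes from frame to frame, which is exactly the gap MBS-Net targets.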


Author(s): Tatsuhisa Watanabe, Tomoharu Nakashima, Yoshifumi Kusunoki

This paper tackles change detection for area surveillance with a moving device. No existing change-detection dataset covers a surveillance scenario in which the camera is mounted on a moving platform and pointed in the direction of motion, so this paper creates a new dataset containing several challenging cases. For this dataset, the paper adopts a composable change-detection pipeline and proposes several of its components. To evaluate the proposed components, the corresponding classical methods were also tested on the dataset, and the proposed components outperformed them. The paper further investigates the relationship between the components' parameters and their performance.
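As a sketch of one classical component such a composable pipeline might include (an assumption; the abstract does not name its components), consecutive frames from a moving platform can be registered with a homography before differencing:

```python
import cv2
import numpy as np

def change_mask(prev_gray, cur_gray, diff_thresh=30):
    """Homography-compensated frame differencing (classical baseline).

    Valid mainly for near-planar scenes or small camera motion; the
    parameter values are illustrative.
    """
    orb = cv2.ORB_create(1500)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(cur_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = cur_gray.shape
    warped = cv2.warpPerspective(prev_gray, H, (w, h))   # align prev to cur
    diff = cv2.absdiff(cur_gray, warped)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    # Morphological opening suppresses small registration artifacts.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
```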


2021
Author(s): Felix Wimbauer, Nan Yang, Lukas von Stumberg, Niclas Zeller, Daniel Cremers
