Intelligent Querying in Camera Networks for Efficient Target Tracking

Author(s):  
Anil Sharma

Visual analytics applications often rely on tracking a target across a network of cameras for inference and prediction. A network of cameras generates an immense amount of video data, and processing all of it to track a target is computationally expensive. Related works typically use data association and visual re-identification techniques to match target templates across multiple cameras. In this thesis, I propose to formulate the problem of scheduling which camera to process next as a Markov Decision Process (MDP) and present a reinforcement learning based solution that schedules cameras by selecting the one where the target is most likely to appear next. The proposed approach can be learned directly from data and requires no knowledge of the camera network topology. Experiments on the NLPR MCT and DukeMTMC datasets show that the learned policy significantly reduces the number of frames to be processed for tracking and identifies the camera schedule with high accuracy compared to related approaches. Finally, I will formulate an end-to-end pipeline for target tracking that learns a policy both to find the camera schedule and to track the target in the individual camera frames of that schedule.
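
To make the formulation concrete, below is a minimal tabular Q-learning sketch of camera selection as an MDP. The state and reward definitions, the simulator interface, and all hyperparameters are illustrative assumptions, not the thesis's exact design:

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch for camera scheduling.
# State: (last camera where the target was seen, elapsed steps since then).
# Action: index of the camera to query next.
# Reward: positive if the queried camera contains the target, a small
# penalty per query otherwise. All modeling choices here are assumptions.

NUM_CAMERAS = 5
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = defaultdict(float)  # maps (state, action) -> estimated value

def choose_camera(state):
    """Epsilon-greedy camera selection."""
    if random.random() < EPSILON:
        return random.randrange(NUM_CAMERAS)
    return max(range(NUM_CAMERAS), key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One Q-learning backup."""
    best_next = max(Q[(next_state, a)] for a in range(NUM_CAMERAS))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                   - Q[(state, action)])

def train(episodes, simulate_target):
    """simulate_target(state, action) -> (reward, next_state, done) is a
    hypothetical stand-in for replaying annotated multi-camera tracks."""
    for _ in range(episodes):
        state, done = (0, 0), False
        while not done:
            action = choose_camera(state)
            reward, next_state, done = simulate_target(state, action)
            update(state, action, reward, next_state)
            state = next_state
```

In practice `simulate_target` would be driven by ground-truth trajectories from the training split of the datasets, so the policy is learned offline before deployment.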

2020
Vol. 45 (4)
pp. 1445–1465
Author(s):  
Loe Schlicher ◽  
Marco Slikker ◽  
Willem van Jaarsveld ◽  
Geert-Jan van Houtum

We study several service providers that keep spare parts in stock to protect against downtime of their high-tech machines and that face different downtime costs per stockout. Service providers can cooperate by forming a joint spare parts pool, and we study the allocation of the joint costs to the individual service providers by analysing an associated cooperative game. In the extant literature, the joint spare parts pool is typically controlled by a suboptimal full-pooling policy. A full-pooling policy may lead to an empty core of the associated cooperative game, and we show that this occurs in our setting as well. We then focus on situations where service providers apply an optimal policy: a stratification that determines, depending on the real-time on-hand inventory, which service providers may take parts from the pool. We formulate the associated stratified pooling game by defining each coalitional value in terms of the minimal long-run average costs of a Markov decision process. We present a proof that stratified pooling games always have a nonempty core. This five-step proof is of interest in itself, as it may apply more generally to other cooperative games whose coalitional values are defined in terms of Markov decision processes.
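
To illustrate the core condition at the heart of this analysis, the following sketch checks whether a proposed cost allocation lies in the core of a cooperative cost game. In the stratified pooling game, each coalitional cost c(S) would be the minimal long-run average cost of an MDP; here the costs are simply given as a toy dictionary, an assumption made for illustration:

```python
from itertools import combinations

# Sketch: verify that a proposed cost allocation lies in the core of a
# cooperative cost game. In the stratified pooling game, c(S) would be
# the minimal long-run average cost of the Markov decision process for
# coalition S; here c is simply a given dictionary (an assumption).

def in_core(players, c, allocation, tol=1e-9):
    """allocation[i] is player i's share; c maps frozensets to costs.
    Core conditions for a cost game: efficiency (shares sum to the
    grand-coalition cost) and stability (no coalition pays more
    jointly than it would on its own)."""
    grand = frozenset(players)
    if abs(sum(allocation[i] for i in players) - c[grand]) > tol:
        return False  # not efficient
    for r in range(1, len(players)):
        for coalition in combinations(players, r):
            S = frozenset(coalition)
            if sum(allocation[i] for i in S) > c[S] + tol:
                return False  # coalition S would rather defect
    return True

# Toy three-provider example with subadditive pooling costs (illustrative).
players = [1, 2, 3]
c = {frozenset(s): v for s, v in [
    ((1,), 10.0), ((2,), 12.0), ((3,), 14.0),
    ((1, 2), 18.0), ((1, 3), 20.0), ((2, 3), 22.0),
    ((1, 2, 3), 27.0),
]}
print(in_core(players, c, {1: 8.0, 2: 9.0, 3: 10.0}))  # True
```

A nonempty core means at least one such allocation exists, so no subset of providers has an incentive to leave the joint pool.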


2018
Vol. 15 (4)
pp. 172988141878706
Author(s):  
Yunyun Zhao ◽  
Xiangke Wang ◽  
Yirui Cong ◽  
Lincheng Shen

In this article, we study the ground moving target tracking problem for a fixed-wing unmanned aerial vehicle equipped with a radar. The problem is formulated in a partially observable Markov decision process framework with two parts: in the first, the unmanned aerial vehicle uses its radar measurements and a Kalman filter to estimate the target's real-time location; in the second, it optimizes its trajectory in real time so that the radar's measurements carry more useful information. To solve the trajectory optimization problem, we propose an information geometry-based partially observable Markov decision process method. Specifically, the cumulative amount of information in the observations is represented by the Fisher information of information geometry, which serves as the criterion of the partially observable Markov decision process problem. Furthermore, to guarantee real-time performance, a trade-off between optimality and computation cost is made via an approximate receding-horizon approach. Finally, simulation results corroborate the accuracy and time efficiency of the proposed method and show its advantage in computation time over existing methods.
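
The two-part structure can be sketched compactly: a Kalman filter supplies the target estimate, and the trajectory step greedily picks the heading whose next range-bearing measurement carries the most Fisher information. The dynamics, noise levels, the extended-filter update (the radar measurement is nonlinear), and the one-step horizon are all illustrative simplifications of the paper's receding-horizon scheme:

```python
import numpy as np

# Part 1: (extended) Kalman filtering of a constant-velocity target from
# range-bearing radar measurements. Part 2: greedy one-step choice of the
# UAV heading maximising the Fisher information of the next measurement.
# All models and parameters below are illustrative assumptions.

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)  # constant velocity
Qn = 0.01 * np.eye(4)                               # process noise
R = np.diag([25.0, (np.pi / 180) ** 2])             # range/bearing noise

def kf_predict(x, P):
    return F @ x, F @ P @ F.T + Qn

def measurement_jacobian(x, uav):
    """Jacobian of the range-bearing measurement w.r.t. the target state."""
    dx, dy = x[0] - uav[0], x[1] - uav[1]
    r2 = dx * dx + dy * dy
    r = np.sqrt(r2)
    return np.array([[dx / r, dy / r, 0, 0],
                     [-dy / r2, dx / r2, 0, 0]])

def kf_update(x, P, z, uav):
    """EKF measurement update with a range-bearing observation z."""
    H = measurement_jacobian(x, uav)
    dx, dy = x[0] - uav[0], x[1] - uav[1]
    z_pred = np.array([np.hypot(dx, dy), np.arctan2(dy, dx)])
    S_cov = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S_cov)
    return x + K @ (z - z_pred), (np.eye(4) - K @ H) @ P

def fisher_info(x, uav):
    """Scalar criterion: log-det of the measurement Fisher information."""
    H = measurement_jacobian(x, uav)
    J = H.T @ np.linalg.inv(R) @ H
    # J is rank-deficient in velocity, so score only the position block.
    return np.linalg.slogdet(J[:2, :2])[1]

def best_heading(x_pred, uav, speed=30.0, n_headings=16):
    """Greedy one-step receding-horizon choice of the UAV heading."""
    headings = np.linspace(0, 2 * np.pi, n_headings, endpoint=False)
    def score(h):
        nxt = uav + speed * dt * np.array([np.cos(h), np.sin(h)])
        return fisher_info(x_pred, nxt)
    return max(headings, key=score)
```

Extending the horizon beyond one step trades more computation for better anticipation, which is exactly the trade-off the receding-horizon approximation manages.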


Data association in a distributed camera network is a new method for analysing the large volume of video information in camera networks, and it is an important step in multi-camera multi-target tracking. Distributed processing is a new paradigm for analysing videos in a camera network: each camera acts on its own, and all cameras work cooperatively toward a common goal. In this paper, we address the problem of distributed data association (DDA) to obtain the feet positions of objects. These positions are shared with immediate neighbours, and local matches are found using homography; propagating the local matches across the network then yields the global associations. The proposed DDA method is less complex and achieves higher accuracy than the centralized methods (STSPIE, EMTIC, JPDAEKCF, CSPIF, and CEIF).
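
A sketch of the local-matching step: feet positions from one camera are mapped through a pre-calibrated inter-camera homography into a neighbour's view, and detections are paired by solving a bipartite assignment on reprojection distance. The homography H_ab and the gating threshold are hypothetical stand-ins for the paper's calibration and tuning:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Sketch of local matching between two neighbouring cameras: feet
# positions from camera A are mapped through a pre-calibrated homography
# H_ab into camera B's image plane, then detections are paired by a
# bipartite assignment on reprojection distance. H_ab and the gate are
# assumptions standing in for real calibration and tuning.

def apply_homography(H, pts):
    """Map Nx2 points through a 3x3 homography."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    proj = pts_h @ H.T
    return proj[:, :2] / proj[:, 2:3]

def match_feet(feet_a, feet_b, H_ab, gate=50.0):
    """Return (i, j) index pairs of locally associated detections."""
    mapped = apply_homography(H_ab, feet_a)
    cost = np.linalg.norm(mapped[:, None, :] - feet_b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    # Gate out assignments whose reprojection error is implausibly large.
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < gate]
```

Each camera runs this only against its immediate neighbours; the global associations then emerge from propagating the local pairs through the network rather than from any central node.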


1998
Vol. 30 (1)
pp. 122–136
Author(s):  
E. J. Collins ◽  
J. M. McNamara

We consider a problem similar in many respects to a finite horizon Markov decision process, except that the reward to the individual is a strictly concave functional of the distribution of the state of the individual at the final time T. Reward structures such as these are of interest to biologists studying the fitness of different strategies in a fluctuating environment. The problem fails to satisfy the usual optimality equation and cannot be solved directly by dynamic programming. We establish equations characterising the optimal final distribution and an optimal policy π*. We show that in general π* will be a Markov randomised policy (or, equivalently, a mixture of Markov deterministic policies), and we develop an iterative, policy-improvement-based algorithm which converges to π*. We also consider an infinite-population version of the problem, and show that the population cannot do better using a coordinated policy than by each individual independently following the individual optimal policy π*.
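
One plausible reading of the iterative scheme is a Frank-Wolfe method in distribution space: the achievable final-time distributions form a convex set (mixtures of Markov deterministic policies), so each iteration linearises the concave reward at the current final distribution, solves the linearised problem exactly by backward induction, and mixes the resulting deterministic policy into the incumbent. The two-state MDP and the entropy reward below are illustrative assumptions, not the authors' example:

```python
import numpy as np

# Frank-Wolfe-style sketch: the set of achievable final distributions is
# convex, phi is strictly concave on it, and each linearised subproblem
# is an ordinary finite-horizon MDP solved by backward induction. The
# two-state, two-action MDP and the entropy reward are assumptions.

S, A, T = 2, 2, 5
P = np.zeros((A, S, S))                     # P[a, s, j] = Pr(j | s, a)
P[0] = [[0.9, 0.1], [0.2, 0.8]]             # action 0: tend to stay
P[1] = [[0.3, 0.7], [0.7, 0.3]]             # action 1: tend to switch
mu0 = np.array([1.0, 0.0])                  # initial state distribution

def phi_grad(mu):
    """Gradient of phi(mu) = -sum(mu log mu), a strictly concave reward."""
    return -np.log(mu + 1e-12) - 1.0

def backward_induction(r_T):
    """Markov deterministic policy maximising the expected terminal
    reward r_T, found by standard dynamic programming."""
    v, policy = r_T.copy(), np.zeros((T, S), dtype=int)
    for t in reversed(range(T)):
        q = np.array([P[a] @ v for a in range(A)])  # A x S action values
        policy[t], v = q.argmax(axis=0), q.max(axis=0)
    return policy

def final_distribution(det_policy):
    """Distribution of the time-T state under a deterministic policy."""
    mu = mu0
    for t in range(T):
        K = np.array([P[det_policy[t, s], s] for s in range(S)])
        mu = mu @ K
    return mu

det0 = backward_induction(np.zeros(S))      # arbitrary starting policy
mixture, mu = [(1.0, det0)], final_distribution(det0)
for k in range(1, 200):
    det = backward_induction(phi_grad(mu))  # best response to linearisation
    gamma = 2.0 / (k + 2)                   # standard Frank-Wolfe step size
    mu = (1 - gamma) * mu + gamma * final_distribution(det)
    mixture = [(w * (1 - gamma), p) for w, p in mixture] + [(gamma, det)]

print(mu)  # approaches the best achievable (entropy-maximising) final law
```

The `mixture` list records the weights over deterministic policies, which is exactly the form the theorem predicts for π*: a Markov randomised policy realised as a mixture of Markov deterministic ones.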

