Tracking Individual Targets for High Density Crowd Scenes Analysis

2020 ◽  
Author(s):  
Jack Norris ◽  
Wesley Creveling ◽  
Ernest Porter ◽  
Emory Vassel

In this paper we present a number of methods (manual, semi-automatic and automatic) for tracking individual targets in high density crowd scenes where thousands of people are gathered. The necessary data about the motion of individuals, along with a lot of other physical information, can be extracted from consecutive image sequences in different ways, including optical flow and block motion estimation. One of the best-known methods for tracking moving objects is block matching. This way of estimating subject motion requires the specification of a comparison window, which determines the scale of the estimate. In this work we present a real-time method for pedestrian recognition and tracking in sequences of high resolution images obtained by a stationary (high definition) camera located at different places around the Haram Mosque in Mecca. The objective is to estimate pedestrian velocities as a function of the local density. The resulting data from tracking moving pedestrians in the video sequences are presented in the following section. With the evaluated system, the spatio-temporal coordinates of each pedestrian during the Tawaf ritual are established. The pilgrim velocities as a function of the local densities in the Mataf area (Haram Mosque, Mecca) are illustrated and precisely documented. Tracking in such places, where pedestrian density reaches 7 to 8 persons/m$^2$, is extremely challenging due to the small number of pixels on the target, appearance ambiguity resulting from the dense packing, and severe inter-object occlusions. The tracking method outlined in this paper overcomes these challenges by using a virtual camera which is matched in position, rotation and focal length to the original camera, in such a way that the features of the 3D model match the feature positions of the filmed mosque. In this model an individual feature has to be identified by eye, where contrast is a criterion. We know that the pilgrims walk on a plane, and after matching the camera we also have the height of that plane in 3D space from our 3D model. A point object is placed at the position of a selected pedestrian. During the animation we set multiple animation keys (approximately every 25 to 50 frames, which equals 1 to 2 seconds) for the position, such that the position of the point and the pedestrian overlay nearly at every time step. By combining all these variables with the available appearance information, we are able to track individual targets in high density crowds.
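As a rough illustration of the geometric part of such a pipeline, the sketch below back-projects a tracked pixel onto the known walking plane and differences two keyed positions to obtain a speed. The intrinsics, camera pose, plane height and pixel coordinates are all assumed placeholder values, not parameters from the paper's 3D model.

```python
import numpy as np

# Minimal sketch (not the authors' 3D-model pipeline): back-project a tracked
# pixel onto the known walking plane and estimate speed from two keyed
# positions. All camera parameters below are illustrative assumptions.

K = np.array([[1400.0, 0.0, 960.0],      # focal length and principal point (assumed)
              [0.0, 1400.0, 540.0],
              [0.0, 0.0, 1.0]])
R_cw = np.array([[1.0, 0.0, 0.0],        # camera-to-world rotation: camera looks straight down
                 [0.0, -1.0, 0.0],
                 [0.0, 0.0, -1.0]])
cam_center = np.array([0.0, 0.0, 25.0])  # camera 25 m above the walking plane (assumed)
plane_z = 0.0                            # height of the walking plane in world coordinates

def pixel_to_plane(u, v):
    """Intersect the viewing ray through pixel (u, v) with the plane z = plane_z."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction in the camera frame
    ray_world = R_cw @ ray_cam                          # rotate ray into the world frame
    s = (plane_z - cam_center[2]) / ray_world[2]        # scale along the ray to hit the plane
    return cam_center + s * ray_world                   # 3D point on the walking plane

# Two keyed pixel positions of the same pedestrian, 25 frames (1 s at 25 fps) apart
p0 = pixel_to_plane(1012, 640)
p1 = pixel_to_plane(1050, 655)
speed = np.linalg.norm((p1 - p0)[:2]) / 1.0             # metres per second
print(f"estimated walking speed: {speed:.2f} m/s")
```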

Author(s):  
Richard M. Ziernicki ◽  
Angelos G. Leiloglo ◽  
Taylor Spiegelberg ◽  
Kurt Twigg

This paper presents a methodology that uses the photogrammetric process of matchmoving for analyzing objects (vehicles, pedestrians, etc.) visible in video captured by moving cameras. Matchmoving is an established scientific process that is used to calibrate a virtual camera to “match” the movement and optical properties of the real-world camera that captured the video. High-definition 3D laser scanning technology makes it possible to accurately perform the matchmoving process and evaluate the results. Once a virtual camera is accurately calibrated, moving objects visible in the video can be tracked or matched to determine their position, orientation, path, speed, and acceleration. Specific applications of the matchmoving methodology are presented and discussed in this paper and include analysis performed on video footage from a metro bus on-board camera, police officer body-worn camera footage, and race track video footage captured by a drone. In all cases, the matchmoving process yielded highly accurate camera calibrations and allowed forensic investigators to accurately determine and evaluate the dynamics of moving objects depicted in the video.
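To illustrate the calibration step in isolation, the sketch below recovers a single-frame camera pose from scanned 3D reference points and their pixel locations with a perspective-n-point solver, then differences tracked positions to obtain speed and acceleration. The point coordinates, intrinsics and positions are illustrative assumptions, not data from the paper.

```python
import numpy as np
import cv2

# Minimal sketch of the calibration idea: recover the camera pose for one frame
# from scanned 3D reference points and their 2D image locations. All values
# below are illustrative, not measurements from the paper.

object_points = np.array([  # 3D reference points taken from a laser scan (metres, assumed)
    [0.0, 0.0, 0.0], [4.2, 0.0, 0.0], [4.2, 0.0, 2.5],
    [0.0, 0.0, 2.5], [2.1, 3.0, 0.0], [2.1, 3.0, 2.5],
], dtype=np.float64)
image_points = np.array([   # matching pixel locations identified in the frame (assumed)
    [612, 480], [955, 472], [948, 231],
    [618, 240], [790, 505], [786, 214],
], dtype=np.float64)
camera_matrix = np.array([[1500.0, 0.0, 960.0],
                          [0.0, 1500.0, 540.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)   # assume negligible lens distortion for the sketch

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
print("camera pose recovered:", ok, rvec.ravel(), tvec.ravel())

# Once per-frame poses exist, a tracked object's world positions can be
# differenced to estimate speed and acceleration.
positions = np.array([[0.0, 0.0], [0.8, 0.1], [1.7, 0.2], [2.7, 0.3]])  # metres, assumed
dt = 1.0 / 30.0                                    # frame interval for a 30 fps camera
speeds = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
accel = np.diff(speeds) / dt
print("speeds (m/s):", speeds, "accelerations (m/s^2):", accel)
```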


2021 ◽  
Vol 13 (2) ◽  
pp. 690
Author(s):  
Tao Wu ◽  
Huiqing Shen ◽  
Jianxin Qin ◽  
Longgang Xiang

Identifying stops from GPS trajectories is one of the main concerns in the study of moving objects and has a major effect on a wide variety of location-based services and applications. Although the spatial and non-spatial characteristics of trajectories have been widely investigated for the identification of stops, few studies have concentrated on the impact of contextual features, which are also connected to the road network and nearby Points of Interest (POIs). In order to obtain more precise stop information from moving objects, this paper proposes and implements a novel approach that represents the spatio-temporal dynamic relationship between stopping behaviors and geospatial elements to detect stops. Candidate stops obtained with the standard time–distance threshold approach and the surrounding environmental elements are integrated into a composite structure (the mobility context cube) to extract stop features, and stops are then precisely derived using classification. The methodology presented is designed to reduce the error rate of stop detection in trajectory data mining. It turns out that 26 features can contribute to recognizing stop behaviors from trajectory data. Additionally, experiments on a real-world trajectory dataset further demonstrate the effectiveness of the proposed approach in improving the accuracy of identifying stops from trajectories.
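For reference, a minimal sketch of the standard time–distance threshold step that produces the candidate stops is given below; the thresholds and point format are assumptions, and the contextual features of the mobility context cube are not modeled here.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

# Minimal sketch of the time-distance threshold step used to produce candidate
# stops. Thresholds and the point format are assumptions; the paper's contextual
# (road network / POI) features are not modeled.

@dataclass
class Point:
    lat: float
    lon: float
    t: float  # seconds since the start of the trajectory

def haversine_m(a: Point, b: Point) -> float:
    """Great-circle distance between two GPS fixes in metres."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371000 * asin(sqrt(h))

def candidate_stops(track, dist_thresh=50.0, time_thresh=120.0):
    """Return (start_idx, end_idx) pairs where the object stays within
    dist_thresh metres of an anchor point for at least time_thresh seconds."""
    stops, i = [], 0
    while i < len(track):
        j = i + 1
        while j < len(track) and haversine_m(track[i], track[j]) <= dist_thresh:
            j += 1
        if track[j - 1].t - track[i].t >= time_thresh:
            stops.append((i, j - 1))
            i = j
        else:
            i += 1
    return stops

track = [Point(28.1900, 112.9700, 0), Point(28.1901, 112.9701, 60),
         Point(28.1900, 112.9702, 150), Point(28.2100, 112.9900, 300)]
print(candidate_stops(track))  # -> [(0, 2)]
```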


2011 ◽  
Vol 145 ◽  
pp. 277-281
Author(s):  
Vaci Istanda ◽  
Tsong Yi Chen ◽  
Wan Chun Lee ◽  
Yuan Chen Liu ◽  
Wen Yen Chen

With the development of networked learning, video compression has become important for both data transmission and storage, especially over digital channels. In this paper, we present the return prediction search (RPS) algorithm for block motion estimation. The proposed algorithm exploits temporal correlation and the characteristic of returning to the origin to obtain one or two predictive motion vectors, and selects the motion vector that gives the better result as the initial search center. In addition, we utilize center-biased block matching algorithms to refine the final motion vector. Moreover, we use an adaptive threshold technique to reduce the computational complexity of motion estimation. Experimental results show that the RPS algorithm combined with 4SS, BBGDS, and UCBDS effectively improves performance in terms of the mean-square error measure with fewer average search points. On the other hand, the accelerated RPS (ARPS) algorithm requires only 38% of the search computations of the 3SS algorithm, and the reconstructed image quality of ARPS is superior to 3SS by about 0.30 dB on average over all test sequences. In addition, we create an asynchronous learning environment which provides students and instructors flexibility in learning and teaching activities. The purpose of this web site is to teach and display our research results. Therefore, we believe this web site is one of the keys to helping modern students achieve mastery of complex motion estimation.
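The sketch below illustrates the underlying block-matching idea with a sum-of-absolute-differences (SAD) cost and a search window centred on a predicted motion vector; it is a generic illustration of predictive search starters, not the authors' RPS/ARPS implementation, and all block sizes and search radii are assumptions.

```python
import numpy as np

# Generic block-matching sketch: SAD cost evaluated inside a small search
# window centred on a predicted motion vector (e.g. the co-located block's
# previous vector). Not the authors' RPS/ARPS algorithm.

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum()

def match_block(cur, ref, top, left, size=16, radius=7, predicted=(0, 0)):
    """Find the motion vector for the block at (top, left) in `cur` by searching
    `ref` within `radius` pixels of the predicted displacement."""
    block = cur[top:top + size, left:left + size]
    best_cost, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y = top + predicted[0] + dy
            x = left + predicted[1] + dx
            if y < 0 or x < 0 or y + size > ref.shape[0] or x + size > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            cost = sad(block, ref[y:y + size, x:x + size])
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (predicted[0] + dy, predicted[1] + dx)
    return best_mv, best_cost

# Toy frames: the reference frame is the current frame shifted by (2, 3) pixels
rng = np.random.default_rng(0)
cur = rng.integers(0, 256, (64, 64), dtype=np.uint8)
ref = np.roll(cur, shift=(2, 3), axis=(0, 1))
print(match_block(cur, ref, top=16, left=16))  # -> ((2, 3), 0)
```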

