Integration of Wireless Gesture Tracking, Object Tracking, and 3D Reconstruction in the Perceptive Workbench

Author(s):  
Bastian Leibe ◽  
David Minnen ◽  
Justin Weeks ◽  
Thad Starner
2003 ◽  
Vol 14 (1) ◽  
pp. 59-71 ◽  
Author(s):  
Thad Starner ◽  
Bastian Leibe ◽  
David Minnen ◽  
Tracy Westyn ◽  
Amy Hurst ◽  
...  

1999 ◽  
Vol 35 (5) ◽  
pp. 675-683 ◽  
Author(s):  
Koichiro DEGUCHI ◽  
Shingo KAGAMI ◽  
Satoshi SAGA ◽  
Hidekata HONTANI

2018 ◽  
Vol 152 ◽  
pp. 03001
Author(s):  
Yun Zhe Cheong ◽  
Wei Jen Chew

Object tracking is a computer vision field that involves identifying and tracking either a single object or multiple objects in an environment. It is extremely useful for observing the movements of target objects such as people on the street or cars on the road. However, a common issue when tracking an object in an environment with many moving objects is occlusion. Occlusion can cause the system to lose track of the object being tracked or, after the objects overlap, to track the wrong object instead. In this paper, a system that is able to correctly track occluded objects is proposed. The system includes algorithms for foreground object segmentation, colour tracking, object specification, and occlusion handling. A video is fed into the system and every frame is analysed. The foreground objects are segmented with the object segmentation algorithm and tracked with the colour tracking algorithm, and an ID is assigned to each tracked object. The results obtained show that the proposed system is able to continuously track an object and maintain the correct identity even after it has been occluded by another object.
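The abstract describes the ID-maintenance step only at a high level; as a rough illustration, it could look like the following sketch, assuming each segmented foreground object is summarised by a mean RGB colour. The function name, the distance threshold, and the data layout are hypothetical, not the authors' implementation:

```python
import numpy as np

def match_detections_to_tracks(tracks, detections, max_dist=60.0):
    """Assign each detection to the nearest existing track by mean-colour
    distance; unmatched detections start new tracks. A track keeps its ID
    even if it was missing (occluded) in intermediate frames."""
    next_id = max(tracks, default=-1) + 1
    assignments = {}
    for det_idx, colour in enumerate(detections):
        best_id, best_dist = None, max_dist
        for tid, track_colour in tracks.items():
            d = float(np.linalg.norm(np.asarray(colour, float)
                                     - np.asarray(track_colour, float)))
            if d < best_dist:
                best_id, best_dist = tid, d
        if best_id is None:          # no track is close enough: new object
            best_id = next_id
            next_id += 1
        tracks[best_id] = colour     # update the track's colour model
        assignments[det_idx] = best_id
    return assignments
```

After an occlusion ends, the reappearing detection is matched back to its old ID as long as its colour is still the closest model under the threshold.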


2020 ◽  
Vol 17 (2) ◽  
pp. 362-371
Author(s):  
Wahyu Supriyatin

Object tracking is a subfield of computer vision, which aims to mimic the function of the human eye. The difficulty lies in detecting the presence of an object, and object tracking applications are built for this purpose. Object tracking is used in aircraft, car tracking, human body detection at airports, regulating the number of passing vehicles, and robot navigation. This study identifies objects that pass through a frame and also counts the number of objects that pass in a single frame. Object tracking is performed by comparing two algorithms, Horn-Schunck and Lucas-Kanade. Both algorithms were tested using the Source Block Parameter and Function Block Parameter, with a video resolution of 120x160 and the camera positioned 2-4 m away. The object tracking test was conducted over a duration of 110-120 seconds. The tracking stages of thresholding, filtering, and region analysis successfully produced a binary video of the object. The Lucas-Kanade algorithm was faster at identifying objects than the Horn-Schunck algorithm.
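For context on the comparison above: the Lucas-Kanade method estimates motion by solving a small least-squares system over a local window, which is why it tends to be fast, while Horn-Schunck solves a global smoothness-regularised problem over the whole image. A minimal single-window NumPy sketch of Lucas-Kanade (a textbook version, not the Simulink block configuration the study used):

```python
import numpy as np

def lucas_kanade_flow(frame1, frame2, y, x, win=2):
    """Estimate the optical-flow vector (u, v) at pixel (y, x) by solving
    the Lucas-Kanade least-squares system Ix*u + Iy*v = -It over a
    (2*win+1) x (2*win+1) window."""
    f1 = np.asarray(frame1, float)
    f2 = np.asarray(frame2, float)
    Iy, Ix = np.gradient(f1)          # spatial gradients
    It = f2 - f1                      # temporal gradient
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    # least-squares solution; rank-deficient windows get the min-norm answer
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

On a horizontal intensity ramp shifted one pixel to the right between frames, this recovers u = 1, v = 0 at interior pixels.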


2019 ◽  
Vol 16 (04) ◽  
pp. 1950017
Author(s):  
Sheng Liu ◽  
Yangqing Wang ◽  
Fengji Dai ◽  
Jingxiang Yu

Motion detection and object tracking play important roles in unsupervised human–machine interaction systems. Nevertheless, human–machine interaction becomes invalid when the system fails to detect scene objects correctly due to occlusion and a limited field of view. Thus, robust long-term tracking of scene objects is vital. In this paper, we present a 3D motion detection and long-term tracking system with simultaneous 3D reconstruction of dynamic objects. To achieve high-precision motion detection, the proposed method provides an optimization framework with a novel motion pose estimation energy function, by which the 3D motion pose of each object can be estimated independently. We also develop an accurate object-tracking method that combines 2D visual information and depth, incorporating a novel boundary-optimization segmentation based on both cues to improve the robustness of tracking significantly. In addition, we introduce a new fusion and updating strategy in the 3D reconstruction process, which brings higher robustness to 3D motion detection. Experimental results show that, for synthetic sequences, the root-mean-square error (RMSE) of our system is much smaller than that of Co-Fusion (CF); our system performs extremely well in 3D motion detection accuracy. In cases of occlusion or out-of-view motion on real scene data, CF suffers from tracking loss or object-label changes; by contrast, our system maintains robust tracking and the correct label for each dynamic object. Therefore, our system is robust to occlusion and out-of-view application scenarios.
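The paper's motion pose estimation energy function is not reproduced here. As a hedged stand-in for the idea of estimating each object's 3D motion pose independently, the rigid motion between two frames can be recovered in closed form from matched 3D points with the classic Kabsch/SVD solution:

```python
import numpy as np

def rigid_motion(P, Q):
    """Least-squares rigid motion (R, t) with q_i ~= R @ p_i + t for matched
    3D point sets P and Q (N x 3), via the Kabsch/SVD construction."""
    P = np.asarray(P, float)
    Q = np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    # the diag(1, 1, d) factor guards against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

A full system would minimise a robust energy over dense correspondences rather than this clean closed form, but the recovered (R, t) plays the same role as the per-object motion pose.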


2011 ◽  
Vol 1 (2) ◽  
pp. 89
Author(s):  
Dina Nurul Fitria ◽  
Ikhwan B. Zarkasi ◽  
Rose Maulidiyatul H

<p style="text-align: justify;" align="center">Banyak cara untuk dapat mendeteksi keamanan sebuah wilayah tertentu. Salah satu cara pengamanan yang bisa digunakan adalah dengan menggunakan pemantauan berbasis video pengawasan (<em>video surveillance</em>). Sebenarnya, video pengawasan sudah banyak digunakan di Indonesia. Tetapi, umumnya video pengawasan ini hanya mampu merekam gambar, tanpa ada kemampuan pintar yakni, <em>object tracking, object recognition</em> dan <em>object analyzing</em>. Sehingga, hasil yang diharapkan kurang maksimal dan belum bisa membantu tugas pengawasan secara keseluruhan. Paper ini bertujuan untuk membuat algoritma dari <em>object tracking</em> yang ada pada video pengawasan sebagai rujukan pengembangan video pengawasan dengan kemampuan <em>object recognition</em> dan <em>object analyzing</em>. Masalah utama yang sering muncul dalam pembuatan <em>object tracking</em> adalah ketika terjadi<em> occlusion</em> (tumpang tindih) antara dua <em>object </em>dalam sebuah frame. Pada saat <em>occlusion</em>, <em>object </em>yang sama pada frame yang berbeda kemungkinan dapat dikenali sebagai<em> object</em> yang berbeda. Sehingga, proses <em>object tracking</em> akan menjadi terganggu. <em>Bayesian Networks</em> memungkinkan untuk membandingkan data yang didapat dari masing-masing <em>object </em>yang ada <em>(likelihood)</em> dengan data awal yang telah dimiliki <em>(prior)</em>, dengan menghitung <em>Maximum A-Posteriori Probability</em>(MAP) yang dimiliki, sehingga <em>object </em>yang sama pada frame yang berbeda tetap akan dikenali sebagai <em>object</em> yang sama</p><h6 style="text-align: center;"><strong> </strong><strong>Abstract</strong></h6><p style="text-align: justify;" align="center">There are many ways/technique to detect the security/safety of fixed area. One of security technique that can be used is by using monitoring based on Video surveillance. In fact, this monitoring video has already been used in Indonesia. 
But, video surveillance, commonly, just can record images without any smart abilities, such as object tracking, object recognition and object analyzing. So, the expected result is not optimal and still not be able to help monitoring role totally. This research is aimed to make the algorithm of object trackingin video surveillance, in order to be reference for development of video surveillance with ability of object recognition and object analyzing. The main problem that frequently comes up on the making of object tracking is occlusion between two objects in a single frame. When occlusion is happened, same object in different frame probably can be recognized as two different objects. So, the process of object tracking can be disturbed. Bayesian Network is enable to compare data that got from every object (likelihood) with prior data that has already been provided by counting its Maximum A-Posteriori Probability (MAP), so same object in different frame are still be able to be recognized as same object.</p>
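The MAP decision described in this abstract can be sketched in a few lines: the posterior over candidate identities is proportional to the likelihood of the observation under each identity times that identity's prior, and the tracker keeps the argmax. The prior and likelihood values in the usage example are made-up illustrations:

```python
import numpy as np

def map_identity(priors, likelihoods):
    """Resolve an object's identity after occlusion by Maximum A-Posteriori
    estimation: posterior(id) is proportional to likelihood(obs | id) * prior(id)."""
    priors = np.asarray(priors, float)
    likelihoods = np.asarray(likelihoods, float)
    posterior = likelihoods * priors
    posterior /= posterior.sum()      # normalise so the scores sum to 1
    return int(np.argmax(posterior)), posterior
```

For example, with a uniform prior `[0.5, 0.5]` and appearance likelihoods `[0.9, 0.1]`, the object is assigned identity 0 with posterior 0.9, so the track keeps its label across the occlusion.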


2013 ◽  
Vol 303-306 ◽  
pp. 1552-1555 ◽  
Author(s):  
Zhao Xia Fu ◽  
Li Ming Wang

As one of the crucial issues in computer vision, video object tracking is widely used in many applications, such as visual surveillance, human-computer interaction, visual transportation, visual navigation of robots, and military guidance. Existing object tracking algorithms in engineering applications require a huge amount of computation, which cannot meet the needs of real-time applications, and their tracking accuracy is not high. A simple and practical video object tracking algorithm is therefore proposed in this paper. The Otsu algorithm is used for image binarization to filter out the background, and the object edge is further processed with mathematical morphology, making the tracked object clearer. The centroid weighted method determines the location of the object's centre in a single calculation step, which makes the location more accurate. The experimental results show that the proposed algorithm is effective for detecting and tracking a moving object in a static scene and has low complexity.
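Otsu's method, the binarization step named above, picks the grey level that maximises the between-class variance of the resulting foreground/background split. A compact histogram-based sketch (a standard formulation, not the authors' code):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold t (maximising between-class variance) and
    the binary foreground mask gray > t, for an 8-bit greyscale image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))         # cumulative mean up to t
    mu_total = mu[-1]
    # between-class variance for every candidate threshold t
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    t = int(np.nanargmax(sigma_b))             # empty classes give NaN
    return t, gray > t
```

On a bimodal image the returned threshold lands between the two modes, separating the moving object from the static background before the morphological clean-up.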


Author(s):  
JIANGJIAN XIAO ◽  
HUI CHENG ◽  
FENG HAN ◽  
HARPREET SAWHNEY

This paper presents an approach to extract semantic layers from aerial surveillance videos for scene understanding and object tracking. The input videos are captured by low flying aerial platforms and typically consist of strong parallax from non-ground-plane structures as well as moving objects. Our approach leverages the geo-registration between video frames and reference images (such as those available from Terraserver and Google satellite imagery) to establish a unique geo-spatial coordinate system for pixels in the video. The geo-registration process enables Euclidean 3D reconstruction with absolute scale, unlike traditional monocular structure from motion where continuous scale estimation over long periods of time is an issue. Geo-registration also enables correlation of video data to other stored information sources such as GIS (Geo-spatial Information System) databases. In addition to the geo-registration and 3D reconstruction aspects, the other key contributions of this paper include: (1) providing a reliable geo-based solution to estimate camera pose for 3D reconstruction, (2) exploiting appearance and 3D shape constraints derived from geo-registered videos for labeling of structures such as buildings, foliage, and roads for scene understanding, and (3) elimination of moving object detection and tracking errors using 3D parallax constraints and semantic labels derived from geo-registered videos. Experimental results on extended-time aerial video data demonstrate the qualitative and quantitative aspects of our work.
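Geo-registration as described above is a full camera-pose problem; as a deliberately simplified stand-in, a 2D affine mapping from frame pixels to geo-referenced coordinates can be fitted from matched control points by least squares. The function names and the affine model are illustrative assumptions (real systems fit a homography or a full camera pose):

```python
import numpy as np

def fit_geo_registration(pix, geo):
    """Fit a least-squares 2D affine map from frame pixels (N x 2) to
    geo-referenced coordinates (N x 2), given matched control points."""
    pix = np.asarray(pix, float)
    A = np.hstack([pix, np.ones((len(pix), 1))])   # homogeneous rows [x, y, 1]
    params, *_ = np.linalg.lstsq(A, np.asarray(geo, float), rcond=None)
    return params                                   # 3 x 2 affine matrix

def apply_geo_registration(params, pts):
    """Map pixel coordinates into the geo-spatial frame."""
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ params
```

Once every pixel has geo-coordinates, video data can be joined against GIS layers in the same coordinate system, which is the correlation step the abstract mentions.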


2020 ◽  
Vol 10 (21) ◽  
pp. 7622
Author(s):  
Stéphane Vujasinović ◽  
Stefan Becker ◽  
Timo Breuer ◽  
Sebastian Bullinger ◽  
Norbert Scherer-Negenborn ◽  
...  

Single visual object tracking from an unmanned aerial vehicle (UAV) poses fundamental challenges such as object occlusion, small-scale objects, background clutter, and abrupt camera motion. To tackle these difficulties, we propose to integrate the 3D structure of the observed scene into a detection-by-tracking algorithm. We introduce a pipeline that combines a model-free visual object tracker, a sparse 3D reconstruction, and a state estimator. The 3D reconstruction of the scene is computed with an image-based Structure-from-Motion (SfM) component that enables us to leverage a state estimator in the corresponding 3D scene during tracking. By representing the position of the target in 3D space rather than in image space, we stabilize the tracking during ego-motion and improve the handling of occlusions, background clutter, and small-scale objects. We evaluated our approach on prototypical image sequences, captured from a UAV with low-altitude oblique views. For this purpose, we adapted an existing dataset for visual object tracking and reconstructed the observed scene in 3D. The experimental results demonstrate that the proposed approach outperforms methods using plain visual cues as well as approaches leveraging image-space-based state estimations. We believe that our approach can be beneficial for traffic monitoring, video surveillance, and navigation.
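The state estimator run in the reconstructed 3D scene is not specified in detail above; a common choice for this role is a constant-velocity Kalman filter over the target's 3D position. A minimal sketch, with assumed process and measurement noise parameters and a unit time step:

```python
import numpy as np

class ConstantVelocity3D:
    """Constant-velocity Kalman filter over a 3D target position -- a sketch
    of the kind of state estimator such a pipeline could run; the noise
    values q and r are illustrative assumptions."""

    def __init__(self, q=1e-2, r=1e-1):
        self.x = np.zeros(6)                  # state [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)                    # state covariance
        self.F = np.eye(6)
        self.F[:3, 3:] = np.eye(3)            # position += velocity * dt (dt = 1)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position
        self.Q = q * np.eye(6)
        self.R = r * np.eye(3)

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with a 3D position measurement z
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]
```

During short occlusions, running only the predict step lets the filter coast the target through 3D space until the tracker re-acquires it.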

