A Robust Tracking-by-Detection Algorithm Using Adaptive Accumulated Frame Differencing and Corner Features

2020 ◽  
Vol 6 (4) ◽  
pp. 25
Author(s):  
Nahlah Algethami ◽  
Sam Redfern

We propose a tracking-by-detection algorithm to track the movements of meeting participants from an overhead camera. An advantage of using overhead cameras is that all objects can typically be seen clearly, with little occlusion; however, detecting people from a wide-angle overhead view also poses challenges, such as people’s appearance changing significantly with their position in the wide-angle image, and a general lack of strong image features. Our experimental datasets do not include empty meeting rooms, which means that standard motion-based detection techniques (e.g., background subtraction or consecutive frame differencing) struggle, since there is no prior knowledge from which to build a background model. Additionally, standard techniques may perform poorly when there is a wide range of movement behaviours (e.g., periods of no movement and periods of fast movement), as is often the case in meetings. Our algorithm uses a novel coarse-to-fine detection and tracking approach, combining motion detection using adaptive accumulated frame differencing (AAFD) with Shi-Tomasi corner detection. We present a quantitative and qualitative evaluation that demonstrates the robustness of our method for tracking people in environments where object features are not clear and have a colour similar to the background. We show that our approach achieves excellent performance in terms of the multiple object tracking accuracy (MOTA) metric, and that it is particularly robust to initialisation differences when compared with baseline and state-of-the-art trackers. Using the Online Tracking Benchmark (OTB) videos, we also demonstrate that our tracker is very strong in the presence of background clutter, deformation and illumination variation.
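The coarse motion stage can be illustrated with a minimal NumPy sketch of accumulated frame differencing combined with the Shi-Tomasi corner score. The abstract does not give the AAFD adaptation rule or its parameters, so the fixed window and threshold below are illustrative placeholders, not the authors' implementation:

```python
import numpy as np

def accumulated_frame_difference(frames, window=5, threshold=15):
    """Accumulate absolute differences over a sliding window of frames.

    Slow movers that vanish in consecutive-frame differencing become
    visible once several differences are summed; in the adaptive
    variant the window length would be tuned to the motion energy.
    """
    frames = np.asarray(frames, dtype=np.float32)
    diffs = np.abs(np.diff(frames[-window:], axis=0))  # |f_t - f_{t-1}|
    accumulated = diffs.sum(axis=0)
    return accumulated > threshold  # binary motion mask

def shi_tomasi_response(gray):
    """Minimum eigenvalue of the 2x2 structure tensor per pixel
    (the Shi-Tomasi 'good features to track' corner score)."""
    gy, gx = np.gradient(gray.astype(np.float32))
    ixx, iyy, ixy = gx * gx, gy * gy, gx * gy
    # trace/determinant give the eigenvalues of [[ixx, ixy], [ixy, iyy]]
    trace = ixx + iyy
    det = ixx * iyy - ixy * ixy
    disc = np.sqrt(np.maximum(trace * trace / 4 - det, 0))
    return trace / 2 - disc  # the smaller eigenvalue

# Toy usage: a bright 'participant' blob drifting across a dark room.
frames = []
for t in range(6):
    f = np.zeros((40, 40), dtype=np.float32)
    f[10:20, 5 + 3 * t : 15 + 3 * t] = 200.0
    frames.append(f)
mask = accumulated_frame_difference(frames, window=5, threshold=15)
```

Summing several consecutive differences is what lets slowly moving participants exceed the threshold where a single frame difference would not, which motivates the accumulation step; corners are then sought only inside the motion mask.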

2011 ◽  
Vol 271-273 ◽  
pp. 19-23 ◽  
Author(s):  
Jian Shu Gao ◽  
Tao Yang ◽  
Zhi Jing Yu

To remove unwanted feature points, this paper proposes a template matching algorithm based on pixel coherence and a fixed structure. Corner detection can extract image feature points that are robust to illumination variation and affine transformation; however, applying corner detection to a planar image also yields many useless points, a problem this paper identifies. Template matching is therefore used to discard these points and retain the best-matching ones. Finally, the paper compares the template image with the matched image. The experimental results show that the feature points selected by template matching remain accurate under illumination change, translation, and rotation, and are highly satisfactory.
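The filtering idea (keep only candidate corners whose neighbourhood actually matches a template) can be sketched with normalized cross-correlation in NumPy; the paper's pixel-coherence and fixed-structure details are not available from the abstract, so the patch size and acceptance threshold here are assumptions:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between one image patch and the
    template; close to 1.0 only for genuine matches, which is what
    lets template matching discard spurious corner points."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def filter_corners_by_template(image, corners, template, accept=0.9):
    """Keep only corner points whose surrounding patch matches the template."""
    h, w = template.shape
    kept = []
    for (r, c) in corners:
        patch = image[r : r + h, c : c + w]
        if patch.shape == template.shape and ncc(patch, template) >= accept:
            kept.append((r, c))
    return kept

# Toy usage: two candidate corners, only one of which sits on the pattern.
image = np.zeros((20, 20))
template = np.eye(5)            # diagonal 5x5 pattern
image[3:8, 3:8] = template      # genuine match at (3, 3)
corners = [(3, 3), (12, 12)]    # (12, 12) is a useless point on flat background
kept = filter_corners_by_template(image, corners, template)  # → [(3, 3)]
```

Because NCC subtracts the means and normalizes by the patch energies, the match score is insensitive to uniform brightness and contrast changes, which is consistent with the illumination robustness the abstract claims.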


Author(s):  
J. Li-Chee-Ming ◽  
C. Armenakis

This work demonstrates an approach to automatically initialize a visual model-based tracker, and to recover from lost tracking, without prior camera pose information. Such approaches are commonly referred to as tracking-by-detection. Previous tracking-by-detection techniques used either fiducials (i.e. landmarks or markers) or the object’s texture. The main contribution of this work is the development of a tracking-by-detection algorithm that is based solely on natural geometric features. A variant of geometric hashing, a model-to-image registration algorithm, is proposed that searches for a matching panoramic image in a database of synthetic panoramic images captured in a 3D virtual environment. The approach identifies corresponding features between the matched panoramic images; these correspondences are then used in a photogrammetric space resection to estimate the camera pose. The experiments apply this algorithm to initialize a model-based tracker in an indoor environment using the 3D CAD model of the building.
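The core geometric-hashing mechanism (quantised basis-relative coordinates stored in a table, then voting at query time) can be shown in a small 2D sketch. The paper's variant operates on synthetic panoramas; the point sets, quantisation step, and basis convention below are toy assumptions used only to illustrate the idea:

```python
import itertools
from collections import defaultdict

def quantize(x, y, step=0.25):
    return (round(x / step), round(y / step))

def basis_coords(p, b0, b1):
    """Coordinates of p in the frame defined by basis pair (b0, b1):
    b0 is the origin and (b1 - b0) the unit x-axis. These coordinates
    are invariant to translation, rotation, and scale."""
    ax, ay = b1[0] - b0[0], b1[1] - b0[1]
    norm2 = ax * ax + ay * ay
    dx, dy = p[0] - b0[0], p[1] - b0[1]
    # project onto the basis vector and its perpendicular
    return ((dx * ax + dy * ay) / norm2, (-dx * ay + dy * ax) / norm2)

def build_table(model_points):
    """Hash every model point under every ordered basis pair (offline)."""
    table = defaultdict(list)
    for b0, b1 in itertools.permutations(model_points, 2):
        for p in model_points:
            if p not in (b0, b1):
                table[quantize(*basis_coords(p, b0, b1))].append((b0, b1))
    return table

def vote(table, scene_points):
    """Pick one scene basis pair and vote for the model bases that
    explain the remaining scene points (online lookup)."""
    votes = defaultdict(int)
    b0, b1 = scene_points[0], scene_points[1]
    for p in scene_points[2:]:
        for basis in table.get(quantize(*basis_coords(p, b0, b1)), []):
            votes[basis] += 1
    return votes

# Toy usage: the scene is the model translated by (10, 4); translation
# leaves basis coordinates unchanged, so the matching basis collects votes.
model = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0), (3.0, 2.0)]
scene = [(x + 10.0, y + 4.0) for (x, y) in model]
votes = vote(build_table(model), scene)
```

The winning basis pair fixes the model-to-scene correspondence, after which the matched features can feed a pose-estimation step such as the space resection described above.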


2019 ◽  
Vol 59 (3) ◽  
pp. 277-284 ◽  
Author(s):  
Wu Weibing

Agricultural automation and intelligence have a wide range of connotations, involving navigation, imaging, modelling, control strategy and other engineering disciplines, and modern agricultural technology is applied across many engineering areas. The operating environment of agricultural vehicles is very complex: they frequently face obstacles, which hinders the intelligent operation of agricultural vehicles. Traditional obstacle detection mostly relies on detection algorithms with a limited field of view, making it difficult to achieve moving-target detection with panoramic vision. In this paper, the mean shift algorithm is selected to detect moving obstacles around intelligent agricultural vehicles, and adaptive colour fusion is introduced to optimize the algorithm and address the shortcomings of mean shift. To verify the improved algorithm and its application, video obtained by an intelligent agricultural vehicle is used in simulation experiments, and the best combination (- 0.8.0.2) is obtained for the unequal-spacing sampling method. During colour selection, the fusion coefficients need to be adjusted continually to improve the tracking accuracy of the algorithm. Comparative analysis with several different quantization methods further indicates that the HIS-360-level quantization method performs best.
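The tracking core, before any colour-fusion refinement, is the standard mean shift iteration: move a fixed window to the weighted centroid of a likelihood (back-projection) image until it converges. The adaptive colour fusion of the paper is not reproducible from the abstract, so this NumPy sketch shows only plain mean shift on a synthetic weight map:

```python
import numpy as np

def mean_shift(weights, window, start, iters=20, eps=0.5):
    """Shift a fixed-size window to the weighted centroid of the
    'weights' image (e.g. a colour back-projection) until it converges."""
    h, w = window
    r, c = start
    H, W = weights.shape
    for _ in range(iters):
        r0, c0 = int(max(r, 0)), int(max(c, 0))
        patch = weights[r0 : min(r0 + h, H), c0 : min(c0 + w, W)]
        total = patch.sum()
        if total == 0:
            break  # no evidence under the window; give up
        rows, cols = np.indices(patch.shape)
        # shift = weighted centroid minus the window centre
        dr = (rows * patch).sum() / total - (h - 1) / 2
        dc = (cols * patch).sum() / total - (w - 1) / 2
        if abs(dr) < eps and abs(dc) < eps:
            break
        r, c = r + dr, c + dc
    return int(round(r)), int(round(c))

# Toy usage: an obstacle appears as a blob of high back-projection
# weight; mean shift walks the window from a nearby guess onto it.
weights = np.zeros((50, 50))
weights[20:30, 30:40] = 1.0           # obstacle blob, top-left (20, 30)
top_left = mean_shift(weights, window=(10, 10), start=(15, 25))
```

In a colour-fusion variant, the weight image would be a coefficient-weighted blend of back-projections from several colour channels, which is where the continually adjusted coefficients mentioned above come in.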


2019 ◽  
Vol 28 (3) ◽  
pp. 1257-1267 ◽  
Author(s):  
Priya Kucheria ◽  
McKay Moore Sohlberg ◽  
Jason Prideaux ◽  
Stephen Fickas

Purpose: An important predictor of postsecondary academic success is an individual's reading comprehension skills. Postsecondary readers apply a wide range of behavioral strategies to process text for learning purposes. Currently, no tools exist to detect a reader's use of strategies. The primary aim of this study was to develop Read, Understand, Learn, & Excel, an automated tool designed to detect reading strategy use and explore its accuracy in detecting strategies when students read digital, expository text.
Method: An iterative design was used to develop the computer algorithm for detecting 9 reading strategies. Twelve undergraduate students read 2 expository texts that were equated for length and complexity. A human observer documented the strategies employed by each reader, whereas the computer used digital sequences to detect the same strategies. Data were then coded and analyzed to determine agreement between the 2 sources of strategy detection (i.e., the computer and the observer).
Results: Agreement between the computer- and human-coded strategies was 75% or higher for 6 out of the 9 strategies. Only 3 out of the 9 strategies (previewing content, evaluating amount of remaining text, and periodic review and/or iterative summarizing) had less than 60% agreement.
Conclusion: Read, Understand, Learn, & Excel provides proof of concept that a reader's approach to engaging with academic text can be objectively and automatically captured. Clinical implications and suggestions to improve the sensitivity of the code are discussed.
Supplemental Material: https://doi.org/10.23641/asha.8204786
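The agreement analysis in the Method can be sketched as a point-by-point comparison of the two coded sequences. The strategy labels and events below are hypothetical, invented only to show the computation; the study's actual coding scheme is not given in the abstract:

```python
def percent_agreement(human, computer):
    """Point-by-point agreement between two coded strategy sequences."""
    if len(human) != len(computer):
        raise ValueError("sequences must be coded over the same events")
    matches = sum(h == c for h, c in zip(human, computer))
    return 100.0 * matches / len(human)

# Hypothetical codes for one reader: each element is the strategy
# label assigned to one reading event by each source.
human    = ["highlight", "reread", "note", "preview", "reread", "note",
            "summarize", "highlight"]
computer = ["highlight", "reread", "note", "reread", "reread", "note",
            "summarize", "note"]
agreement = percent_agreement(human, computer)  # 6 of 8 events agree → 75.0
```

Running such a comparison per strategy, rather than over the pooled sequence, is what yields the per-strategy agreement figures reported in the Results.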


2021 ◽  
Vol 11 (13) ◽  
pp. 6016
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

For autonomous vehicles, it is critical to be aware of the driving environment to avoid collisions and drive safely. The recent evolution of convolutional neural networks has contributed significantly to accelerating the development of object detection techniques that enable autonomous vehicles to handle rapid changes in various driving environments. However, collisions in an autonomous driving environment can still occur due to undetected obstacles and various perception problems, particularly occlusion. Thus, we propose a robust object detection algorithm for environments in which objects are truncated or occluded, employing RGB images and light detection and ranging (LiDAR) bird’s eye view (BEV) representations. This structure combines independent detection results obtained in parallel through “you only look once” networks, using an RGB image and a height map converted from the BEV representation of LiDAR’s point cloud data (PCD). The region proposal of an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. A performance evaluation of the proposed scheme was performed using the KITTI vision benchmark suite dataset. The results demonstrate that the detection accuracy when PCD BEV representations are integrated is superior to that achieved with an RGB camera alone. In addition, robustness is improved by significantly enhancing detection accuracy even when the target objects are partially occluded when viewed from the front, which demonstrates that the proposed algorithm outperforms the conventional RGB-based model.
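The fusion step named here, non-maximum suppression over the boxes produced by the two parallel branches, can be sketched with a greedy NumPy implementation. The YOLO detectors themselves are not reproduced; the boxes, scores, and IoU threshold below are illustrative:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop neighbours that overlap it beyond the IoU threshold, repeat."""
    order = np.argsort(scores)[::-1]  # indices, best score first
    keep = []
    while len(order) > 0:
        best = order[0]
        keep.append(int(best))
        order = [i for i in order[1:]
                 if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

# Toy usage: one detection from the RGB branch and one from the LiDAR
# BEV branch land on the same pedestrian; NMS merges the pair.
boxes = [
    [100, 100, 150, 200],  # RGB detection
    [102, 98, 152, 198],   # LiDAR-BEV detection of the same object
    [300, 120, 340, 210],  # second object, seen by one branch only
]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores, iou_thresh=0.5)  # → [0, 2]
```

Pooling both branches' boxes before suppression is what lets an object missed or occluded in one modality survive via the other branch's detection, which is the robustness mechanism the abstract describes.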


Author(s):  
Songqi Han ◽  
Weibo Yu ◽  
Hongtao Yang ◽  
Shicheng Wan

2007 ◽  
Author(s):  
Desen Yin ◽  
Yuejin Zhao ◽  
Bin Wang ◽  
Qian Song ◽  
Rongrong Cheng
