Asynchronous event feature generation and tracking based on gradient descriptor for event cameras

2021 ◽  
Vol 18 (4) ◽  
pp. 172988142110270
Author(s):  
Ruoxiang Li ◽  
Dianxi Shi ◽  
Yongjun Zhang ◽  
Ruihao Li ◽  
Mingkun Wang

Recently, the event camera has become a popular and promising vision sensor in simultaneous localization and mapping (SLAM) and computer vision research owing to its advantages: low latency, high dynamic range, and high temporal resolution. Yet feature tracking with event cameras, a basic building block of feature-based SLAM systems, remains an open problem. In this article, we present a novel asynchronous event feature generation and tracking algorithm that operates directly on event streams to fully exploit the natural asynchronism of event cameras. The proposed algorithm consists of an event-corner detection unit, a descriptor construction unit, and an event feature tracking unit. The event-corner detection unit employs a fast, asynchronous corner detector to extract event-corners from event streams. For the descriptor construction unit, we propose a novel asynchronous gradient descriptor inspired by the scale-invariant feature transform (SIFT) descriptor, which enables quantitative measurement of the similarity between event feature pairs. Construction of the gradient descriptor can be decomposed into three stages: speed-invariant time surface maintenance and extraction, principal orientation calculation, and descriptor generation. The event feature tracking unit combines the constructed gradient descriptor with an event feature matching method to achieve asynchronous feature tracking. We implement the proposed algorithm in C++ and evaluate it on a public event dataset. The experimental results show that our method improves tracking accuracy and real-time performance compared with the state-of-the-art asynchronous event-corner tracker, without compromising feature tracking lifetime.
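To make the descriptor stage more concrete, the following is a minimal sketch of one possible realisation: an exponentially decaying time surface (a simpler stand-in for the paper's speed-invariant time surface) together with a single-histogram gradient descriptor rotated by a principal orientation. All sizes and constants (H, W, TAU, patch, n_bins) are assumptions for illustration, not the authors' values or code.

```python
import numpy as np

H, W = 180, 240                      # assumed sensor resolution
TAU = 0.05                           # assumed decay constant in seconds
last_t = np.full((H, W), -np.inf)    # timestamp of the most recent event per pixel

def update(x, y, t):
    """Record the latest event timestamp at pixel (x, y)."""
    last_t[y, x] = t

def time_surface(t_now):
    """Exponentially decayed surface: recent events near 1, stale pixels near 0."""
    return np.exp(-(t_now - last_t) / TAU)

def gradient_descriptor(surface, cx, cy, patch=16, n_bins=8):
    """SIFT-like descriptor: orientation histogram of time-surface gradients
    around an event-corner, rotated by the dominant gradient orientation."""
    half = patch // 2
    p = surface[cy - half:cy + half, cx - half:cx + half]
    gy, gx = np.gradient(p)
    mag, ang = np.hypot(gx, gy), np.arctan2(gy, gx)
    principal = ang.flat[np.argmax(mag)]               # principal orientation
    hist, _ = np.histogram((ang - principal) % (2 * np.pi),
                           bins=n_bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-9)        # L2-normalised descriptor
```

Descriptor similarity between event feature pairs can then be measured, for example, as the Euclidean distance between two such normalised histograms.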

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1475 ◽  
Author(s):  
Jingyun Duo ◽  
Long Zhao

Event cameras have many advantages over conventional frame-based cameras, such as high temporal resolution, low latency and high dynamic range. However, state-of-the-art event-based algorithms either require too much computation time or have poor accuracy. In this paper, we propose an asynchronous real-time corner extraction and tracking algorithm for an event camera. Our primary motivation is to enhance the accuracy of corner detection and tracking while ensuring computational efficiency. Firstly, according to the polarities of the events, a simple yet effective filter is applied to construct two restrictive Surfaces of Active Events (SAEs), named RSAE+ and RSAE−, which accurately represent high-contrast patterns while filtering out noise and redundant events. Afterwards, a new coarse-to-fine corner extractor is proposed to extract corner events efficiently and accurately. Finally, a space-, time- and velocity-direction-constrained data association method is presented to realize corner event tracking: a newly arriving corner event is associated with the latest active corner in its neighborhood that satisfies the velocity direction constraint. The experiments are run on a standard event camera dataset, and the results indicate that our method achieves excellent corner detection and tracking performance. Moreover, the proposed method can process more than 4.5 million events per second, showing promising potential for real-time computer vision applications.
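A hedged sketch of the first stage follows: two polarity-separated surfaces of active events (stand-ins for RSAE+ and RSAE−) updated through a simple refractory filter that drops redundant events. The resolution and filter period are assumed values, and the coarse-to-fine corner test and data association stages are not shown.

```python
import numpy as np

H, W = 180, 240                                    # assumed sensor resolution
sae = {1: np.zeros((H, W)), -1: np.zeros((H, W))}  # stand-ins for RSAE+ / RSAE-
REFRACTORY = 1e-3                                  # assumed filter period (s)

def feed(x, y, t, polarity):
    """Update the restrictive SAE for this polarity; drop redundant events."""
    surface = sae[polarity]
    if t - surface[y, x] < REFRACTORY:
        return False     # treated as noise / redundant, not passed on
    surface[y, x] = t
    return True          # candidate event for the coarse-to-fine corner test
```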


2011 ◽  
Vol 65 ◽  
pp. 497-502
Author(s):  
Yan Wei Wang ◽  
Hui Li Yu

A feature matching algorithm based on the wavelet transform and SIFT is proposed in this paper. First, a biorthogonal wavelet transform is used to decompose the medical image into layers and to reconstruct the processed image. Then SIFT (Scale-Invariant Feature Transform) is applied to extract key points. Experimental results show that our algorithm compares favorably in terms of high compression ratio, fast matching speed and low image storage requirements, especially under tilt and rotation conditions.
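The pipeline described above can be illustrated with a short sketch: a biorthogonal wavelet decomposition and reconstruction followed by SIFT keypoint extraction and ratio-test matching. The wavelet family, decomposition level and ratio-test threshold are assumed parameters, not values from the paper.

```python
import cv2
import numpy as np
import pywt

def preprocess(img, wavelet="bior2.2", level=2):
    """Decompose with a biorthogonal wavelet, then reconstruct the image."""
    coeffs = pywt.wavedec2(img.astype(np.float32), wavelet, level=level)
    rec = pywt.waverec2(coeffs, wavelet)
    return np.clip(rec, 0, 255).astype(np.uint8)

def match(img_a, img_b, ratio=0.75):
    """SIFT keypoints plus Lowe ratio test between two preprocessed images."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(preprocess(img_a), None)
    kp_b, des_b = sift.detectAndCompute(preprocess(img_b), None)
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    return [m for m, n in knn if m.distance < ratio * n.distance]
```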


2010 ◽  
Vol 9 (4) ◽  
pp. 29-34 ◽  
Author(s):  
Achim Weimert ◽  
Xueting Tan ◽  
Xubo Yang

In this paper, we present a novel feature detection approach designed for mobile devices, showing optimized solutions for both detection and description. It is based on FAST (Features from Accelerated Segment Test) and named 3D FAST. Being robust, scale-invariant and easy to compute, it is a candidate for augmented reality (AR) applications running on low-performance platforms. FAST, which uses simple calculations and machine learning, is a feature detection algorithm known to be efficient but not very robust, and it lacks scale information. Our approach relies on gradient images calculated for different scale levels, on which a modified FAST algorithm operates to obtain the values of the corner response function. We combine the detection with an adapted version of SURF (Speeded-Up Robust Features) descriptors, providing a system with all the means to implement feature matching and object detection. Experimental evaluation on a Symbian OS device using a standard image set, and comparison with SURF using its Hessian matrix-based detector, is included in this paper, showing improvements in speed (compared to SURF) and robustness (compared to FAST).
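The core idea of running FAST on gradient images at several scale levels can be sketched as below. This is only an approximation of the described detector (scale steps, thresholds and the Sobel-based gradient image are assumptions), and the adapted SURF description stage is omitted.

```python
import cv2

def detect_multiscale(gray, n_scales=4, scale_step=1.5, fast_thresh=20):
    """Run FAST on gradient images at several scales and map corners back
    to full-resolution coordinates."""
    fast = cv2.FastFeatureDetector_create(threshold=fast_thresh)
    keypoints = []
    for s in range(n_scales):
        factor = scale_step ** s
        small = cv2.resize(gray, None, fx=1.0 / factor, fy=1.0 / factor)
        gx = cv2.Sobel(small, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(small, cv2.CV_32F, 0, 1)
        grad = cv2.convertScaleAbs(cv2.magnitude(gx, gy))  # 8-bit gradient image
        for kp in fast.detect(grad, None):
            keypoints.append(cv2.KeyPoint(kp.pt[0] * factor, kp.pt[1] * factor,
                                          kp.size * factor))
    return keypoints
```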


2021 ◽  
Vol 8 (3) ◽  
pp. 1012-1026
Author(s):  
Adam Raihan Fadhlurahman

At present, the entire world is affected by the COVID-19 virus, which prevents people from travelling abroad easily. However, in this technologically advanced era, all information can be presented digitally, in the form of text and images, and can also be displayed as three-dimensional objects with the help of Augmented Reality. This research uses Augmented Reality technology as a tool for recognising landmarks of Southeast Asian countries, so that the displayed objects become more attractive and information about each country's landmarks can be obtained easily. In this study, the Augmented Reality application was built using the FAST Corner Detection (FCD) algorithm and Natural Feature Tracking (NFT) to detect markers. The application was tested on three Android smartphones; at tilt angles of 20°–90°, all three smartphones could detect the marker with their cameras and display the three-dimensional object corresponding to the selected landmark on the smartphone screen. The measured maximum detection distance averaged approximately ±75 cm, and the minimum detection distance was approximately ±10 cm.


2021 ◽  
Author(s):  
Dominik Fahrner ◽  
James Lea ◽  
Stephen Brough ◽  
Jakob Abermann

Greenland's tidewater glaciers (TWG) have been retreating since the mid-1990s, contributing to mass loss from the Greenland Ice Sheet and sea level rise. Satellite imagery has been widely used to investigate TWG behaviour and determine the response of TWGs to climate. However, multi-day revisit times make it difficult to determine short-term processes such as calving and shorter-term velocity changes that may condition this.

Here we present velocity, calving and proglacial plume data derived from hourly time-lapse images of Narsap Sermia, SW Greenland for the period July 2017 to June 2020 (n=13,513). Raw images were orthorectified using the Image GeoRectification And Feature Tracking toolbox (ImGRAFT; Messerli & Grinsted, 2015) using a smoothed ArcticDEM tile from 2016 (RMSE=44.4px). TWG flow velocities were determined using ImGRAFT feature tracking, with post-processing adjusting for varying time intervals between image acquisitions (if >1 hour) and removing outliers (>2× the mean). The high temporal resolution of the imagery also enabled the manual mapping of proglacial plume sizes from the orthorectified images and the recording of individual calving events by visually comparing images.

Results show a total retreat of approximately 700 m, with a general velocity increase from ~15 m/d to ~20 m/d over the investigated time period and highly variable hourly velocities (±12 m/d). The number of calving events and plume sizes remain relatively stable from year to year throughout the observation period. However, later in the record plumes appear earlier in the year and the size of calved icebergs increases significantly, which suggests a change in calving behaviour.
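A minimal Python sketch of the velocity post-processing rule described above (not part of the ImGRAFT toolbox, which is MATLAB): displacements are converted to m/d using the actual interval between image acquisitions, and velocities exceeding twice the mean are discarded. Variable names and the use of NumPy are assumptions for illustration.

```python
import numpy as np

def velocities_m_per_day(displacements_m, timestamps_s):
    """displacements_m[i]: displacement tracked between images i and i+1 (metres);
    timestamps_s: acquisition times in seconds (hourly, with occasional gaps)."""
    dt_days = np.diff(np.asarray(timestamps_s, dtype=float)) / 86400.0
    v = np.asarray(displacements_m, dtype=float) / dt_days  # m/d, gap-corrected
    keep = v <= 2.0 * np.nanmean(v)                         # drop outliers > 2x mean
    return v[keep], keep
```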


2020 ◽  
Vol 6 (4) ◽  
pp. 25
Author(s):  
Nahlah Algethami ◽  
Sam Redfern

We propose a tracking-by-detection algorithm to track the movements of meeting participants from an overhead camera. An advantage of using overhead cameras is that all objects can typically be seen clearly, with little occlusion; however, detecting people from a wide-angle overhead view also poses challenges, such as people's appearance changing significantly with their position in the wide-angle image, and a general lack of strong image features. Our experimental datasets do not include empty meeting rooms, which means that standard motion-based detection techniques (e.g., background subtraction or consecutive frame differencing) struggle, since there is no prior knowledge for a background model. Additionally, standard techniques may perform poorly when there is a wide range of movement behaviours (e.g., periods of no movement and periods of fast movement), as is often the case in meetings. Our algorithm uses a novel coarse-to-fine detection and tracking approach, combining motion detection using adaptive accumulated frame differencing (AAFD) with Shi-Tomasi corner detection. We present quantitative and qualitative evaluation which demonstrates the robustness of our method for tracking people in environments where object features are not clear and have a colour similar to the background. We show that our approach achieves excellent performance in terms of the multiple object tracking accuracy (MOTA) metrics, and that it is particularly robust to initialisation differences when compared with baseline and state-of-the-art trackers. Using the Online Tracking Benchmark (OTB) videos we also demonstrate that our tracker is very strong in the presence of background clutter, deformation and illumination variation.
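The coarse-to-fine combination can be illustrated with a short sketch: adaptively accumulated frame differences localise motion coarsely, and Shi-Tomasi corners refine detection inside that region. The accumulation weight and threshold are assumed values, and this is not the authors' implementation.

```python
import cv2
import numpy as np

def track_step(accum, prev_gray, gray, alpha=0.1, motion_thresh=15):
    """One coarse-to-fine step: accumulate frame differences adaptively, find
    the coarse motion region, then refine with Shi-Tomasi corners inside it."""
    diff = cv2.absdiff(gray, prev_gray).astype(np.float32)
    accum = (1.0 - alpha) * accum + alpha * diff               # adaptive accumulation
    mask = (accum > motion_thresh).astype(np.uint8) * 255
    x, y, w, h = cv2.boundingRect(mask)                        # coarse motion region
    if w == 0 or h == 0:
        return accum, None, None                               # no motion detected
    corners = cv2.goodFeaturesToTrack(gray[y:y + h, x:x + w],  # Shi-Tomasi refinement
                                      maxCorners=50, qualityLevel=0.01, minDistance=5)
    return accum, (x, y, w, h), corners
```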


2016 ◽  
Vol 2016 ◽  
pp. 1-12 ◽  
Author(s):  
Qiang Zhang ◽  
Daokui Qu ◽  
Fang Xu ◽  
Kai Jia ◽  
Xueying Sun

This paper introduces a dual-layer density estimation-based architecture for multiple object instance detection in robot inventory management applications. The approach consists of raw scale-invariant feature transform (SIFT) feature matching and key point projection. The dominant scale ratio and a reference clustering threshold are estimated using the first layer of density estimation. A cascade of filters is applied after feature template reconstruction and refined feature matching to eliminate false matches. Before the second layer of density estimation, the adaptive threshold is finalized by multiplying the reference value by an empirically identified coefficient. Adaptive threshold-based grid voting is applied to find all candidate object instances. False detections are eliminated using a final geometric verification based on Random Sample Consensus (RANSAC). The detection results of the proposed approach are evaluated on a self-built dataset collected in a supermarket. The results demonstrate that the approach provides high robustness and low latency for inventory management applications.
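The voting and verification stages can be illustrated with the following hedged sketch. The cell size, coefficient and the use of a homography model are assumptions rather than the paper's values; it is intended only to show how adaptive-threshold grid voting and RANSAC-based verification fit together.

```python
import cv2
import numpy as np

def candidate_instances(scene_pts, template_pts, scene_shape, cell=60, coeff=0.6):
    """Grid voting over matched SIFT keypoints, an adaptive threshold on the
    vote counts, and RANSAC homography verification per candidate cell."""
    rows, cols = scene_shape[0] // cell + 1, scene_shape[1] // cell + 1
    grid = np.zeros((rows, cols), dtype=int)
    for x, y in scene_pts:
        grid[int(y) // cell, int(x) // cell] += 1       # grid voting
    thresh = max(coeff * grid.max(), 4)                 # adaptive threshold
    instances = []
    for gy, gx in np.argwhere(grid >= thresh):
        idx = [i for i, (x, y) in enumerate(scene_pts)
               if int(y) // cell == gy and int(x) // cell == gx]
        src = np.float32([template_pts[i] for i in idx]).reshape(-1, 1, 2)
        dst = np.float32([scene_pts[i] for i in idx]).reshape(-1, 1, 2)
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is not None and int(inliers.sum()) >= 4:   # geometric verification
            instances.append(H)
    return instances
```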

