The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM

2017 ◽  
Vol 36 (2) ◽  
pp. 142-149 ◽  
Author(s):  
Elias Mueggler ◽  
Henri Rebecq ◽  
Guillermo Gallego ◽  
Tobi Delbruck ◽  
Davide Scaramuzza

New vision sensors, such as the dynamic and active-pixel vision sensor (DAVIS), incorporate a conventional global-shutter camera and an event-based sensor in the same pixel array. These sensors have great potential for high-speed robotics and computer vision because they allow us to combine the benefits of conventional cameras with those of event-based sensors: low latency, high temporal resolution, and very high dynamic range. However, new algorithms are required to exploit the sensor characteristics and cope with its unconventional output, which consists of a stream of asynchronous brightness changes (called "events") and synchronous grayscale frames. For this purpose, we present and release a collection of datasets captured with a DAVIS in a variety of synthetic and real environments, which we hope will motivate research on new algorithms for high-speed and high-dynamic-range robotics and computer-vision applications. In addition to global-shutter intensity images and asynchronous events, we provide inertial measurements and ground-truth camera poses from a motion-capture system. The latter allows quantitative comparison of the pose accuracy of ego-motion estimation algorithms. All the data are released both as standard text files and as binary files (i.e., rosbag). This paper provides an overview of the available data and describes a simulator that we release open-source to create synthetic event-camera data.
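The text-file release stores the event stream one event per line. A minimal Python reader, written as a sketch under the assumption that each line follows the common `timestamp x y polarity` layout used by event-camera text exports (the `Event` class and `parse_events` helper are illustrative names, not part of the released tools):

```python
from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class Event:
    t: float       # timestamp in seconds
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # 1 = brightness increase, 0 = brightness decrease


def parse_events(lines: Iterable[str]) -> List[Event]:
    """Parse text lines of the form 'timestamp x y polarity'."""
    events = []
    for line in lines:
        parts = line.split()
        if len(parts) != 4:
            continue  # skip headers or malformed lines
        t, x, y, p = parts
        events.append(Event(float(t), int(x), int(y), int(p)))
    return events
```

Because events arrive asynchronously, downstream code typically consumes this list in timestamp order rather than in fixed-rate frames.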

2021 ◽  
Author(s):  
Shixiong Zhang ◽  
Wenmin Wang

<div>Event-based vision is a novel bio-inspired sensing paradigm that has attracted the interest of many researchers. As a neuromorphic sensor, it differs fundamentally from traditional frame-based cameras and offers advantages that they cannot match, e.g., high temporal resolution, high dynamic range (HDR), sparse output, and minimal motion blur. Recently, many computer vision approaches have been proposed with demonstrated success. However, general methods that broaden the range of applications of event-based vision are still lacking. To effectively bridge the gap between conventional computer vision and event-based vision, in this paper we propose an adaptable framework for object detection in event-based vision.</div>




Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1475 ◽  
Author(s):  
Jingyun Duo ◽  
Long Zhao

Event cameras have many advantages over conventional frame-based cameras, such as high temporal resolution, low latency and high dynamic range. However, state-of-the-art event-based algorithms either require too much computation time or have poor accuracy. In this paper, we propose an asynchronous real-time corner extraction and tracking algorithm for an event camera. Our primary motivation is to enhance the accuracy of corner detection and tracking while ensuring computational efficiency. Firstly, according to the polarities of the events, a simple yet effective filter is applied to construct two restrictive Surfaces of Active Events (SAEs), named RSAE+ and RSAE−, which accurately represent high-contrast patterns while filtering out noise and redundant events. Afterwards, a new coarse-to-fine corner extractor is proposed to extract corner events efficiently and accurately. Finally, a space, time and velocity-direction constrained data association method is presented to realize corner event tracking: a newly arriving corner event is associated with the latest active corner in its neighborhood that satisfies the velocity-direction constraint. The experiments are run on a standard event camera dataset, and the results indicate that our method achieves excellent corner detection and tracking performance. Moreover, the proposed method can process more than 4.5 million events per second, showing promising potential in real-time computer vision applications.
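A Surface of Active Events is, in essence, one timestamp map per polarity: each pixel stores the time of its most recent event. The sketch below illustrates that core data structure; the refractory-period check is only a hedged stand-in for the paper's restrictive-SAE filtering, whose exact construction is described in the paper itself (all names here are illustrative):

```python
def make_saes(width, height):
    """One timestamp map per polarity: SAE+ (polarity 1) and SAE- (polarity 0).

    Each map stores, per pixel, the timestamp (seconds) of the most
    recent event of that polarity; 0.0 means no event seen yet.
    """
    return {p: [[0.0] * width for _ in range(height)] for p in (1, 0)}


def update_sae(saes, t, x, y, polarity, refractory=0.0):
    """Record event (t, x, y, polarity) on the matching SAE.

    A simple refractory-period filter drops an event arriving too soon
    after the previous event at the same pixel and polarity, a crude
    proxy for suppressing redundant events.  Returns True if the event
    was kept, False if it was filtered out.
    """
    sae = saes[polarity]
    if sae[y][x] > 0.0 and t - sae[y][x] < refractory:
        return False  # redundant: pixel fired again within the refractory window
    sae[y][x] = t
    return True
```

Corner detectors then operate on local patches of these timestamp maps, looking for patterns where recent timestamps form a corner-like contour.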


2009 ◽  
Vol 56 (3) ◽  
pp. 1069-1075 ◽  
Author(s):  
Stuart Kleinfelder ◽  
Shiuh-Hua Wood Chiang ◽  
Wei Huang ◽  
Ashish Shah ◽  
Kris Kwiatkowski

2012 ◽  
Author(s):  
Francisco Jiménez-Garrido ◽  
José Fernández-Pérez ◽  
Cayetana Utrera ◽  
José Ma. Muñoz ◽  
Ma. Dolores Pardo ◽  
...  
