AFOD: An Adaptable Framework for Object Detection in Event-based Vision

2021 ◽  
Author(s):  
Shixiong Zhang ◽  
Wenmin Wang

Event-based vision is a novel bio-inspired vision paradigm that has attracted the interest of many researchers. As a neuromorphic sensor, the event camera differs from traditional frame-based cameras and offers advantages they cannot match, e.g., high temporal resolution, high dynamic range (HDR), sparse output, and minimal motion blur. Recently, many computer vision approaches have been proposed with demonstrated success. However, general methods for broadening the scope of application of event-based vision are still lacking. To effectively bridge the gap between conventional computer vision and event-based vision, in this paper we propose an adaptable framework for object detection in event-based vision.
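The abstract does not specify which intermediate representation the framework uses to hand events to a detector, so the following is only a minimal illustrative sketch of a common bridging step: accumulating the asynchronous event stream into a fixed-size, polarity-separated frame that a conventional frame-based object detector can consume. The function name `events_to_frame` and its parameters are assumptions, not the authors' design.

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate a batch of events into a 2-channel count frame.

    `events` is an (N, 4) array of (timestamp, x, y, polarity) rows with
    polarity in {-1, +1}.  ON and OFF events are counted into separate
    channels so a frame-based detector can be applied to the result.
    """
    frame = np.zeros((2, height, width), dtype=np.float32)
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    ch = (events[:, 3] > 0).astype(int)   # channel 1 for ON, 0 for OFF
    np.add.at(frame, (ch, y, x), 1.0)     # per-pixel event counts
    return frame
```

A detector trained on such frames then slots into an otherwise conventional frame-based detection pipeline.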


2017 ◽  
Vol 36 (2) ◽  
pp. 142-149 ◽  
Author(s):  
Elias Mueggler ◽  
Henri Rebecq ◽  
Guillermo Gallego ◽  
Tobi Delbruck ◽  
Davide Scaramuzza

New vision sensors, such as the dynamic and active-pixel vision sensor (DAVIS), incorporate a conventional global-shutter camera and an event-based sensor in the same pixel array. These sensors have great potential for high-speed robotics and computer vision because they allow us to combine the benefits of conventional cameras with those of event-based sensors: low latency, high temporal resolution, and very high dynamic range. However, new algorithms are required to exploit the sensor characteristics and cope with its unconventional output, which consists of a stream of asynchronous brightness changes (called “events”) and synchronous grayscale frames. For this purpose, we present and release a collection of datasets captured with a DAVIS in a variety of synthetic and real environments, which we hope will motivate research on new algorithms for high-speed and high-dynamic-range robotics and computer-vision applications. In addition to global-shutter intensity images and asynchronous events, we provide inertial measurements and ground-truth camera poses from a motion-capture system. The latter allows comparing the pose accuracy of ego-motion estimation algorithms quantitatively. All the data are released both as standard text files and binary files (i.e. rosbag). This paper provides an overview of the available data and describes a simulator that we release open-source to create synthetic event-camera data.
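As a rough illustration of how such a text release might be consumed, the snippet below loads an event stream and ground-truth poses with NumPy. The file names and the column order (timestamp, x, y, polarity for events) are assumptions based on common practice for datasets of this kind; the dataset's own documentation is authoritative.

```python
import numpy as np

# One event per line, assumed column order: timestamp [s], x, y, polarity.
events = np.loadtxt("events.txt")           # shape (N, 4)
t, x, y, p = events.T

# Ground-truth poses from the motion-capture system, assumed column order:
# timestamp, position (x, y, z), orientation quaternion (qx, qy, qz, qw).
poses = np.loadtxt("groundtruth.txt")       # hypothetical file name

print(f"{len(events)} events spanning {t[-1] - t[0]:.3f} s, "
      f"{len(poses)} ground-truth poses")
```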


Author(s):  
Andrew Gothard ◽  
Daniel Jones ◽  
Andre Green ◽  
Michael Torrez ◽  
Alessandro Cattaneo ◽  
...  

Event-driven neuromorphic imagers have a number of attractive properties, including low power consumption, high dynamic range, the ability to detect fast events, low memory consumption, and low bandwidth requirements. One of the biggest challenges with using event-driven imagery is that the field of event data processing is still embryonic. In contrast, decades of effort have been invested in the analysis of frame-based imagery. Hybrid approaches for applying established frame-based analysis techniques to event-driven imagery have been studied since event-driven imagers came into existence. However, the process of forming frames from event-driven imagery has not been studied in detail. This work presents a principled digital coded-exposure approach for forming frames from event-driven imagery that is inspired by the physics exploited in a conventional camera featuring a shutter. The technique described in this work provides a fundamental tool for understanding the temporal information content that contributes to the formation of a frame from event-driven imagery data. Event-driven imagery allows arbitrary virtual digital shutter functions to be applied to form the final frame on a pixel-by-pixel basis, giving careful control over the spatio-temporal information captured in the frame. Furthermore, unlike a conventional physical camera, event-driven imagery can be formed into any variety of possible frames in post-processing after the data is captured, and the coded-exposure virtual shutter functions can assume arbitrary values, including positive, negative, real, and complex weights. The coded-exposure approach also enables applications of industrial interest, such as digital stroboscopy, without any additional hardware. The ability to form frames from event-driven imagery in a principled manner opens up new possibilities for applying conventional frame-based image-processing techniques to event-driven imagery.
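A minimal sketch of the general idea, under the assumption that every event contributes to its pixel a weight given by a virtual shutter function evaluated at the event's timestamp; this is an illustrative reconstruction rather than the authors' exact formulation, and the example shutters (`box`, `strobe`) are hypothetical.

```python
import numpy as np

def coded_exposure_frame(events, shutter, height, width):
    """Form a frame from events with a virtual coded-exposure shutter.

    `events`  : (N, 4) array of (timestamp, x, y, polarity), polarity in {-1, +1}.
    `shutter` : callable mapping a timestamp to a weight; the weight may be
                positive, negative, real, or complex (the virtual shutter).
    Each event deposits shutter(t) * polarity at its pixel; summing these
    contributions over the exposure window yields the coded frame.
    """
    frame = np.zeros((height, width), dtype=np.complex128)
    for t, x, y, p in events:
        frame[int(y), int(x)] += shutter(t) * p
    return frame

# A conventional "box" shutter, open from t0 to t1 ...
box = lambda t, t0=0.0, t1=0.01: 1.0 if t0 <= t < t1 else 0.0
# ... versus a stroboscopic shutter sampling at frequency f with a short duty
# cycle, a software-only analogue of digital stroboscopy.
strobe = lambda t, f=100.0, duty=0.1: 1.0 if (t * f) % 1.0 < duty else 0.0
```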


Author(s):  
Param Hanji ◽  
Muhammad Z. Alam ◽  
Nicola Giuliani ◽  
Hu Chen ◽  
Rafał K. Mantiuk

Benchmark datasets used for testing computer vision (CV) methods often contain little variation in illumination. Methods that perform well on these datasets have been observed to fail under the challenging illumination conditions encountered in the real world, in particular when the dynamic range of a scene is high. We present a new dataset for evaluating CV methods in challenging illumination conditions such as low light, high dynamic range, and glare. The main feature of the dataset is that each scene has been captured under all of the adversarial illumination conditions. Moreover, each scene includes an additional reference condition with uniform illumination, which can be used to automatically generate labels for the tested CV methods. We demonstrate the usefulness of the dataset in a preliminary study by evaluating the performance of popular face detection, optical flow, and object detection methods under adversarial illumination conditions. We further assess whether the performance of these applications can be improved if a different transfer function is used.
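The abstract does not name the transfer functions that were compared; purely as an illustration of the idea, the sketch below contrasts a standard gamma encoding with a logarithmic mapping applied to linear HDR pixel values before they are handed to a detector. The function name and constants are assumptions.

```python
import numpy as np

def apply_transfer(hdr_linear, mode="gamma", gamma=2.2):
    """Map linear HDR radiance to display-referred input for a CV method.

    Illustrates swapping transfer functions: standard gamma encoding versus
    a logarithmic mapping that compresses highlights more aggressively.
    Values are normalised to [0, 1] first.
    """
    img = hdr_linear / max(float(hdr_linear.max()), 1e-8)
    if mode == "gamma":
        return np.power(img, 1.0 / gamma)
    if mode == "log":
        return np.log1p(1000.0 * img) / np.log1p(1000.0)
    raise ValueError(f"unknown transfer function: {mode}")
```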


2021 ◽  
Vol 15 ◽  
Author(s):  
Lakshmi Annamalai ◽  
Anirban Chakraborty ◽  
Chetan Singh Thakur

Event-based cameras are novel bio-inspired sensors that asynchronously record changes in illumination in the form of events. This principle results in significant advantages over conventional cameras, such as low power consumption, high dynamic range, and no motion blur. Moreover, by design, such cameras encode only the relative motion between the scene and the sensor, not the static background, which yields a very sparse data structure. In this paper, we leverage these advantages of an event camera for a critical vision application: video anomaly detection. We propose an anomaly detection solution in the event domain with a conditional Generative Adversarial Network (cGAN) built from sparse submanifold convolution layers. Video analytics tasks such as anomaly detection depend on the motion history at each pixel. To enable this, we also put forward a generic unsupervised deep learning solution for learning a novel memory surface, termed the Deep Learning (DL) memory surface. The DL memory surface encodes the temporal information readily available from these sensors while retaining the sparsity of the event data. Since there is no existing dataset for anomaly detection in the event domain, we also provide an anomaly detection event dataset with a set of anomalies. We empirically validate our anomaly detection architecture, composed of sparse convolutional layers, on the proposed dataset and on a publicly available online dataset. Careful analysis of the anomaly detection network reveals that the presented method achieves a massive reduction in computational complexity with good performance compared to previous state-of-the-art conventional frame-based anomaly detection networks.
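The DL memory surface itself is learned, so it cannot be reproduced from the abstract alone. As a point of reference, the sketch below shows the conventional hand-crafted alternative it improves upon: an exponentially decaying time surface that encodes per-pixel motion history from the most recent event timestamps. The decay constant `tau` and the function name are illustrative assumptions.

```python
import numpy as np

def exponential_time_surface(events, height, width, tau=50e-3):
    """Hand-crafted per-pixel motion-history encoding.

    Not the learned DL memory surface from the paper -- just the usual
    baseline: each pixel stores exp(-(t_now - t_last) / tau), where
    t_last is the timestamp of the most recent event at that pixel.
    """
    t_last = np.full((height, width), -np.inf)
    for t, x, y, p in events:
        t_last[int(y), int(x)] = t
    t_now = events[-1][0] if len(events) else 0.0
    return np.exp(-(t_now - t_last) / tau)   # pixels never hit decay to 0
```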


2019 ◽  
Vol 9 (21) ◽  
pp. 4658
Author(s):  
Ho-Hyoung Choi ◽  
Hyun-Soo Kang ◽  
Byoung-Ju Yun

One of the significant qualities of human vision that differentiates it from computer vision is so-called attentional control: the innate ability of our eyes to select which visual stimuli to pay attention to at any moment in time. In this sense, the visual salience detection model, which is designed to simulate how the human visual system (HVS) perceives objects and scenes, is widely used for performing multiple vision tasks. This model is also in high demand in the tone mapping of high dynamic range images (HDRIs). Another distinct quality of the HVS is that our eyes blink and adjust brightness when objects are in their sight. Likewise, HDR imaging is a technique in which a camera captures an object several times by repeatedly opening and closing its iris, which is referred to as multiple exposures. In this way, computer vision is able to control brightness and depict a wide range of light intensities; HDRIs are the product of HDR imaging. This article proposes a novel tone mapping method that uses CCH-based saliency-aware weighting and edge-aware weighting to efficiently detect image salience information in the given HDRIs. The two weighting methods are combined with a guided filter to produce a modified guided image filter (MGIF). The MGIF splits an image into a base layer and a detail layer, corresponding to the two components of an image: illumination and reflection, respectively. The base layer is used for global tone mapping, compressing the dynamic range of the HDRI while preserving the sharp edges of objects, which markedly reduces halos in the results. The proposed approach also has several distinct advantages: discriminative operation, tolerance to variation in image size, and minimal parameter tuning. According to the experimental results, the proposed method improves on its existing counterparts in subjective and quantitative quality as well as color reproduction.
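The sketch below shows the base/detail tone-mapping pipeline in its simplest form, with a plain box filter standing in for the saliency-aware and edge-aware MGIF described in the article; the compression factor and blur radius are illustrative assumptions, and none of the paper's weighting scheme is reproduced.

```python
import numpy as np

def tone_map_base_detail(hdr_lum, compress=0.4, blur_radius=8):
    """Simplified base/detail tone mapping in the log domain.

    log luminance -> base (smoothed) + detail (residual); compress the
    base layer, keep the detail layer, then map back.  A separable box
    blur is used here as a crude placeholder for the edge-aware MGIF.
    """
    log_l = np.log1p(hdr_lum)
    k = 2 * blur_radius + 1
    kernel = np.ones(k) / k
    blur_rows = lambda a: np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="same"), 1, a)
    base = blur_rows(blur_rows(log_l).T).T      # horizontal then vertical pass
    detail = log_l - base
    return np.expm1(compress * base + detail)   # compressed base + full detail
```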


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1475 ◽  
Author(s):  
Jingyun Duo ◽  
Long Zhao

Event cameras have many advantages over conventional frame-based cameras, such as high temporal resolution, low latency, and high dynamic range. However, state-of-the-art event-based algorithms either require too much computation time or have poor accuracy. In this paper, we propose an asynchronous real-time corner extraction and tracking algorithm for an event camera. Our primary motivation is to enhance the accuracy of corner detection and tracking while ensuring computational efficiency. Firstly, according to the polarities of the events, a simple yet effective filter is applied to construct two restrictive Surfaces of Active Events (SAEs), named RSAE+ and RSAE−, which accurately represent high-contrast patterns while filtering out noise and redundant events. Afterwards, a new coarse-to-fine corner extractor is proposed to extract corner events efficiently and accurately. Finally, a data association method constrained by space, time, and velocity direction is presented to realize corner event tracking: a newly arriving corner event is associated with the latest active corner in its neighborhood that satisfies the velocity direction constraint. The experiments are run on a standard event camera dataset, and the results indicate that our method achieves excellent corner detection and tracking performance. Moreover, the proposed method can process more than 4.5 million events per second, showing promising potential for real-time computer vision applications.
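The abstract does not specify the filter used to build RSAE+ and RSAE−, so the sketch below assumes a simple refractory-style rule (an event is kept only if it arrives sufficiently later than the previous same-polarity event at that pixel) purely to illustrate how a polarity-separated Surface of Active Events could be maintained ahead of the corner extractor. The class name and refractory period are assumptions.

```python
import numpy as np

class RestrictiveSAE:
    """Polarity-separated Surface of Active Events with a simple filter.

    The exact RSAE+/RSAE- filtering rule is not given in the abstract;
    the refractory period used here is an illustrative stand-in.
    """

    def __init__(self, height, width, refractory=1e-3):
        self.sae = np.zeros((2, height, width))   # channel 0: OFF, 1: ON
        self.refractory = refractory

    def update(self, t, x, y, polarity):
        ch = 1 if polarity > 0 else 0
        if t - self.sae[ch, y, x] >= self.refractory:
            self.sae[ch, y, x] = t
            return True    # event kept; hand it to the corner extractor
        return False       # treated as noise / redundant and discarded
```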


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Ratnajit Mukherjee ◽  
Maximino Bessa ◽  
Pedro Melo-Pinto ◽  
Alan Chalmers
