What Can Neuromorphic Event-Driven Precise Timing Add to Spike-Based Pattern Recognition?

2015 ◽  
Vol 27 (3) ◽  
pp. 561-593 ◽  
Author(s):  
Himanshu Akolkar ◽  
Cedric Meyer ◽  
Xavier Clady ◽  
Olivier Marre ◽  
Chiara Bartolozzi ◽  
...  

This letter introduces a study to precisely measure what an increase in spike timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings currently underlies almost all spike-based modeling of biological visual systems. The use of images naturally leads to artificial, incorrect, and redundant spike timings and, more importantly, contradicts biological findings indicating that visual processing is massively parallel and asynchronous, with high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input, producing output that is optimally sparse in space and time: each pixel emits a precisely timed event only if new (previously unknown) information is available (event based). This letter uses the high-temporal-resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks, specifically when reaching the conventional range of machine vision acquisition frequencies (30–60 Hz). Using information theory to characterize separability between classes at each temporal resolution shows that high-temporal-resolution acquisition provides up to 70% more information than conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision. Our information-theoretic study highlights the potential of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times, as reported in the retina, offers considerable advantages for neuro-inspired visual computations.
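The core manipulation lends itself to a compact illustration. The sketch below (synthetic events and a crude separability proxy, not the authors' dataset or their information-theoretic estimator) bins precisely timed events at progressively coarser temporal resolutions and shows how class separability collapses once the bin width approaches a 30 Hz frame period.

```python
# A minimal sketch (synthetic events, not the authors' pipeline): bin precisely
# timed events at coarser and coarser temporal resolutions and watch a simple
# class-separability measure degrade as the binning approaches frame rates.
import numpy as np

rng = np.random.default_rng(0)

def temporal_histogram(timestamps_us, resolution_us, window_us=100_000):
    """Spike-count feature: events binned at the given temporal resolution."""
    edges = np.arange(0, window_us + resolution_us, resolution_us)
    counts, _ = np.histogram(timestamps_us, bins=edges)
    return counts

# Two hypothetical stimulus classes whose event bursts differ only by ~15 ms,
# i.e., by fine timing that a 30 Hz frame cannot resolve.
def synthetic_trial(cls):
    center_us = 10_000 if cls == 0 else 25_000
    return rng.normal(center_us, 4_000, size=200).clip(0, 99_999)

trials = [(cls, synthetic_trial(cls)) for cls in (0, 1) for _ in range(50)]
labels = np.array([c for c, _ in trials])

for resolution_us in (1_000, 10_000, 33_333):   # 1 kHz, 100 Hz, ~30 Hz
    feats = np.array([temporal_histogram(t, resolution_us) for _, t in trials])
    # Crude separability proxy: distance between class means relative to spread.
    gap = np.linalg.norm(feats[labels == 0].mean(0) - feats[labels == 1].mean(0))
    print(f"bin {resolution_us / 1000:5.1f} ms  separability ~ {gap / (feats.std() + 1e-9):.2f}")
```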

2021 ◽  
Author(s):  
Bruce C. Hansen ◽  
Michelle R. Greene ◽  
David J. Field

A chief goal of systems neuroscience is to understand how the brain encodes information in our visual environments. Understanding that neural code is crucial to explaining how visual content is transformed via subsequent semantic representations to enable intelligent behavior. Although the visual code is not static, this reality is often obscured in voxel-wise encoding models of BOLD signals due to fMRI's poor temporal resolution. We leveraged the high temporal resolution of EEG to develop an encoding technique based on state-space theory. This approach maps neural signals to each pixel within a given image and reveals location-specific transformations of the visual code, providing a spatiotemporal signature for the image at each electrode. The technique offers a spatiotemporal visualization of the evolution of the neural code of visual information previously thought impossible to obtain from EEG and promises to provide insight into how visual meaning is developed through dynamic feedforward and recurrent processes.


2021 ◽  
Vol 17 (9) ◽  
pp. e1009456
Author(s):  
Bruce C. Hansen ◽  
Michelle R. Greene ◽  
David J. Field

A number of neuroimaging techniques have been employed to understand how visual information is transformed along the visual pathway. Although each technique has spatial and temporal limitations, they can each provide important insights into the visual code. While the BOLD signal of fMRI can be quite informative, the visual code is not static and this can be obscured by fMRI’s poor temporal resolution. In this study, we leveraged the high temporal resolution of EEG to develop an encoding technique based on the distribution of responses generated by a population of real-world scenes. This approach maps neural signals to each pixel within a given image and reveals location-specific transformations of the visual code, providing a spatiotemporal signature for the image at each electrode. Our analyses of the mapping results revealed that scenes undergo a series of nonuniform transformations that prioritize different spatial frequencies at different regions of scenes over time. This mapping technique offers a potential avenue for future studies to explore how dynamic feedforward and recurrent processes inform and refine high-level representations of our visual world.
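As a rough illustration of the mapping idea, the sketch below (random placeholder data, not the authors' encoding procedure) correlates the per-image EEG amplitude at one electrode with each pixel's intensity across an image set, independently at every time point, yielding a pixel-by-time map of the kind described.

```python
# A minimal sketch (assumptions, not the published method): for one electrode,
# correlate the per-image EEG amplitude at each time point with each pixel's
# intensity across the image set, giving a pixel x time "spatiotemporal
# signature". The image and EEG arrays here are random placeholders.
import numpy as np

rng = np.random.default_rng(1)

n_images, height, width = 200, 32, 32
n_timepoints = 150                                       # e.g., 600 ms at 250 Hz

images = rng.random((n_images, height, width))           # luminance images
eeg = rng.standard_normal((n_images, n_timepoints))      # one electrode, one value per image per time

# z-score across images so the map is a plain correlation per (pixel, time) pair.
pix = images.reshape(n_images, -1)
pix_z = (pix - pix.mean(0)) / (pix.std(0) + 1e-12)
eeg_z = (eeg - eeg.mean(0)) / (eeg.std(0) + 1e-12)

# (pixels x images) @ (images x time) / n_images -> pixels x time correlation map.
signature = (pix_z.T @ eeg_z / n_images).reshape(height, width, n_timepoints)

print(signature.shape)   # (32, 32, 150): one correlation trace per pixel
```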


2021 ◽  
Author(s):  
Shixiong Zhang ◽  
Wenmin Wang

Event-based vision is a novel bio-inspired vision paradigm that has attracted the interest of many researchers. As a neuromorphic vision technology, the sensor differs from traditional frame-based cameras and offers advantages that they cannot match, e.g., high temporal resolution, high dynamic range (HDR), sparse output, and minimal motion blur. Recently, many computer vision approaches have been proposed with demonstrated success. However, general methods for expanding the scope of application of event-based vision are still lacking. To effectively bridge the gap between conventional computer vision and event-based vision, in this paper we propose an adaptable framework for object detection in event-based vision.
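One common way such a bridge can be built, offered here only as an illustrative assumption rather than the framework the paper proposes, is to accumulate the event stream into a fixed-size, frame-like tensor that an off-the-shelf detector can consume:

```python
# A minimal sketch of one way to bridge event-based and frame-based pipelines
# (an illustrative assumption, not this paper's framework): accumulate an event
# stream into a two-channel count image (positive / negative polarity) that a
# conventional object detector could then take as input.
import numpy as np

def events_to_frame(events, height, width, t_start, t_end):
    """events: (N, 4) array of (t, x, y, polarity); returns a (2, H, W) count image."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    t, x, y, p = events[:, 0], events[:, 1], events[:, 2], events[:, 3]
    keep = (t >= t_start) & (t < t_end)
    channel = (p[keep] > 0).astype(int)              # 0 = OFF events, 1 = ON events
    np.add.at(frame, (channel, y[keep].astype(int), x[keep].astype(int)), 1.0)
    return frame

# Synthetic event stream: timestamps in seconds, x/y in pixels, polarity in {-1, +1}.
rng = np.random.default_rng(2)
n = 5_000
events = np.column_stack([
    np.sort(rng.uniform(0.0, 0.05, n)),              # t
    rng.integers(0, 240, n),                         # x
    rng.integers(0, 180, n),                         # y
    rng.choice([-1, 1], n),                          # polarity
])

frame = events_to_frame(events, height=180, width=240, t_start=0.0, t_end=0.05)
print(frame.shape, int(frame.sum()))                 # (2, 180, 240) and total event count
```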


2018 ◽  
Author(s):  
Yalda Mohsenzadeh ◽  
Caitlin Mullin ◽  
Aude Oliva ◽  
Dimitrios Pantazis

Some scenes are more memorable than others: they cement in our minds with consistency across observers and time scales. While memory mechanisms are traditionally associated with the end stages of perception, recent behavioral studies suggest that the features driving these memorability effects are extracted early on, and in an automatic fashion. This raises the question: is the neural signal of memorability detectable during early perceptual encoding phases of visual processing? Using the high temporal resolution of magnetoencephalography (MEG) during a rapid serial visual presentation (RSVP) task, we traced the neural temporal signature of memorability across the brain. We found an early and prolonged memorability-related signal recruiting a network of regions in both dorsal and ventral streams, detected outside of the constraints of subjective awareness. This enhanced encoding could be the key to successful storage and recognition.
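A simplified, purely illustrative version of time-resolved decoding of memorability from MEG sensor patterns (synthetic data and a plain per-time-point classifier, not the authors' analysis) might look like this:

```python
# A minimal sketch (synthetic data, illustrative only): time-resolved decoding
# of memorable vs. forgettable trials from MEG sensor patterns, one linear
# classifier per time point, with cross-validated accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_sensors, n_timepoints = 120, 306, 100     # 306-sensor MEG, placeholder sizes

meg = rng.standard_normal((n_trials, n_sensors, n_timepoints))
labels = rng.integers(0, 2, n_trials)                 # 1 = memorable, 0 = forgettable

# Inject a weak label-dependent signal in samples 30-60 to mimic an early effect.
meg[labels == 1, :10, 30:60] += 0.3

accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000),
                    meg[:, :, t], labels, cv=5).mean()
    for t in range(n_timepoints)
])
print("peak decoding accuracy:", accuracy.max(), "at sample", accuracy.argmax())
```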


2020 ◽  
Author(s):  
Veronika Koren ◽  
Ariana R. Andrei ◽  
Ming Hu ◽  
Valentin Dragoi ◽  
Klaus Obermayer

Primary visual cortex (V1) is absolutely necessary for normal visual processing, but whether V1 encodes upcoming behavioral decisions based on visual information is an unresolved issue, with conflicting evidence. Further, no study so far has been able to predict choice from time-resolved spiking activity in V1. Here, we hypothesized that the choice cannot be decoded with classical decoding schemes because of the noise in incorrect trials, but that it might be decodable with generalized learning. We trained the decoder in the presence of information on both the stimulus class and the correct behavioral choice. The learned structure of population responses was then utilized to decode trials that differ in the choice alone. We show that with such a generalized learning scheme, the choice can be successfully predicted from the spiking activity of neural ensembles in V1 in single trials, relying on the partial overlap between the representations of the stimuli and the choice. In addition, we show that the representation of the choice is primarily carried by bursting neurons in the superficial layer of the cortex. We demonstrate how bursting of single neurons and noise correlations between neurons with similar decoding selectivity help the accumulation of the choice signal. Highlights: The choice can be predicted from the spiking activity in the cortical column of V1 of the macaque. The information on choice and on stimuli is partially overlapping. Bursty neurons in the superficial layer of the cortex are the principal carriers of the choice signal. Correlated spike timing between neurons with similar decoding selectivity helps encoding.
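A toy version of the generalized learning idea, using synthetic spike counts and a standard linear classifier rather than the study's actual decoder, is sketched below: the decoder is trained on correct trials, where stimulus and choice coincide, and its read-out is then tested on trials in which only the choice differs.

```python
# A minimal sketch in the spirit of the generalized learning scheme described
# above (synthetic spike counts, illustrative only): train a linear decoder on
# correct trials, then test whether its read-out discriminates the behavioral
# choice on trials where the stimulus is held fixed.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n_neurons = 40

def population_response(stimulus, choice, n_trials):
    """Spike counts with a strong stimulus signal and a weaker, overlapping choice signal."""
    counts = rng.poisson(5, (n_trials, n_neurons)).astype(float)
    counts[:, :20] += 2.0 * stimulus        # stimulus-coding subpopulation
    counts[:, 15:30] += 1.0 * choice        # partially overlapping choice-coding subpopulation
    return counts

# Training set: correct trials only, so the label is both the stimulus and the choice.
X_train = np.vstack([population_response(s, s, 100) for s in (0, 1)])
y_train = np.repeat([0, 1], 100)
decoder = LinearSVC(dual=False).fit(X_train, y_train)

# Test set: stimulus fixed at 0, but the behavioral choice differs across trials.
X_test = np.vstack([population_response(0, 0, 50), population_response(0, 1, 50)])
y_choice = np.repeat([0, 1], 50)
scores = decoder.decision_function(X_test)
print("choice discrimination (AUC) with stimulus held fixed:",
      round(roc_auc_score(y_choice, scores), 3))
```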


2010 ◽  
Vol 6 (2) ◽  
pp. 43 ◽  
Author(s):  
Andreas H Mahnken ◽  

Over the last decade, cardiac computed tomography (CT) technology has experienced revolutionary changes and gained broad clinical acceptance in the work-up of patients suffering from coronary artery disease (CAD). Since cardiac multidetector-row CT (MDCT) was introduced in 1998, acquisition time, number of detector rows and spatial and temporal resolution have improved tremendously. Current developments in cardiac CT are focusing on low-dose cardiac scanning at ultra-high temporal resolution. Technically, there are two major approaches to achieving these goals: rapid data acquisition using dual-source CT scanners with high temporal resolution or volumetric data acquisition with 256/320-slice CT scanners. While each approach has specific advantages and disadvantages, both technologies foster the extension of cardiac MDCT beyond morphological imaging towards the functional assessment of CAD. This article examines current trends in the development of cardiac MDCT.
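The temporal resolution argument behind the two approaches reduces to simple arithmetic: a half-scan reconstruction needs roughly 180 degrees of projection data, so a single source needs about half a gantry rotation, while a dual-source geometry shares that arc between two tubes. The snippet below uses illustrative rotation times, not the specifications of any particular scanner.

```python
# Back-of-the-envelope sketch of why dual-source CT improves temporal resolution:
# a half-scan reconstruction needs ~180 degrees of data, so temporal resolution
# is roughly rotation_time / 2 for one source and rotation_time / 4 for two.
# Rotation times below are illustrative values, not specific scanner specs.
def temporal_resolution_ms(rotation_time_ms, n_sources=1):
    """Approximate temporal resolution of a half-scan reconstruction."""
    return rotation_time_ms / (2 * n_sources)

for rotation_time_ms in (330, 280):
    single = temporal_resolution_ms(rotation_time_ms, n_sources=1)
    dual = temporal_resolution_ms(rotation_time_ms, n_sources=2)
    print(f"rotation {rotation_time_ms} ms: single-source ~{single:.0f} ms, "
          f"dual-source ~{dual:.0f} ms")
```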

