A robust event-driven approach to always-on object recognition

Author(s):  
Antoine Grimaldi ◽  
Victor Boutin ◽  
Sio-Hoi Ieng ◽  
Ryad Benosman ◽  
Laurent Perrinet

We propose a neuromimetic architecture able to perform always-on pattern recognition. To achieve this, we extended an existing event-based algorithm [1], which introduced novel spatio-temporal features in the form of a Hierarchy Of Time-Surfaces (HOTS). Built from the asynchronous events acquired by a neuromorphic camera, these time surfaces encode the local dynamics of a visual scene and support an efficient event-based pattern recognition architecture. Inspired by neuroscience, we extended this method to increase its performance. Our first contribution was to add homeostatic gain control on the activity of neurons to improve the learning of spatio-temporal patterns [2]. Our second contribution is to draw an analogy between the HOTS algorithm and Spiking Neural Networks (SNNs). Following that analogy, our last contribution is to modify the classification layer, remodeling the previously offline pattern categorization method into an online, event-driven one. This classifier uses the spiking output of the network to define novel time surfaces, on which we perform online classification with a neuromimetic implementation of multinomial logistic regression. Not only do these improvements consistently increase the performance of the network, they also make this event-driven pattern recognition algorithm online and bio-realistic. Results were validated on several datasets: DVS barrel [3], Poker-DVS [4] and N-MNIST [5]. We plan to develop an SNN version of the method and to extend this fully event-driven approach to more naturalistic tasks, notably always-on, ultra-fast object categorization.
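As a rough illustration of the time-surface idea described above (not the authors' implementation), the following NumPy sketch builds a local time surface by exponentially decaying, around the most recent event, the last timestamp seen at each pixel. The event format `(x, y, t)`, the decay constant `tau`, and the patch radius are assumptions for the example:

```python
import numpy as np

def time_surface(events, t_now, sensor_shape=(32, 32), radius=2, tau=20e-3):
    """Build a local time surface around the most recent event.

    events: array of (x, y, t) rows, timestamps in seconds, time-sorted.
    Returns an exponentially decayed patch centred on the last event.
    """
    # Keep only the latest timestamp seen at each pixel.
    last_t = np.full(sensor_shape, -np.inf)
    for x, y, t in events:
        last_t[int(y), int(x)] = t
    xc, yc = int(events[-1][0]), int(events[-1][1])
    # Extract the (2*radius+1)^2 neighbourhood, clipped at the border.
    y0, y1 = max(yc - radius, 0), min(yc + radius + 1, sensor_shape[0])
    x0, x1 = max(xc - radius, 0), min(xc + radius + 1, sensor_shape[1])
    patch = last_t[y0:y1, x0:x1]
    # Exponential decay of elapsed time; pixels never hit decay to 0.
    return np.exp((patch - t_now) / tau)
```

The patch peaks at 1 at the pixel that just fired and falls off for pixels whose last event is older, which is what lets a bank of such surfaces encode the local dynamics of the scene.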

2022 ◽  


Author(s):  
Andrew Gothard ◽  
Daniel Jones ◽  
Andre Green ◽  
Michael Torrez ◽  
Alessandro Cattaneo ◽  
...  

Abstract Event-driven neuromorphic imagers have a number of attractive properties, including low power consumption, high dynamic range, the ability to detect fast events, low memory consumption, and low bandwidth requirements. One of the biggest challenges in using event-driven imagery is that the field of event-data processing is still embryonic, whereas decades' worth of effort have been invested in the analysis of frame-based imagery. Hybrid approaches for applying established frame-based analysis techniques to event-driven imagery have been studied since event-driven imagers came into existence, but the process of forming frames from event-driven imagery has not been studied in detail. This work presents a principled digital coded-exposure approach for forming frames from event-driven imagery, inspired by the physics exploited in a conventional camera featuring a shutter. The technique provides a fundamental tool for understanding the temporal information content that contributes to the formation of a frame from event-driven imagery data. Event-driven imagery allows an arbitrary virtual digital shutter function to be applied on a pixel-by-pixel basis to form the final frame, enabling careful control of the spatio-temporal information captured in the frame. Unlike with a conventional physical camera, the event data can be formed into any variety of frames in post-processing, after the data are captured, and the coded-exposure virtual shutter functions can assume arbitrary values, including positive, negative, real, and complex ones. The coded-exposure approach also enables applications of industrial interest, such as digital stroboscopy, without any additional hardware. The ability to form frames from event-driven imagery in a principled manner opens up new possibilities for applying conventional frame-based image-processing techniques to event-driven imagery.
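A minimal sketch of the idea, under assumed conventions (it is not the paper's implementation): each event `(x, y, t, p)` with polarity `p ∈ {-1, +1}` contributes to the frame with a weight given by a virtual shutter function of its timestamp, which may take negative or complex values:

```python
import numpy as np

def coded_exposure_frame(events, shutter, shape=(32, 32)):
    """Form a frame by weighting each event with a virtual shutter function.

    events: (x, y, t, p) rows with polarity p in {-1, +1}.
    shutter: callable t -> weight; may return negative or complex values.
    """
    frame = np.zeros(shape, dtype=complex)
    for x, y, t, p in events:
        frame[int(y), int(x)] += shutter(t) * p
    return frame

# A box shutter mimics a conventional global exposure window...
box = lambda t: 1.0 if 0.0 <= t < 0.1 else 0.0
# ...while a complex sinusoidal shutter illustrates digital
# stroboscopy tuned to a frequency of f hertz.
strobe = lambda t, f=50.0: np.exp(2j * np.pi * f * t)
```

Because the shutter is applied in post-processing, the same event stream can be re-rendered with any number of different shutter functions, which is the flexibility the abstract highlights.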


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Xinfang Chen ◽  
Venkata Dinavahi

With rapid population growth, increasingly diverse crowd activities, and the accelerating pace of socialization, group scenes are becoming more common, so the demand for modeling, analyzing, and understanding group-behavior data in video is increasing. Compared with previous work on video content analysis, factors such as the growing number of people in group videos and more complex scenes make the analysis of group behavior in video highly challenging. Therefore, this paper proposes a group-behavior pattern recognition algorithm based on a spatio-temporal graph convolutional network, aimed at group density analysis and group behavior recognition in video. A crowd detection and localization method based on density-map regression-guided classification is designed, and a crowd behavior analysis method based on density-grade division completes crowd density analysis and video group-behavior detection. In addition, this paper proposes to extract spatio-temporal features of crowd posture and density with a double-flow (two-stream) spatio-temporal graph network model, so as to effectively capture the differentiated movement information among different groups. Experimental results on public datasets show that the proposed method has high accuracy and can effectively predict group behavior.
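To make the density-grade step concrete, here is a minimal sketch of the standard convention in density-map crowd counting: the regressed map integrates to the estimated head count, which is then bucketed into grades. The threshold values and grade names are illustrative assumptions, not the paper's calibrated ones:

```python
import numpy as np

def crowd_density_grade(density_map, thresholds=(10.0, 50.0)):
    """Classify crowd density from a regressed density map.

    The map's integral approximates the head count; grades come
    from illustrative thresholds on that count.
    """
    count = float(density_map.sum())   # people ~= integral of the map
    if count < thresholds[0]:
        grade = "sparse"
    elif count < thresholds[1]:
        grade = "medium"
    else:
        grade = "dense"
    return count, grade
```

The grade can then gate which behavior-analysis branch is applied, mirroring the density-grade division described in the abstract.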


2021 ◽  
Author(s):  
Monir Torabian ◽  
Hossein Pourghassem ◽  
Homayoun Mahdavi-Nasab
