Event-Based Hough Transform in a Spiking Neural Network for Multiple Line Detection and Tracking Using a Dynamic Vision Sensor

Author(s): Sajjad Seifozzakerini, Wei-Yun Yau, Bo Zhao, Kezhi Mao

2017, Vol 7 (1)
Author(s): Marc Osswald, Sio-Hoi Ieng, Ryad Benosman, Giacomo Indiveri

Abstract: Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.
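
The abstract gives no implementation details, so the following is only a minimal illustrative sketch of the general idea of event-based stereo matching with coincidence-detecting spiking neurons: each candidate disparity at each pixel is modeled as a leaky integrate-and-fire unit that accumulates evidence whenever a left-camera and a right-camera event on the same row occur close together in time. The event format, retina size, time constant, threshold, and decay below are assumptions chosen for clarity, not values from the paper.

```python
# Illustrative sketch (not the authors' implementation): event-based stereo
# correspondence by temporal coincidence detection with spiking disparity units.
from collections import namedtuple
import numpy as np

Event = namedtuple("Event", ["t", "x", "y", "polarity"])  # timestamp in microseconds

WIDTH, HEIGHT, MAX_DISPARITY = 128, 128, 32   # DVS128-sized retina (assumption)
TAU = 5_000.0      # coincidence time constant in microseconds (assumption)
THRESHOLD = 1.5    # firing threshold of a disparity neuron (assumption)
DECAY = 0.95       # leak applied to accumulated evidence on each update (assumption)

# One "disparity neuron" per (x, y, d): it integrates evidence that a left
# event at column x and a right event at column x - d on the same row coincide.
potential = np.zeros((WIDTH, HEIGHT, MAX_DISPARITY), dtype=np.float32)
last_right = {}    # (x, y, polarity) -> timestamp of the most recent right event


def on_right_event(ev: Event) -> None:
    """Remember the latest right-camera event at each pixel and polarity."""
    last_right[(ev.x, ev.y, ev.polarity)] = ev.t


def on_left_event(ev: Event):
    """Update disparity neurons for a left-camera event; return output spikes."""
    spikes = []
    potential[ev.x, ev.y, :] *= DECAY            # leaky integration
    for d in range(MAX_DISPARITY):
        xr = ev.x - d
        if xr < 0:
            break
        t_right = last_right.get((xr, ev.y, ev.polarity))
        if t_right is None:
            continue
        # Coincidence evidence decays exponentially with the time difference
        # between the two events, so near-simultaneous events count most.
        potential[ev.x, ev.y, d] += np.exp(-abs(ev.t - t_right) / TAU)
        if potential[ev.x, ev.y, d] >= THRESHOLD:
            spikes.append((ev.t, ev.x, ev.y, d))  # disparity-tuned output spike
            potential[ev.x, ev.y, d] = 0.0        # reset the neuron after firing
    return spikes
```

A complete model along the lines described in the abstract would also include interactions between neighboring disparity neurons (for example, inhibition between mutually exclusive matches) to suppress false coincidences; the sketch keeps only the coincidence-detection core.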


2020, pp. short17-1-short17-8
Author(s): Fedor Shvetsov, Anton Konushin, Anna Sokolova

In this work, we consider the applicability of face recognition algorithms to data obtained from a dynamic vision sensor. We propose a baseline method built on a neural network pipeline comprising reconstruction, detection, and recognition stages. Various modifications of this algorithm and their influence on model quality are examined, and a small test dataset recorded with a DVS sensor is collected. We investigate the usefulness of simulated data, and of different approaches to generating it, for training the model, as well as the transferability of a model trained on synthetic data to real sensor data via fine-tuning. All of these variants are compared with one another and with conventional face recognition from RGB images on different datasets. The results show that DVS data can be used to perform face recognition with quality comparable to that of RGB data.
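
As a purely illustrative companion to this abstract, the sketch below outlines the three-stage pipeline it names (reconstruction, detection, recognition). Every concrete choice here (sensor resolution, event-count reconstruction, activity-based detector, toy pixel embedding) is a placeholder assumption standing in for the learned models the paper would actually use.

```python
# Hypothetical reconstruction -> detection -> recognition pipeline for DVS data.
import numpy as np


def reconstruct_frame(events, height=260, width=346):
    """Stage 1 (reconstruction): accumulate DVS events into an intensity-like frame.

    A signed per-pixel event count stands in for a learned events-to-frames
    reconstruction network. `events` is a list of (t, x, y, polarity) tuples.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    for _, x, y, polarity in events:
        frame[y, x] += 1.0 if polarity > 0 else -1.0
    frame -= frame.min()
    return frame / (frame.max() + 1e-8)          # normalize to [0, 1]


def detect_face(frame, activity_threshold=0.5):
    """Stage 2 (detection): return a face bounding box (x, y, w, h) or None.

    A real pipeline would run a face detector fine-tuned on reconstructed
    frames; the bounding box of high-activity pixels is used as a stand-in.
    """
    ys, xs = np.nonzero(frame > activity_threshold)
    if len(xs) == 0:
        return None
    return xs.min(), ys.min(), xs.max() - xs.min() + 1, ys.max() - ys.min() + 1


def toy_embedding(face_crop, size=16):
    """Toy face embedding: a resized, L2-normalized pixel vector.

    A real system would use a face-recognition CNN here, possibly pretrained
    on RGB faces and fine-tuned on reconstructed DVS frames.
    """
    h, w = face_crop.shape
    rows = np.linspace(0, h - 1, size).astype(int)
    cols = np.linspace(0, w - 1, size).astype(int)
    vec = face_crop[np.ix_(rows, cols)].ravel()
    return vec / (np.linalg.norm(vec) + 1e-8)


def recognize(face_crop, gallery, embed_fn=toy_embedding):
    """Stage 3 (recognition): nearest-neighbor search against gallery embeddings."""
    query = embed_fn(face_crop)
    return min(gallery, key=lambda name: np.linalg.norm(query - gallery[name]))


# Usage sketch: `gallery` maps identity names to precomputed embeddings.
# frame = reconstruct_frame(events)
# box = detect_face(frame)
# if box is not None:
#     x, y, w, h = box
#     identity = recognize(frame[y:y + h, x:x + w], gallery)
```

In a real system each stage would be a trained network, and the recognition stage could be pretrained on synthetic or RGB data and fine-tuned on reconstructed sensor frames, in the spirit of the synthetic-to-real transfer the abstract investigates.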


2021, pp. 3-10
Author(s): Yong Wang, Xian Zhang, Yanxiang Wang, Hongbin Wang, Chanying Huang, ...
