A scalable pipelined architecture for biomimetic vision sensors

Author(s):  
Daniel Llamocca ◽  
Brian K. Dean
Author(s):  
Jawad N. Yasin ◽  
Sherif A. S. Mohamed ◽  
Mohammad-Hashem Haghbayan ◽  
Jukka Heikkonen ◽  
Hannu Tenhunen ◽  
...  

Author(s):  
Sihao Sun ◽  
Giovanni Cioffi ◽  
Coen De Visser ◽  
Davide Scaramuzza

Author(s):  
Sherif A. S. Mohamed ◽  
Jawad N. Yasin ◽  
Mohammad-Hashem Haghbayan ◽  
Antonio Miele ◽  
Jukka Heikkonen ◽  
...  

2021 ◽  
Vol 13 (13) ◽  
pp. 2643
Author(s):  
Dário Pedro ◽  
João P. Matos-Carvalho ◽  
José M. Fonseca ◽  
André Mora

Unmanned Aerial Vehicles (UAVs), while not a recent invention, have lately acquired a prominent position in many industries. They are increasingly used not only by enthusiasts but also in demanding technical use cases, and they will have a significant societal effect in the coming years. However, the use of UAVs is fraught with significant safety threats, such as collisions with dynamic obstacles (other UAVs, birds, or randomly thrown objects). This research focuses on a safety problem that is often overlooked due to a lack of technology and solutions to address it: collisions with non-stationary objects. A novel approach is described that employs deep learning techniques to solve the computationally intensive problem of real-time collision avoidance with dynamic objects using off-the-shelf commercial vision sensors. The viability of the proposed approach was corroborated by multiple experiments, first in simulation and then in a concrete real-world case consisting of dodging a thrown ball. A novel video dataset was created and made available for this purpose, and transfer learning was also tested, with positive results.
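
The abstract does not disclose the network or training setup, so the following is only a minimal sketch, assuming a standard transfer-learning recipe in PyTorch: an ImageNet-pretrained backbone is frozen and a small replacement head classifies individual frames as "clear" vs. "incoming object". The backbone choice, the two-class labeling, and all names are placeholders, not the authors' implementation.

    # Hypothetical transfer-learning sketch (not the paper's code): fine-tune only a
    # replacement classifier head on top of a frozen, ImageNet-pretrained backbone.
    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    # MobileNetV3-Small is picked here only because it is light enough for onboard,
    # real-time use; the paper's actual network may be entirely different.
    model = models.mobilenet_v3_small(
        weights=models.MobileNet_V3_Small_Weights.DEFAULT)

    for p in model.features.parameters():            # freeze the pretrained feature extractor
        p.requires_grad = False
    num_feats = model.classifier[-1].in_features
    model.classifier[-1] = nn.Linear(num_feats, 2)   # classes: {clear, incoming object}

    optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    preprocess = transforms.Compose([                # per-frame preprocessing
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def train_step(frames, labels):
        """One optimization step on a batch of preprocessed video frames."""
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(frames), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

In a real avoidance pipeline the per-frame scores would presumably be aggregated over a short temporal window before an evasive maneuver is triggered, but that logic is not detailed in the abstract.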


Sensors ◽  
2019 ◽  
Vol 19 (22) ◽  
pp. 4958
Author(s):  
Hicham Hadj-Abdelkader ◽  
Omar Tahri ◽  
Houssem-Eddine Benseddik

Photometric moments are global descriptors of an image that can be used to recover motion information. This paper uses spherical photometric moments for closed-form estimation of 3D rotations from images. Since these descriptors are global rather than geometric, they avoid image-processing steps such as feature extraction, matching, and tracking. The proposed scheme, based on spherical projection, can be used with the different vision sensors obeying the unified central projection model: conventional, fisheye, and catadioptric cameras. Experimental results using both synthetic data and real images in different scenarios are provided to show the efficiency of the proposed method.
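
As an illustration only (not the paper's exact formulation), the sketch below shows the kind of computation involved: pixels are lifted onto the unit sphere with the unified central projection model, intensity-weighted first-order photometric moments are computed, and stacks of such moment vectors from two views are aligned with a closed-form SVD (Kabsch/Wahba-type) solution, since a spherical moment vector transforms as m' = R m under a pure camera rotation when the field of view is (near) spherical. The intrinsics, the mirror parameter xi, and the restriction to first-order moments are assumptions made for this sketch.

    # Illustrative NumPy sketch; fx, fy, cx, cy (intrinsics) and xi (mirror
    # parameter of the unified central model) are placeholder parameters.
    import numpy as np

    def lift_to_sphere(u, v, fx, fy, cx, cy, xi):
        """Back-project pixel coordinates onto the unit sphere (unified central model)."""
        x = (u - cx) / fx
        y = (v - cy) / fy
        r2 = x ** 2 + y ** 2
        eta = (xi + np.sqrt(1.0 + (1.0 - xi ** 2) * r2)) / (1.0 + r2)
        return np.stack([eta * x, eta * y, eta - xi], axis=-1)   # shape (..., 3)

    def photometric_moment(image, dirs):
        """Intensity-weighted first-order moment: a single 3-vector on the sphere."""
        w = image[..., None].astype(np.float64)
        return (w * dirs).sum(axis=(0, 1)) / w.sum()

    def rotation_from_moments(M_ref, M_cur):
        """Closed-form rotation aligning two N x 3 stacks of moment vectors (Kabsch)."""
        H = M_ref.T @ M_cur                      # cross-covariance of moment vectors
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflections
        return Vt.T @ D @ U.T                    # R such that M_cur ≈ M_ref @ R.T

A single first-order moment constrains only two of the three rotational degrees of freedom, so several moment vectors (for example, from different intensity-weighted basis images or higher orders) must be stacked to obtain a unique rotation; the paper's spherical photometric moments serve that role in its closed-form estimator.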

