event camera
Recently Published Documents

TOTAL DOCUMENTS: 67 (FIVE YEARS: 55)
H-INDEX: 10 (FIVE YEARS: 4)

Author(s): Xiaoqian Huang, Mohamad Halwani, Rajkumar Muthusamy, Abdulla Ayyad, Dewald Swart, et al.

Abstract
Robotic vision plays a key role in perceiving the environment in grasping applications. However, conventional frame-based robotic vision, which suffers from motion blur and a low sampling rate, may not meet the automation needs of evolving industrial requirements. This paper proposes, for the first time, an event-based robotic grasping framework for multiple known and unknown objects in a cluttered scene. Exploiting the event camera's microsecond-level sampling rate and freedom from motion blur, model-based and model-free approaches are developed for grasping known and unknown objects, respectively. In the model-based approach, an event-based multi-view method localizes the objects in the scene, and point cloud processing then clusters and registers them. The model-free approach, on the other hand, uses the developed event-based object segmentation, visual servoing and grasp planning to localize, align to, and grasp the target object. Using a UR10 robot with an eye-in-hand neuromorphic camera and a Barrett hand gripper, the proposed approaches are experimentally validated with objects of different sizes. Furthermore, the framework demonstrates robustness and a significant advantage over grasping with a traditional frame-based camera in low-light conditions.
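As a rough illustration of the event-based localization step described above (not the authors' implementation), the sketch below accumulates raw events into a per-pixel activity image and clusters the active pixels into candidate object regions. The (x, y, timestamp, polarity) event format and all thresholds are assumptions.

    # Minimal sketch: event accumulation plus connected-component clustering.
    import numpy as np
    from scipy import ndimage

    def localize_objects(events, height, width, min_area=50):
        """Accumulate events into an activity image; return candidate bounding boxes."""
        activity = np.zeros((height, width), dtype=np.int32)
        for x, y, t, p in events:          # one event per detected brightness change
            activity[y, x] += 1
        mask = activity > 2                # suppress isolated noise events (assumed threshold)
        labels, n = ndimage.label(mask)    # connected components = object candidates
        boxes = []
        for s in ndimage.find_objects(labels):
            ys, xs = s
            if (ys.stop - ys.start) * (xs.stop - xs.start) >= min_area:
                boxes.append((xs.start, ys.start, xs.stop, ys.stop))
        return boxes

In the paper's model-based pipeline, candidate regions of this kind would then feed the multi-view localization and point cloud registration stages.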


Author(s): Md Jubaer Hossain Pantho, Joel Mandebi Mbongue, Pankaj Bhowmik, Christophe Bobda

2021, Vol 1 (2), pp. 024004
Author(s): Stephen J Maybank, Sio-Hoi Ieng, Davide Migliore, Ryad Benosman

Abstract
The optical flow in an event camera is estimated from measurements in the address event representation (AER). Each measurement consists of a pixel address and the time at which a change in the pixel value equalled a given fixed threshold. The measurements in a small region of the pixel array and within a given window in time are approximated by a probability distribution defined on a finite set. The distributions obtained in this way form a three-dimensional family parameterized by the pixel addresses and by time. Each parameter value has an associated Fisher–Rao matrix, obtained from the Fisher–Rao metric for the parameterized family of distributions. The optical flow vector at a given pixel and at a given time is obtained from the eigenvector of the associated Fisher–Rao matrix with the least eigenvalue. The Fisher–Rao algorithm for estimating optical flow is tested on eight datasets, six of which have ground-truth optical flow. The Fisher–Rao algorithm is shown to perform well in comparison with two state-of-the-art algorithms for estimating optical flow from AER measurements.
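A minimal sketch of the idea in this abstract, not the authors' implementation: the local event distribution is approximated by a smoothed 3-D histogram over (x, y, t), a Fisher information matrix is formed from finite-difference gradients of that distribution, and the flow is read off the eigenvector with the smallest eigenvalue. The binning, smoothing and numerical tolerances are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fisher_rao_flow(events, bins=(16, 16, 10)):
        """events: (N, 3) array of (x, y, t) AER measurements in a local window."""
        hist, _ = np.histogramdd(events, bins=bins)      # empirical distribution
        p = gaussian_filter(hist.astype(float), sigma=1.0)
        p /= p.sum()                                     # probability on a finite set
        grads = np.gradient(p)                           # dp/dx, dp/dy, dp/dt
        mask = p > 1e-9
        G = np.empty((3, 3))
        for i in range(3):
            for j in range(3):
                # Fisher information: sum_k (d_i p_k)(d_j p_k) / p_k
                G[i, j] = np.sum(grads[i][mask] * grads[j][mask] / p[mask])
        w, v = np.linalg.eigh(G)                         # eigenvalues ascending
        d = v[:, 0]                                      # least-eigenvalue eigenvector
        if abs(d[2]) < 1e-9:
            return None                                  # degenerate: no time component
        return d[:2] / d[2]                              # (vx, vy) in pixels per time bin

The intuition is that the least-eigenvalue eigenvector is the spacetime direction along which the local distribution changes least, i.e. the direction of motion; dividing by its time component converts it to a pixel velocity.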


2021
Author(s): Yuanze Wang, Chenlu Liu, Sheng Li, Tong Wang, Weiyang Lin, et al.

2021
Author(s): Xueyan Huang, Yueyi Zhang, Zhiwei Xiong

2021
Author(s): Kun Huang, Yifu Wang, Laurent Kneip

Sensor Review, 2021, Vol 41 (4), pp. 382-389
Author(s): Laura Duarte, Mohammad Safeea, Pedro Neto

Purpose
This paper proposes a novel method for tracking human hands using data from an event camera. The event camera detects changes in brightness, measuring motion with low latency, no motion blur, low power consumption and high dynamic range. Captured frames are analysed using lightweight algorithms that report three-dimensional (3D) hand position data. The chosen pick-and-place scenario serves as an example input for collaborative human-robot interaction and for obstacle avoidance in human-robot safety applications.

Design/methodology/approach
Event data are pre-processed into intensity frames. The regions of interest (ROI) are defined through object-edge event activity, reducing noise. ROI features are then extracted for use in depth perception.

Findings
Event-based tracking of the human hand was demonstrated to be feasible in real time and at a low computational cost. The proposed ROI-finding method reduces noise in the intensity images, achieving up to 89% data reduction relative to the original while preserving the features. The depth estimation error relative to ground truth (measured with wearables), computed using dynamic time warping and a single event camera, ranges from 15 to 30 millimetres, depending on the plane in which it is measured.

Originality/value
Tracking of human hands in 3D space using data from a single event camera and lightweight algorithms to define ROI features.
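The ROI-finding step lends itself to a short sketch (a plausible reading of the method described above, not the authors' code): pixels whose event activity exceeds a threshold mark moving edges, and the bounding box of that activity crops the frame, which is where the reported data reduction would come from. The threshold and margin values are assumptions.

    import numpy as np

    def event_roi(event_frame, activity_thresh=3, margin=8):
        """event_frame: 2-D array of per-pixel event counts for one time window."""
        active = event_frame >= activity_thresh        # edge activity suppresses noise
        ys, xs = np.nonzero(active)
        if ys.size == 0:
            return None                                # no motion in this window
        y0 = max(ys.min() - margin, 0)
        y1 = min(ys.max() + margin, event_frame.shape[0] - 1)
        x0 = max(xs.min() - margin, 0)
        x1 = min(xs.max() + margin, event_frame.shape[1] - 1)
        return (x0, y0, x1, y1)                        # crop box => large data reduction

    # Usage: box = event_roi(counts)
    # if box: hand_patch = frame[box[1]:box[3] + 1, box[0]:box[2] + 1]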


Author(s): Theo Stangebye, Matthew Carrano, Scott Koziol, Eugene Chabot, John DiCecco
