Event-based tracking of human hands

Sensor Review ◽  
2021 ◽  
Vol 41 (4) ◽  
pp. 382-389
Author(s):  
Laura Duarte ◽  
Mohammad Safeea ◽  
Pedro Neto

Purpose: This paper proposes a novel method for tracking human hands using data from an event camera. The event camera detects changes in brightness and thus measures motion, with low latency, no motion blur, low power consumption and high dynamic range. Captured frames are analysed using lightweight algorithms that report three-dimensional (3D) hand position data. The chosen pick-and-place scenario serves as an example input for collaborative human–robot interaction and for obstacle avoidance in human–robot safety applications.

Design/methodology/approach: Event data are pre-processed into intensity frames. The regions of interest (ROI) are defined through object-edge event activity, reducing noise. ROI features are then extracted for use in depth perception.

Findings: Event-based tracking of human hands was demonstrated to be feasible, in real time and at a low computational cost. The proposed ROI-finding method reduces noise from the intensity images, achieving up to 89% data reduction relative to the original while preserving the features. The depth estimation error relative to ground truth (measured with wearables), evaluated using dynamic time warping with a single event camera, ranges from 15 to 30 millimetres, depending on the plane in which it is measured.

Originality/value: Tracking of human hands in 3D space using data from a single event camera and lightweight algorithms to define ROI features.
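The pipeline described above (accumulating events into intensity frames, then keeping only regions with enough edge activity) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the window length, grid size and activity threshold are assumed values.

```python
import numpy as np

def events_to_frame(events, shape, t0=0.0, dt=0.01):
    """Accumulate events (t, x, y, polarity) inside a time window into a
    signed intensity frame: +1 per ON event, -1 per OFF event."""
    frame = np.zeros(shape, dtype=np.int32)
    for t, x, y, p in events:
        if t0 <= t < t0 + dt:
            frame[y, x] += 1 if p else -1
    return frame

def roi_from_activity(frame, grid=8, min_events=2):
    """Keep only grid cells with enough event activity; returns a boolean
    ROI mask the size of the frame. Cells below the threshold are dropped,
    which discards isolated noise pixels."""
    h, w = frame.shape
    mask = np.zeros(frame.shape, dtype=bool)
    for gy in range(0, h, grid):
        for gx in range(0, w, grid):
            if np.count_nonzero(frame[gy:gy + grid, gx:gx + grid]) >= min_events:
                mask[gy:gy + grid, gx:gx + grid] = True
    return mask
```

Dropping inactive cells is what yields the data reduction the abstract reports: only pixels inside active cells are passed to the feature-extraction stage.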

2021 ◽  
Author(s):  
Shixiong Zhang ◽  
Wenmin Wang

Event-based vision is a novel bio-inspired vision paradigm that has attracted the interest of many researchers. As a neuromorphic sensor, the event camera differs from traditional frame-based cameras and offers advantages they cannot match, e.g., high temporal resolution, high dynamic range (HDR), sparse output and minimal motion blur. Recently, many computer vision approaches have been proposed with demonstrated success. However, general methods for expanding the scope of application of event-based vision are still lacking. To effectively bridge the gap between conventional computer vision and event-based vision, in this paper we propose an adaptable framework for object detection in event-based vision.


2021 ◽  
Vol 15 ◽  
Author(s):  
Lakshmi Annamalai ◽  
Anirban Chakraborty ◽  
Chetan Singh Thakur

Event-based cameras are novel bio-inspired sensors that asynchronously record changes in illumination in the form of events. This principle results in significant advantages over conventional cameras, such as low power utilization, high dynamic range, and no motion blur. Moreover, by design, such cameras encode only the relative motion between the scene and the sensor, and not the static background, yielding a very sparse data structure. In this paper, we leverage these advantages of an event camera toward a critical vision application: video anomaly detection. We propose an anomaly detection solution in the event domain with a conditional Generative Adversarial Network (cGAN) made up of sparse submanifold convolution layers. Video analytics tasks such as anomaly detection depend on the motion history at each pixel. To enable this, we also put forward a generic unsupervised deep learning solution to learn a novel memory surface known as the Deep Learning (DL) memory surface. The DL memory surface encodes the temporal information readily available from these sensors while retaining the sparsity of event data. Since there is no existing dataset for anomaly detection in the event domain, we also provide an anomaly detection event dataset with a set of anomalies. We empirically validate our anomaly detection architecture, composed of sparse convolutional layers, on both the proposed dataset and an online dataset. Careful analysis of the anomaly detection network reveals that the presented method achieves a massive reduction in computational complexity with good performance compared to previous state-of-the-art conventional frame-based anomaly detection networks.
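For contrast with the learned DL memory surface above, the conventional hand-crafted way to encode per-pixel motion history from events is an exponentially decaying time surface. The sketch below is only that baseline, not the paper's learned surface, and the decay constant is an arbitrary assumption.

```python
import numpy as np

def time_surface(events, shape, t_ref, tau=0.05):
    """Exponentially decaying time surface: each pixel holds
    exp(-(t_ref - t_last) / tau), where t_last is the timestamp of the most
    recent event at that pixel. Recently active pixels are close to 1,
    stale pixels decay toward 0."""
    t_last = np.full(shape, -np.inf)
    for t, x, y, _p in events:
        if t <= t_ref:
            t_last[y, x] = max(t_last[y, x], t)
    surface = np.exp(-(t_ref - t_last) / tau)
    surface[np.isinf(t_last)] = 0.0  # pixels that never fired
    return surface
```

Note that this dense representation destroys the sparsity of the event stream, which is exactly the property the paper's sparse DL memory surface is designed to retain.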


Author(s):  
Xiaoqian Huang ◽  
Mohamad Halwani ◽  
Rajkumar Muthusamy ◽  
Abdulla Ayyad ◽  
Dewald Swart ◽  
...  

Robotic vision plays a key role in perceiving the environment in grasping applications. However, conventional frame-based robotic vision, suffering from motion blur and a low sampling rate, may not meet the automation needs of evolving industrial requirements. This paper, for the first time, proposes an event-based robotic grasping framework for multiple known and unknown objects in a cluttered scene. Exploiting the event camera's microsecond-level sampling rate and freedom from motion blur, model-based and model-free approaches are developed for grasping known and unknown objects, respectively. In the model-based approach, an event-based multi-view method is used to localize the objects in the scene, and point cloud processing is then utilized to cluster and register them. The model-free approach, on the other hand, uses the developed event-based object segmentation, visual servoing and grasp planning to localize, align to, and grasp the target object. Using a UR10 robot with an eye-in-hand neuromorphic camera and a Barrett hand gripper, the proposed approaches are experimentally validated with objects of different sizes. Furthermore, the framework demonstrates robustness and a significant advantage over grasping with a traditional frame-based camera in low-light conditions.


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1475 ◽  
Author(s):  
Jingyun Duo ◽  
Long Zhao

Event cameras have many advantages over conventional frame-based cameras, such as high temporal resolution, low latency and high dynamic range. However, state-of-the-art event-based algorithms either require too much computation time or deliver poor accuracy. In this paper, we propose an asynchronous real-time corner extraction and tracking algorithm for an event camera. Our primary motivation is to enhance the accuracy of corner detection and tracking while ensuring computational efficiency. Firstly, according to the polarities of the events, a simple yet effective filter is applied to construct two restrictive Surfaces of Active Events (SAEs), named RSAE+ and RSAE−, which accurately represent high-contrast patterns while filtering out noise and redundant events. Afterwards, a new coarse-to-fine corner extractor is proposed to extract corner events efficiently and accurately. Finally, a data association method constrained by space, time and velocity direction is presented to realize corner event tracking: a newly arriving corner event is associated with the latest active corner in its neighborhood that satisfies the velocity direction constraint. The experiments are run on a standard event camera dataset, and the results indicate that our method achieves excellent corner detection and tracking performance. Moreover, the proposed method can process more than 4.5 million events per second, showing promising potential for real-time computer vision applications.
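The polarity-split SAE bookkeeping described above can be sketched as follows. The refractory-style filter here is a crude stand-in for the paper's restrictive SAE construction (their actual filtering rule is more elaborate), and the refractory period is an assumed value.

```python
import numpy as np

def update_rsaes(events, shape, refractory=0.002):
    """Maintain one Surface of Active Events per polarity (latest event
    timestamp per pixel), dropping events that arrive within `refractory`
    seconds of the previous same-polarity event at the same pixel. Returns
    the positive surface, the negative surface, and the surviving events."""
    sae = {1: np.full(shape, -np.inf), 0: np.full(shape, -np.inf)}
    kept = []
    for t, x, y, p in events:
        if t - sae[p][y, x] >= refractory:
            kept.append((t, x, y, p))  # event passes the filter
        sae[p][y, x] = t               # surface is updated either way
    return sae[1], sae[0], kept
```

A corner detector would then operate on local patches of these surfaces, where recent-timestamp patterns trace the moving edge geometry.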


2017 ◽  
Vol 36 (2) ◽  
pp. 142-149 ◽  
Author(s):  
Elias Mueggler ◽  
Henri Rebecq ◽  
Guillermo Gallego ◽  
Tobi Delbruck ◽  
Davide Scaramuzza

New vision sensors, such as the dynamic and active-pixel vision sensor (DAVIS), incorporate a conventional global-shutter camera and an event-based sensor in the same pixel array. These sensors have great potential for high-speed robotics and computer vision because they allow us to combine the benefits of conventional cameras with those of event-based sensors: low latency, high temporal resolution, and very high dynamic range. However, new algorithms are required to exploit the sensor characteristics and cope with its unconventional output, which consists of a stream of asynchronous brightness changes (called “events”) and synchronous grayscale frames. For this purpose, we present and release a collection of datasets captured with a DAVIS in a variety of synthetic and real environments, which we hope will motivate research on new algorithms for high-speed and high-dynamic-range robotics and computer-vision applications. In addition to global-shutter intensity images and asynchronous events, we provide inertial measurements and ground-truth camera poses from a motion-capture system. The latter allows comparing the pose accuracy of ego-motion estimation algorithms quantitatively. All the data are released both as standard text files and binary files (i.e. rosbag). This paper provides an overview of the available data and describes a simulator that we release open-source to create synthetic event-camera data.
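Assuming the common one-event-per-line text layout ("timestamp x y polarity") used by released event-camera text files, a minimal loader for the dataset's event stream could look like this; the exact column order should be checked against the dataset's documentation.

```python
def load_events_txt(path):
    """Parse a plain-text event stream with one 'timestamp x y polarity'
    record per line into a list of (t, x, y, p) tuples."""
    events = []
    with open(path) as f:
        for line in f:
            t, x, y, p = line.split()
            events.append((float(t), int(x), int(y), int(p)))
    return events
```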


2021 ◽  
Vol 11 (2) ◽  
pp. 23
Author(s):  
Duy-Anh Nguyen ◽  
Xuan-Tu Tran ◽  
Francesca Iacopi

Deep Learning (DL) has contributed to the success of many applications in recent years, ranging from simple tasks such as recognizing tiny images or simple speech patterns to highly complex ones such as playing the game of Go. However, this superior performance comes at a high computational cost, which makes porting DL applications to conventional hardware platforms challenging. Many approaches have been investigated, and the Spiking Neural Network (SNN) is one of the promising candidates. SNNs are the third generation of Artificial Neural Networks (ANNs), in which each neuron in the network uses discrete spikes to communicate in an event-based manner. SNNs have the potential advantage of achieving better energy efficiency than their ANN counterparts. While SNN models generally incur some loss of accuracy, new algorithms have helped to close the gap. For hardware implementations, SNNs have attracted much attention in the neuromorphic hardware research community. In this work, we review the basic background of SNNs, the current state and challenges of training algorithms for SNNs, and the current implementations of SNNs on various hardware platforms.
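The spike-based, event-driven communication described above is commonly modelled with a leaky integrate-and-fire (LIF) unit. A minimal discrete-time sketch follows; the threshold, decay and reset values are arbitrary illustration choices.

```python
def lif_neuron(input_current, threshold=1.0, decay=0.9, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks each
    step, integrates the input, and emits a discrete spike (1) on crossing
    the threshold, after which it resets."""
    v = 0.0
    spikes = []
    for i in input_current:
        v = decay * v + i      # leak, then integrate input
        if v >= threshold:     # threshold crossing emits a spike
            spikes.append(1)
            v = v_reset        # reset after firing
        else:
            spikes.append(0)
    return spikes
```

Because a neuron does no work between spikes, sparse spike trains are what gives SNN hardware its potential energy advantage over dense ANN activations.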


2017 ◽  
Vol 83 (9) ◽  
Author(s):  
Adam Jordan ◽  
Jenna Chandler ◽  
Joshua S. MacCready ◽  
Jingcheng Huang ◽  
Katherine W. Osteryoung ◽  
...  

ABSTRACT
Cyanobacteria are emerging as alternative crop species for the production of fuels, chemicals, and biomass. Yet, the success of these microbes depends on the development of cost-effective technologies that permit scaled cultivation and cell harvesting. Here, we investigate the feasibility of engineering cell morphology to improve biomass recovery and decrease energetic costs associated with lysing cyanobacterial cells. Specifically, we modify the levels of Min system proteins in Synechococcus elongatus PCC 7942. The Min system has established functions in controlling cell division by regulating the assembly of FtsZ, a tubulin-like protein required for defining the bacterial division plane. We show that altering the expression of two FtsZ-regulatory proteins, MinC and Cdv3, enables control over cell morphology by disrupting FtsZ localization and cell division without preventing continued cell growth. By varying the expression of these proteins, we can tune the lengths of cyanobacterial cells across a broad dynamic range, anywhere from an ∼20% increased length (relative to the wild type) to near-millimeter lengths. Highly elongated cells exhibit increased rates of sedimentation under low centrifugal forces or by gravity-assisted settling. Furthermore, hyperelongated cells are also more susceptible to lysis through the application of mild physical stress. Collectively, these results demonstrate a novel approach toward decreasing harvesting and processing costs associated with mass cyanobacterial cultivation by altering morphology at the cellular level.

IMPORTANCE
We show that the cell length of a model cyanobacterial species can be programmed by rationally manipulating the expression of protein factors that suppress cell division. In some instances, we can increase the size of these cells to near-millimeter lengths with this approach. The resulting elongated cells have favorable properties with regard to cell harvesting and lysis. Furthermore, cells treated in this manner continue to grow rapidly at time scales similar to those of uninduced controls. To our knowledge, this is the first reported example of engineering the cell morphology of cyanobacteria or algae to make them more compatible with downstream processing steps that present economic barriers to their use as alternative crop species. Therefore, our results are a promising proof-of-principle for the use of morphology engineering to increase the cost-effectiveness of the mass cultivation of cyanobacteria for various sustainability initiatives.


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1137
Author(s):  
Ondřej Holešovský ◽  
Radoslav Škoviera ◽  
Václav Hlaváč ◽  
Roman Vítek

We compare event-cameras with fast (global-shutter) frame-cameras experimentally, asking: “What is the application domain in which an event-camera surpasses a fast frame-camera?” Surprisingly, finding the answer has been difficult. Our methodology was to test event- and frame-cameras on generic computer vision tasks where event-camera advantages should manifest. We used two methods: (1) a controlled, cheap, and easily reproducible experiment (observing a marker on a rotating disk at varying speeds); (2) a challenging practical ballistic experiment (observing a flying bullet, with ground truth provided by an expensive ultra-high-speed frame-camera). The experimental results include sampling/detection rates and position estimation errors as functions of illuminance and motion speed, and the minimum pixel latency of two commercial state-of-the-art event-cameras (ATIS, DVS240). Event-cameras respond more slowly to positive than to negative large and sudden contrast changes. They outperformed the frame-camera in bandwidth efficiency in all our experiments. Both camera types provide comparable position estimation accuracy. The better event-camera was limited by pixel latency when tracking small objects, resulting in motion blur effects. Sensor bandwidth limited the event-camera in object recognition. However, future generations of event-cameras might alleviate these bandwidth limitations.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Xiao Jiang ◽  
Tat Leung Chan

Purpose: The purpose of this study is to investigate the aerosol dynamics of the particle coagulation process using a newly developed weighted fraction Monte Carlo (WFMC) method.

Design/methodology/approach: Weighted numerical particles are adopted in a similar manner to the multi-Monte Carlo (MMC) method, with the addition of a new fraction function (α). Probabilistic removal is also introduced to maintain a constant-number scheme.

Findings: Three typical cases, with a constant kernel, a free-molecular coagulation kernel and different initial distributions for particle coagulation, are simulated and validated. The results show excellent agreement between the Monte Carlo (MC) method and the corresponding analytical solutions or sectional-method results. Further numerical results show that the critical stochastic error in the newly proposed WFMC method is significantly reduced compared with the traditional MMC method for higher-order moments, with only a slight increase in computational cost. The particle size distribution is also found to extend further into the larger size regime with the WFMC method, a regime in which the classical direct simulation MC and MMC methods are traditionally insufficient. The effects of different fraction functions on the weight function are also investigated.

Originality/value: Stochastic error is inevitable in MC simulations of aerosol dynamics. To minimize this critical stochastic error, many algorithms, such as the MMC method, have been proposed. However, the weight of the numerical particles in these methods is not adjustable. The newly developed algorithm, with an adjustable weight for the numerical particles, provides improved stochastic error reduction.
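For context, the classical constant-number direct-simulation MC step that weighted schemes such as MMC and the proposed WFMC refine can be sketched as follows. This sketch assumes a constant kernel and equal-weight particles; the paper's fraction function α and probabilistic removal are not reproduced here.

```python
import random

def dsmc_coagulation_step(volumes, rng=random):
    """One constant-number DSMC coagulation event for a constant kernel:
    merge a uniformly chosen pair of particles, then refill the emptied
    slot with a copy of a randomly chosen particle so the sample size
    (and hence the stochastic resolution) stays fixed."""
    i, j = rng.sample(range(len(volumes)), 2)
    volumes[i] += volumes[j]                           # pair coagulates
    volumes[j] = volumes[rng.randrange(len(volumes))]  # constant-number refill
    return volumes
```

Because every numerical particle here carries equal weight, large particles in the distribution tail are sampled poorly; letting each numerical particle represent an adjustable number of real particles is the motivation behind the weighted MMC and WFMC schemes.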

