Event-Based Pedestrian Detection Using Dynamic Vision Sensors

Electronics ◽  
2021 ◽  
Vol 10 (8) ◽  
pp. 888
Author(s):  
Jixiang Wan ◽  
Ming Xia ◽  
Zunkai Huang ◽  
Li Tian ◽  
Xiaoying Zheng ◽  
...  

Pedestrian detection has attracted great research attention in video surveillance, traffic statistics, and especially in autonomous driving. To date, almost all pedestrian detection solutions are derived from conventional frame-based image sensors, which have limited reaction speed and high data redundancy. The dynamic vision sensor (DVS), inspired by the biological retina, efficiently captures visual information as sparse, asynchronous events rather than dense, synchronous frames. It can eliminate redundant data transmission and avoid motion blur or data leakage in high-speed imaging applications. However, it is usually impractical to apply event streams directly to conventional object detection algorithms. To address this issue, we first propose a novel event-to-frame conversion method that integrates the inherent characteristics of events more efficiently. Moreover, we design an improved feature extraction network that reuses intermediate features to further reduce the computational effort. We evaluate the performance of our proposed method on a custom dataset containing multiple real-world pedestrian scenes. The results indicate that our method improves pedestrian detection accuracy by about 5.6–10.8%, and its detection speed is nearly 20% faster than that of previously reported methods. Furthermore, it achieves a processing speed of about 26 FPS and an AP of 87.43% when implemented on a single CPU, fully meeting the requirement of real-time detection.
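The event-to-frame conversion step can be illustrated with a generic accumulation scheme. The function below is a hedged sketch, not the paper's exact method: the slice count, polarity weighting, and array layout are all illustrative assumptions.

```python
import numpy as np

def events_to_frame(events, height, width, num_slices=3):
    """Accumulate a DVS event stream into a frame-like tensor.

    `events` is an (N, 4) array of (t, x, y, polarity) rows, sorted by
    time. The exact integration used in the paper differs; this is a
    generic illustration of event-to-frame conversion.
    """
    t0, t1 = events[0, 0], events[-1, 0]
    frame = np.zeros((num_slices, height, width), dtype=np.float32)
    # Split the time window into slices so temporal order is preserved
    # instead of collapsing all events into a single channel.
    slice_idx = np.minimum(
        ((events[:, 0] - t0) / max(t1 - t0, 1e-9) * num_slices).astype(int),
        num_slices - 1,
    )
    for (t, x, y, p), s in zip(events, slice_idx):
        # ON events add, OFF events subtract, preserving polarity.
        frame[s, int(y), int(x)] += 1.0 if p > 0 else -1.0
    return frame
```

The resulting tensor can then be fed to a conventional frame-based detector, which is the bridge the abstract describes.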

2021 ◽  
Author(s):  
Ben J Hardcastle ◽  
Karin Bierig ◽  
Francisco JH Heras ◽  
Daniel A Schwyn ◽  
Kit D Longden ◽  
...  

Gaze stabilization reflexes reduce motion blur and simplify the processing of visual information by keeping the eyes level. These reflexes typically depend on estimates of the rotational motion of the body, head, and eyes, acquired by visual or mechanosensory systems. During rapid movements, there can be insufficient time for sensory feedback systems to estimate rotational motion, and additional mechanisms are required. The solutions to this common problem likely reflect an animal's behavioral repertoire. Here, we examine gaze stabilization in three families of dipteran flies, each with distinctly different flight behaviors. Through frequency response analysis based on tethered-flight experiments, we demonstrate that fast roll oscillations of the body lead to a stable gaze in hoverflies, whereas the reflex breaks down at the same speeds in blowflies and horseflies. Surprisingly, the high-speed gaze stabilization of hoverflies does not require sensory input from the halteres, their low-latency balance organs. Instead, we show how the behavior is explained by a hybrid control system that combines a sensory-driven, active stabilization component mediated by neck muscles, and a passive component which exploits physical properties of the animal's anatomy: the mass and inertia of the head. This solution requires hoverflies to have specializations of the head-neck joint that can be employed during flight. Our comparative study highlights how species-specific control strategies have evolved to support different visually-guided flight behaviors.
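The passive component of such a hybrid control system can be illustrated with a toy model: a head coupled to the body through a compliant neck joint acts as a mechanical low-pass filter, so fast body roll barely moves the head. All parameter values below are illustrative placeholders, not measured hoverfly properties.

```python
import numpy as np

def head_roll_gain(freq_hz, J=1e-9, c=2e-8, k=1e-6):
    """Magnitude of head-roll response to sinusoidal body roll for a
    passive inertia-spring-damper neck joint (toy model; parameters
    are illustrative, not measured animal properties)."""
    w = 2 * np.pi * np.asarray(freq_hz)
    s = 1j * w
    # Head angle theta driven by body angle phi through neck stiffness k
    # and damping c:  J*theta'' = c*(phi' - theta') + k*(phi - theta),
    # giving the transfer function theta/phi = (c*s + k)/(J*s^2 + c*s + k).
    H = (c * s + k) / (J * s**2 + c * s + k)
    return np.abs(H)
```

At low frequencies the gain is near 1 (the head follows the body), while at high frequencies the head's inertia decouples it from body roll, which is the frequency-dependent stabilization behavior the study characterizes experimentally.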


2019 ◽  
Vol 47 (3) ◽  
pp. 196-210
Author(s):  
Meghashyam Panyam ◽  
Beshah Ayalew ◽  
Timothy Rhyne ◽  
Steve Cron ◽  
John Adcox

ABSTRACT: This article presents a novel experimental technique for measuring in-plane deformations and vibration modes of a rotating nonpneumatic tire subjected to obstacle impacts. The tire was mounted on a modified quarter-car test rig, which was built around one of the drums of a 500-horsepower chassis dynamometer at Clemson University's International Center for Automotive Research. A series of experiments were conducted using a high-speed camera to capture the event of the rotating tire coming into contact with a cleat attached to the surface of the drum. The resulting video was processed using a two-dimensional digital image correlation algorithm to obtain in-plane radial and tangential deformation fields of the tire. The dynamic mode decomposition algorithm was applied to the deformation fields to extract the dominant frequencies that were excited in the tire upon contact with the cleat. The deformations and the modal frequencies estimated using this method were within a reasonable range of expected values. In general, the results indicate that the method used in this study can be a useful tool for measuring in-plane deformations of rolling tires without the need for additional sensors and wiring.
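Dynamic mode decomposition has a standard formulation that can be sketched briefly. The function name, rank truncation, and frequency extraction below are illustrative, not the authors' implementation.

```python
import numpy as np

def dmd_frequencies(snapshots, dt, rank=None):
    """Exact DMD: estimate dominant oscillation frequencies (Hz) from a
    snapshot matrix whose columns are successive flattened fields."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if rank is not None:
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    # Low-rank representation of the linear operator A with Y ~= A X.
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals = np.linalg.eigvals(A_tilde)
    # Map discrete-time eigenvalues to continuous-time frequencies.
    return np.abs(np.imag(np.log(eigvals)) / (2 * np.pi * dt))
```

Each column of `snapshots` would be one deformation field from the image-correlation step, and the returned frequencies correspond to the tire modes excited by the cleat impact.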


Author(s):  
Denys Rozumnyi ◽  
Jan Kotera ◽  
Filip Šroubek ◽  
Jiří Matas

Abstract: Objects moving at high speed along complex trajectories often appear in videos, especially videos of sports. Such objects travel a considerable distance during the exposure time of a single frame, and therefore their position in the frame is not well defined. They appear as semi-transparent streaks due to motion blur and cannot be reliably tracked by general trackers. We propose a novel approach called Tracking by Deblatting, based on the observation that motion blur is directly related to the intra-frame trajectory of an object. Blur is estimated by solving two intertwined inverse problems, blind deblurring and image matting, which we call deblatting. By postprocessing, non-causal Tracking by Deblatting estimates continuous, complete, and accurate object trajectories for the whole sequence. Tracked objects are precisely localized with higher temporal resolution than by conventional trackers. Energy minimization by dynamic programming is used to detect abrupt changes of motion, called bounces. High-order polynomials are then fitted to smooth trajectory segments between bounces. The output is a continuous trajectory function that assigns a location to every real-valued time stamp from zero to the number of frames. The proposed algorithm was evaluated on a newly created dataset of videos from a high-speed camera using a novel Trajectory-IoU metric that generalizes the traditional Intersection over Union and measures the accuracy of the intra-frame trajectory. The proposed method outperforms the baselines in both recall and trajectory accuracy. Additionally, we show that precise physical calculations are possible from the trajectory function, such as radius, gravity, and sub-frame object velocity. Velocity estimation is compared to high-speed camera measurements and radars. Results show high performance of the proposed method in terms of Trajectory-IoU, recall, and velocity estimation.
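The piecewise polynomial fitting between bounces can be sketched as follows. This is a simplified illustration of the idea, assuming bounce times are already detected; segment handling and the polynomial degree are not taken from the paper.

```python
import numpy as np

def fit_trajectory(times, positions, bounces, degree=3):
    """Fit independent polynomials to trajectory segments between bounces
    and return a function defined for every real-valued time stamp.

    `times` is a 1-D array, `positions` an (N, 2) array of (x, y), and
    `bounces` the detected bounce times splitting the trajectory.
    """
    edges = [times[0]] + list(bounces) + [times[-1]]
    segs = []
    for a, b in zip(edges[:-1], edges[1:]):
        mask = (times >= a) & (times <= b)
        # One polynomial per coordinate on this segment.
        coeffs = [np.polyfit(times[mask], positions[mask, d], degree)
                  for d in range(positions.shape[1])]
        segs.append((a, b, coeffs))

    def traj(t):
        for a, b, coeffs in segs:
            if a <= t <= b:
                return np.array([np.polyval(c, t) for c in coeffs])
        raise ValueError("time outside trajectory range")

    return traj
```

The returned `traj` plays the role of the continuous trajectory function described in the abstract: it can be queried at sub-frame time stamps, which is what enables the velocity and physics calculations.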


Author(s):  
Xuewu Zhang ◽  
Yansheng Gong ◽  
Chen Qiao ◽  
Wenfeng Jing

Abstract: This article mainly focuses on the most common types of high-speed railway malfunctions in overhead contact systems, namely unstressed droppers, foreign-body invasions, and pole number-plate malfunctions, and establishes a deep-network detection model for them. By fusing the feature maps of the shallow and deep layers of the pretraining network, global and local features of the malfunction area are combined to enhance the network's ability to identify small objects. Further, in order to share the fully connected layers of the pretraining network and reduce model complexity, Tucker tensor decomposition is used to extract features from the fused feature map, which greatly reduces training time. In experiments on images collected on the Lanxin railway line, the proposed multiview Faster R-CNN based on tensor decomposition showed a lower miss probability and higher detection accuracy for the three types of faults. Compared with the object-detection methods YOLOv3, SSD, and the original Faster R-CNN, the average miss probability of the improved Faster R-CNN model in this paper is decreased by 37.83%, 51.27%, and 43.79%, respectively, and the average detection accuracy is increased by 3.6%, 9.75%, and 5.9%, respectively.
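A Tucker decomposition of a feature map can be computed with a truncated higher-order SVD; a minimal numpy sketch follows. The ranks and the way the paper plugs the decomposition into Faster R-CNN are not reproduced here; this only shows the decomposition itself.

```python
import numpy as np

def mode_product(t, m, mode):
    """Multiply tensor t by matrix m along the given mode."""
    t = np.moveaxis(t, mode, 0)
    out = (m @ t.reshape(t.shape[0], -1)).reshape((m.shape[0],) + t.shape[1:])
    return np.moveaxis(out, 0, mode)

def tucker_hosvd(tensor, ranks):
    """Truncated HOSVD: a small core tensor plus one factor matrix per
    mode, compressing e.g. an H x W x C fused feature map."""
    factors = []
    for mode, r in enumerate(ranks):
        # Unfold along `mode` and keep the top-r left singular vectors.
        unfolded = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolded, full_matrices=False)
        factors.append(U[:, :r])
    core = tensor
    for mode, U in enumerate(factors):
        core = mode_product(core, U.T, mode)  # project onto each mode's basis
    return core, factors
```

The core tensor is much smaller than the original feature map, which is the source of the training-time reduction the abstract reports.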

