Radar Object Detection Using Data Merging, Enhancement and Fusion

2021 · Author(s): Jun Yu, Xinlong Hao, Xinjian Gao, Qiang Sun, Yuyu Liu, ...
2016 · Vol 138 (6) · pp. 6-8 · Author(s): G.V. Garje, S.V. Khaladkar, A.N. Khengare, J.M. Pawar, M.S. Vidhate

2015 · Vol 23 (3) · pp. 576-592 · Author(s): Daan Van Britsom, Antoon Bronselaer, Guy De Tre

Author(s): B. Borgmann, M. Hebel, M. Arens, U. Stilla

This paper focuses on processing data from multiple LiDAR (light detection and ranging) sensors for the purpose of detecting persons in those data. Many LiDAR sensors (e.g., laser scanners) use a rotating scan head, which makes it difficult to properly time-synchronize several such sensors. Improper synchronization between LiDAR sensors causes temporal distortion effects when their data are merged directly. Merging the data is nevertheless desirable, since it increases the data density and enlarges the perceived area. For person and object detection tasks, we present an alternative that circumvents this problem by merging the multi-sensor data in the voting space of a method based on Implicit Shape Models (ISM). Because our approach already assumes uncertainties in the voting space, it is robust against the additional uncertainties induced by temporal distortions. Unlike many existing approaches to object detection in 3D data, ours does not rely on a segmentation step in the data preprocessing. We show that merging multi-sensor information in voting space has advantages over direct data merging, especially in situations with strong distortion effects.
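The vote-space merging idea can be illustrated with a short sketch: each sensor casts weighted object-centre votes into a shared grid, and person hypotheses are read off the merged maxima. This is a minimal sketch under assumed names and parameters (GRID_RES, cast_votes, detect_persons, the detection threshold), not the authors' implementation.

```python
import numpy as np

GRID_RES = 0.2  # assumed voting-grid cell size in metres

def cast_votes(votes_xy, weights, grid, origin):
    """Accumulate weighted object-centre votes (N x 2 array) into a 2D grid."""
    idx = np.floor((votes_xy - origin) / GRID_RES).astype(int)
    for (ix, iy), w in zip(idx, weights):
        if 0 <= ix < grid.shape[0] and 0 <= iy < grid.shape[1]:
            grid[ix, iy] += w

def detect_persons(per_sensor_votes, grid_shape=(200, 200), threshold=5.0):
    """Merge votes from all sensors in one voting space, then pick maxima.

    Temporal distortion only shifts a sensor's votes slightly; because the
    voting space already accumulates votes with spatial uncertainty, the
    merged maxima stay more stable than detections made on directly merged
    point clouds.
    """
    origin = np.zeros(2)
    grid = np.zeros(grid_shape)
    for votes_xy, weights in per_sensor_votes:
        cast_votes(np.asarray(votes_xy, dtype=float), weights, grid, origin)
    peaks = np.argwhere(grid >= threshold)      # cells with enough support
    return origin + (peaks + 0.5) * GRID_RES    # cell centres in metres
```

Each element of per_sensor_votes holds one sensor's votes, so adding a sensor only adds weight to the shared grid; no point-level alignment between the sensors' raw scans is needed.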


2017 · Vol 2017 (10) · pp. 27-36 · Author(s): Daniel Mas Montserrat, Qian Lin, Jan Allebach, Edward J. Delp

2021 · Vol 1827 (1) · pp. 012178 · Author(s): Zhang Ruiqiang, Liu Jiajia, Zeng Yu, Pan Kejia, Huang Lin

Author(s): Gerardo Mendoza-Azpur, Heydi Cornejo, Milton Villanueva, Renato Alva, André Barbisan de Souza

Author(s): Jin-Seob Lee, Ji-Wook Kwon, Dongkyoung Chwa, Suk-Kyo Hong

2018 · Vol 24 (7) · pp. 568-580 · Author(s): Jun Yang

Vision-based action recognition of construction workers has attracted increasing attention for its diverse applications. Although state-of-the-art performance has been achieved using spatial-temporal features in previous studies, considerable challenges remain in the context of cluttered and dynamic construction sites. Since workers' actions are closely related to various construction entities, this paper proposes a novel system that enhances action recognition using semantic information. A data-driven scene parsing method, named label transfer, is adopted to recognize construction entities across the entire scene. A probabilistic model of actions with context is established. Worker actions are first classified using dense trajectories and then refined by construction object recognition. Experimental results on a comprehensive dataset show that the proposed system outperforms the baseline algorithm by 10.5%. The paper provides a new solution for integrating semantic information globally, rather than through conventional object detection, which can only depict local context. The proposed system is especially suitable for construction sites, where semantic information is rich, from local objects to global surroundings. Compared with other methods that use object detection to integrate context information, it is easy to implement, requires no tedious training or parameter tuning, and scales with the number of recognizable objects.
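The refinement step lends itself to a compact illustration: action scores from the trajectory classifier are re-weighted by a context prior conditioned on the recognized construction entities, then renormalized. The labels, the prior values, and the naive-Bayes-style combination below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

ACTIONS = ["hammering", "sawing", "idle"]

# assumed co-occurrence priors P(action | object present in the scene)
CONTEXT_PRIOR = {
    "hammer": np.array([0.70, 0.10, 0.20]),
    "saw":    np.array([0.10, 0.70, 0.20]),
}

def rescore(action_scores, scene_objects):
    """Re-weight action scores by the context of recognized scene objects."""
    posterior = np.asarray(action_scores, dtype=float)
    for obj in scene_objects:
        prior = CONTEXT_PRIOR.get(obj)
        if prior is not None:
            posterior = posterior * prior  # assumed independence of cues
    return posterior / posterior.sum()

# an ambiguous clip becomes confidently "hammering" once a hammer is parsed
scores = rescore([0.40, 0.35, 0.25], ["hammer"])
print(ACTIONS[int(np.argmax(scores))])
```

Because the context enters only as a multiplicative prior, supporting a new recognizable object means adding one row to the prior table rather than retraining the action classifier.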

