Human Tracking in Top-view Fisheye Images with Color Histograms via Deep Learning Detection

Author(s):  
Olfa Haggui ◽  
Marina Vert ◽  
Kieran McNamara ◽  
Bastien Brieussel ◽  
Baptiste Magnier
Author(s):  
Jacopo Pegoraro ◽  
Domenico Solimini ◽  
Federico Matteo ◽  
Enver Bashirov ◽  
Francesca Meneghello ◽  
...  

2021 ◽  
Vol 16 (4) ◽  
pp. 336-344
Author(s):  
Kyungseok Oh ◽  
Sunghyun Kim ◽  
Jinseop Kim ◽  
Seunghwan Lee

2020 ◽  
Vol 10 (13) ◽  
pp. 4486 ◽  
Author(s):  
Yongbeom Lee ◽  
Seongkeun Park

In this paper, we propose a deep learning-based perception method for autonomous driving systems using Light Detection and Ranging (LiDAR) point cloud data, called the simultaneous segmentation and detection network (SSADNet). SSADNet can recognize both drivable areas and obstacles, which is necessary for autonomous driving. Unlike previous methods, in which separate networks were needed for segmentation and detection, SSADNet performs segmentation and detection simultaneously within a single neural network. The proposed method uses point cloud data obtained from a 3D LiDAR to generate a top-view image, used as the network input, consisting of three channels: distance, height, and reflection intensity. The proposed network comprises a branch for segmentation, a branch for detection, and a bridge connecting the two parts. The KITTI dataset, which is often used for experiments on autonomous driving, was used for training. The experimental results show that segmentation and detection of drivable areas and vehicles can be performed simultaneously at a fast inference speed, which is appropriate for autonomous driving systems.
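The three-channel top-view encoding described in the abstract can be sketched as follows. This is a minimal illustration, not the exact SSADNet preprocessing: the grid extents, resolution, and the choice of per-cell statistics (distance and intensity taken from the highest point in each cell) are assumptions.

```python
import numpy as np

def pointcloud_to_topview(points, intensity, x_range=(0.0, 40.0),
                          y_range=(-20.0, 20.0), res=0.1):
    """Project a 3D LiDAR point cloud onto a top-view (bird's-eye-view) grid.

    points    : (N, 3) array of x (forward), y (left), z (up) coordinates.
    intensity : (N,) array of reflection intensities.
    Returns an (H, W, 3) float32 image whose channels hold, per cell,
    the horizontal distance to the sensor, the maximum height, and the
    reflection intensity of the highest point in that cell.
    Channel layout and cell statistics are illustrative assumptions.
    """
    # Keep only points inside the chosen area around the sensor.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts, inten = points[mask], intensity[mask]

    h = int((x_range[1] - x_range[0]) / res)
    w = int((y_range[1] - y_range[0]) / res)

    # Map metric coordinates to pixel indices (forward = up in the image).
    rows = np.clip(((x_range[1] - pts[:, 0]) / res).astype(int), 0, h - 1)
    cols = np.clip(((pts[:, 1] - y_range[0]) / res).astype(int), 0, w - 1)

    img = np.zeros((h, w, 3), dtype=np.float32)

    # Write points in ascending height so the highest point in each
    # cell determines the final channel values.
    order = np.argsort(pts[:, 2])
    r, c = rows[order], cols[order]
    img[r, c, 0] = np.linalg.norm(pts[order, :2], axis=1)  # distance
    img[r, c, 1] = pts[order, 2]                           # height
    img[r, c, 2] = inten[order]                            # intensity
    return img
```

The resulting (H, W, 3) image can then be fed to a standard 2D convolutional backbone, which is what makes a single shared network for segmentation and detection practical.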


2014 ◽  
Vol 11 (4) ◽  
pp. 769-784 ◽  
Author(s):  
Cyrille Migniot ◽  
Fakhreddine Ababsa

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 136361-136373
Author(s):  
Imran Ahmed ◽  
Misbah Ahmad ◽  
Fakhri Alam Khan ◽  
Muhammad Asif

2020 ◽  
Vol 34 (07) ◽  
pp. 10917-10924 ◽  
Author(s):  
Ruize Han ◽  
Wei Feng ◽  
Jiewen Zhao ◽  
Zicheng Niu ◽  
Yujun Zhang ◽  
...  

The global trajectories of targets on the ground can be well captured from a top view at high altitude, e.g., by a drone-mounted camera, while their local detailed appearances are better recorded from horizontal views, e.g., by a helmet camera worn by a person. This paper studies a new problem: multiple human tracking from a pair of top- and horizontal-view videos taken at the same time. Our goal is to track the humans in both views and to identify the same person across the two complementary views frame by frame, which is very challenging due to the very large difference between the fields of view. In this paper, we model the data similarity within each view using appearance and motion reasoning, and across views using appearance and spatial reasoning. Combining them, we formulate the proposed multiple human tracking as a joint optimization problem, which can be solved by constrained integer programming. We collect a new dataset consisting of top- and horizontal-view video pairs for performance evaluation, and the experimental results show the effectiveness of the proposed method.
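The cross-view association step can be sketched as a per-frame linear assignment that combines appearance and spatial similarity. This is a simplification: the paper solves a joint constrained integer program over tracks in both views, whereas the sketch below matches detections in a single frame. The weights, the cosine-similarity appearance cost, and the gating threshold `max_cost` are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_across_views(feat_top, feat_hor, pos_top, pos_hor,
                       w_app=0.5, w_spa=0.5, max_cost=0.8):
    """Associate detections between a top view and a horizontal view.

    feat_top : (M, D) appearance features of top-view detections.
    feat_hor : (N, D) appearance features of horizontal-view detections.
    pos_top, pos_hor : (M, 2), (N, 2) detection positions projected to a
        common ground plane (the projection is assumed to be given).
    Returns a list of (top_idx, hor_idx) matched pairs.
    """
    # Appearance cost: 1 - cosine similarity of L2-normalised features.
    ft = feat_top / np.linalg.norm(feat_top, axis=1, keepdims=True)
    fh = feat_hor / np.linalg.norm(feat_hor, axis=1, keepdims=True)
    app_cost = 1.0 - ft @ fh.T                        # (M, N)

    # Spatial cost: normalised ground-plane distance between detections.
    diff = pos_top[:, None, :] - pos_hor[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    spa_cost = dist / (dist.max() + 1e-9)

    cost = w_app * app_cost + w_spa * spa_cost
    rows, cols = linear_sum_assignment(cost)
    # Gate out implausible pairs instead of forcing a complete matching.
    return [(int(r), int(c)) for r, c in zip(rows, cols)
            if cost[r, c] <= max_cost]
```

Replacing the Hungarian assignment with a constrained integer program, as the paper does, additionally allows temporal consistency constraints to be imposed across frames rather than matching each frame independently.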
