Bounding Boxes, Segmentations and Object Coordinates: How Important is Recognition for 3D Scene Flow Estimation in Autonomous Driving Scenarios?

Author(s):  
Aseem Behl ◽  
Omid Hosseini Jafari ◽  
Siva Karthik Mustikovela ◽  
Hassan Abu Alhaija ◽  
Carsten Rother ◽  
...  
2015 ◽  
Vol 115 (1) ◽  
pp. 1-28 ◽  
Author(s):  
Christoph Vogel ◽  
Konrad Schindler ◽  
Stefan Roth

Author(s):  
Guangming Wang ◽  
Chaokang Jiang ◽  
Zehang Shen ◽  
Yanzi Miao ◽  
Hesheng Wang

3D scene flow describes the 3D motion of each point in space and forms a fundamental motion-perception cue for autonomous driving and service robots. Although RGB-D cameras and LiDAR sensors capture discrete 3D points, objects and their motions in the physical world are usually continuous: an object remains consistent with itself as it flows from the current frame to the next. Based on this insight, a Generative Adversarial Network (GAN) is used to learn 3D scene flow in a self-supervised manner, with no need for ground truth. A fake point cloud for the second frame is synthesized from the predicted scene flow and the point cloud of the first frame. The generator and discriminator are trained adversarially: the generator learns to synthesize a fake point cloud that is indistinguishable from the real one, while the discriminator learns to tell the real second-frame point cloud apart from the synthesized one. Experiments on the KITTI scene flow dataset show that our method achieves promising results without ground truth. Much like a human observing a real-world scene, the proposed approach can judge the consistency of the scene at different moments even though the exact flow value of each point is unknown in advance.
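The abstract above describes the adversarial training scheme but gives no implementation detail. Below is a minimal PyTorch-style sketch of the idea only: the network architectures (`FlowGenerator`, `PointDiscriminator`), the toy PointNet-style layers, and the plain binary cross-entropy GAN loss are assumptions introduced here for illustration, not the authors' code.

```python
# Sketch (assumption, not the authors' implementation): self-supervised scene flow
# via adversarial training. The generator predicts a per-point 3D flow that warps
# frame 1 into a "fake" frame 2; the discriminator tries to tell real frame-2
# point clouds from warped ones. No ground-truth flow is used anywhere.
import torch
import torch.nn as nn

class FlowGenerator(nn.Module):
    """Toy per-point flow regressor (stands in for a real scene-flow backbone)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),                 # one 3D flow vector per point
        )

    def forward(self, pc1, pc2):
        # pc1, pc2: (B, N, 3). A real model would mix information across points
        # (e.g. set convolutions); here we only append a crude global frame-2 code.
        ctx = pc2.mean(dim=1, keepdim=True).expand_as(pc1)
        return self.mlp(torch.cat([pc1, ctx], dim=-1))        # (B, N, 3) flow

class PointDiscriminator(nn.Module):
    """Scores how 'real' a point cloud looks (PointNet-style max pooling)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                       nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, 1)

    def forward(self, pc):
        feat = self.point_mlp(pc).max(dim=1).values           # (B, hidden)
        return self.head(feat)                                # (B, 1) realness logit

def train_step(gen, disc, opt_g, opt_d, pc1, pc2):
    bce = nn.functional.binary_cross_entropy_with_logits

    # Discriminator step: real frame-2 cloud vs. warped ("fake") cloud.
    with torch.no_grad():
        fake_pc2 = pc1 + gen(pc1, pc2)            # warp frame 1 with predicted flow
    d_real, d_fake = disc(pc2), disc(fake_pc2)
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: make the warped cloud look real to the discriminator.
    fake_pc2 = pc1 + gen(pc1, pc2)
    g_fake = disc(fake_pc2)
    g_loss = bce(g_fake, torch.ones_like(g_fake))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

A typical use of this sketch would instantiate `FlowGenerator()` and `PointDiscriminator()` with two Adam optimizers and feed mini-batches of consecutive point clouds `pc1`, `pc2` of shape (B, N, 3), e.g. from KITTI; the predicted flow is simply `gen(pc1, pc2)` after training.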


Author(s):  
M. Menze ◽  
C. Heipke ◽  
A. Geiger

Three-dimensional reconstruction of dynamic scenes is an important prerequisite for applications such as autonomous driving. While much progress has been made in recent years, imaging conditions in natural outdoor environments are still very challenging for current reconstruction and recognition methods. In this paper, we propose a novel unified approach that reasons jointly about 3D scene flow as well as the pose, shape and motion of vehicles in the scene. Towards this goal, we incorporate a deformable CAD model into a slanted-plane conditional random field for scene flow estimation and enforce shape consistency between the rendered 3D models and the parameters of all superpixels in the image. The association of superpixels to objects is established by an index variable which implicitly enables model selection. We evaluate our approach on the challenging KITTI scene flow dataset in terms of object and scene flow estimation. Our results provide a proof of concept and demonstrate the usefulness of our method.
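To make the "unified" formulation above more concrete, the following is an illustrative energy of the kind described, written here purely for exposition: the particular potentials, their arguments, and the background label are assumptions, not the paper's exact model.

```latex
% Illustrative slanted-plane CRF energy (an assumption for exposition only, not
% the paper's exact formulation). Each superpixel i carries slanted-plane and
% motion parameters s_i, each object hypothesis o_k carries pose, shape and
% rigid motion, and the index variable k_i associates superpixel i with an
% object (k_i = 0 denoting background), which implicitly performs model selection.
\[
E(\mathbf{s}, \mathbf{o}, \mathbf{k}) =
    \sum_i \varphi_i(s_i)                       % image/stereo data term per superpixel
  + \sum_{i \sim j} \psi_{ij}(s_i, s_j)         % piecewise-planar smoothness between neighbours
  + \sum_i \chi_i\bigl(s_i, o_{k_i}\bigr)       % shape consistency with the rendered CAD model o_{k_i}
\]
```

Minimizing an energy of this form jointly over the superpixel parameters, the object hypotheses, and the association variables couples the per-superpixel scene flow with the selected vehicle models, which is the coupling the abstract refers to.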


Author(s):  
Jaesik Park ◽  
Tae Hyun Oh ◽  
Jiyoung Jung ◽  
Yu-Wing Tai ◽  
In So Kweon

2018 ◽  
Vol 2018 (18) ◽  
pp. 426-1-426-6
Author(s):  
Hiroki Usami ◽  
Hideo Saito ◽  
Jun Kawai ◽  
Noriko Itani

IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 22745-22759
Author(s):  
Cheng Feng ◽  
Long Ma ◽  
Congxuan Zhang ◽  
Zhen Chen ◽  
Liyue Ge ◽  
...  
