Disentangling Deep Network for Reconstructing 3D Object Shapes from Single 2D Images

2021 ◽  
pp. 153-166
Author(s):  
Yang Yang ◽  
Junwei Han ◽  
Dingwen Zhang ◽  
De Cheng

Author(s):  
Wanqing Zhao ◽  
Shaobo Zhang ◽  
Ziyu Guan ◽  
Wei Zhao ◽  
Jinye Peng ◽  
...  
Author(s):  
Junwei Han ◽  
Yang Yang ◽  
Dingwen Zhang ◽  
Dong Huang ◽  
Dong Xu ◽  
...  

1992 ◽  
Vol 25 (8) ◽  
pp. 771-786 ◽  
Author(s):  
D. Daniel Sheu ◽  
Alan H. Bond
Author(s):  
Mohamed Tahoun ◽  
Carlos M. Mateo ◽  
Juan-Antonio Corrales-Ramon ◽  
Omar Tahri ◽  
Youcef Mezouar ◽  
...  

2010 ◽  
Vol 8 (6) ◽  
pp. 514-514
Author(s):  
W. Hayward ◽  
A. Pasqualotto

Author(s):  
T. Peters ◽  
C. Brenner ◽  
M. Song

Abstract. The goal of this paper is to use transfer learning for semi-supervised semantic segmentation in 2D images: given a pretrained deep convolutional neural network (DCNN), our aim is to adapt it to a new camera-sensor system by enforcing predictions to be consistent for the same object in space. This is enabled by projecting 3D object points into multi-view 2D images. Since every 3D object point is usually mapped to several 2D images, each of which undergoes pixelwise classification with the pretrained DCNN, we obtain multiple predictions (labels) for the same object point. This makes it possible to detect and correct outlier predictions. Ultimately, we retrain the DCNN on the corrected dataset in order to adapt the network to the new input data. We demonstrate the effectiveness of our approach on a mobile mapping dataset containing over 10,000 images and more than 1 billion 3D points. Moreover, we manually annotated a subset of the mobile mapping images and show that our approach raises the mean intersection over union (mIoU) of Deeplabv3+ by approximately 10%.
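The abstract does not specify how outlier predictions are detected and corrected; a minimal sketch, assuming a simple majority vote over the per-view labels of each 3D point (the function name, input layout, and `min_views` threshold are illustrative assumptions, not the authors' method):

```python
from collections import Counter

def correct_multiview_labels(point_predictions, min_views=3):
    """Assign each 3D point the majority label across its 2D views.

    point_predictions: dict mapping point_id -> list of (view_id, pixel, label)
        tuples, one per 2D image the point projects into.
    Returns a dict mapping point_id -> corrected label, or None when the
    point is observed in fewer than `min_views` views (too few to vote).
    """
    corrected = {}
    for pid, preds in point_predictions.items():
        labels = [label for _, _, label in preds]
        if len(labels) < min_views:
            # Not enough independent observations to overrule any prediction.
            corrected[pid] = None
            continue
        # The most common label across views; minority labels are treated
        # as outliers and replaced by it.
        majority, _ = Counter(labels).most_common(1)[0]
        corrected[pid] = majority
    return corrected

# Hypothetical usage: point 1 is seen in three views, one of which disagrees.
preds = {
    1: [("view_a", (10, 12), "road"),
        ("view_b", (55, 40), "road"),
        ("view_c", (7, 99), "car")],   # outlier, outvoted
    2: [("view_a", (3, 3), "tree")],   # single view: left unresolved
}
print(correct_multiview_labels(preds, min_views=2))
```

The corrected per-point labels would then be projected back into the images to build the retraining set, as the abstract describes.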


1998 ◽  
Author(s):  
Patrick S. P. Wang