Composition Based Semantic Scene Retrieval for Ancient Murals
Qi Wang, Dongming Lu, Hongxin Zhang
Aloisio Dourado, Teofilo E. De Campos, Hansung Kim, Adrian Hilton

Hannah M. Dee, Anthony G. Cohn, David C. Hogg (2012), Vol 116 (3), pp. 446-456

Hao Zou, Xuemeng Yang, Tianxin Huang, Chujuan Zhang, Yong Liu, et al. (2021)

Songhao Zhu, Zhiwei Liang (2010), Vol 10 (1), pp. 98-105

Fei Wang, Yan Zhuang, Hong Zhang, Hong Gu (2020), pp. 1-13

Siqi Li, Changqing Zou, Yipeng Li, Xibin Zhao, Yue Gao (2020), Vol 34 (07), pp. 11402-11409

This paper presents an end-to-end 3D convolutional network, the attention-based multi-modal fusion network (AMFNet), for the semantic scene completion (SSC) task of inferring the occupancy and semantic labels of a volumetric 3D scene from a single-view RGB-D image. Unlike previous methods that use only the semantic features extracted from RGB-D images, AMFNet learns to perform 3D scene completion and semantic segmentation simultaneously by leveraging both the experience of inferring 2D semantic segmentation from RGB-D images and the reliable depth cues along the spatial dimension. This is achieved with a multi-modal fusion architecture boosted by 2D semantic segmentation and a 3D semantic completion network empowered by residual attention blocks. We validate our method on the synthetic SUNCG-RGBD dataset and the real NYUv2 dataset, and the results show gains of 2.5% and 2.6%, respectively, over the state-of-the-art method.
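The abstract describes the architecture only at a high level. The Python (PyTorch) sketch below is a minimal illustration, under assumptions, of the two ingredients it names: a residual attention block for the 3D completion branch and a simple fusion of 2D-branch features with depth-derived 3D features. All module names, channel counts, grid sizes, and the fusion scheme are illustrative choices, not the authors' published implementation.

# Minimal sketch of a residual attention block and a fusion head in the
# spirit of the AMFNet description. Everything here (names, shapes,
# channel counts, the squeeze-and-excitation attention) is an assumption
# made for illustration, not the paper's actual code.
import torch
import torch.nn as nn


class ResidualAttentionBlock3D(nn.Module):
    """3D residual block gated by a channel-attention branch (assumed design)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
        )
        # Squeeze-and-excitation style channel attention.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.body(x)
        out = out * self.attention(out)  # re-weight channels
        return torch.relu(out + x)       # residual connection


class ToyFusionHead(nn.Module):
    """Fuses 2D-branch features (already lifted to the voxel grid) with
    depth-derived 3D features, then predicts per-voxel semantic labels."""

    def __init__(self, channels: int = 32, num_classes: int = 12, num_blocks: int = 2):
        super().__init__()
        self.fuse = nn.Conv3d(2 * channels, channels, kernel_size=1)
        self.blocks = nn.Sequential(
            *[ResidualAttentionBlock3D(channels) for _ in range(num_blocks)]
        )
        self.classifier = nn.Conv3d(channels, num_classes, kernel_size=1)

    def forward(self, feat_2d: torch.Tensor, feat_depth: torch.Tensor) -> torch.Tensor:
        x = self.fuse(torch.cat([feat_2d, feat_depth], dim=1))
        x = self.blocks(x)
        return self.classifier(x)  # (B, num_classes, D, H, W) voxel logits


if __name__ == "__main__":
    head = ToyFusionHead()
    feat_2d = torch.randn(1, 32, 15, 9, 15)     # features from the RGB/segmentation branch
    feat_depth = torch.randn(1, 32, 15, 9, 15)  # features from the depth branch
    print(head(feat_2d, feat_depth).shape)      # torch.Size([1, 12, 15, 9, 15])

In the paper itself the 2D segmentation branch, the projection of image features into the voxel grid, and the 3D completion decoder are considerably more involved; the sketch is only meant to make the fusion-plus-residual-attention idea concrete.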

