Semantic Enabled 3D Object Retrieval

2010 ◽  
Vol 159 ◽  
pp. 128-131
Author(s):  
Jiang Zhou ◽  
Xin Yu Ma

In traditional 3D shape retrieval systems, objects are retrieved mainly by computing low-level features used to detect so-called regions of interest. This paper focuses on retrieving objects in a machine-understandable and intelligent manner. We explore different kinds of semantic descriptions for the retrieval of 3D shapes. Based on ontology technology, we decompose 3D objects into meaningful parts semi-automatically. Each part can be regarded as a 3D object in its own right and further semantically annotated according to an ontology vocabulary for Chinese cultural relics. Three kinds of semantic models, namely description semantics of domain knowledge, spatial semantics, and scenario semantics, are presented for describing semantic annotations from different viewpoints. Together, these annotations capture complete semantic descriptions of 3D shapes.
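The three annotation viewpoints named above could be organized as per-part records on a decomposed object. The sketch below is purely illustrative; the class and field names are assumptions, not the authors' actual ontology schema.

```python
from dataclasses import dataclass, field

@dataclass
class PartAnnotation:
    # Hypothetical container for one decomposed part, holding the three
    # kinds of semantics the abstract names (illustrative field names).
    part_id: str
    description: dict = field(default_factory=dict)  # domain-knowledge terms
    spatial: dict = field(default_factory=dict)      # relations to other parts
    scenario: dict = field(default_factory=dict)     # usage/context semantics

@dataclass
class AnnotatedObject:
    object_id: str
    parts: list = field(default_factory=list)

    def vocabulary(self):
        """Collect every ontology term used across all parts."""
        terms = set()
        for p in self.parts:
            terms.update(p.description.values())
            terms.update(p.scenario.values())
        return terms

# Toy example: a two-part vessel annotated part by part.
vase = AnnotatedObject("relic-001", [
    PartAnnotation("neck",
                   description={"type": "neck", "dynasty": "Tang"},
                   spatial={"above": "body"},
                   scenario={"function": "pouring"}),
    PartAnnotation("body",
                   description={"type": "body", "dynasty": "Tang"},
                   spatial={"below": "neck"},
                   scenario={"function": "storage"}),
])
print(sorted(vase.vocabulary()))
```

Keeping each viewpoint in its own slot lets a retrieval query match on any of the three independently, which is the point of describing annotations "from different viewpoints."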

2010 ◽  
Vol 159 ◽  
pp. 124-127
Author(s):  
Jiang Zhou ◽  
Xin Yu Ma

Recently, semantics-based 3D object retrieval has received growing attention because it focuses on retrieving objects in a machine-understandable and intelligent manner. In this paper, we propose an approach for semantics-based annotation of 3D shapes. To enable such annotation, an object segmentation method decomposes 3D objects into meaningful parts semi-automatically. Each part can then be regarded as a 3D object in its own right and semantically annotated according to an ontology vocabulary for Chinese cultural relics. This segmentation and annotation provide the basis for subsequent retrieval of 3D shapes.
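The segment-then-annotate flow described above can be sketched as a two-stage pipeline. Both functions here are stand-ins under stated assumptions: real segmentation operates on mesh geometry, and the vocabulary terms are invented for illustration, not taken from the paper's ontology.

```python
# Illustrative vocabulary; not the paper's actual ontology terms.
CULTURAL_RELIC_VOCAB = {"handle", "spout", "lid", "body", "base"}

def segment(mesh_regions):
    """Stand-in segmentation: each named region becomes a candidate part.
    A real implementation would decompose mesh geometry semi-automatically."""
    return [{"part": name, "label": None} for name in mesh_regions]

def annotate(parts, vocab):
    """Attach an ontology term when a candidate matches the vocabulary;
    unmatched parts are flagged for manual labeling (the 'semi' in
    semi-automatic)."""
    for p in parts:
        p["label"] = p["part"] if p["part"] in vocab else "needs-review"
    return parts

parts = segment(["spout", "body", "ornament"])
print(annotate(parts, CULTURAL_RELIC_VOCAB))
```

The "needs-review" fallback is where a human annotator would step in, which matches the semi-automatic character the abstract describes.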


Author(s):  
Zhengyue Huang ◽  
Zhehui Zhao ◽  
Hengguang Zhou ◽  
Xibin Zhao ◽  
Yue Gao

3D object retrieval is in compelling demand in computer vision, driven by the rapid development of 3D vision technology and the growing range of applications for 3D objects. 3D objects can be described in different ways, such as voxels, point clouds, and multiple views. Among these, the multi-view approaches proposed in recent years show promising results. Most of them require a fixed, predefined camera-position setting that provides a complete and uniform sampling of views during training. However, this causes heavy over-fitting, making the models fail to generalize in free-camera-setting applications, particularly when insufficient views are provided. Experiments show that performance drops drastically as the number of views decreases, hindering these methods in practical applications. In this paper, we investigate the over-fitting issue and remove the constraint on the camera setting. First, two basic feature-augmentation strategies, Dropout and Dropview, are introduced to address over-fitting, and a more precise and efficient method named DropMax is proposed after analyzing the drawbacks of the basic ones. Then, with over-fitting reduced, a camera-constraint-free multi-view convolutional neural network named DeepCCFV is constructed. Extensive experiments on both single-modal and cross-modal cases demonstrate the effectiveness of the proposed method in free camera settings compared with existing state-of-the-art 3D object retrieval methods.
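The view-level augmentation idea can be sketched in miniature: randomly drop whole views before pooling so the model cannot depend on a complete, fixed camera setup. This is a minimal reconstruction of the general Dropview idea, not the authors' DeepCCFV implementation, and the function names are assumptions.

```python
import random

def max_pool(views):
    """Element-wise max over per-view feature vectors (view pooling)."""
    return [max(dims) for dims in zip(*views)]

def dropview(views, drop_prob, rng):
    """Drop each view with probability drop_prob, always keeping at
    least one view so the pooled descriptor stays defined."""
    kept = [v for v in views if rng.random() >= drop_prob]
    return kept if kept else [rng.choice(views)]

# Toy per-view feature vectors (3 views, 3 feature dimensions).
rng = random.Random(0)
view_feats = [[0.2, 0.9, 0.1], [0.8, 0.3, 0.5], [0.4, 0.6, 0.7]]
pooled = max_pool(dropview(view_feats, drop_prob=0.5, rng=rng))
print(len(pooled))  # dimensionality of the pooled descriptor is unchanged
```

Because the pooled descriptor has the same dimensionality regardless of how many views survive, the network is forced to cope with incomplete view sets at training time, which is the stated remedy for the over-fitting the abstract describes.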


2005 ◽  
Vol 41 (4) ◽  
pp. 179 ◽  
Author(s):  
J.-L. Shih ◽  
C.-H. Lee ◽  
J.T. Wang

2014 ◽  
Vol 21 (3) ◽  
pp. 52-57 ◽  
Author(s):  
Yue Gao ◽  
Qionghai Dai

2015 ◽  
Vol 76 (3) ◽  
pp. 4091-4104 ◽  
Author(s):  
Weizhi Nie ◽  
Xixi Li ◽  
Anan Liu ◽  
Yuting Su

Author(s):  
Ilyass Ouazzani Taybi ◽  
Rachid Alaoui ◽  
Fatima Rafii Zakani ◽  
Khadija Arhid ◽  
Mohcine Bouksim ◽  
...  
