Detail-Preserving Shape Unfolding

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1187
Author(s):  
Bin Liu ◽  
Weiming Wang ◽  
Jun Zhou ◽  
Bo Li ◽  
Xiuping Liu

Canonical extrinsic representations for non-rigid shapes with different poses are preferable in many computer graphics applications, such as shape correspondence and retrieval. The main reason is that they provide a pose-invariant signature for these tasks, which significantly reduces the difficulty caused by pose variation. Existing methods based on multidimensional scaling (MDS) often introduce significant geometric distortion. In this paper, we present a novel shape unfolding algorithm that deforms any given 3D shape into a canonical pose invariant to non-rigid transformations. The proposed method effectively preserves the local structure of a given 3D model by regularizing a local rigid-transform energy based on shape deformation techniques, and greatly reduces geometric distortion. Our algorithm is quite simple and only needs to solve two linear systems during alternating iterations. Its computational efficiency can be improved with parallel computation, and its robustness is guaranteed with a cascade strategy. Experimental results demonstrate the improved efficacy of our algorithm compared with state-of-the-art methods for 3D shape unfolding.
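As a point of reference, the MDS baseline that such unfolding methods improve on can be sketched with classical multidimensional scaling: given a matrix of pairwise (e.g. geodesic) distances, it recovers a canonical embedding. This is an illustrative numpy sketch, not the paper's algorithm; the function name `classical_mds` and the toy distance matrix are ours.

```python
import numpy as np

def classical_mds(D, dim=3):
    """Embed a pairwise (e.g. geodesic) distance matrix D into `dim`
    dimensions via classical multidimensional scaling."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]       # keep the `dim` largest
    L = np.sqrt(np.clip(w[idx], 0, None))
    return V[:, idx] * L                  # (n, dim) canonical embedding

# Toy example: 4 points on a line, embedded back into 1D
pts = np.array([0.0, 1.0, 2.0, 4.0])
D = np.abs(pts[:, None] - pts[None, :])
X = classical_mds(D, dim=1)
# the recovered pairwise distances match the input (up to sign/order)
assert np.allclose(np.abs(X[:, 0][:, None] - X[:, 0][None, :]), D)
```

For an intrinsically near-isometric shape, feeding geodesic distances into such an embedding flattens pose away, which is exactly where the geometric distortions the paper targets come from.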

Author(s):  
Yutong Feng ◽  
Yifan Feng ◽  
Haoxuan You ◽  
Xibin Zhao ◽  
Yue Gao

Mesh is an important and powerful type of data for 3D shapes and is widely studied in computer vision and computer graphics. For the task of 3D shape representation, extensive research has concentrated on representing 3D shapes with volumetric grids, multiple views, and point clouds. However, little recent effort has gone into using mesh data, due to its complexity and irregularity. In this paper, we propose a mesh neural network, named MeshNet, to learn 3D shape representations from mesh data. In this method, face units and feature splitting are introduced, and a general architecture with effective building blocks is proposed. In this way, MeshNet is able to handle the complexity and irregularity of mesh data and represent 3D shapes well. We have applied the proposed MeshNet method to 3D shape classification and retrieval. Experimental results and comparisons with state-of-the-art methods demonstrate that MeshNet achieves satisfying 3D shape classification and retrieval performance, which indicates the effectiveness of the proposed method for 3D shape representation.
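The face-unit idea can be illustrated by computing per-face input features from raw vertices and faces. In the sketch below, the hypothetical helper `face_unit_features` assumes the spatial part is the face center and the structural part is the unit normal plus the corner vectors; MeshNet's exact feature layout may differ.

```python
import numpy as np

def face_unit_features(verts, faces):
    """Per-face features in the spirit of MeshNet's face units:
    a spatial part (face center) and a structural part (unit normal
    plus corner vectors from the center). Illustrative sketch only."""
    tri = verts[faces]                   # (F, 3, 3) corner coordinates
    center = tri.mean(axis=1)            # (F, 3) spatial feature
    corners = tri - center[:, None, :]   # (F, 3, 3) corner vectors
    n = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    normal = n / np.linalg.norm(n, axis=1, keepdims=True)
    return center, normal, corners.reshape(len(faces), 9)

# Single unit triangle in the z = 0 plane
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
faces = np.array([[0, 1, 2]])
c, nrm, crn = face_unit_features(verts, faces)
assert np.allclose(nrm, [[0., 0., 1.]])  # normal points along +z
```

Splitting the spatial and structural parts into separate branches is what lets a network treat position and local geometry with different, appropriately-sized blocks.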


Author(s):  
Jianwen Jiang ◽  
Di Bao ◽  
Ziqiang Chen ◽  
Xibin Zhao ◽  
Yue Gao

3D shape retrieval has attracted much attention and has recently become a hot topic in computer vision. With the development of deep learning, 3D shape retrieval has made great progress, and many view-based methods have been introduced in recent years. However, how to better represent 3D shapes is still a challenging problem, and the intrinsic hierarchical associations among views have not been well utilized. To tackle these problems, we propose a multi-loop-view convolutional neural network (MLVCNN) framework for 3D shape retrieval. In this method, multiple groups of views are first extracted from different loop directions. Given these loop views, the proposed MLVCNN framework introduces a hierarchical view-loop-shape architecture, i.e., the view level, the loop level, and the shape level, to represent 3D shapes at different scales. At the view level, a convolutional neural network is first trained to extract view features. Then, the proposed Loop Normalization and an LSTM are applied to each loop of views to generate loop-level features, which consider the intrinsic associations of the different views in the same loop. Finally, all loop-level descriptors are combined into a shape-level descriptor for 3D shape representation, which is used for retrieval. The proposed method has been evaluated on the public 3D shape benchmark ModelNet40. Experiments and comparisons with state-of-the-art methods show that MLVCNN achieves significant performance improvements on 3D shape retrieval, outperforming the state of the art by 4.84% mAP. We have also evaluated the proposed method on 3D shape classification, where MLVCNN likewise achieves superior performance compared with recent methods.
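The view-loop-shape hierarchy can be sketched as a three-stage aggregation. In the sketch below, Loop Normalization is approximated by per-loop standardization and the LSTM is replaced by mean pooling; both simplifications are ours, for illustration of the data flow only.

```python
import numpy as np

def shape_descriptor(view_feats):
    """Hierarchical view -> loop -> shape aggregation in the spirit of
    MLVCNN. `view_feats` has shape (loops, views_per_loop, dim).
    Loop Normalization is approximated by per-loop standardization and
    the LSTM by mean pooling -- illustrative assumptions, not the
    paper's exact blocks."""
    mu = view_feats.mean(axis=(1, 2), keepdims=True)
    sd = view_feats.std(axis=(1, 2), keepdims=True) + 1e-8
    normed = (view_feats - mu) / sd    # loop-level normalization
    loop_desc = normed.mean(axis=1)    # (loops, dim) loop descriptors
    return loop_desc.max(axis=0)       # (dim,) shape-level descriptor

# 3 loop directions x 12 views per loop x 128-dim CNN view features
feats = np.random.default_rng(0).normal(size=(3, 12, 128))
d = shape_descriptor(feats)
assert d.shape == (128,)
```

The point of the hierarchy is that views within one camera loop are ordered and correlated, so they are aggregated first, before loop descriptors from different directions are fused.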


2014 ◽  
Vol 1049-1050 ◽  
pp. 1417-1420
Author(s):  
Hui Jia ◽  
Guo Hua Geng ◽  
Jian Gang Zhang

3D model segmentation is a new research focus in computer graphics. The algorithm in this paper performs consistent segmentation of a group of 3D models with similar shapes. A volume-based shape function called the shape diameter function (SDF) is used to represent the characteristics of each model. A Gaussian mixture model (GMM) fits k Gaussians to the SDF values, and the EM algorithm is used to segment the 3D models consistently. Experimental results show that the algorithm can effectively segment the 3D models consistently.
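The GMM/EM step can be sketched with a minimal 1D EM loop on synthetic SDF values; the helper `em_gmm_1d` and the data below are illustrative, not the paper's implementation.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50):
    """Minimal EM for a 1D Gaussian mixture (a sketch of fitting k
    Gaussians to per-face SDF values). Returns component means and a
    hard label per sample."""
    mu = np.array([x.min(), x.max()], dtype=float)[:k]  # spread-out init
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each Gaussian for each sample
        d = x[:, None] - mu[None, :]
        lik = pi * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        d = x[:, None] - mu[None, :]
        var = (r * d**2).sum(axis=0) / nk + 1e-9
    return mu, r.argmax(axis=1)

# Synthetic SDF values: thin regions (~0.1) vs thick regions (~1.0)
rng = np.random.default_rng(1)
sdf = np.concatenate([rng.normal(0.1, 0.02, 200), rng.normal(1.0, 0.05, 200)])
mu, labels = em_gmm_1d(sdf, k=2)
assert labels[0] != labels[-1]   # thin and thick faces land in different parts
```

Because the SDF is roughly pose-invariant, fitting the same k-component mixture across a whole group of similar models is what makes the resulting segmentations consistent.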


2016 ◽  
Vol 2016 ◽  
pp. 1-9 ◽  
Author(s):  
Hui Zeng ◽  
Xiuqing Wang ◽  
Yu Gu

This paper presents an effective local image region description method, called the CS-LMP (Center-Symmetric Local Multilevel Pattern) descriptor, and its application to image matching. The CS-LMP operator involves no exponential computations, so the descriptor can encode differences of local intensity values using multiple quantization levels without increasing its dimension. Compared with binary/ternary-pattern-based descriptors, the CS-LMP descriptor has better descriptive ability and computational efficiency. Extensive image matching experiments verified the effectiveness of the proposed CS-LMP descriptor compared with other state-of-the-art descriptors.
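The center-symmetric multilevel idea can be sketched as follows: each of the four center-symmetric pixel-pair differences in a 3x3 neighborhood is quantized into multiple levels instead of a single binary bit. The helper `cs_lmp_codes` and the threshold values are hypothetical; the paper's exact quantization scheme may differ.

```python
import numpy as np

def cs_lmp_codes(patch, thresholds=(4, 12)):
    """Center-symmetric multilevel codes for a grayscale patch (a
    simplified sketch of the CS-LMP idea). Each of the 4
    center-symmetric pair differences in a 3x3 neighborhood is
    quantized into 2*len(thresholds)+1 signed levels."""
    p = patch.astype(np.int32)
    # center-symmetric pairs around each interior pixel
    diffs = np.stack([
        p[:-2, :-2] - p[2:, 2:],     # NW - SE
        p[:-2, 1:-1] - p[2:, 1:-1],  # N  - S
        p[:-2, 2:] - p[2:, :-2],     # NE - SW
        p[1:-1, 2:] - p[1:-1, :-2],  # E  - W
    ], axis=-1)
    levels = np.zeros_like(diffs)
    for t in thresholds:             # multilevel quantization
        levels += (diffs > t).astype(np.int32)
        levels -= (diffs < -t).astype(np.int32)
    return levels                    # values in [-len(thresholds), +len(thresholds)]

img = np.arange(25, dtype=np.uint8).reshape(5, 5) * 10  # smooth gradient
codes = cs_lmp_codes(img)
assert codes.shape == (3, 3, 4)
```

Using center-symmetric pairs halves the number of comparisons relative to full LBP, which is why finer (multilevel) quantization fits in the same descriptor dimension.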

