Growing Self-Organizing Surface Map: Learning a Surface Topology from a Point Cloud

2010 ◽  
Vol 22 (3) ◽  
pp. 689-729 ◽  
Author(s):  
Vilson Luiz Dalle Mole ◽  
Aluizio Fausto Ribeiro Araújo

The growing self-organizing surface map (GSOSM) is a novel map model that learns a folded surface immersed in a 3D space. Starting from a dense point cloud, the surface is reconstructed through an incremental mesh composed of approximately equilateral triangles. Unlike other models such as neural meshes (NM), the GSOSM builds a surface topology while accepting any sequence of sample presentation. The GSOSM model introduces a novel connection learning rule called competitive connection Hebbian learning (CCHL), which produces a complete triangulation. GSOSM reconstructions are accurate and often free of false or overlapping faces. This letter presents and discusses the GSOSM model. It also presents and analyzes a set of results and compares GSOSM with some other models.
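The CCHL rule extends classic competitive Hebbian learning (Martinetz), in which each sample creates an edge between its two nearest units. The abstract does not give CCHL's details, so the sketch below shows only the underlying competitive connection idea; the unit positions and sampling pattern are illustrative.

```python
import numpy as np

def competitive_hebbian_step(units, edges, sample):
    """One step of classic competitive Hebbian learning: connect the two
    units closest to the sample. GSOSM's CCHL refines this rule so that
    the resulting edges form a complete triangulation; this sketch shows
    only the basic connection mechanism."""
    d = np.linalg.norm(units - sample, axis=1)       # distance to every unit
    first, second = np.argsort(d)[:2]                # best and second-best match
    edges.add(frozenset((int(first), int(second))))  # create/refresh the edge
    return int(first), int(second)

# toy example: four units on a unit square, samples along the bottom edge
units = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
edges = set()
for x in np.linspace(0.1, 0.9, 9):
    competitive_hebbian_step(units, edges, np.array([x, 0.05]))
```

Because every sample lies near the bottom edge, all connections form between the two bottom units; a full GSOSM run would also grow new units to keep the triangles approximately equilateral.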

2014 ◽  
Vol 536-537 ◽  
pp. 213-217
Author(s):  
Meng Qiang Zhu ◽  
Jie Yang

This paper takes the following steps to address the problem of 3D reconstruction. The cameras are calibrated with a chessboard pattern: several images of the board in different poses are captured, corner detection extracts the corner coordinates, and these coordinates drive the calibration. The calibration result is then used to correct lens distortion in the images. Next, the left and right images are matched to locate the imaging position of each object surface point, so that object depth can be computed by triangulation. By inverting the projection mapping, the depth and disparity information is projected back into 3D space, yielding a dense point cloud ready for 3D reconstruction.
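The triangulation and inverse-projection steps reduce, for rectified stereo, to the standard relations Z = f·B/d and the pinhole back-projection. A minimal sketch (the numeric values are illustrative, not from the paper):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified-stereo triangulation: Z = f * B / d, where f is the focal
    length in pixels, B the camera baseline in metres, and d the disparity
    (x_left - x_right) of the matched point in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

def backproject(u, v, Z, fx, fy, cx, cy):
    """Inverse of the pinhole projection: pixel (u, v) at depth Z maps to
    the 3D point (X, Y, Z) in the camera frame."""
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return X, Y, Z

# illustrative numbers: f = 700 px, baseline = 0.1 m, disparity = 10 px
Z = depth_from_disparity(700.0, 0.1, 10.0)   # 7.0 m
X, Y, _ = backproject(420.0, 240.0, Z, 700.0, 700.0, 350.0, 240.0)
```

Applying `backproject` to every matched pixel produces the dense point cloud the abstract describes.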


2004 ◽  
Vol 16 (3) ◽  
pp. 535-561 ◽  
Author(s):  
Reiner Schulz ◽  
James A. Reggia

We examine the extent to which modified Kohonen self-organizing maps (SOMs) can learn unique representations of temporal sequences while still supporting map formation. Two biologically inspired extensions are made to traditional SOMs: selection of multiple simultaneous rather than single “winners” and the use of local intramap connections that are trained according to a temporally asymmetric Hebbian learning rule. The extended SOM is then trained with variable-length temporal sequences that are composed of phoneme feature vectors, with each sequence corresponding to the phonetic transcription of a noun. The model transforms each input sequence into a spatial representation (final activation pattern on the map). Training improves this transformation by, for example, increasing the uniqueness of the spatial representations of distinct sequences, while still retaining map formation based on input patterns. The closeness of the spatial representations of two sequences is found to correlate significantly with the sequences' similarity. The extended model presented here raises the possibility that SOMs may ultimately prove useful as visualization tools for temporal sequences and as preprocessors for sequence pattern recognition systems.
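The two extensions, multiple simultaneous winners and temporally asymmetric Hebbian training of intramap connections, can be sketched as follows. The map size, number of winners, and learning rate are placeholders, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, dim, k = 16, 4, 3              # map size, input dim, winners per step
W = rng.normal(size=(n_units, dim))     # feedforward SOM weights
C = np.zeros((n_units, n_units))        # lateral intramap connections

def step(x, prev_winners, lr=0.1):
    """Select the k closest units as simultaneous winners, move their
    weights toward the input (standard SOM update), then strengthen
    lateral links from the previous step's winners to the current ones.
    The update is temporally asymmetric: only pre-before-post links grow."""
    d = np.linalg.norm(W - x, axis=1)
    winners = np.argsort(d)[:k]
    W[winners] += lr * (x - W[winners])
    for pre in prev_winners:
        C[pre, winners] += lr           # pre -> post only, never post -> pre
    return winners

prev = []
for x in rng.normal(size=(5, dim)):     # a toy 5-element "sequence"
    prev = step(x, prev)
```

After the sequence, the pattern of strengthened entries in `C` (and the final activation on the map) serves as the spatial representation of that sequence.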


Author(s):  
Louis Wiesmann ◽  
Andres Milioto ◽  
Xieyuanli Chen ◽  
Cyrill Stachniss ◽  
Jens Behley

1995 ◽  
Vol 34 (35) ◽  
pp. 8167 ◽  
Author(s):  
K. Heggarty ◽  
J. Duvillier ◽  
E. Carpio Pérez ◽  
J. L. de Bougrenet de la Tocnaye

2021 ◽  
pp. 107057
Author(s):  
Ping Wang ◽  
Li Liu ◽  
Huaxiang Zhang ◽  
Tianshi Wang
