EXPLORATORY STUDY OF 3D POINT CLOUD TRIANGULATION FOR SMART CITY MODELLING AND VISUALIZATION

Author(s):  
S. A. M. Ariff ◽  
S. Azri ◽  
U. Ujang ◽  
A. A. M. Nasir ◽  
N. Ahmad Fuad ◽  
...  

Abstract. Current trends in 3D scanning technologies allow us to acquire accurate 3D data of large-scale environments efficiently. Such 3D data are essential when generating 3D models for the visualization of smart cities. Seamless visualization of a 3D model requires large volumes of data to be acquired; however, processing such large datasets is time consuming and demands a suitable hardware specification. In this study, the capability of different hardware configurations in processing large 3D point clouds for mesh generation is investigated. Airborne Light Detection and Ranging (LiDAR) and Mobile Mapping System (MMS) data are used as input and processed with Bentley ContextCapture software. The study area, covering 49 km2, is located in Malaysia, specifically in Wilayah Persekutuan Kuala Lumpur and Selangor. Several analyses were performed to assess the software and hardware specifications based on the generated 3D mesh models. From the findings, we suggest the most suitable hardware specification for 3D mesh model generation.

Author(s):  
A. Symeonidis ◽  
A. Koutsoudis ◽  
G. Ioannakis ◽  
C. Chamzas

3D digitisation has been applied in various application domains. Due to continuously growing interest, commercial and experimental 3D acquisition systems have evolved. Nevertheless, there is no all-in-one solution, so different technologies need to be combined in order to exploit the advantages of each approach. In this paper, we present a solution to a specific problem: combining 3D data resulting from a non-colour laser triangulation scanner and a shape-from-silhouette system. Our approach combines the data of these two 3D acquisition systems in order to produce a hybrid 3D mesh model with the geometric accuracy and detail captured by the laser scanner and the high-resolution textural information of the shape-from-silhouette system. We propose an algorithm based on a virtual photo shooting and an inverse texture map projection phase. We present an example of our algorithm's operation on exchanging the texture maps of a replica artefact that has been digitised by both systems.
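The end goal of the texture exchange, colouring the geometrically accurate laser mesh with the appearance captured by the other system, can be sketched with a naive nearest-neighbour vertex-colour transfer. This is an illustrative stand-in, not the authors' virtual-photo-shooting algorithm, and all names are assumed:

```python
import numpy as np

def transfer_vertex_colors(target_vertices, source_vertices, source_colors):
    """Naive stand-in for the inverse texture projection: each vertex of
    the geometrically accurate mesh takes the colour of the closest
    vertex of the high-resolution textured mesh."""
    # Pairwise distances (T, S) between target and source vertices.
    d = np.linalg.norm(
        target_vertices[:, None, :] - source_vertices[None, :, :], axis=2
    )
    # Pick the colour of the nearest source vertex for each target vertex.
    return source_colors[d.argmin(axis=1)]
```

A real pipeline would project texture maps through calibrated virtual cameras rather than match vertices, but the data flow (geometry from one system, appearance from the other) is the same.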


2020 ◽  
Vol 12 (16) ◽  
pp. 2598
Author(s):  
Simone Teruggi ◽  
Eleonora Grilli ◽  
Michele Russo ◽  
Francesco Fassi ◽  
Fabio Remondino

Recent years have seen extensive use of 3D point cloud data for heritage documentation, valorisation and visualisation. Although rich in metric quality, these 3D data lack structured information such as semantics and hierarchy between parts. In this context, the introduction of point cloud classification methods can play an essential role for better data usage, model definition, analysis and conservation. The paper aims to extend a machine learning (ML) classification method with a multi-level and multi-resolution (MLMR) approach. The proposed MLMR approach improves the learning process and optimises 3D classification results through a hierarchical concept. The MLMR procedure is tested and evaluated on two large-scale and complex datasets: the Pomposa Abbey (Italy) and the Milan Cathedral (Italy). Classification results show the reliability and replicability of the developed method, allowing the identification of the necessary architectural classes at each geometric resolution.


Author(s):  
K. L. Navaneet ◽  
Priyanka Mandikal ◽  
Mayank Agarwal ◽  
R. Venkatesh Babu

Knowledge of 3D properties of objects is a necessity in order to build effective computer vision systems. However, lack of large-scale 3D datasets can be a major constraint for data-driven approaches in learning such properties. We consider the task of single-image 3D point cloud reconstruction, and aim to utilize multiple foreground masks as our supervisory data to alleviate the need for large-scale 3D datasets. A novel differentiable projection module, called 'CAPNet', is introduced to obtain such 2D masks from a predicted 3D point cloud. The key idea is to model the projections as a continuous approximation of the points in the point cloud. To overcome the challenges of sparse projection maps, we propose a loss formulation termed 'affinity loss' to generate outlier-free reconstructions. We significantly outperform the existing projection-based approaches on a large-scale synthetic dataset. We show the utility and generalizability of such a 2D supervised approach through experiments on a real-world dataset, where lack of 3D data can be a serious concern. To further enhance the reconstructions, we also propose a test-stage optimization procedure to obtain reconstructions that display high correspondence with the observed input image.
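The "continuous approximation" idea, a projection that is smooth in the point coordinates so gradients can flow back to the point cloud, can be sketched by splatting each projected point as a Gaussian onto a pixel grid. This is an assumed orthographic toy version, not the published CAPNet module; the function name and `sigma` parameter are illustrative:

```python
import numpy as np

def continuous_mask_projection(points, grid_size=32, sigma=2.0):
    """Soft projection of 3D points to a 2D foreground mask.

    Each point is dropped along z (orthographic view) and splatted as a
    Gaussian onto the pixel grid; per-pixel occupancy is the complement
    of the product of per-point 'miss' probabilities, so the result
    stays in (0, 1) and varies smoothly with the point coordinates.
    """
    # Pixel-centre coordinates in [0, 1] x [0, 1].
    coords = (np.arange(grid_size) + 0.5) / grid_size
    xx, yy = np.meshgrid(coords, coords)        # (H, W)
    sigma_n = sigma / grid_size                 # sigma given in pixels

    miss = np.ones((grid_size, grid_size))
    for x, y, _z in points:                     # orthographic: ignore z
        d2 = (xx - x) ** 2 + (yy - y) ** 2
        hit = np.exp(-d2 / (2.0 * sigma_n ** 2))
        miss *= (1.0 - hit)
    return 1.0 - miss                           # soft occupancy mask
```

Because the mask is differentiable in `(x, y)`, a loss against an observed foreground mask can supervise the predicted point cloud without any 3D ground truth, which is the core of the 2D-supervised setup described above.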


2012 ◽  
Vol 22 (5) ◽  
pp. 744-759 ◽  
Author(s):  
Suk-Hwan Lee ◽  
Ki-Ryong Kwon
Keyword(s):  
3D Mesh ◽  

2021 ◽  
pp. 004051752110138
Author(s):  
Haisang Liu ◽  
Gaoming Jiang ◽  
Zhijia Dong

The purpose of this paper is to geometrically simulate warp-knitted medical tubular bandages with a computer-aided simulator. A flat mesh model is established from the unfolded fabric, considering the knitting characteristics of double-needle-bed warp-knitted tubular fabrics. Moreover, a 3D (three-dimensional) mesh model corresponding to the actual product shape is created. To better describe the spatial geometry of stitches, eight-point models are introduced, and stitches are generated on the flat mesh model. Based on matrix operations, the stitch position in the 3D mesh model is determined through coordinate mapping. Various stitch paths are rendered in the programming languages C# and JavaScript to conduct simulations. Warp-knitted medical tubular bandages of many shapes are effectively modelled.
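The coordinate mapping from the flat mesh to the tubular 3D mesh can be sketched as a cylindrical wrap of the fabric width. This is a simplified stand-in for the paper's matrix-based mapping, with illustrative names and an optional spiral term:

```python
import math

def flat_to_tube(u, v, width, radius, pitch=0.0):
    """Map a flat-mesh stitch position (u along the course/width,
    v along the wale/length) onto a tube by wrapping the fabric
    width around a cylinder of the given radius."""
    theta = 2.0 * math.pi * u / width           # wrap width onto circle
    x = radius * math.cos(theta)
    y = radius * math.sin(theta)
    z = v + pitch * theta / (2.0 * math.pi)     # optional helical offset
    return (x, y, z)
```

With `pitch=0` the left and right edges of the flat mesh (`u = 0` and `u = width`) land on the same generatrix of the cylinder, which is what closes the double-needle-bed fabric into a tube.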


2004 ◽  
Vol 20 (8) ◽  
pp. 1241-1250
Author(s):  
Deok-Soo Kim ◽  
Youngsong Cho ◽  
Hyun Kim

Author(s):  
H. Huang ◽  
H. Jiang ◽  
C. Brenner ◽  
H. Mayer

We propose a novel method to segment Microsoft™ Kinect data of indoor scenes with the emphasis on freeform objects. We use the full 3D information for the scene parsing and the segmentation of potential objects instead of treating the depth values as an additional channel of the 2D image. The raw RGBD image is first converted to a 3D point cloud with color. We then group the points into patches, which are derived from a 2D superpixel segmentation. With the assumption that every patch in the point cloud represents (a part of) the surface of an underlying solid body, a hypothetical quasi-3D model, the "synthetic volume primitive" (SVP), is constructed by extending the patch with a synthetic extrusion in 3D. The SVPs vote for a common object via intersection. By this means, a freeform object can be "assembled" from an unknown number of SVPs from arbitrary angles. Besides the intersection, two other criteria, i.e., coplanarity and color coherence, are integrated in the global optimization to improve the segmentation. Experiments demonstrate the potential of the proposed method.
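The intersection-voting step can be illustrated with a toy voxel-set version: each SVP is reduced to the set of voxels its extrusion covers, and voxels covered by several SVPs count as evidence of a common solid object. The extrusion itself and the coplanarity/colour-coherence terms are omitted here, and all names are illustrative:

```python
from collections import Counter

def svp_vote(patch_voxels, min_votes=2):
    """Toy SVP intersection vote.

    Each 'synthetic volume primitive' is given as an iterable of voxel
    coordinates obtained by extruding a surface patch into the scene;
    voxels covered by at least `min_votes` SVPs are taken as belonging
    to a common underlying object.
    """
    # Count how many distinct SVPs cover each voxel.
    counts = Counter(v for voxels in patch_voxels for v in set(voxels))
    return {v for v, n in counts.items() if n >= min_votes}
```

In the full method this vote is one term of a global optimization, so a voxel's membership also depends on the coplanarity and colour coherence of the patches that cast the votes.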


Author(s):  
Arnaud Palha ◽  
Arnadi Murtiyoso ◽  
Jean-Christophe Michelin ◽  
Emmanuel Alby ◽  
Pierre Grussenmeyer

Author(s):  
Ceyhun Koc ◽  
Ozgun Pinarer ◽  
Sultan Turhan
