MULTIPLE USES OF A 3D POINT CLOUD: THE CASTLE OF FRANCHIMONT (PROVINCE OF LIÈGE, BELGIUM)

Author(s):  
A. Luczfalvy Jancsó ◽  
B. Jonlet ◽  
P. Hallot ◽  
P. Hoffsummer ◽  
R. Billen

This paper presents the identified obstacles, needs and selected solutions for the study of the medieval castle of Franchimont, located in the province of Liège (Belgium). After taking into account the requirements of all the disciplines involved as well as the problems that would have to be tackled, it was decided to create a 3D point cloud. This solution can accommodate the characteristics and needs of research involving building archaeology and related fields. The decision was made in order to manage all of the available data and to provide a common working tool for every cultural heritage actor involved. To achieve this, the elaboration of an Archaeological Information System based on 3D point clouds as a common virtual workspace is under consideration.

Drones ◽  
2021 ◽  
Vol 5 (4) ◽  
pp. 145
Author(s):  
Alessandra Capolupo

A proper classification of 3D point clouds allows the potential of the data to be fully exploited in assessing and preserving cultural heritage. Point cloud classification workflows are commonly based on the selection and extraction of geometric features. Although several research activities have investigated the impact of geometric features on the accuracy of classification outcomes, only a few works have focused on their accuracy and reliability. This paper investigates the accuracy of 3D point cloud geometric features through a statistical analysis based on their corresponding eigenvalues and covariance, with the aim of exploiting their effectiveness for cultural heritage classification. The proposed approach was applied separately to two high-quality 3D point clouds of the All Saints’ Monastery of Cuti (Bari, Southern Italy), generated using two competing survey techniques: Remotely Piloted Aircraft System (RPAS) Structure from Motion (SfM) and Multi-View Stereo (MVS), and Terrestrial Laser Scanning (TLS). Point cloud compatibility was guaranteed through re-alignment and co-registration of the data. The accuracy of the geometric features obtained from the RPAS photogrammetric and TLS models was then analyzed and presented. Lastly, a discussion of the convergences and divergences of these results is provided.
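The following Python sketch illustrates the kind of eigenvalue-based geometric features discussed above (linearity, planarity, sphericity derived from the local covariance of each point's neighbourhood). It is not the authors' code; the neighbourhood size k and the use of NumPy/SciPy are assumptions.

```python
# Minimal sketch: per-point geometric features from local covariance eigenvalues.
import numpy as np
from scipy.spatial import cKDTree

def geometric_features(points: np.ndarray, k: int = 30) -> np.ndarray:
    """points: (N, 3); returns per-point [linearity, planarity, sphericity]."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)                 # k nearest neighbours of each point
    feats = np.zeros((len(points), 3))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)                 # 3x3 neighbourhood covariance
        w = np.sort(np.linalg.eigvalsh(cov))[::-1]   # eigenvalues l1 >= l2 >= l3
        l1, l2, l3 = np.maximum(w, 1e-12)
        feats[i] = [(l1 - l2) / l1,                  # linearity
                    (l2 - l3) / l1,                  # planarity
                    l3 / l1]                         # sphericity
    return feats
```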


Author(s):  
M. Kawato ◽  
L. Li ◽  
K. Hasegawa ◽  
M. Adachi ◽  
H. Yamaguchi ◽  
...  

Abstract. Three-dimensional point clouds are becoming popular representations for digital archives of cultural heritage sites. The Borobudur Temple, located in Central Java, Indonesia, was built in the 8th century. Borobudur is considered one of the greatest Buddhist monuments in the world and was listed as a UNESCO World Heritage site. We are developing a virtual reality system as a digital archive of the Borobudur Temple. This research is a collaboration between Ritsumeikan University, Japan, the Indonesian Institute of Sciences (LIPI), and the Borobudur Conservation Office, Indonesia. In our VR system, the following three data sources are integrated to form a 3D point cloud: (1) a 3D point cloud of the overall shape of the temple acquired by photogrammetry using a camera carried by a UAV, (2) a 3D point cloud obtained from precise photogrammetric measurements of selected parts of the temple building, and (3) 3D data of the hidden relief panels recovered from the archived 2D monocular photos using deep learning. Our VR system supports both the first-person view and the bird’s eye view. The first-person view allows immersive observation and appreciation of the cultural heritage site. The bird’s eye view is useful for understanding the whole picture. A user can easily switch between the two views through a user-friendly VR interface built with a 3D game engine.
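A minimal sketch of how three already co-registered point cloud sources could be fused into a single cloud for a VR viewer, using Open3D. The file names and voxel size are hypothetical, and this is an assumption about the data-preparation step rather than the authors' pipeline.

```python
# Minimal sketch: merge co-registered clouds and unify point density for rendering.
import open3d as o3d

sources = ["uav_overall.ply", "closeup_parts.ply", "relief_from_dl.ply"]  # hypothetical files
merged = o3d.geometry.PointCloud()
for path in sources:
    merged += o3d.io.read_point_cloud(path)            # clouds assumed already co-registered
merged = merged.voxel_down_sample(voxel_size=0.02)      # 2 cm voxel grid (assumed value)
o3d.io.write_point_cloud("borobudur_merged.ply", merged)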


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1228
Author(s):  
Ting On Chan ◽  
Linyuan Xia ◽  
Yimin Chen ◽  
Wei Lang ◽  
Tingting Chen ◽  
...  

Ancient pagodas are often major tourist attractions in many oriental countries due to their unique historical backgrounds. They are usually polygonal structures composed of multiple floors separated by eaves. In this paper, we propose a new method to investigate both the rotational and reflectional symmetry of such polygonal pagodas by developing novel geometric models that are fitted to 3D point clouds obtained from photogrammetric reconstruction. The geometric model consists of multiple polygonal pyramid/prism models sharing a common central axis. The method was verified on four datasets collected by an unmanned aerial vehicle (UAV) and a hand-held digital camera. The results indicate that the models fit the pagodas’ point clouds accurately. Symmetry was assessed by rotating and reflecting the pagodas’ point clouds after the point clouds were levelled using the estimated central axes. The results show RMSEs of 5.04 cm and 5.20 cm from perfect (theoretical) rotational and reflectional symmetry, respectively. This indicates that the examined pagodas are highly symmetric, both rotationally and reflectionally. The concept presented in the paper not only works for polygonal pagodas but can also be readily adapted to other pagoda-like objects such as transmission towers.
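As a rough illustration of the rotational-symmetry check described above, the following sketch rotates a levelled point cloud by 2π/n about the vertical axis and reports a nearest-neighbour RMSE against the original cloud. This is an assumption about the evaluation step, not the authors' implementation.

```python
# Minimal sketch: deviation from n-fold rotational symmetry about the z-axis.
import numpy as np
from scipy.spatial import cKDTree

def rotational_symmetry_rmse(points: np.ndarray, n_sides: int = 8) -> float:
    """points: (N, 3) cloud already levelled so the central axis is the z-axis."""
    theta = 2.0 * np.pi / n_sides
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    rotated = points @ rot.T
    dists, _ = cKDTree(points).query(rotated)   # distance to closest original point
    return float(np.sqrt(np.mean(dists ** 2)))
```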


Aerospace ◽  
2018 ◽  
Vol 5 (3) ◽  
pp. 94 ◽  
Author(s):  
Hriday Bavle ◽  
Jose Sanchez-Lopez ◽  
Paloma Puente ◽  
Alejandro Rodriguez-Ramos ◽  
Carlos Sampedro ◽  
...  

This paper presents a fast and robust approach for estimating the flight altitude of multirotor Unmanned Aerial Vehicles (UAVs) using 3D point cloud sensors in cluttered, unstructured, and dynamic indoor environments. The objective is to present a flight altitude estimation algorithm that replaces conventional sensors such as laser altimeters, barometers, or accelerometers, which have several limitations when used individually. Our proposed algorithm includes two stages: in the first stage, a fast clustering of the measured 3D point cloud data is performed, along with the segmentation of the clustered data into horizontal planes. In the second stage, these segmented horizontal planes are mapped based on their vertical distance with respect to the point cloud sensor frame of reference, in order to provide a robust flight altitude estimate even in the presence of several static and dynamic ground obstacles. We validate our approach using the IROS 2011 Kinect dataset available in the literature, estimating the altitude of the RGB-D camera from the provided 3D point clouds. We further validate our approach using a point cloud sensor on board a UAV in several autonomous real flights, closing its altitude control loop with the flight altitude estimated by our proposed method in the presence of several different static and dynamic ground obstacles. In addition, the implementation of our approach has been integrated into our open-source software framework for aerial robotics called Aerostack.
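A simplified sketch of the underlying idea, not the Aerostack implementation: group point heights below the sensor into horizontal "levels" and take the farthest well-populated level as the ground, so that boxes or people standing on the floor do not bias the altitude estimate. The bin width, the 5% population threshold, and the sensor-frame convention (z pointing down toward the ground) are assumptions.

```python
# Minimal sketch: flight altitude from the dominant lowest horizontal level.
import numpy as np

def estimate_altitude(points_sensor: np.ndarray, bin_width: float = 0.05) -> float:
    """points_sensor: (N, 3) points in a sensor frame whose z-axis points toward the ground."""
    z = points_sensor[:, 2]
    z = z[np.isfinite(z)]
    edges = np.arange(z.min(), z.max() + bin_width, bin_width)
    hist, _ = np.histogram(z, bins=edges)
    # candidate horizontal planes = well-populated bins; ground = farthest candidate
    candidates = np.nonzero(hist > 0.05 * hist.max())[0]
    ground_bin = candidates.max()
    return float(edges[ground_bin] + 0.5 * bin_width)   # altitude above the ground plane
```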


Author(s):  
Wenju Wang ◽  
Tao Wang ◽  
Yu Cai

Abstract. Classifying 3D point clouds is an important and challenging task in computer vision. Current classification methods using multiple views lose characteristic or detail information during the representation or processing of the views. For this reason, we propose a multi-view attention-convolution pooling network framework for 3D point cloud classification tasks. This framework uses Res2Net to extract features from multiple 2D views. Our attention-convolution pooling method finds the information in the input data that is most relevant to the current output, effectively addressing the loss of feature information caused by feature representation and the loss of detail information during dimensionality reduction. Finally, we obtain the class probability distribution of the model to be classified using a fully connected layer and the softmax function. The experimental results show that our framework achieves higher classification accuracy and better performance than other contemporary methods on the ModelNet40 dataset.
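A minimal PyTorch sketch of attention-weighted view pooling, as a stand-in for the attention-convolution pooling described above. The backbone, view count, and feature sizes are assumptions, and a tiny CNN replaces Res2Net so the example stays self-contained.

```python
# Minimal sketch: per-view features fused with learned attention weights.
import torch
import torch.nn as nn

class MultiViewAttentionPool(nn.Module):
    def __init__(self, n_classes: int = 40, feat_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(             # toy per-view feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.attn = nn.Linear(feat_dim, 1)          # one attention score per view
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        """views: (batch, n_views, 3, H, W) rendered 2D views of a 3D model."""
        b, v, c, h, w = views.shape
        feats = self.backbone(views.reshape(b * v, c, h, w)).reshape(b, v, -1)
        weights = torch.softmax(self.attn(feats), dim=1)   # (b, v, 1) view weights
        pooled = (weights * feats).sum(dim=1)               # attention-weighted fusion
        return self.classifier(pooled)

# logits = MultiViewAttentionPool()(torch.randn(2, 12, 3, 64, 64))  # example call
```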


Author(s):  
T. Shinohara ◽  
H. Xiu ◽  
M. Matsuoka

Abstract. This study introduces a novel image-to-point-cloud translation method with a conditional generative adversarial network that creates a large-scale 3D point cloud. It can generate supervised point clouds, as observed by airborne LiDAR, from aerial images. The network is composed of an encoder that produces latent features of the input images, a generator that translates the latent features into fake point clouds, and a discriminator that classifies point clouds as real or fake. The encoder is a pre-trained ResNet; to overcome the difficulty of generating 3D point clouds of an outdoor scene, we use a FoldingNet with features from the ResNet. After a fixed number of iterations, our generator can produce fake point clouds that correspond to the input image. Experimental results show that our network can learn to generate such point clouds using the data from the 2018 IEEE GRSS Data Fusion Contest.
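A minimal PyTorch sketch of a FoldingNet-style decoder of the kind the generator described above builds on: a fixed 2D grid is concatenated with the image latent code and "folded" into 3D by shared MLPs. The layer sizes and grid resolution are assumptions, not the authors' exact network.

```python
# Minimal sketch: folding a 2D grid into a 3D point cloud conditioned on a latent code.
import torch
import torch.nn as nn

class FoldingDecoder(nn.Module):
    def __init__(self, latent_dim: int = 512, grid_size: int = 45):
        super().__init__()
        g = torch.linspace(-1.0, 1.0, grid_size)
        grid = torch.stack(torch.meshgrid(g, g, indexing="ij"), dim=-1).reshape(-1, 2)
        self.register_buffer("grid", grid)                     # (M, 2) fixed 2D grid
        self.fold1 = nn.Sequential(nn.Linear(latent_dim + 2, 256), nn.ReLU(),
                                   nn.Linear(256, 3))
        self.fold2 = nn.Sequential(nn.Linear(latent_dim + 3, 256), nn.ReLU(),
                                   nn.Linear(256, 3))

    def forward(self, latent: torch.Tensor) -> torch.Tensor:
        """latent: (B, latent_dim) image features; returns (B, M, 3) point cloud."""
        b, m = latent.shape[0], self.grid.shape[0]
        code = latent.unsqueeze(1).expand(b, m, -1)
        pts = self.fold1(torch.cat([code, self.grid.expand(b, m, 2)], dim=-1))
        return self.fold2(torch.cat([code, pts], dim=-1))      # second folding pass
```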


2018 ◽  
Vol 9 (2) ◽  
pp. 37-53
Author(s):  
Sinh Van Nguyen ◽  
Ha Manh Tran ◽  
Minh Khai Tran

Building 3D objects or reconstructing their surfaces from 3D point cloud data are active research topics in the fields of geometric modeling and computer graphics. In recent years, they have also been studied and used in fields such as graph models and simulation, image processing, and the restoration of digital heritage. This article presents an improved method for restoring the shape of 3D point cloud surfaces. The method combines the creation of Bezier surface patches with the computation of tangent planes of 3D points to fill holes in the surface of 3D point clouds. The method proceeds as follows: first, a boundary is identified for each hole in the surface. The holes are then filled by computing Bezier curves of surface patches to find the missing points. After that, the filled holes are refined in two steps (rough and elaborate) to adjust the inserted points and preserve the local curvature of the holes. The contribution of the proposed method is shown in its processing time, and the novel combined computation preserves the initial shape of the surface.
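A small sketch of the Bezier interpolation step underlying the hole-filling idea above: a cubic Bezier curve evaluated with de Casteljau's algorithm, used here to generate candidate points between boundary samples of a hole. The control points and sample counts are hypothetical; this is not the authors' implementation.

```python
# Minimal sketch: cubic Bezier evaluation (de Casteljau) for hole interpolation.
import numpy as np

def bezier_point(control_pts: np.ndarray, t: float) -> np.ndarray:
    """control_pts: (4, 3) cubic control points; t in [0, 1]."""
    pts = control_pts.copy()
    while len(pts) > 1:                      # repeated linear interpolation
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# Fill a hole segment with 10 points between boundary samples (hypothetical data).
ctrl = np.array([[0, 0, 0], [1, 0, 0.2], [2, 0, 0.2], [3, 0, 0]], dtype=float)
hole_points = np.array([bezier_point(ctrl, t) for t in np.linspace(0.1, 0.9, 10)])
```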


Sensors ◽  
2019 ◽  
Vol 20 (1) ◽  
pp. 143
Author(s):  
Yubo Cui ◽  
Zheng Fang ◽  
Sifan Zhou

Person tracking is an important issue in both computer vision and robotics. However, most existing person tracking methods using 3D point clouds are based on the Bayesian filtering framework, which is not robust in challenging scenes. In contrast with filtering methods, in this paper we propose a neural network for person tracking using only 3D point clouds, named the Point Siamese Network (PSN). PSN consists of two input branches, named template and search, respectively. After finding the target person (by reading the label or using a detector), we obtain the inputs of the two branches and create feature spaces for them using a feature extraction network. A similarity map between the two feature spaces is then computed, from which the target person can be located. Furthermore, we add an attention module to the template branch to guide feature extraction. To evaluate the performance of the proposed method, we compare it with the Unscented Kalman Filter (UKF) on 3 custom-labeled challenging scenes and the KITTI dataset. The experimental results show that the proposed method outperforms the UKF in robustness and accuracy and runs in real time. In addition, we publicly release our collected dataset and the labeled sequences to the research community.
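A minimal sketch of the Siamese matching step, as an illustration only (not the released PSN code): a pooled template feature is compared against per-candidate search features with cosine similarity, and the peak of the resulting similarity scores gives the tracked person's location.

```python
# Minimal sketch: cosine similarity between template and search features.
import torch
import torch.nn.functional as F

def similarity_map(template_feat: torch.Tensor, search_feats: torch.Tensor) -> torch.Tensor:
    """template_feat: (C,) pooled template feature; search_feats: (N, C) candidate features."""
    t = F.normalize(template_feat, dim=0)
    s = F.normalize(search_feats, dim=1)
    return s @ t                                   # (N,) cosine similarity scores

# scores = similarity_map(torch.randn(128), torch.randn(500, 128))
# best_candidate = scores.argmax()                 # candidate most similar to the template
```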


2020 ◽  
Vol 12 (6) ◽  
pp. 1005 ◽  
Author(s):  
Roberto Pierdicca ◽  
Marina Paolanti ◽  
Francesca Matrone ◽  
Massimo Martini ◽  
Christian Morbidoni ◽  
...  

In the Digital Cultural Heritage (DCH) domain, the semantic segmentation of 3D point clouds with Deep Learning (DL) techniques can help to recognize historical architectural elements at an adequate level of detail, and thus speed up the modeling of historical buildings for developing BIM models from survey data, referred to as HBIM (Historical Building Information Modeling). In this paper, we propose a DL framework for point cloud segmentation which employs an improved DGCNN (Dynamic Graph Convolutional Neural Network) by adding meaningful features such as normals and colour. The approach has been applied to a newly collected and publicly available DCH dataset: the ArCH (Architectural Cultural Heritage) dataset. This dataset comprises 11 labeled point clouds, derived from the union of several single scans or from the integration of the latter with photogrammetric surveys. The scenes are both indoor and outdoor, with churches, chapels, cloisters, porticoes and loggias covered by a variety of vaults and supported by many different types of columns. They belong to different historical periods and styles, in order to make the dataset as heterogeneous as possible (avoiding repetition of the architectural elements) and the results as general as possible. The experiments yield high accuracy, demonstrating the effectiveness and suitability of the proposed approach.
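A rough sketch of the input encoding implied above: DGCNN-style edge features of the form [x_i, x_j − x_i] built over a k-nearest-neighbour graph, with the points augmented by normals and colour. The neighbourhood size and the choice to search neighbours in XYZ space are assumptions, not the authors' network.

```python
# Minimal sketch: k-NN edge features from points augmented with normals and colour.
import numpy as np
from scipy.spatial import cKDTree

def edge_features(xyz: np.ndarray, normals: np.ndarray, rgb: np.ndarray, k: int = 20):
    """xyz, normals, rgb: (N, 3) each, rgb in [0, 1]; returns (N, k, 18) edge features."""
    x = np.concatenate([xyz, normals, rgb], axis=1)      # 9-channel point features
    _, idx = cKDTree(xyz).query(xyz, k=k + 1)            # neighbours found in XYZ space
    idx = idx[:, 1:]                                     # drop the point itself
    center = np.repeat(x[:, None, :], k, axis=1)         # (N, k, 9) repeated centre feature
    neighbour = x[idx]                                    # (N, k, 9) neighbour features
    return np.concatenate([center, neighbour - center], axis=-1)
```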


2010 ◽  
Vol 22 (2) ◽  
pp. 158-166 ◽  
Author(s):  
Taro Suzuki ◽  
Yoshiharu Amano ◽  
Takumi Hashizume

This paper describes outdoor localization for a mobile robot using a laser scanner and three-dimensional (3D) point cloud data. A Mobile Mapping System (MMS) measures outdoor 3D point clouds easily and precisely. The full six-dimensional state of the mobile robot is estimated by combining dead reckoning and 3D point cloud data. Two-dimensional (2D) position and orientation are extended to 3D using the 3D point clouds, assuming that the mobile robot remains in continuous contact with the road surface. Our approach applies a particle filter to correct the position error, using a laser measurement model in 3D point cloud space. Field experiments were conducted to evaluate the accuracy of our proposal. The experiments confirmed that a localization precision of 0.2 m (RMS) is achievable with our proposal.
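A minimal sketch of the particle-filter measurement update, not the authors' implementation: each pose particle is weighted by how well the current scan, transformed into the map frame, matches the prior point cloud map. The Gaussian likelihood on nearest-neighbour distances and the 2D (x, y, yaw) simplification of the full pose are assumptions made for brevity.

```python
# Minimal sketch: weight pose particles against a prior point cloud map.
import numpy as np
from scipy.spatial import cKDTree

def weight_particles(particles: np.ndarray, scan_xy: np.ndarray,
                     map_tree: cKDTree, sigma: float = 0.1) -> np.ndarray:
    """particles: (P, 3) as [x, y, yaw]; scan_xy: (M, 2) scan points in the robot frame;
    map_tree: cKDTree built on the map's XY projection."""
    weights = np.zeros(len(particles))
    for i, (px, py, yaw) in enumerate(particles):
        c, s = np.cos(yaw), np.sin(yaw)
        rot = np.array([[c, -s], [s, c]])
        world = scan_xy @ rot.T + np.array([px, py])   # scan in the map frame
        d, _ = map_tree.query(world)                   # distance to nearest map point
        weights[i] = np.exp(-0.5 * np.mean(d ** 2) / sigma ** 2)
    return weights / weights.sum()                     # normalised particle weights
```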

