INTERACTIVE 3D CITY VISUALIZATION FROM STRUCTURE FROM MOTION DATA USING GAME ENGINE

Author(s):  
D. Laksono ◽  
T. Aditya ◽  
G. Riyadi

Abstract. Developing a 3D city model is always a challenging task, both in obtaining the 3D data and in presenting the model to users. Lidar is often used to produce real-world measurements, resulting in point clouds that are further processed into a 3D model. However, this method has limitations, e.g. tedious and expensive work and high technicality, which restrict its usability in smaller areas. Pipelines already exist that use point clouds from Lidar data to automate the generation of 3D city models. For example, 3dfier (http://github.com/tudelft3d/3dfier) is a software tool capable of generating a LoD 1 3D city model from lidar point cloud data. The resulting CityGML file can then be used in a 3D GIS viewer to produce an interactive 3D city model. This research proposes the use of the Structure from Motion (SfM) method to obtain a point cloud from UAV data. Using SfM to generate point clouds means cheaper and shorter production time, and it is more suitable for smaller areas than LiDAR. 3dfier can then be used to produce a 3D model from the point cloud. Subsequently, a game engine, i.e. Unity 3D, is utilized as the visualization platform. Previous work has shown that a game engine can serve as an interactive environment for exploring a virtual world based on real-world measurements and other data, such as parcel boundaries. This work shows that the process of generating a 3D city model can be achieved using the proposed pipeline.
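
The core of the LoD 1 step this pipeline delegates to 3dfier can be sketched as follows: every building footprint is extruded to a single roof height taken from a percentile of the SfM points falling inside it. This is an illustrative sketch only; function names, the percentile choice, and the simple ray-casting test are assumptions, not 3dfier's actual code.

```python
# Minimal sketch of LoD1 extrusion: one roof height per footprint, taken from
# a high percentile of the point-cloud z-values inside the footprint polygon.

def point_in_polygon(pt, poly):
    # Ray-casting test for a 2D point inside a simple polygon.
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def lod1_height(points, footprint, percentile=0.9, ground=0.0):
    # Collect z-values of cloud points whose (x, y) lies inside the footprint,
    # then pick a high percentile as the flat roof height.
    zs = sorted(z for x, y, z in points if point_in_polygon((x, y), footprint))
    if not zs:
        return ground
    idx = min(int(percentile * len(zs)), len(zs) - 1)
    return zs[idx]
```

The prism from `ground` up to the returned height, swept over the footprint, is the LoD 1 block model that a viewer or game engine then renders.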

2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Yuwei Chen ◽  
Lingli Zhu ◽  
Jian Tang ◽  
Ling Pei ◽  
Antero Kukko ◽  
...  

The positioning accuracy with good GNSS observations can easily reach centimetre level, supported by advanced GNSS technologies. However, it is still a challenge to offer a robust GNSS-based positioning solution in a GNSS-degraded area. The concept of GNSS shadow matching has been proposed to enhance GNSS-based position accuracy in city canyons, where nearby high buildings block parts of the GNSS radio frequency (RF) signals. However, the results rely on the accuracy of the ready-made 3D city model used. In this paper, we investigate a solution to generate a GNSS shadow mask from mobile laser scanning (MLS) point cloud data. The solution includes removing noise points, determining the objects that only attenuate the RF signal, extracting the highest obstruction point, and finally calculating the angles for GNSS shadow mask generation. By analysing the data with the proposed methodology, we conclude that MLS point cloud data can be used to extract the GNSS shadow mask, after several processing steps that filter out hanging objects and vegetation, without generating an accurate 3D model; the resulting mask depicts the boundary of GNSS signal coverage more precisely in city canyon environments than traditional 3D models.
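
The shadow-mask idea can be sketched in a few lines: for each azimuth bin around the receiver, keep the highest elevation angle subtended by nearby MLS points; satellites below that angle are assumed blocked. The filtering of hanging objects and vegetation described in the paper is omitted here, and all names and bin sizes are illustrative assumptions.

```python
import math

# Sketch: build a per-azimuth elevation mask from point-cloud obstructions.
def shadow_mask(points, receiver, n_bins=36):
    rx, ry, rz = receiver
    mask = [0.0] * n_bins  # lowest visible elevation angle per azimuth bin (deg)
    for x, y, z in points:
        dx, dy, dz = x - rx, y - ry, z - rz
        dist = math.hypot(dx, dy)
        if dist == 0 or dz <= 0:
            continue  # ignore points at or below the antenna height
        az = math.degrees(math.atan2(dy, dx)) % 360.0
        elev = math.degrees(math.atan2(dz, dist))
        b = int(az // (360.0 / n_bins)) % n_bins
        mask[b] = max(mask[b], elev)  # keep the highest obstruction
    return mask
```

A shadow-matching step would then compare each satellite's predicted elevation against `mask[bin(azimuth)]` to decide whether its signal should be received.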


2018 ◽  
Vol 10 (9) ◽  
pp. 1412 ◽  
Author(s):  
Florent Poux ◽  
Romain Neuville ◽  
Gilles-Antoine Nys ◽  
Roland Billen

3D models derived from point clouds take various forms that optimize the trade-off between precision and geometric complexity. They are defined at different granularity levels according to each indoor situation. In this article, we present an integrated 3D semantic reconstruction framework that leverages segmented point cloud data and domain ontologies. Our approach follows a part-to-whole conception that models a point cloud as parametric elements usable per instance and aggregated to obtain a global 3D model. We first extract analytic features, object relationships, and contextual information to permit better object characterization. Then, we propose a multi-representation modelling mechanism augmented by automatic recognition and fitting from the 3D library ModelNet10 to provide the best candidates for several 3D scans of furniture. Finally, we combine every element to obtain a consistent indoor hybrid 3D model. The method allows a wide range of applications from interior navigation to virtual stores.
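
The recognition-and-fitting step can be sketched as retrieval from a small model library by comparing a coarse shape descriptor. Everything here is a hypothetical stand-in: the descriptor (sorted bounding-box extents) and the tiny "library" dictionary only illustrate the best-candidate idea, not the paper's actual ModelNet10 matching.

```python
# Sketch: pick the best library model for a segmented furniture scan by
# comparing sorted bounding-box extents (a deliberately coarse descriptor).

def bbox_extents(points):
    xs, ys, zs = zip(*points)
    return sorted((max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs)))

def best_candidate(scan_points, library):
    # library: {model_name: sorted extents 3-tuple}
    e = bbox_extents(scan_points)
    return min(library,
               key=lambda k: sum((a - b) ** 2 for a, b in zip(e, library[k])))
```

A real system would replace the descriptor with learned features and follow retrieval with pose fitting of the candidate model to the scan.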


Author(s):  
A. Hairuddin ◽  
S. Azri ◽  
U. Ujang ◽  
M. G. Cuétara ◽  
G. M. Retortillo ◽  
...  

Abstract. A 3D city model is a digital representation of an urban area that contains buildings and other information. Current approaches use photogrammetry and laser scanning to develop 3D city models. However, these techniques are time-consuming and quite costly. Moreover, laser scanning and photogrammetry require professional skills and expertise to handle the hardware and tools. In this study, videogrammetry is proposed as a technique to develop a 3D city model. This technique uses video frame sequences to generate a point cloud. Videos are processed using EyesCloud3D by eCapture, which allows users to upload raw video data to generate point clouds. There are five main phases in this study to generate the 3D city model: calibration, video recording, point cloud extraction, 3D modeling, and 3D city model representation. A 3D city model with Level of Detail 2 is produced, and a simple query is performed on the database to retrieve the attributes of the 3D city model.
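
The final attribute-query phase can be sketched with a relational table of building attributes and a simple SELECT. The table layout, column names, and sample rows are hypothetical; the abstract does not specify the schema.

```python
import sqlite3

# Sketch: store per-building attributes of the LoD2 model and retrieve them
# with a simple query (illustrative schema, not the study's actual database).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE building (
    id INTEGER PRIMARY KEY, name TEXT, height_m REAL, roof_type TEXT)""")
conn.executemany(
    "INSERT INTO building VALUES (?, ?, ?, ?)",
    [(1, "Block A", 12.5, "gabled"), (2, "Block B", 9.0, "flat")])

def building_attributes(conn, building_id):
    # Return (name, height, roof type) for one building, or None if absent.
    return conn.execute(
        "SELECT name, height_m, roof_type FROM building WHERE id = ?",
        (building_id,)).fetchone()
```

In a deployed model the query result would be joined back to the 3D geometry so that clicking a building in the viewer shows its attributes.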


Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, sensors integrated into the ULS should be small and lightweight, which results in a decrease in the density of the collected scanning points. This affects registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds, converting the problem of registering point cloud data and image data into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show the high registration accuracy and fusion speed of the proposed method, demonstrating its accuracy and effectiveness.
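
The first step, rasterizing the laser point cloud into a 2D intensity image so that image-feature matching can replace direct 2D-3D registration, can be sketched as a simple grid average. Cell size and the averaging rule are illustrative assumptions; the paper does not specify them.

```python
# Sketch: project (x, y, intensity) laser points onto a grid and average the
# intensity per cell, yielding an image that feature detectors can operate on.

def intensity_image(points, cell=1.0):
    # points: iterable of (x, y, intensity)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)
    w = int((max(xs) - x0) / cell) + 1
    h = int((max(ys) - y0) / cell) + 1
    acc = [[(0.0, 0) for _ in range(w)] for _ in range(h)]
    for x, y, i in points:
        c, r = int((x - x0) / cell), int((y - y0) / cell)
        s, n = acc[r][c]
        acc[r][c] = (s + i, n + 1)
    # Mean intensity per cell; empty cells stay 0.
    return [[s / n if n else 0.0 for s, n in row] for row in acc]
```

Feature points matched between this raster and the optical image then carry known 3D coordinates, which is what lets the collinearity equations solve the exterior orientation.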


2021 ◽  
Vol 13 (11) ◽  
pp. 2195
Author(s):  
Shiming Li ◽  
Xuming Ge ◽  
Shengfu Li ◽  
Bo Xu ◽  
Zhendong Wang

Today, mobile laser scanning and oblique photogrammetry are two standard urban remote sensing acquisition methods, and the cross-source point-cloud data obtained using these methods have significant differences and complementarity. Accurate co-registration can make up for the limitations of a single data source, but many existing registration methods face critical challenges. Therefore, in this paper, we propose a systematic incremental registration method that can successfully register MLS and photogrammetric point clouds in the presence of a large number of missing data, large variations in point density, and scale differences. The robustness of this method is due to its elimination of noise in the extracted linear features and its 2D incremental registration strategy. There are three main contributions of our work: (1) the development of an end-to-end automatic cross-source point-cloud registration method; (2) a way to effectively extract the linear feature and restore the scale; and (3) an incremental registration strategy that simplifies the complex registration process. The experimental results show that this method can successfully achieve cross-source data registration, while other methods have difficulty obtaining satisfactory registration results efficiently. Moreover, this method can be extended to more point-cloud sources.


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 884
Author(s):  
Chia-Ming Tsai ◽  
Yi-Horng Lai ◽  
Yung-Da Sun ◽  
Yu-Jen Chung ◽  
Jau-Woei Perng

Numerous sensors can obtain images or point cloud data on land; however, the rapid attenuation of electromagnetic signals and the lack of light in water restrict their sensing functions underwater. This study expands the utilization of two- and three-dimensional detection technologies to underwater applications, in order to detect abandoned tires. A three-dimensional acoustic sensor, the BV5000, is used in this study to collect underwater point cloud data. Some pre-processing steps are proposed to remove noise and the seabed from the raw data. The point clouds are then processed to obtain two data types: a 2D image and a 3D point cloud. Deep learning methods of different dimensions are used to train the models. In the two-dimensional method, the point cloud is transferred into a bird's-eye-view image, and the Faster R-CNN and YOLOv3 network architectures are used to detect tires. In the three-dimensional method, the point cloud associated with a tire is cut out from the raw data and used as training data, and the PointNet and PointConv network architectures are then used for tire classification. The results show that both approaches provide good accuracy.
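
The 2D branch's conversion of a filtered point cloud into a bird's-eye-view raster can be sketched as an occupancy grid over the x-y plane, which a detector such as Faster R-CNN or YOLOv3 could then consume. Resolution, ranges, and the point-count encoding are illustrative assumptions.

```python
# Sketch: flatten a 3D point cloud into a top-down grid; each cell stores the
# number of points falling into it (max height is another common encoding).

def birds_eye_view(points, cell=0.1, x_range=(0.0, 1.0), y_range=(0.0, 1.0)):
    w = int((x_range[1] - x_range[0]) / cell)
    h = int((y_range[1] - y_range[0]) / cell)
    grid = [[0 for _ in range(w)] for _ in range(h)]
    for x, y, z in points:
        if x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]:
            c = int((x - x_range[0]) / cell)
            r = int((y - y_range[0]) / cell)
            grid[r][c] += 1
    return grid
```

Normalizing the counts to 0-255 turns the grid into an ordinary grayscale image, so standard 2D detection pipelines apply unchanged.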


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 201
Author(s):  
Michael Bekele Maru ◽  
Donghwan Lee ◽  
Kassahun Demissie Tola ◽  
Seunghee Park

Modeling a structure in the virtual world using three-dimensional (3D) information enhances our understanding, while also aiding in the visualization, of how a structure reacts to any disturbance. Generally, 3D point clouds are used for determining structural behavioral changes. Light detection and ranging (LiDAR) is one of the crucial ways by which a 3D point cloud dataset can be generated. Additionally, 3D cameras are commonly used to develop a point cloud containing many points on the external surface of an object. The main objective of this study was to compare the performance of optical sensors, namely a depth camera (DC) and a terrestrial laser scanner (TLS), in estimating structural deflection. We also applied bilateral filtering techniques, which are commonly used in image processing, to the point cloud data to enhance their accuracy and increase the application prospects of these sensors in structural health monitoring. The results from these sensors were validated by comparing them with the outputs of a linear variable differential transformer sensor, which was mounted on the beam during an indoor experiment. The results showed that the datasets obtained from both sensors were acceptable for nominal deflections of 3 mm and above, because the error range was less than ±10%. However, the results obtained from the TLS were better than those obtained from the DC.
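
The bilateral-filtering step can be illustrated in one dimension: each sample becomes a weighted mean of its neighbours, with weights falling off in both position and value, so noise is smoothed while sharp changes (edges) survive. The sigma values and radius below are illustrative, not the study's parameters.

```python
import math

# Sketch of a 1D bilateral filter over a sequence of measurements.
def bilateral_1d(values, sigma_s=2.0, sigma_r=1.0, radius=3):
    out = []
    for i, vi in enumerate(values):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(values), i + radius + 1)):
            # Spatial weight (distance in index) times range weight (distance
            # in value): far-off or very different neighbours barely count.
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((vi - values[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * values[j]
            den += w
        out.append(num / den)
    return out
```

Applied to point-cloud coordinates, the same weighting denoises a scanned beam surface without flattening the genuine deflection profile.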


2011 ◽  
Vol 299-300 ◽  
pp. 1091-1094 ◽  
Author(s):  
Jiang Zhu ◽  
Yuichi Takekuma ◽  
Tomohisa Tanaka ◽  
Yoshio Saito

Currently, the design and processing of complicated models are enabled by progress in CAD/CAM systems. In shape measurement, high-precision measurement is performed using a CMM. To evaluate a machined part, the designed model made by the CAD system and the point cloud data provided by the measurement system are analyzed and compared. Because the designed CAD model and the measured point cloud data are usually created in different coordinate systems, it is necessary to register the models in the same coordinate system for evaluation. In this research, a 3D model registration method based on feature extraction and the iterative closest point (ICP) algorithm is proposed. It can efficiently and accurately register two models in different coordinate systems and effectively avoid the problem of converging to a local solution.
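
The ICP loop at the heart of this method can be sketched in a deliberately reduced, translation-only form: repeatedly match each source point to its nearest target point and shift the source by the mean residual. The full algorithm also estimates rotation (and, in the paper, is seeded by feature extraction); this toy version only shows the iterate-match-update idea.

```python
# Toy translation-only ICP in 2D: nearest-neighbour matching plus a mean-shift
# update, iterated until the source settles onto the target.

def icp_translation(source, target, iters=20):
    sx = list(source)
    for _ in range(iters):
        dx = dy = 0.0
        for x, y in sx:
            # Nearest-neighbour correspondence in the target set.
            tx, ty = min(target, key=lambda t: (t[0] - x) ** 2 + (t[1] - y) ** 2)
            dx += tx - x
            dy += ty - y
        dx /= len(sx)
        dy /= len(sx)
        sx = [(x + dx, y + dy) for x, y in sx]  # apply the mean offset
    return sx
```

A full rigid-body ICP replaces the mean-shift update with a closed-form rotation-plus-translation fit (e.g. via SVD of the cross-covariance of matched pairs).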


Aerospace ◽  
2018 ◽  
Vol 5 (3) ◽  
pp. 94 ◽  
Author(s):  
Hriday Bavle ◽  
Jose Sanchez-Lopez ◽  
Paloma Puente ◽  
Alejandro Rodriguez-Ramos ◽  
Carlos Sampedro ◽  
...  

This paper presents a fast and robust approach for estimating the flight altitude of multirotor Unmanned Aerial Vehicles (UAVs) using 3D point cloud sensors in cluttered, unstructured, and dynamic indoor environments. The objective is to present a flight altitude estimation algorithm that replaces conventional sensors such as laser altimeters, barometers, or accelerometers, which have several limitations when used individually. Our proposed algorithm includes two stages: in the first stage, a fast clustering of the measured 3D point cloud data is performed, along with the segmentation of the clustered data into horizontal planes. In the second stage, these segmented horizontal planes are mapped based on their vertical distance with respect to the point cloud sensor frame of reference, in order to provide a robust flight altitude estimate even in the presence of several static as well as dynamic ground obstacles. We validate our approach using the IROS 2011 Kinect dataset available in the literature, estimating the altitude of the RGB-D camera from the provided 3D point clouds. We further validate our approach using a point cloud sensor on board a UAV, by means of several autonomous real flights, closing its altitude control loop with the flight altitude estimated by our proposed method, in the presence of several different static as well as dynamic ground obstacles. In addition, the implementation of our approach has been integrated into our open-source software framework for aerial robotics called Aerostack.
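
The two-stage idea can be sketched in one dimension: cluster the vertical distances of points below the sensor into "horizontal planes", then take the farthest cluster as the floor, so boxes or people under the UAV do not corrupt the altitude estimate. The gap threshold and function names are illustrative assumptions, not the paper's implementation.

```python
# Sketch: 1D clustering of vertical point distances into planes; the farthest
# plane is treated as the ground and its mean distance as the flight altitude.

def estimate_altitude(z_below, gap=0.3):
    # z_below: vertical distances (m) from the sensor down to each point.
    zs = sorted(z_below)
    planes = [[zs[0]]]
    for z in zs[1:]:
        if z - planes[-1][-1] > gap:  # a vertical gap starts a new plane
            planes.append([z])
        else:
            planes[-1].append(z)
    floor = planes[-1]  # farthest cluster below the sensor = ground plane
    return sum(floor) / len(floor)
```

In the full method the clustering runs on 3D segments rather than raw z-values, but the mapping of planes by vertical distance and the choice of the ground plane follow the same logic.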

