APPLICATIONS OF PANORAMIC IMAGES: FROM 720° PANORAMA TO INTERIOR 3D MODELS OF AUGMENTED REALITY

Author(s):  
I.-C. Lee ◽  
F. Tsai

A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used to build tour-guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only 720° panoramas but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to obtain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, which can be used directly in panorama guiding systems or other applications.

In addition to these straightforward applications, interior orientation parameters (focal length, principal point, and lens radial distortion) can also be estimated while generating the 720° panoramas. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate dense 3D point clouds. Next, the 3D point clouds are used as references for creating building interior models. Trimble SketchUp was used to build the models, and the 3D point cloud was used to determine the locations of building objects through a plane-finding procedure. In the texturing process, the panorama images serve as the data source for the model textures. The resulting 3D indoor model was used as an Augmented Reality model, replacing the guide map or floor plan commonly used in on-line tour-guiding systems.

The 3D indoor model generation procedure has been applied in two research projects: a cultural heritage site at Kinmen and the Taipei Main Station pedestrian zone guidance and navigation system. The results presented in this paper demonstrate the potential of using panoramic images to generate 3D point clouds and 3D models. However, the process is currently manual and labor-intensive; research is being carried out to increase its degree of automation.
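As an illustration of the plane-finding step mentioned above, the following is a minimal sketch that uses RANSAC plane segmentation to pull dominant planes (walls, floor, ceiling) out of a dense point cloud. Open3D and the file name "site_dense.ply" are assumptions made for this example, not part of the paper's own toolchain.

```python
# Hedged sketch: RANSAC plane extraction from a dense point cloud, as one possible way
# to locate wall/floor planes before modelling them. Open3D and the file name are
# illustrative assumptions only.
import open3d as o3d

pcd = o3d.io.read_point_cloud("site_dense.ply")  # dense cloud, e.g. from CMVS/PMVS

planes = []
remaining = pcd
for _ in range(5):  # extract up to five dominant planes (walls, floor, ceiling)
    model, inliers = remaining.segment_plane(distance_threshold=0.02,
                                             ransac_n=3,
                                             num_iterations=1000)
    planes.append((model, remaining.select_by_index(inliers)))
    remaining = remaining.select_by_index(inliers, invert=True)

for (a, b, c, d), patch in planes:
    print(f"plane {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0, "
          f"{len(patch.points)} supporting points")
```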

2019 ◽  
Vol 11 (10) ◽  
pp. 1204 ◽  
Author(s):  
Yue Pan ◽  
Yiqing Dong ◽  
Dalei Wang ◽  
Airong Chen ◽  
Zhen Ye

Three-dimensional (3D) digital technology is essential to the maintenance and monitoring of cultural heritage sites. In the field of bridge engineering, 3D models generated from point clouds of existing bridges are drawing increasing attention. Currently, the widespread use of unmanned aerial vehicles (UAVs) provides a practical solution for generating 3D point clouds as well as models, which can drastically reduce the manual effort and cost involved. In this study, we present a semi-automated framework for generating structural surface models of heritage bridges. Specifically, we propose to tackle this challenge via a novel top-down method for segmenting main bridge components, combined with rule-based classification, to produce labeled 3D models from UAV photogrammetric point clouds. The point clouds of the heritage bridge are generated from the captured UAV images through the structure-from-motion workflow. A segmentation method is developed based on the supervoxel structure and global graph optimization, which can effectively separate bridge components based on geometric features. Then, recognition using a classification tree and bridge geometry is applied to identify different structural elements from the obtained segments. Finally, surface modeling is conducted to generate surface models of the recognized elements. Experiments on two bridges in China demonstrate the potential of the presented structural model reconstruction method using UAV photogrammetry and point cloud processing for 3D digital documentation of heritage bridges. With given markers, the reconstruction error of the point clouds can be as small as 0.4%. Moreover, the precision and recall of the segmentation results on testing data are better than 0.8, and a recognition accuracy better than 0.8 is achieved.
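The rule-based step could look roughly like the sketch below: pre-computed segments are labeled from simple geometric attributes. The attribute names and thresholds are illustrative assumptions, not the paper's actual classification tree.

```python
# Minimal sketch of rule-based labeling of bridge segments from geometric attributes.
# Thresholds and attribute names are placeholders, not the authors' feature set.
from dataclasses import dataclass

@dataclass
class Segment:
    mean_z: float          # mean elevation of the segment (m)
    verticality: float     # 0 = horizontal dominant direction, 1 = vertical
    footprint_area: float  # horizontal extent (m^2)

def classify(seg: Segment, deck_z: float) -> str:
    """Assign a coarse structural label relative to the deck elevation."""
    if seg.verticality > 0.8 and seg.mean_z < deck_z:
        return "pier"
    if seg.verticality < 0.2 and abs(seg.mean_z - deck_z) < 0.5:
        return "deck"
    if seg.mean_z > deck_z and seg.verticality < 0.5:
        return "arch/railing"
    return "other"

print(classify(Segment(mean_z=3.0, verticality=0.9, footprint_area=4.0), deck_z=8.0))  # -> pier
```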


Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 3952 ◽  
Author(s):  
* ◽  
*

Three-dimensional (3D) models are widely used in clinical applications, geosciences, cultural heritage preservation, and engineering; this, together with emerging needs such as building information modeling (BIM), drives the development of new data capture techniques and devices with low cost and a reduced learning curve that allow non-specialized users to employ them. This paper presents a simple, self-assembled device for 3D point cloud data capture with an estimated base price under €2500; furthermore, a workflow for the calculations is described that includes a Visual SLAM-photogrammetric threaded algorithm implemented in C++. Another purpose of this work is to validate the proposed system in BIM working environments. To achieve this, several 3D point clouds were obtained in outdoor tests and the coordinates of 40 points were measured with this device, with data capture distances ranging between 5 and 20 m. These coordinates were then compared to the coordinates of the same targets measured by a total station. The Euclidean average distance errors and root mean square errors (RMSEs) ranged between 12–46 mm and 8–33 mm respectively, depending on the data capture distance (5–20 m). Furthermore, the proposed system was compared with a commonly used photogrammetric methodology based on Agisoft Metashape software. The results obtained demonstrate that the proposed system satisfies (in each case) the tolerances of 'level 1' (51 mm) and 'level 2' (13 mm) for point cloud acquisition in urban design and historic documentation, according to the BIM Guide for 3D Imaging (U.S. General Services Administration).
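The accuracy check described above amounts to comparing target coordinates from the device's point cloud against total-station references. A minimal sketch is shown below; the coordinate arrays are placeholders, not the paper's measurements.

```python
# Hedged sketch of the validation step: mean Euclidean error and per-axis RMSE between
# device-derived and total-station coordinates. The arrays are placeholder values.
import numpy as np

device = np.array([[10.012, 5.003, 1.498],
                   [12.497, 6.021, 1.502]])       # coordinates read from the point cloud (m)
total_station = np.array([[10.000, 5.000, 1.500],
                          [12.480, 6.010, 1.495]])  # reference coordinates (m)

diff = device - total_station
euclid = np.linalg.norm(diff, axis=1)              # per-target Euclidean error
rmse = np.sqrt(np.mean(diff**2, axis=0))           # per-axis RMSE

print(f"mean Euclidean error: {euclid.mean()*1000:.1f} mm")
print(f"RMSE (x, y, z) in mm: {rmse*1000}")
```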


2021 ◽  
Vol 11 (13) ◽  
pp. 5941
Author(s):  
Mun-yong Lee ◽  
Sang-ha Lee ◽  
Kye-dong Jung ◽  
Seung-hyun Lee ◽  
Soon-chul Kwon

Computer-based data processing capabilities have evolved to handle very large volumes of information. As a result, the complexity of three-dimensional (3D) models (e.g., animations or real-time voxels) containing large volumes of information has increased exponentially, and this rapid increase in complexity has led to problems with recording and transmission. In this study, we propose a method for efficiently managing and compressing animation information stored in 3D point-cloud sequences. A compressed point cloud is created by reconfiguring the points based on their voxels. Compared with the original point cloud, noise caused by errors is removed, and a preprocessing procedure that achieves high performance in the redundancy-processing algorithm is proposed. The results of experiments and rendering demonstrate an average file-size reduction of 40% using the proposed algorithm. Moreover, 13% of the overlapping data is extracted and removed, further reducing the file size.
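The voxel-based reconfiguration and redundancy removal can be illustrated, in a simplified form, by voxel downsampling of a single frame of the sequence. This is a sketch under the assumption that Open3D is available; it is not the authors' codec.

```python
# Hedged sketch, not the authors' compression method: voxel-based reorganisation of one
# frame of a point-cloud sequence, which snaps points to voxels and drops duplicates.
# Open3D and the frame file name are assumptions.
import open3d as o3d

frame = o3d.io.read_point_cloud("frame_0001.ply")   # one frame of the sequence (placeholder)
voxel_size = 0.01                                   # 1 cm voxels; tune for the scene scale

compressed = frame.voxel_down_sample(voxel_size)
removed = 1 - len(compressed.points) / len(frame.points)
print(f"{len(frame.points)} points -> {len(compressed.points)} points "
      f"({100 * removed:.1f}% removed)")
```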


Author(s):  
S. Barba ◽  
M. Barbarella ◽  
A. Di Benedetto ◽  
M. Fiani ◽  
M. Limongiello

Abstract. In the field of archaeological surveying, remote sensors, especially photogrammetric and laser scanner systems, are widely used to create 3D models. Photogrammetric surveying with UAVs (Unmanned Aerial Vehicles), combined with Computer Vision algorithms, allows the building of three-dimensional models characterized by photo-realistic textures. The choice of method depends mainly on the complexity of the investigated site, the accuracy requirements, and the available budget and time. The different components of a UAV system determine its characteristics in terms of performance and accuracy, and therefore define its quality and cost. This study presents an assessment of the accuracy of point clouds derived from two UAV systems, a commercial quadcopter (DJI Phantom 3 Professional) and a professionally assembled hexacopter, and from a TLS (Terrestrial Laser Scanner), in order to compare photogrammetric and laser scanner data for archaeological applications. We present a case study to compare and analyse the metric accuracy of the point clouds and the distribution of the GCPs (Ground Control Points). This accuracy assessment serves to quantify the uncertainty in the absolute position of the GCPs, identified on the panoramic images in the absence of artificial targets. The experiments showed that for the tested UAVs the choice of GCPs has a significant impact on point cloud accuracy. The estimated absolute accuracy of the point clouds collected during both test flights was better than 5 cm.
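A common way to compare photogrammetric and laser scanner data, in addition to GCP residuals, is a cloud-to-cloud check. The sketch below computes nearest-neighbour distances from a UAV-derived cloud to the TLS reference using SciPy; the file names are placeholders and this is not the paper's own accuracy procedure.

```python
# Hedged sketch of a cloud-to-cloud (C2C) comparison between a UAV photogrammetric
# cloud and a TLS reference cloud. File names are placeholders.
import numpy as np
from scipy.spatial import cKDTree

uav = np.loadtxt("uav_cloud.xyz")   # N x 3 points from UAV photogrammetry
tls = np.loadtxt("tls_cloud.xyz")   # M x 3 reference points from the laser scanner

tree = cKDTree(tls)
dist, _ = tree.query(uav, k=1)      # distance from each UAV point to its nearest TLS point

print(f"mean C2C distance: {dist.mean()*100:.1f} cm, "
      f"95th percentile: {np.percentile(dist, 95)*100:.1f} cm")
```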


Author(s):  
R. Argiolas ◽  
A. Cazzani ◽  
E. Reccia ◽  
V. Bagnolo

Abstract. In HBIM processes, the extraction of geometric components from 3D point cloud data can be a complex process. The so-called "Scan to BIM" process has been widely utilized: when deriving 3D models from point clouds, local modelling of geometric components is often necessary. In most cases this leads to the use of external modelling tools or complex local modelling processes. In both cases, we often obtain a model that cannot be reused for other items belonging to the same category, contravening the BIM philosophy. Vaulted systems are a typical example of the complex elements found in historical architecture. The paper presents the first results of ongoing research on the geometric modelling and structural evaluation of masonry ribbed vaults. An algorithm is developed that generates a NURBS surface of masonry vaults from data extrapolated from the point cloud, allowing an HBIM family to be obtained. The research aims to overcome the inability to refer to standardised objects in the local modelling of historical architecture elements. Directed at standardising the geometric modelling of 3D laser scan data, the developed workflow is a possible alternative to commonly used workflows. Particular attention is focused on a case study of stellar vaults, a special class of masonry ribbed vaults whose three-dimensional geometry features a star-shaped projection on the horizontal plane. The work is carried out to verify that this family can be used for the structural analysis of stellar masonry vaults.
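As a rough stand-in for the NURBS generation step, the sketch below fits a smooth B-spline surface (a NURBS with unit weights) to vault points extracted from a scan, using SciPy. The library choice and the file name are assumptions; the authors' algorithm and parameterisation may differ.

```python
# Hedged sketch, not the authors' algorithm: fit a bicubic smoothing B-spline surface
# to vault intrados points as an approximation of the NURBS generation step.
import numpy as np
from scipy.interpolate import bisplrep, bisplev

pts = np.loadtxt("vault_points.xyz")          # N x 3: x, y, z of the vault intrados (placeholder)
x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]

tck = bisplrep(x, y, z, kx=3, ky=3, s=len(x) * 0.001)   # cubic surface, light smoothing

# Evaluate the fitted surface on a regular grid for export or meshing.
xi = np.linspace(x.min(), x.max(), 50)
yi = np.linspace(y.min(), y.max(), 50)
zi = bisplev(xi, yi, tck)
print(zi.shape)   # (50, 50) grid of fitted vault heights
```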


Author(s):  
Y. Zhou ◽  
Z. Dong ◽  
P. Tong ◽  
B. Yang

Abstract. The quality of tunnel excavation is evaluated by comparing the excavated tunnel with the design model. Terrestrial laser scanning (TLS) provides surveyors with dense and accurate three-dimensional (3D) point clouds for excavation model reconstruction. However, sufficient attention has not been paid to incorporating design models into tunnel point cloud processing. In this paper, a technical framework that combines TLS point clouds and the design model for tunnel excavation evaluation is proposed. Firstly, the point clouds are sliced into cross-sections and the feature points are extracted accordingly. Then, considering the structure of the design model, feature point deficiencies are repaired by topological and parametric model interpolation. Finally, the excavation quality is evaluated in terms of the deviation of centerlines and 3D models. The method is validated in a case study. Experiments show that the deviation of the centerline azimuth is acceptable, but there remain considerable overbreak and underbreak, which account for 20.6% and 11.2% of the design excavation volume, respectively.
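A simplified version of the cross-section comparison is sketched below: the as-built cloud is sliced at a given chainage and radial distances are compared against the design profile to flag overbreak and underbreak. A circular design section and the file name are simplifying assumptions made only for this illustration.

```python
# Hedged sketch of a cross-section overbreak/underbreak check against a circular
# design profile. The file name, radius, and chainage are placeholders.
import numpy as np

points = np.loadtxt("tunnel_cloud.xyz")      # x along the tunnel axis, y-z in the section plane
design_radius = 5.0                          # m, placeholder circular design profile
station = 120.0                              # chainage of the slice (m)

slab = points[np.abs(points[:, 0] - station) < 0.05]     # 10 cm thick cross-section slice
radial = np.hypot(slab[:, 1], slab[:, 2]) - design_radius  # + = overbreak, - = underbreak

for name, vals in (("overbreak", radial[radial > 0]), ("underbreak", radial[radial < 0])):
    if len(vals):
        print(f"{name}: mean {vals.mean():.3f} m over {len(vals)} points")
```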


Author(s):  
P.M.B. Torres ◽  
P. J. S. Gonçalves ◽  
J.M.M. Martins

Purpose – The purpose of this paper is to present a robotic motion compensation system, using ultrasound images, to assist orthopedic surgery. The robotic system can compensate for femur movements during bone drilling procedures. Although it may have other applications, the system was designed to be used in hip resurfacing (HR) prosthesis surgery to implant the initial guide tool. The system requires no fiducial markers implanted in the patient, using only non-invasive ultrasound images. Design/methodology/approach – The femur location in the operating room is obtained by processing ultrasound (US) and computed tomography (CT) images, acquired in the intra-operative and pre-operative scenarios, respectively. During surgery, the bone position and orientation are obtained by registration of US and CT three-dimensional (3D) point clouds, using an optical measurement system and passive markers attached to the US probe and to the drill. The system description, image processing, calibration procedures and results with simulated and real experiments are presented to illustrate the system in operation. Findings – The robotic system can compensate for femur movements during bone drilling procedures. In most experiments, the update was validated, with errors of 2 mm/4°. Originality/value – The navigation system is based entirely on the information extracted from images obtained from CT pre-operatively and US intra-operatively. Contrary to current surgical systems, it does not use any type of implant in the bone to track the femur movements.
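The intra-operative registration of the ultrasound-derived bone surface to the pre-operative CT surface can be sketched as a rigid point-to-point ICP, for example with Open3D. Library choice, file names, and tolerances are assumptions; the paper's own registration pipeline may differ.

```python
# Hedged sketch of US-to-CT rigid registration with point-to-point ICP (Open3D assumed).
import numpy as np
import open3d as o3d

us = o3d.io.read_point_cloud("us_bone_surface.ply")   # intra-operative ultrasound points (placeholder)
ct = o3d.io.read_point_cloud("ct_bone_surface.ply")   # pre-operative CT surface points (placeholder)

result = o3d.pipelines.registration.registration_icp(
    us, ct,
    max_correspondence_distance=0.005,   # 5 mm correspondence search radius (illustrative)
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

print("fitness:", result.fitness)
print("US -> CT transform:\n", result.transformation)
```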


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 201
Author(s):  
Michael Bekele Maru ◽  
Donghwan Lee ◽  
Kassahun Demissie Tola ◽  
Seunghee Park

Modeling a structure in the virtual world using three-dimensional (3D) information enhances our understanding of how the structure reacts to any disturbance, while also aiding its visualization. Generally, 3D point clouds are used for determining structural behavioral changes. Light detection and ranging (LiDAR) is one of the principal ways in which a 3D point cloud dataset can be generated. Additionally, 3D cameras are commonly used to develop point clouds containing many points on the external surface of an object. The main objective of this study was to compare the performance of optical sensors, namely a depth camera (DC) and a terrestrial laser scanner (TLS), in estimating structural deflection. We also applied bilateral filtering techniques, commonly used in image processing, to the point cloud data to enhance their accuracy and increase the application prospects of these sensors in structural health monitoring. The results from these sensors were validated by comparing them with the outputs from a linear variable differential transformer sensor mounted on the beam during an indoor experiment. The results showed that the datasets obtained from both sensors were acceptable for nominal deflections of 3 mm and above because the error range was less than ±10%. However, the results obtained from the TLS were better than those obtained from the DC.
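One way to illustrate the bilateral filtering and deflection estimation is to denoise depth-camera frames with OpenCV's bilateral filter and take the depth change at the measurement point between a baseline and a loaded frame. Frame names, filter parameters, and the pixel of interest are placeholders; the paper works directly on point clouds with its own settings.

```python
# Hedged sketch: edge-preserving bilateral filtering of depth frames, then deflection
# estimated as the depth change at one pixel between baseline and loaded states.
import numpy as np
import cv2

baseline = np.load("depth_baseline.npy").astype(np.float32)   # depth in metres (placeholder)
loaded = np.load("depth_loaded.npy").astype(np.float32)

# d, sigmaColor (in metres), and sigmaSpace (in pixels) are illustrative choices.
baseline_f = cv2.bilateralFilter(baseline, d=9, sigmaColor=0.05, sigmaSpace=5)
loaded_f = cv2.bilateralFilter(loaded, d=9, sigmaColor=0.05, sigmaSpace=5)

row, col = 240, 320                                   # pixel over the LVDT location (placeholder)
deflection_mm = (baseline_f[row, col] - loaded_f[row, col]) * 1000.0
print(f"estimated deflection: {deflection_mm:.2f} mm")
```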


2019 ◽  
Vol 93 (3) ◽  
pp. 411-429 ◽  
Author(s):  
Maria Immacolata Marzulli ◽  
Pasi Raumonen ◽  
Roberto Greco ◽  
Manuela Persia ◽  
Patrizia Tartarino

Abstract. Methods for the three-dimensional (3D) reconstruction of forest trees have been suggested for data from both active and passive sensors. Laser scanner technologies have become popular in the last few years, despite their high costs. Following improvements in photogrammetric algorithms (e.g. structure from motion, SfM), photographs have become a new low-cost source of 3D point clouds. In this study, we use images captured by a smartphone camera to calculate dense point clouds of a forest plot using SfM. Eighteen point clouds were produced by changing the densification parameters (Image scale, Point density, Minimum number of matches) in order to investigate their influence on the quality of the point clouds produced. In order to estimate diameter at breast height (d.b.h.) and stem volumes, we developed an automatic method that extracts the stems from the point cloud and then models them with cylinders. The results show that Image scale is the most influential parameter in terms of identifying and extracting trees from the point clouds. The best-performing cylinder modelling from the point clouds, compared to field data, had an RMSE of 1.9 cm for d.b.h. and 0.094 m³ for volume. Thus, for forest management and planning purposes, it is possible to use our photogrammetric and modelling methods to measure d.b.h., stem volume and possibly other forest inventory metrics rapidly and without felling trees. The proposed methodology significantly reduces working time in the field, using 'non-professional' instruments and automating estimates of dendrometric parameters.
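The d.b.h. estimate can be approximated by taking a thin slice of an extracted stem around breast height (1.3 m) and fitting a circle to it by linear least squares, as in the sketch below. The authors fit full cylinders; this simplified stand-in uses a placeholder input file.

```python
# Hedged sketch of d.b.h. estimation: algebraic least-squares circle fit to a thin
# horizontal slice of one stem. The input file is a placeholder.
import numpy as np

stem = np.loadtxt("stem_points.xyz")                        # points of one extracted stem
slice_pts = stem[np.abs(stem[:, 2] - 1.3) < 0.05][:, :2]    # 10 cm slab around 1.3 m, keep x-y

# Circle model x^2 + y^2 + D*x + E*y + F = 0 solved in least squares.
x, y = slice_pts[:, 0], slice_pts[:, 1]
A = np.column_stack([x, y, np.ones_like(x)])
b = -(x**2 + y**2)
D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]

radius = np.sqrt((D / 2) ** 2 + (E / 2) ** 2 - F)
print(f"estimated d.b.h.: {2 * radius * 100:.1f} cm")
```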


Author(s):  
A. Al-Rawabdeh ◽  
H. Al-Gurrani ◽  
K. Al-Durgham ◽  
I. Detchev ◽  
F. He ◽  
...  

Landslides are among the major threats to urban landscapes and man-made infrastructure. They often cause economic losses, property damage, and loss of lives. Temporal monitoring data of landslides from different epochs enable the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. Traditional methods for point-cloud-based landslide monitoring rely on a variation of the Iterative Closest Point (ICP) registration procedure to align reconstructed surfaces from different epochs to a common reference frame. However, ICP-based registration can sometimes fail or may not provide sufficient accuracy. For example, point clouds from different epochs might converge to a local minimum due to a lack of geometric variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous estimation of all registration parameters. These include the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs, estimated via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch, using the images captured in that particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera, which facilitated capturing high-resolution geo-tagged images in two epochs over the period of one year (i.e., May 2014 and May 2015). Note that due to the coarse accuracy of the on-board GPS receiver (e.g., +/- 5-10 m), the geo-tagged positions of the images were only used as initial values in the bundle block adjustment. Normal distances, signifying detected changes, varying from 20 cm to 4 m were identified between the two epochs. The accuracy of the co-registered surfaces was estimated by comparing non-active patches within the monitored area of interest. Since these non-active sub-areas are stationary, the computed normal distances should theoretically be close to zero. The quality control of the registration results showed that the average normal distance was approximately 4 cm, which is within the noise level of the reconstructed surfaces.
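For the change metric, one simple approximation is to take, for each point of the first epoch, the signed distance to its nearest second-epoch point projected onto the first epoch's surface normal, as sketched below. Open3D and SciPy are assumed for normal estimation and nearest-neighbour search; the paper's exact normal-distance computation may differ.

```python
# Hedged sketch of normal distances between two co-registered epoch point clouds.
# File names are placeholders; this is a simplified approximation of the change metric.
import numpy as np
import open3d as o3d
from scipy.spatial import cKDTree

e1 = o3d.io.read_point_cloud("epoch_2014.ply")
e2 = o3d.io.read_point_cloud("epoch_2015.ply")
e1.estimate_normals(o3d.geometry.KDTreeSearchParamKNN(knn=30))   # per-point surface normals

p1 = np.asarray(e1.points)
n1 = np.asarray(e1.normals)
p2 = np.asarray(e2.points)

idx = cKDTree(p2).query(p1, k=1)[1]                     # nearest epoch-2 point for each epoch-1 point
normal_dist = np.einsum('ij,ij->i', p2[idx] - p1, n1)   # signed distance along the normal

print(f"mean |normal distance|: {np.abs(normal_dist).mean()*100:.1f} cm")
```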

