ICE DETECTION ON AIRPLANE WINGS USING A PHOTOGRAMMETRIC POINT CLOUD: A SIMULATION

Author(s):  
I. Aicardi ◽  
A. Lingua ◽  
L. Mazzara ◽  
M. A. Musci ◽  
G. Rizzo

Abstract. This study describes tests carried out, within the European project SEI (Spectral Evidence of Ice; reference call: MANUNET III 2018, project code: MNET18/ICT-3438), for the geometrical detection of ice on airplane wings. The purpose of these analyses is to estimate the thickness and shape of the ice that an RGB sensor is able to detect on large aircraft such as the Boeing 737-800. However, field testing is not yet available; therefore, in order to simulate the final configuration, a steel panel was used to reproduce the aircraft surface. The adopted methodology consists in defining a reference surface and modelling its 3D shape with and without ice through photogrammetric acquisitions collected by a DJI Mavic Air drone hosting an RGB camera and processed with Agisoft Metashape software. A comparison between the models with and without ice is presented, and the results show that it is possible to identify the ice, even though some noise remains due to the geometric reconstruction itself. Finally, using 3DReshaper and Matlab software, the authors carry out various analyses defining the operative limits, the processing time, the correct set-up of Metashape for more accurate ice detection, and the optimization of the methodology in terms of processing time, precision and completeness. The procedure could be made more reliable by adopting hyperspectral sensing as a future implementation.
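The core of the comparison above is a cloud-to-cloud distance between the "no ice" and "with ice" models. A minimal sketch of that idea, using nearest-neighbour distances with a noise floor for the reconstruction noise mentioned in the abstract; the synthetic panel, the 1 mm noise floor and the 5 mm ice layer are illustrative assumptions, not values from the project:

```python
import numpy as np
from scipy.spatial import cKDTree

def ice_thickness_map(reference, test, noise_floor=0.001):
    """For each test point, distance to the nearest reference point.
    Distances below the reconstruction noise floor are treated as zero ice."""
    tree = cKDTree(reference)
    dist, _ = tree.query(test)
    return np.where(dist > noise_floor, dist, 0.0)

# Flat reference panel vs. the same panel raised by 5 mm of "ice"
ref = np.array([[x, y, 0.0] for x in np.linspace(0, 1, 20)
                            for y in np.linspace(0, 1, 20)])
iced = ref + np.array([0.0, 0.0, 0.005])
thickness = ice_thickness_map(ref, iced)
print(thickness.mean())  # ~0.005 m for this synthetic panel
```

Tools such as 3DReshaper compute essentially this cloud-to-mesh or cloud-to-cloud distance, with more sophisticated handling of surface normals and sampling density.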

2021 ◽  
Vol 13 (13) ◽  
pp. 2494
Author(s):  
Gaël Kermarrec ◽  
Niklas Schild ◽  
Jan Hartmann

T-splines have recently been introduced to represent objects of arbitrary shape using a smaller number of control points than conventional non-uniform rational B-splines (NURBS) or B-spline representations in computer-aided design, computer graphics and reverse engineering. They are flexible in representing complex surface shapes and economical in terms of parameters as they enable local refinement. This property is a great advantage when dense, scattered and noisy point clouds, such as those from a terrestrial laser scanner (TLS), are approximated using least-squares fitting. Unfortunately, when it comes to assessing the goodness of fit of the surface approximation with a real dataset, only a noisy point cloud can be approximated: (i) a low root mean squared error (RMSE) can be linked with overfitting, i.e., a fitting of the noise, and should correspondingly be avoided, and (ii) a high RMSE is synonymous with a lack of detail. To address the challenge of judging the approximation, the reference surface should be entirely known: this can be achieved by printing a mathematically defined T-spline reference surface in three dimensions (3D) and modelling the artefacts induced by the 3D printing. Once the object is scanned under different configurations, it is possible to assess the goodness of fit of the approximation for a noisy and potentially gappy point cloud and compare it with the traditional but less flexible NURBS. The advantages of T-spline local refinement open the door to further applications within a geodetic context, such as rigorous statistical testing of deformation. Two different scans from a slightly deformed object were approximated; we found that more than 40% of the computational time could be saved, without affecting the goodness of fit of the surface approximation, by using the same mesh for the two epochs.
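The overfitting point (i) above can be illustrated with a much simpler least-squares B-spline, a stand-in for the T-spline/NURBS fits in the paper: as control points are added, the RMSE against the noisy data keeps falling, while the RMSE against the known reference curve grows again. The curve, noise level and knot counts are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
truth = np.sin(2 * np.pi * x)              # known "reference surface" (1D)
noisy = truth + rng.normal(0.0, 0.1, x.size)

results = {}
for n_interior in (6, 78):                 # few vs. many control points
    # cubic B-spline knot vector with uniform interior knots
    t = np.r_[[0.0] * 4, np.linspace(0, 1, n_interior + 2)[1:-1], [1.0] * 4]
    spline = make_lsq_spline(x, noisy, t, k=3)
    rmse_data = np.sqrt(np.mean((spline(x) - noisy) ** 2))
    rmse_truth = np.sqrt(np.mean((spline(x) - truth) ** 2))
    results[n_interior] = (rmse_data, rmse_truth)
    print(n_interior, round(rmse_data, 3), round(rmse_truth, 3))
```

This is exactly why the paper prints a mathematically defined reference surface: only with a known ground truth can the second RMSE be computed at all.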


2003 ◽  
Vol 2003 (1) ◽  
pp. 453-456
Author(s):  
Rhonda Arvidson ◽  
Stan Jones

ABSTRACT An extensive risk assessment of oil transportation in Prince William Sound, Alaska, finalized in 1996, identified drifting icebergs from Columbia Glacier as one of the most significant oil spill risks remaining to be addressed. The Prince William Sound Regional Citizens’ Advisory Council (PWS RCAC) was a major participant in this risk analysis. As part of the groundwork for the ice detection project, PWS RCAC has also sponsored extensive studies of Columbia Glacier calving and drift patterns and of iceberg size and distribution. A collaborative project, called the ice detection project, was developed by a multi-stakeholder working group and provides an opportunity for an immediate and long-term solution using existing technology. One objective of the project is to verify the efficiency, effectiveness and reliability of existing radar technologies to provide mariners and the United States Coast Guard with real-time information regarding ice conditions. A secondary objective is to promote research and development, through field testing, of new and emerging technologies to determine possible enhancements of conventional radar. In addition to PWS RCAC, the stakeholders responsible for spearheading this project are: Alyeska Pipeline Service Company, Alaska Department of Environmental Conservation, Oil Spill Recovery Institute, United States Coast Guard, Prince William Sound Community College and National Oceanic and Atmospheric Administration. Each of the seven participants brings expertise and backing from the stakeholder they represent. The site chosen for the ice detection radar project is Reef Island (illustration 1), located adjacent to Bligh Reef, Prince William Sound. This location is ideal because of its proximity to Columbia Glacier, the source of the icebergs, as well as its unobstructed view of the shipping lanes.
A fifty-foot tower was installed at the site during the fall of 2001, and a conventional radar system is currently being configured for installation. The expectation is that the system will be up and running by July of 2002, giving mariners in Prince William Sound real-time information on ice in the tanker lanes. A second field test of a UHF radar prototype is planned for the summer of 2002. Field testing and ground truthing of the radar system are scheduled for the next five years.


Author(s):  
P. Tutzauer ◽  
N. Haala

This paper aims at façade reconstruction for the subsequent enrichment of LOD2 building models. We use point clouds from dense image matching, with imagery both from mobile mapping systems and from oblique airborne cameras. The interpretation of façade structures is based on a geometric reconstruction. For this purpose, a pre-segmentation of the point cloud into façade points and non-façade points is necessary. We present an approach for point clouds with limited geometric accuracy, where a purely geometric segmentation might fail. Our contribution is a radiometric segmentation approach. Via local point features, based on a clustering in hue space, the point cloud is segmented into façade points and non-façade points. This way, the initial geometric reconstruction step can be bypassed, and point clouds with limited accuracy can still serve as input for the façade reconstruction and modelling approach.
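The hue-space clustering step above can be sketched as a two-class 1D k-means on the hue channel of the coloured point cloud. The two-cluster assumption (e.g. brick façade vs. green vegetation) and the synthetic colours are illustrative, not taken from the paper:

```python
import colorsys
import numpy as np

def hue_segmentation(colors, n_iter=10):
    """Two-class 1D k-means on the hue channel; returns a boolean mask
    selecting the cluster that contains the lowest hues."""
    hues = np.array([colorsys.rgb_to_hsv(*c)[0] for c in colors])
    centers = np.array([hues.min(), hues.max()])   # simple initialisation
    for _ in range(n_iter):
        labels = np.abs(hues[:, None] - centers[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = hues[labels == k].mean()
    return labels == 0

# Reddish façade points (hue near 0) vs. green vegetation (hue near 1/3)
facade = [(0.7, 0.3, 0.25)] * 5
vegetation = [(0.2, 0.6, 0.2)] * 5
mask = hue_segmentation(facade + vegetation)
print(mask)  # façade points True, vegetation points False
```

A real implementation would additionally use local point features (e.g. hue statistics over a neighbourhood) rather than per-point colour alone, as the abstract indicates.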


Author(s):  
L. Díaz-Vilariño ◽  
E. Frías ◽  
J. Balado ◽  
H. González-Jorge

<p><strong>Abstract.</strong> Scan-to-BIM systems have recently been proposed for the dimensional and quality assessment of as-built construction components against planned works. The procedure is generally based on the geometric alignment and comparison of as-built laser scans with as-designed BIM models. A major concern in Scan-to-BIM procedures is point cloud quality in terms of data completeness; consequently, the scanning process should be designed to obtain full coverage of the scene while avoiding major occlusions. This work proposes a method to optimize the number and placement of scan positions for Scan-to-BIM procedures following stop &amp; go scanning. The method is based on a visibility analysis using a <i>ray-tracing algorithm</i>. In addition, the optimal route between scan positions is formulated as a <i>travelling salesman problem</i> and solved using a suboptimal <i>ant colony optimization algorithm</i>. The distribution of candidate positions follows a grid-based structure, although other distributions based on triangulation or tessellation could be implemented to reduce the number of candidate positions and the processing time.</p>
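The routing step above is a travelling salesman problem over the selected scan positions. The paper solves it with ant colony optimization; as a minimal stand-in, the sketch below uses a nearest-neighbour heuristic on a small grid of candidate stop &amp; go positions, purely to illustrate the problem shape:

```python
import numpy as np

def nearest_neighbour_route(positions, start=0):
    """Visit every scan position once, always moving to the closest unvisited one."""
    positions = np.asarray(positions, float)
    route, remaining = [start], set(range(len(positions))) - {start}
    while remaining:
        last = positions[route[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(positions[i] - last))
        route.append(nxt)
        remaining.remove(nxt)
    return route

# 2x3 grid of candidate stop & go scan positions
grid = [(x, y) for y in range(2) for x in range(3)]
route = nearest_neighbour_route(grid)
print(route)  # a tour visiting all six positions once
```

Ant colony optimization improves on this greedy tour by letting many such constructions deposit "pheromone" on good edges and biasing later tours toward them.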


Electronics ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 11
Author(s):  
Xing Xie ◽  
Lin Bai ◽  
Xinming Huang

LiDAR has been widely used in autonomous driving systems to provide high-precision 3D geometric information about the vehicle’s surroundings for perception, localization, and path planning. LiDAR-based point cloud semantic segmentation is an important task with a critical real-time requirement. However, most of the existing convolutional neural network (CNN) models for 3D point cloud semantic segmentation are very complex and can hardly be processed in real time on an embedded platform. In this study, a lightweight CNN structure was proposed for projection-based LiDAR point cloud semantic segmentation with only 1.9 M parameters, an 87% reduction compared to the state-of-the-art networks. When evaluated on a GPU, the processing time was 38.5 ms per frame, and it achieved a 47.9% mIoU score on the Semantic-KITTI dataset. In addition, the proposed CNN was targeted on an FPGA using an NVDLA architecture, which resulted in a 2.74x speedup over the GPU implementation and a 46x improvement in power efficiency.
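"Projection-based" segmentation relies on mapping the 3D LiDAR sweep onto a 2D range image that a 2D CNN can process. A sketch of that spherical projection; the 64x1024 image size and the +3°/−25° vertical field of view follow common Velodyne-style settings and are assumptions, not values from the paper:

```python
import numpy as np

def spherical_projection(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Return (row, col) pixel indices and ranges for each 3D point."""
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    fov = fov_up - fov_down
    x, y, z = points.T
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                   # horizontal angle
    pitch = np.arcsin(z / r)                 # vertical angle
    col = ((0.5 * (1.0 - yaw / np.pi)) * w).astype(int) % w
    row = ((1.0 - (pitch - fov_down) / fov) * h).astype(int).clip(0, h - 1)
    return row, col, r

pts = np.array([[10.0, 0.0, 0.0], [0.0, 10.0, 1.0]])
row, col, r = spherical_projection(pts)
print(row, col, r)
```

The range image (and per-pixel features such as x, y, z, range, intensity) is then fed to the lightweight CNN, and predicted pixel labels are projected back onto the original points.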


Author(s):  
M. Pulcrano ◽  
S. Scandurra ◽  
G. Minin ◽  
A. di Luggo

<p><strong>Abstract.</strong> Photography has always been considered a valid tool for acquiring information about reality. Nowadays, its versatility, together with the development of new techniques and technologies, allows it to be used in many different fields of application. In particular, in the digitization of built heritage, photography not only makes it possible to understand and document historical and architectural artifacts but also to acquire morphological and geometrical data about them through automated digital photogrammetry. Photogrammetry now provides many tools for producing virtual casts of reality in the form of point clouds. Although they can achieve metric reliability and visual quality, traditional instruments &ndash; such as monoscopic cameras &ndash; involve careful planning of the campaign phase and long acquisition and processing times. By contrast, the most recent instruments, based on the integration of different sensors and cameras, try to close the gap between time and results. The latter include indoor mapping systems which, thanks to 360&deg; acquisitions and SLAM technology, reconstruct the original scene in real time, in great detail and with a photorealistic rendering. This study reports research evaluating the metric reliability and level of survey detail of a Matterport Pro2 3D motorized rotating camera, equipped with SLAM technology, whose results have been compared with point clouds obtained by image-based and range-based processes.</p>


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Linh Truong-Hong ◽  
Roderik Lindenbergh ◽  
Thu Anh Nguyen

Purpose: Terrestrial laser scanning (TLS) point clouds have been widely used in deformation measurement for structures. However, the reliability and accuracy of the resulting deformation estimation strongly depend on the quality of each step of the workflow, which has not been fully addressed. This study aims to give insight into the errors of these steps, and the results of the study are intended as guidelines for the practical community to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds. The main contributions of the paper are: investigating the point cloud registration error affecting the resulting deformation estimation; identifying an appropriate segmentation method for extracting the data points of a deformed surface; investigating a methodology to determine an un-deformed or reference surface for estimating deformation; and proposing a methodology to minimize the impact of outliers, noisy data and/or mixed pixels on deformation estimation.
Design/methodology/approach: In practice, the quality of the point clouds and of the surface extraction strongly impacts the resulting deformation estimation, which can cause an incorrect decision on the state of the structure when uncertainty is present. To gain more comprehensive insight into those impacts, this study addresses four issues: data errors due to registration of multiple scanning stations (Issue 1); methods used to extract point clouds of structure surfaces (Issue 2); selection of the reference surface Sref used to measure deformation (Issue 3); and the presence of outliers and/or mixed pixels (Issue 4). The investigation is demonstrated by estimating the deformation of a bridge abutment, a building and an oil storage tank.
Findings: The study shows that both random sample consensus (RANSAC) and region-growing-based methods [cell-based/voxel-based region growing (CRG/VRG)] can extract the data points of surfaces, but RANSAC is only applicable to a primary primitive surface (e.g. a plane in this study) subjected to small deformation (case studies 2 and 3) and cannot eliminate mixed pixels. On the other hand, CRG and VRG are suitable methods for deformed, free-form surfaces. In addition, in practice, a reference surface of a structure is mostly not available. Using a plane fitted to the point cloud of the current surface would cause unrealistic and inaccurate deformation estimates, because outlier data points and data points of damaged areas affect the accuracy of the fitted plane. This study therefore recommends the use of a reference surface determined from a design concept/specification. A smoothing method with a spatial interval can effectively minimize the negative impact of outliers, noisy data and/or mixed pixels on deformation estimation.
Research limitations/implications: Owing to logistical difficulties, an independent measurement could not be established to assess the accuracy of the deformation estimated from the TLS point clouds in the case studies of this research. However, common laser scanners using the time-of-flight or phase-shift principle provide point clouds with an accuracy in the order of 1–6 mm, while the point clouds of triangulation scanners have sub-millimetre accuracy.
Practical implications: This study gives insight into the errors of these steps, and the results are intended as guidelines for the practical community to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds.
Social implications: The results of this study provide guidelines for the practical community to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds. A low-cost method can be applied for deformation analysis of structures.
Originality/value: Although many studies have used laser scanning to measure structure deformation over the last two decades, the methods applied mainly measured the change between two states (or epochs) of the structure surface and focused on quantifying deformation from TLS point clouds. Those studies proved that a laser scanner can be an alternative instrument for acquiring spatial information for deformation monitoring. However, there are still challenges in establishing an appropriate procedure to collect high-quality point clouds and in developing methods to interpret the point clouds to obtain reliable and accurate deformation when uncertainty, including data quality and reference information, is present. Therefore, this study demonstrates the impact on deformation estimation of data quality in terms of point cloud registration error, of the methods selected for extracting point clouds of surfaces, of identifying reference information, and of outliers, noisy data and/or mixed pixels.
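The recommendation above, to measure deformation against a design-based reference surface and suppress outliers with a spatial smoothing interval, can be sketched as follows. Signed distances to a reference plane are median-smoothed per 2D grid cell; the cell size and the synthetic wall data are illustrative assumptions, not the paper's method in detail:

```python
import numpy as np

def deformation_vs_plane(points, normal, d, cell=0.5):
    """Signed distance of each point to the plane n.p + d = 0,
    median-smoothed over square cells of side `cell` in the XY plane."""
    normal = np.asarray(normal, float)
    normal = normal / np.linalg.norm(normal)
    dist = points @ normal + d
    keys = np.floor(points[:, :2] / cell).astype(int)
    smoothed = np.empty_like(dist)
    for key in {tuple(k) for k in keys}:
        mask = (keys == key).all(axis=1)
        smoothed[mask] = np.median(dist[mask])   # robust to isolated outliers
    return smoothed

# Flat wall at z = 0 with one mixed-pixel outlier at 0.08 m
pts = np.array([[0.1, 0.1, 0.0], [0.2, 0.3, 0.0], [0.3, 0.2, 0.0],
                [0.4, 0.4, 0.08]])
defo = deformation_vs_plane(pts, normal=(0, 0, 1), d=0.0)
print(defo)  # the median suppresses the single outlier
```

Using a design-specified plane here, rather than a plane fitted to the same (possibly damaged) point cloud, avoids the bias the Findings section warns about.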


Robotica ◽  
2020 ◽  
pp. 1-23
Author(s):  
Otacílio de Araújo Ramos Neto ◽  
Abel Cavalcante Lima Filho ◽  
Tiago P. Nascimento

SUMMARY Visual simultaneous localization and mapping (VSLAM) is a relevant solution for vehicle localization and environment mapping. However, it demands large computational effort, making it a non-real-time solution. VSLAM systems that employ geometric reconstruction are based on the parallel processing paradigm developed in the Parallel Tracking and Mapping (PTAM) algorithm. This type of system was designed for processors that have exactly two cores. The various SLAM methods based on PTAM were likewise not designed to scale to all the cores of modern processors, nor to function as distributed systems. Therefore, we propose a modification to the execution pipeline of well-known VSLAM systems so that they can scale to all available processors during execution, thereby improving their performance in terms of processing time. We explain the principles behind this modification via a study of the threads in SLAM systems based on PTAM. We validate our results with experiments describing the behavior of the original ORB-SLAM system and the modified version.
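The scaling idea above, moving from PTAM's fixed tracking/mapping thread pair to work spread over all available cores, can be sketched with a worker pool over independent per-keyframe jobs. The `process_keyframe` workload below is a placeholder, not ORB-SLAM code:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def process_keyframe(frame_id):
    # placeholder for independent mapping work (e.g. per-keyframe
    # feature extraction) that does not need the fixed two-thread split
    return frame_id, sum(i * i for i in range(1000))

keyframes = range(8)
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    results = list(pool.map(process_keyframe, keyframes))
print([fid for fid, _ in results])  # all keyframes processed, order preserved
```

The real difficulty, which the paper studies, is identifying which parts of the PTAM-style pipeline are actually independent enough to be scheduled this way without breaking tracking.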


Author(s):  
S. N. Mohd Isa ◽  
S. A. Abdul Shukor ◽  
N. A. Rahim ◽  
I. Maarof ◽  
Z. R. Yahya ◽  
...  

Abstract. In this paper, pairwise coarse registration is presented using real-world point cloud data obtained by a terrestrial laser scanner, without information from reference markers in the scene. The data are challenging because multiple scans, each covering a limited range of the scene from a side view, produce large datasets of millions of points. Furthermore, the data have a low percentage of overlap between two scans, and the point clouds were acquired from structures with geometrical symmetry, which leads to minimal transformation during the registration process. To process the data, 3D Harris keypoints are used and coarse registration is performed by the Iterative Closest Point (ICP) algorithm. Different sampling methods were applied in order to evaluate the processing time for further analysis at different voxel grid sizes. Then, the Root Mean Squared Error (RMSE) is used to determine the accuracy of the approach and to study its relation to the relative orientation of the scans in pairwise registration. The results show that the grid average downsampling method gives a shorter processing time with a reasonable RMSE in finding the exact scan pair. It can also be seen that the grid step size has an inverse relationship with the number of downsampled points. This setting was then tested on a smaller, lower-overlap data set of another heritage building. The relative orientation is evaluated from the transformation parameters for both data sets; Data set I, which has higher overlap, gives better accuracy, which may be due to the smaller distance between the two point clouds compared to Data set II.
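The grid average downsampling discussed above bins points into voxels of a given step size and replaces each occupied voxel by the mean of its points, so a larger grid step leaves fewer points (the inverse relationship the abstract notes). A minimal sketch on synthetic data:

```python
import numpy as np

def grid_average_downsample(points, step):
    """Replace every occupied voxel of size `step` by the centroid of its points."""
    keys = np.floor(points / step).astype(int)
    voxels = {}
    for key, p in zip(map(tuple, keys), points):
        voxels.setdefault(key, []).append(p)
    return np.array([np.mean(v, axis=0) for v in voxels.values()])

rng = np.random.default_rng(1)
cloud = rng.uniform(0.0, 10.0, size=(5000, 3))
counts = {step: len(grid_average_downsample(cloud, step))
          for step in (0.5, 1.0, 2.0)}
print(counts)  # larger grid step -> fewer points
```

In the registration pipeline, the downsampled clouds are what ICP iterates over, which is why the grid step directly trades RMSE against processing time.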


2019 ◽  
Vol 53 (2) ◽  
pp. 487-504 ◽  
Author(s):  
Abdul Rahman El Sayed ◽  
Abdallah El Chakik ◽  
Hassan Alabboud ◽  
Adnan Yassine

Many computer vision approaches to point cloud processing consider 3D simplification an important preprocessing phase. On the other hand, the large amount of point cloud data that describes a 3D object requires excessively large storage and long processing times. In this paper, we present an efficient simplification method for 3D point clouds using a weighted graph representation that optimizes the point cloud while maintaining the characteristics of the initial data. The method detects the feature regions that describe the geometry of the surface. These feature regions are detected using the saliency degree of vertices. Then, we define feature points in each feature region and remove redundant vertices. Finally, we show the robustness of our method via different experimental results. Moreover, we study the stability of our method with respect to noise.

