Distance–Intensity Image Strategy for Pulsed LiDAR Based on the Double-Scale Intensity-Weighted Centroid Algorithm

2021 · Vol 13 (3) · pp. 432
Author(s): Shiyu Yan, Guohui Yang, Qingyan Li, Bin Zhang, Yu Wang, ...

We report on a self-adaptive waveform centroid algorithm that combines double-scale data selection with intensity weighting (DSIW) for accurate LiDAR distance–intensity imaging. A time window is set to adaptively select the effective data, while the intensity-weighted method reduces the influence of sharp noise on the calculation. The horizontal and vertical coordinates of the centroid point obtained by the proposed algorithm record the distance and echo intensity information, respectively. The proposed algorithm was experimentally tested, achieving an average ranging error of less than 0.3 ns under the various noise conditions in the listed tests, thus achieving better precision than the digital constant fraction discriminator (DCFD), peak (PK), Gauss fitting (GF), and traditional waveform centroid (TC) algorithms. The proposed algorithm is also fairly robust, with successful ranging rates above 97% in all tests in this paper. In addition, the laser echo intensity measured by the proposed algorithm proved robust to noise and consistent with the transmission characteristics of LiDAR. Finally, we provide a distance–intensity point cloud image calibrated by our algorithm. The empirical findings in this study provide a new understanding of how LiDAR can be used to draw multi-dimensional point cloud images.
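As a rough illustration of how such a centroid can be computed, the sketch below gives one plausible reading of a double-scale, intensity-weighted centroid; the window fractions and the choice of raw intensity as the weight are assumptions, not the authors' exact formulation.

```python
import numpy as np

def dsiw_centroid(t, intensity, coarse_frac=0.1, fine_frac=0.5):
    """Double-scale intensity-weighted centroid of a pulse waveform.

    Illustrative only: the coarse/fine window fractions and the use of
    the raw intensity as the weight are assumptions, not the paper's
    exact parameters. t, intensity: 1-D arrays of sample times and
    amplitudes.
    """
    peak = intensity.max()
    coarse = intensity >= coarse_frac * peak         # adaptive time window
    fine = coarse & (intensity >= fine_frac * peak)  # second, tighter scale
    sel = fine if fine.any() else coarse
    w = intensity[sel]                               # weighting de-emphasizes low-level noise samples
    t_c = np.sum(t[sel] * w) / np.sum(w)             # horizontal coordinate -> time of flight / distance
    i_c = np.sum(intensity[sel] * w) / np.sum(w)     # vertical coordinate -> echo intensity
    return t_c, i_c
```

Relative to an unweighted centroid over the full record, restricting to the window and weighting by intensity keeps isolated noise spikes from dragging the centroid away from the true pulse position.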

2013 · Vol 10 (6) · pp. 723-738
Author(s): Eunho Shin, Hyoungshick Kim, Ji Won Yoon

2011 · Vol 30 (6) · pp. E5
Author(s): E. Jesus Duffis, Zaid Al-Qudah, Charles J. Prestigiacomo, Chirag Gandhi

Early treatment of ischemic stroke with thrombolytics is associated with improved outcomes, but few stroke patients receive thrombolytic treatment, in part because of the 3-hour time window. Advances in neuroimaging may aid in the selection of patients who could still benefit from thrombolytic treatment beyond conventional time-based guidelines. In this article the authors review the available literature supporting the use of advanced neuroimaging to select patients for treatment beyond the 3-hour cutoff, and explore potential applications and limitations of perfusion imaging in the treatment of acute ischemic stroke.


Author(s): G. Sithole, L. Majola

The notion of a 'best' segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of the algorithm. The segmentation is then assessed on a variety of metrics such as homogeneity, heterogeneity, and fragmentation. Even after an algorithm is chosen, its performance remains uncertain, because the landscapes/scenarios represented in a point cloud strongly influence the eventual segmentation. Selecting an appropriate segmentation algorithm is therefore a process of trial and error.

Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are 'goodness methods', 'discrepancy methods' and 'benchmarks'. Benchmarks are considered the most comprehensive method of evaluation. In this paper, shortcomings in current benchmark methods are identified, and a framework is proposed that permits both a visual and a numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that the framework can be used to predict the performance of segmentation algorithms.
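To make the evaluation side concrete, the sketch below computes two toy discrepancy-style scores, fragmentation and homogeneity, for a point cloud with per-point segment labels; these are illustrative stand-ins, not the framework's actual metrics.

```python
import numpy as np

def segmentation_discrepancy(ref_labels, seg_labels):
    """Toy discrepancy metrics for a point-cloud segmentation.

    ref_labels, seg_labels: non-negative integer label per point
    (reference segmentation vs. algorithm output, same length).
    Returns (fragmentation, homogeneity); both are illustrative.
    """
    ref_labels = np.asarray(ref_labels)
    seg_labels = np.asarray(seg_labels)
    # fragmentation: average number of segments each reference region is split into
    frag = np.mean([len(np.unique(seg_labels[ref_labels == r]))
                    for r in np.unique(ref_labels)])
    # homogeneity: average fraction of each segment that belongs to its
    # dominant reference region (1.0 = perfectly homogeneous segments)
    hom = np.mean([np.bincount(ref_labels[seg_labels == s]).max()
                   / np.sum(seg_labels == s)
                   for s in np.unique(seg_labels)])
    return frag, hom
```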


2021
Author(s): Kamran Shahid

Future autonomous satellite repair missions would benefit from higher-accuracy pose estimates of target satellites. Constraint analysis provides a sensitivity index that can be used as a predictor of registration accuracy. Point cloud configurations with higher values of this index were shown to return more accurate pose estimates than unstable configurations with lower index values. Registration tests were conducted on four satellite geometries using synthetic range data. These results suggest a means of determining the optimal scanning area of a given satellite so that registration with the Iterative Closest Point (ICP) algorithm returns a highly accurate pose estimate.
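One common formulation of such a sensitivity index comes from point-to-plane ICP stability analysis in the style of Gelfand et al. (2003): build the 6x6 constraint matrix of the sampled surface and look at its eigenvalue spread. The sketch below is an illustrative stand-in, not necessarily the author's exact index.

```python
import numpy as np

def icp_stability_index(points, normals):
    """Stability index of a point configuration for point-to-plane ICP.

    points, normals: (N, 3) arrays of surface samples and unit normals.
    Each sample constrains the 6-DOF pose through the row [p x n, n];
    a near-zero smallest eigenvalue of the constraint matrix means the
    configuration can slide or rotate without changing the residual.
    """
    rows = np.hstack([np.cross(points, normals), normals])  # (N, 6) constraint rows
    C = rows.T @ rows                                       # 6x6 constraint matrix
    eig = np.linalg.eigvalsh(C)                             # ascending eigenvalues
    return eig[0] / eig[-1]   # inverse condition number; ~0 = unstable configuration
```

Scanning areas that maximize this index constrain all six pose degrees of freedom, which is consistent with the abstract's finding that high-index configurations register more accurately.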


2013 · Vol 397-400 · pp. 1083-1087
Author(s): Guang Shuai Liu, Bai Lin Li

Obtaining effective value points is one of the key issues in cubic B-spline curve reconstruction. Curvature-based methods are poorly suited to selecting value points, and the point cloud data acquired from ICT slice images are characterized by large data volume, high noise and high density. A baseline-adaptive method is therefore presented to obtain value points for curve reconstruction, with the baseline and scale threshold determined by wavelet multi-scale analysis, so that the value points are obtained and the curve is reconstructed automatically. The Hausdorff distance is adopted to calculate the error of the cubic B-spline curve reconstruction. Comparative analysis with existing methods shows that our method effectively restrains noise and quickly reconstructs contour curves.
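For concreteness, here is a minimal sketch of the error-measurement step: fit an interpolating cubic B-spline through the value points with SciPy and score it with the symmetric Hausdorff distance. The sampling density is an arbitrary choice, and this is a generic implementation rather than the paper's pipeline.

```python
import numpy as np
from scipy.interpolate import splprep, splev
from scipy.spatial.distance import directed_hausdorff

def reconstruction_error(value_points, n_samples=500):
    """Symmetric Hausdorff distance between a cubic B-spline fitted
    through the value points and the value points themselves.

    value_points: (N, 2) array of selected contour points (N >= 4,
    no duplicate consecutive points).
    """
    tck, _ = splprep(value_points.T, k=3, s=0.0)  # interpolating cubic B-spline
    curve = np.column_stack(splev(np.linspace(0, 1, n_samples), tck))
    return max(directed_hausdorff(curve, value_points)[0],
               directed_hausdorff(value_points, curve)[0])
```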


2020 · Vol 9 (1) · pp. 5
Author(s): Miguel Martin-Abadal, Manuel Piñar-Molina, Antoni Martorell-Torres, Gabriel Oliver-Codina, Yolanda Gonzalez-Cid

During the past few decades, the need to intervene in underwater scenarios has grown, driven by tasks such as underwater infrastructure inspection and maintenance or archaeology and geology exploration. In the last few years, the use of Autonomous Underwater Vehicles (AUVs) has eased the workload and risks of such interventions. To automate these tasks, the AUVs have to gather information about their surroundings, interpret it and make decisions based on it. The two main perception modalities used at close range are laser and video. In this paper, we propose the use of a deep neural network to recognise pipes and valves in multiple underwater scenarios, using 3D RGB point cloud information provided by a stereo camera. We generate a diverse and rich dataset for network training and testing, assessing the effect of a broad selection of hyperparameters and values. Results show F1-scores of up to 97.2% for a test set containing images with similar characteristics to the training set and up to 89.3% for a secondary test set containing images taken in different environments and with distinct characteristics from the training set. This work demonstrates the validity and robust training of the PointNet neural network in underwater scenarios and its applicability to AUV intervention tasks.
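A minimal PointNet-style per-point classifier in PyTorch is sketched below; the layer sizes and the three-class layout (e.g. pipe / valve / background) are assumptions, since the paper's exact architecture and hyperparameters are not reproduced here.

```python
import torch
import torch.nn as nn

class TinyPointNetSeg(nn.Module):
    """Minimal PointNet-style per-point classifier (illustrative).

    Input: (B, 6, N) tensor of N points with XYZ + RGB channels.
    Output: (B, num_classes, N) per-point class logits.
    """
    def __init__(self, num_classes=3):  # pipe / valve / background (assumed)
        super().__init__()
        self.local = nn.Sequential(          # shared per-point MLP
            nn.Conv1d(6, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU())
        self.head = nn.Sequential(           # classify local + global feature
            nn.Conv1d(128 + 128, 128, 1), nn.ReLU(),
            nn.Conv1d(128, num_classes, 1))

    def forward(self, x):
        f = self.local(x)                      # (B, 128, N) per-point features
        g = f.max(dim=2, keepdim=True).values  # (B, 128, 1) global max-pooled feature
        g = g.expand(-1, -1, f.shape[2])       # broadcast global feature to every point
        return self.head(torch.cat([f, g], dim=1))
```

The key PointNet idea is the order-invariant max-pooling over points, which lets the network label each point using both its local feature and a scene-level context vector.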


2016 · Vol 58 (6)
Author(s): Dimitris N. Fotiadis, Sotiria D. Matta, Stamatis S. Kouris

After a historical introduction covering the first well-documented observations of ionospheric phenomena and a review of current, state-of-the-art polar ionospheric studies, a climatological morphology of the irregular F-region plasma structures at high and polar latitudes is presented, following a feature-aided pattern recognition method. Using hourly foF2 data from 18 ionosonde stations, available over three solar cycles, an ionospheric definition of disturbed conditions, independent of any causative mechanism, is applied, and positive/negative disturbances lasting less than 24 hours are sorted out. No latitudinal/longitudinal bins or seasons are defined; disturbances in each month and at each station are handled separately, while four local-time intervals of storm commencement are considered according to solar zenith angle. A median profile per disturbance is produced only when a minimum occurrence probability is satisfied. Non-systematic features are excluded from the analysis by careful selection of the time window under morphological investigation. First, the median profiles of the disturbance patterns are fitted to standard distributions; if they fail, they are grouped according to their major characteristic features and described by a dynamic variation envelope along with their distribution in space and time. The present model, while being a non-conditional stand-alone model of ionospheric storms at high and polar latitudes offered to radio users, may complement existing empirical models. Finally, the present model may ultimately reveal cause-effect relationships with geomagnetic field or interplanetary parameters after further correlation studies are undertaken in the future.
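A minimal sketch of such a mechanism-independent disturbance definition on hourly foF2 data is given below; the 25% relative-deviation threshold and the run-length filter are assumptions, not values from the paper.

```python
import numpy as np

def flag_disturbances(fof2, monthly_median, threshold=0.25, max_hours=24):
    """Flag positive/negative foF2 disturbances shorter than 24 hours.

    Illustrative: the 25% relative-deviation threshold is an assumption.
    fof2 and monthly_median are hourly arrays of equal length.
    Returns boolean masks (positive, negative) over the hourly samples.
    """
    dev = (fof2 - monthly_median) / monthly_median      # relative deviation from quiet-time level
    return (_short_runs(dev > threshold, max_hours),    # positive disturbances
            _short_runs(dev < -threshold, max_hours))   # negative disturbances

def _short_runs(mask, max_len):
    """Keep only runs of True shorter than max_len samples."""
    out = np.zeros_like(mask)
    i = 0
    while i < len(mask):
        if mask[i]:
            j = i
            while j < len(mask) and mask[j]:
                j += 1
            if j - i < max_len:                         # duration smaller than 24 h
                out[i:j] = True
            i = j
        else:
            i += 1
    return out
```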


Author(s): M. Vlachos, L. Berger, R. Mathelier, P. Agrafiotis, D. Skarlatos

Abstract. This paper investigates whether and how the selection of SfM-MVS software affects the 3D reconstruction of submerged archaeological sites. Specifically, Agisoft Photoscan, VisualSFM, SURE, 3D Zephyr and Reality Capture were used and evaluated according to their performance in 3D reconstruction, using specific metrics over the reconstructed underwater scenes. It must be clarified that the scope of this study is not to evaluate specific algorithms or steps used by the various software packages, but to evaluate the final results, specifically the generated 3D point clouds. To address these research questions, a dataset from an ancient shipwreck lying 45 meters below sea level is used. The dataset is composed of 19 images with a very small camera-to-object distance (1 meter) and 42 images with a larger camera-to-object distance (3 meters). Using a common bundle adjustment for all 61 images, a reference point cloud produced from the close-range dataset is compared with the point clouds of the far-range dataset generated by the different photogrammetric packages. A comparison of the total number of points, cloud-to-cloud distances, surface roughness, surface density and a combined 3D metric was then carried out to evaluate which package performed best.
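A minimal sketch of the cloud-to-cloud comparison step using Open3D is shown below; the file names are hypothetical placeholders for the reference cloud and one package's output, and the reported statistics are just one reasonable summary.

```python
import numpy as np
import open3d as o3d

# Placeholder file names: the close-range reference cloud and the cloud
# generated from the far-range images by one of the evaluated packages.
ref = o3d.io.read_point_cloud("reference_1m.ply")
test = o3d.io.read_point_cloud("package_output_3m.ply")

# Distance from each test point to its nearest reference point (C2C).
d = np.asarray(test.compute_point_cloud_distance(ref))

print(f"points: {len(test.points)}")
print(f"C2C mean: {d.mean():.4f} m, std: {d.std():.4f} m")
```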


Author(s): F. Condorelli, R. Higuchi, S. Nasu, F. Rinaudo, H. Sugawara

Abstract. The use of Structure-from-Motion algorithms is common practice for rapid photogrammetric reconstruction. However, the performance of these algorithms is limited by the fact that, under some conditions, the resulting point clouds have low density. This is the case when processing material from historical archives, such as photographs and videos, which yields only sparse point clouds because the photogrammetric reconstruction lacks the necessary information. This paper explores ways to improve the performance of open-source SfM algorithms so as to guarantee the presence of strategic feature points in the resulting point cloud, even if it is sparse. To reach this objective, a photogrammetric workflow for processing historical images is proposed. The first part of the workflow is a method that allows the manual selection of feature points during the photogrammetric process. The second part evaluates the metric quality of the reconstruction by comparison with a point cloud of a different density from the sparse point cloud. The workflow was applied to two case studies. Transformations of the wall paintings of the Karanlık church in Cappadocia were analysed by comparing a 3D model derived from archive photographs with a recent survey. Then the state of the Komise building in Japan was compared before and after restoration. The findings show that the method enables metric scaling and evaluation of the model even when the imagery is in poor condition and only low-density point clouds are available. Moreover, this tool should be of great use to both art and architecture historians and geomatics experts studying the evolution of Cultural Heritage.
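As a sketch of the metric-scaling step, a reconstruction can be brought to real-world scale from a single known distance between two manually identified points; this is a generic approach under assumed inputs, not necessarily the authors' exact procedure.

```python
import numpy as np

def scale_model(points, p_a, p_b, real_distance):
    """Scale a photogrammetric point cloud using one known distance.

    Illustrative: p_a and p_b are the model-space coordinates of two
    manually identified feature points, and real_distance is their
    measured real-world separation (all hypothetical inputs).
    points: (N, 3) array in arbitrary model units.
    """
    s = real_distance / np.linalg.norm(np.asarray(p_b) - np.asarray(p_a))
    return np.asarray(points) * s   # cloud now in the units of real_distance
```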

