Edge Extraction by Merging the 3D Point Cloud and 2D Image Data

Author(s):  
Ying Wang ◽  
Daniel Ewert ◽  
Daniel Schilberg ◽  
Sabina Jeschke


2020 ◽
Vol 2020 ◽  
pp. 1-10
Author(s):  
Jianjun Hao ◽  
Luyao Liu ◽  
Wei Chen

Any signal transmitted over an air-to-ground channel is corrupted by fading, noise, and interference. In this paper, a Polar-coded 3D point cloud image transmission system over a fading channel is modeled, and simulations are performed to verify its performance for 3D point cloud image data transmitted over a Rician channel, first with white Gaussian noise alone and then with white Gaussian noise overlaid by periodic pulse jamming. A comparison of the Polar-coded scheme with an RS-coded scheme in the same scenario indicates that, at short block lengths, the Polar-coded system performs far better against AWGN and fading than the RS-coded system does. The RS-coded scheme, however, withstands pulse jamming better than the Polar-coded scheme when no interleaving is applied between codewords.
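To make the channel model concrete, the following is a minimal sketch (not the authors' simulation code) of passing uncoded BPSK symbols through a flat Rician fading channel with AWGN and optional periodic pulse jamming; the K-factor, SNR, jamming parameters, and coherent-detection assumption are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def rician_awgn_ber(n_bits=100_000, k_factor=6.0, snr_db=5.0,
                    jam_period=0, jam_amp=0.0):
    """Uncoded BER of BPSK over a flat Rician channel with AWGN and,
    optionally, periodic pulse jamming (every jam_period-th symbol)."""
    bits = rng.integers(0, 2, n_bits)
    symbols = 1.0 - 2.0 * bits                     # BPSK: 0 -> +1, 1 -> -1
    # Rician gain: deterministic line-of-sight part plus Rayleigh
    # scattering, normalised so that E[|h|^2] = 1.
    los = np.sqrt(k_factor / (k_factor + 1.0))
    sigma = np.sqrt(1.0 / (2.0 * (k_factor + 1.0)))
    h = los + sigma * (rng.standard_normal(n_bits)
                       + 1j * rng.standard_normal(n_bits))
    n0 = 10 ** (-snr_db / 10.0)                    # noise power for Es = 1
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(n_bits)
                               + 1j * rng.standard_normal(n_bits))
    r = h * symbols + noise
    if jam_period > 0:                             # periodic pulse jamming
        r[::jam_period] += jam_amp
    decided = (np.real(r * np.conj(h)) < 0).astype(int)  # coherent detection
    return np.mean(bits != decided)

print("AWGN only:", rician_awgn_ber())
print("with jam :", rician_awgn_ber(jam_period=20, jam_amp=10.0))
```

A coded comparison would wrap the channel call with polar (or RS) encoding and decoding and measure the post-decoding error rate instead.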


2019 ◽  
Vol 1267 ◽  
pp. 012015
Author(s):  
Yongzhi Wang ◽  
Zhijiang Du ◽  
Yongzhuo Gao ◽  
Mingyang Li ◽  
Wei Dong

2021 ◽  
Author(s):  
Vivien Zahs ◽  
Benjamin Herfort ◽  
Julia Kohns ◽  
Tahira Ullah ◽  
Katharina Anders ◽  
...  

Timely and reliable information on earthquake-induced building damage plays a critical role in the effective planning of rescue and remediation actions. Automatic damage assessment based on the analysis of 3D point clouds (e.g., from photogrammetry or LiDAR) or georeferenced image data can provide fast and objective information on the damage situation within a few hours. So far, studies have often been limited to the distinction of only two damage classes (e.g., damaged or not damaged) and to information provided by 2D image data. Beyond-binary assessment of multiple grades of damage is challenging, e.g., due to the variety of damage characteristics and the limited transferability of trained algorithms to unseen data and other geographic regions. Detailed damage assessment based on full 3D information is, however, required to enable efficient use and distribution of resources and to evaluate the structural stability of buildings. Further, the identification of slightly damaged buildings is essential to estimate their vulnerability to severe damage in potential aftershock events.

In our work, we propose an interdisciplinary approach for the timely and reliable assessment of multiple building-specific damage grades (0-5) from post- (and pre-) event UAV point clouds and images with high resolution (centimeter point spacing or pixel size). We combine the expert knowledge of earthquake engineers with fully automatic damage classification and human visual interpretation from web-based crowdsourcing. While automatic approaches enable an objective and fast analysis of large 3D data, the ability of humans to visually interpret details in the data can be used (1) to validate the automatic classification and (2) as an alternative method where the automatic approach shows high levels of uncertainty.

We develop a damage catalogue that categorizes typical geometric and radiometric damage patterns for each damage grade. Therein, we consider the influences of building material and region-specific building design on damage characteristics. Moreover, the damage patterns include observations from previous earthquakes to ensure practical applicability. The catalogue serves, on the one hand, as the decision basis for the automatic classification of building-specific damage using machine learning. On the other hand, it is used to design quick and easy single damage mapping tasks that can be solved by volunteers within seconds (Micro-Mapping; Herfort et al. 2018). A further novelty of our approach is the combination of the strengths of machine learning for point cloud-based damage classification and of visual interpretation by human contributors through Micro-Mapping tasks. The optimal mode of operation and the weighted fusion of both methods depend on event-specific conditions (e.g., data availability and quality, temporal constraints, spatial scale, and extent of damage).

By considering observations from previous earthquakes and the influences of building design and structure on potential damage characteristics, our approach is intended to be applicable to events in different geographic regions. By combining automated and crowdsourcing methods, reliable and detailed damage information at the scale of large cities can be provided within a few days.

References

Herfort, B., Höfle, B., & Klonner, C. (2018): 3D micro-mapping: Towards assessing the quality of crowdsourcing to support 3D point cloud analysis. ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 137, pp. 73-83.
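The abstract above does not specify the fusion rule, but a weighted combination of the two information sources could look like the following sketch, where per-building damage-grade probabilities from the automatic classifier are blended with Micro-Mapping votes; the weight, the vote-histogram treatment, and all names are assumptions for illustration.

```python
import numpy as np

DAMAGE_GRADES = np.arange(6)  # grades 0..5 as in the abstract

def fuse_damage_probs(p_auto, crowd_votes, w_auto=0.6):
    """Weighted fusion of classifier probabilities and crowd votes.

    p_auto      : (6,) probability vector from the automatic classifier.
    crowd_votes : list of integer grades assigned by volunteers.
    w_auto      : trust placed in the automatic result (an assumption;
                  the abstract makes this weighting event-dependent).
    """
    p_crowd = np.bincount(crowd_votes, minlength=6).astype(float)
    p_crowd /= p_crowd.sum()                      # vote histogram -> probs
    fused = w_auto * np.asarray(p_auto) + (1.0 - w_auto) * p_crowd
    return DAMAGE_GRADES[np.argmax(fused)], fused

grade, fused = fuse_damage_probs(
    p_auto=[0.05, 0.10, 0.45, 0.30, 0.07, 0.03],  # uncertain between 2 and 3
    crowd_votes=[3, 3, 2, 3, 3],                  # five Micro-Mapping answers
)
print("fused damage grade:", grade)
```

Lowering `w_auto` where the classifier reports high uncertainty reflects the abstract's idea of using crowdsourcing as the fallback method.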


Author(s):  
D. Mader ◽  
R. Blaskow ◽  
P. Westfeld ◽  
H.-G. Maas

The project ADFEX (Adaptive Federative 3D Exploration of Multi Robot System) pursues the goal of developing a time- and cost-efficient system for exploration and monitoring tasks in unknown areas or buildings. A fleet of unmanned aerial vehicles equipped with appropriate sensors (laser scanner, RGB camera, near-infrared camera, thermal camera) was designed and built. A typical operational scenario may include exploration of the object or area of investigation by a UAV equipped with a laser scanning range finder, which generates a rough point cloud in real time to provide both an overview of the object at a ground station and an obstacle map. This data enables path planning for the robot fleet. Subsequently, the object is captured by an RGB camera mounted on a second flying robot to generate a dense and accurate 3D point cloud using structure-from-motion techniques. In addition, the detailed image data serves as the basis for visual damage detection on the investigated building.

This paper focuses on our experience with a low-cost, lightweight Hokuyo laser scanner onboard a UAV. The hardware components for laser-scanner-based 3D point cloud acquisition are discussed, problems are demonstrated and analyzed, and a quantitative analysis of the accuracy potential is presented, together with a comparison against structure-from-motion tools.
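As an illustration of the real-time rough point cloud generation mentioned above, the following minimal sketch georeferences a single 2D scan line from a profiling laser scanner (such as the Hokuyo) using the UAV pose; the frame conventions and the omission of lever-arm and boresight calibration are simplifying assumptions.

```python
import numpy as np

def scan_to_world(ranges, angles, R_world_body, t_world):
    """Convert one 2D laser scan into world-frame 3D points.

    ranges       : (n,) measured distances in metres.
    angles       : (n,) beam angles in the scanner plane (radians).
    R_world_body : (3, 3) rotation of the UAV body in the world frame
                   (from the flight controller's attitude estimate).
    t_world      : (3,) UAV position in the world frame (e.g. from GNSS).
    Assumes the scanner plane coincides with the body x-y plane; a real
    system would add the scanner-to-body lever arm and boresight angles.
    """
    pts_body = np.stack([ranges * np.cos(angles),
                         ranges * np.sin(angles),
                         np.zeros_like(ranges)], axis=1)
    return pts_body @ R_world_body.T + t_world
```

Accumulating the returned points over successive scans while the UAV moves yields the rough cloud used for the overview display and the obstacle map.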


Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, the sensors integrated in a ULS must be small and lightweight, which decreases the density of the collected scanning points and in turn affects the registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the registration of point cloud data with image data into the matching of feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical images, to generate a true-color point cloud. The experimental results show that the proposed method achieves high registration accuracy and fast fusion speed, demonstrating its effectiveness.
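The following sketch outlines the described pipeline with OpenCV, using SIFT matching and a RANSAC PnP solve in place of the authors' collinearity-equation adjustment; the rasterization grid size, matching thresholds, and helper names are illustrative assumptions.

```python
import cv2
import numpy as np

def make_intensity_image(points, intensity, gsd=0.05):
    """Nadir orthographic rasterisation of the cloud (one plausible choice)."""
    xy = points[:, :2]
    cols_rows = np.floor((xy - xy.min(axis=0)) / gsd).astype(int)
    w, h = cols_rows.max(axis=0) + 1
    img = np.zeros((h, w), np.uint8)
    idx3d = np.full((h, w), -1, np.int64)          # pixel -> source point index
    scale = 255.0 / max(float(intensity.max()), 1e-9)
    img[cols_rows[:, 1], cols_rows[:, 0]] = (intensity * scale).astype(np.uint8)
    idx3d[cols_rows[:, 1], cols_rows[:, 0]] = np.arange(len(points))
    return img, idx3d

def register(points, intensity, optical_gray, K):
    """Match intensity-image and optical-image features, then recover the
    exterior orientation with RANSAC PnP (a stand-in for the paper's
    collinearity-equation solution)."""
    img, idx3d = make_intensity_image(points, intensity)
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img, None)
    k2, d2 = sift.detectAndCompute(optical_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # Lowe ratio
    obj, pix = [], []
    for m in good:
        c, r = np.round(k1[m.queryIdx].pt).astype(int)
        if 0 <= r < idx3d.shape[0] and 0 <= c < idx3d.shape[1] and idx3d[r, c] >= 0:
            obj.append(points[idx3d[r, c]])        # 3D point behind the pixel
            pix.append(k2[m.trainIdx].pt)          # matched optical pixel
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        np.float32(obj), np.float32(pix), K, None)
    return rvec, tvec                              # exterior orientation
```

With the exterior orientation known, each laser point can be projected into the GNSS-time-matched frame and assigned that pixel's RGB value, giving the true-color cloud.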


GigaScience ◽  
2021 ◽  
Vol 10 (5) ◽  
Author(s):  
Teng Miao ◽  
Weiliang Wen ◽  
Yinglun Li ◽  
Sheng Wu ◽  
Chao Zhu ◽  
...  

Abstract

Background: The 3D point cloud is the most direct and effective data form for studying plant structure and morphology. In point cloud studies, the segmentation of individual plants into organs directly determines the accuracy of organ-level phenotype estimation and the reliability of 3D plant reconstruction. However, highly accurate, automatic, and robust point cloud segmentation approaches for plants are unavailable, so the high-throughput segmentation of many shoots is challenging. Although deep learning can feasibly solve this issue, software tools for 3D point cloud annotation to construct the training dataset are lacking.

Results: We propose a top-down point cloud segmentation algorithm for maize shoots based on optimal transportation distance. We apply our point cloud annotation toolkit for maize shoots, Label3DMaize, to achieve semi-automatic point cloud segmentation and annotation of maize shoots at different growth stages through a series of operations, including stem segmentation, coarse segmentation, fine segmentation, and sample-based segmentation. The toolkit takes ∼4–10 minutes to segment a maize shoot and consumes only 10–20% of that time if coarse segmentation alone is required. Fine segmentation is more detailed than coarse segmentation, especially at the organ connection regions, and the accuracy of coarse segmentation can reach 97.2% of that of fine segmentation.

Conclusion: Label3DMaize integrates point cloud segmentation algorithms with manual interactive operations, realizing semi-automatic point cloud segmentation of maize shoots at different growth stages. The toolkit provides a practical data annotation tool for further online segmentation research based on deep learning and is expected to promote automatic point cloud processing of various plants.
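The core quantity behind the top-down segmentation is an optimal transportation (earth mover's) distance between point sets; a minimal sketch of computing it with the POT library follows. Using it to compare two point clusters is an illustrative application, not the toolkit's exact procedure.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def ot_distance(pts_a, pts_b):
    """Earth mover's distance between two 3D point sets, uniform weights."""
    a = np.full(len(pts_a), 1.0 / len(pts_a))      # source point masses
    b = np.full(len(pts_b), 1.0 / len(pts_b))      # target point masses
    M = ot.dist(pts_a, pts_b, metric="euclidean")  # pairwise cost matrix
    return ot.emd2(a, b, M)                        # exact OT cost

rng = np.random.default_rng(1)
cluster = rng.normal(size=(200, 3))                # e.g. a candidate organ
template = rng.normal(size=(150, 3)) + [0.0, 0.0, 3.0]
print("OT distance cluster vs. template:", ot_distance(cluster, template))
```

A small OT distance indicates geometrically similar point distributions, which is the kind of criterion a top-down splitter can use to decide whether a candidate segment matches an organ hypothesis.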

