Compression and Recovery of 3D Broad-Leaved Tree Point Clouds Based on Compressed Sensing

Forests ◽  
2020 ◽  
Vol 11 (3) ◽  
pp. 257
Author(s):  
Renjie Xu ◽  
Ting Yun ◽  
Lin Cao ◽  
Yunfei Liu

The terrestrial laser scanner (TLS) has been widely used in forest inventories. However, as TLS precision increases, storing and transmitting tree point clouds becomes more challenging. In this paper, a novel compressed sensing (CS) scheme for broad-leaved tree point clouds is proposed by analyzing and comparing different sparse bases, observation matrices, and reconstruction algorithms. Our scheme starts by eliminating outliers and simplifying the point clouds with statistical filtering and voxel filtering. The scheme then applies a Haar sparse basis to sparsify the coordinate data, exploiting the characteristics of broad-leaved tree point clouds. An observation procedure down-samples the point clouds with a partial Fourier matrix. The regularized orthogonal matching pursuit (ROMP) algorithm finally reconstructs the original point clouds. The experimental results illustrate that the proposed scheme preserves the morphological attributes of broad-leaved trees within a relative error of 0.0010%–3.3937%, and extends robustly to the plot level with a mean square error (MSE) of 0.0063–0.2245.
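The sampling-and-reconstruction pipeline described above can be sketched in Python. This is an illustrative toy: a synthetic sparse vector stands in for one Haar-transformed coordinate axis of the point cloud, and plain orthogonal matching pursuit (OMP) replaces the ROMP variant used in the paper; all sizes and indices are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3            # signal length, measurements, sparsity

# Synthetic k-sparse vector standing in for one coordinate axis after
# the Haar transform (the real point-cloud data are not available here).
x = np.zeros(n)
x[[7, 23, 42]] = [5.0, -4.0, 3.0]

# Partial Fourier observation matrix: m randomly chosen DFT rows.
rows = rng.choice(n, m, replace=False)
F = np.exp(-2j * np.pi * np.outer(rows, np.arange(n)) / n) / np.sqrt(m)

y = F @ x                      # compressed measurements (m < n)

def omp(A, y, k):
    """Plain orthogonal matching pursuit: greedily add the column most
    correlated with the residual, then least-squares over the support."""
    residual, idx, coef = y.copy(), [], np.array([])
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef.real
    return x_hat

x_hat = omp(F, y, k)           # recovers x from m < n measurements
```

ROMP differs from this sketch mainly by selecting a regularized group of correlated atoms per iteration rather than a single one, which gives stronger recovery guarantees for the same greedy cost.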

2014 ◽  
Vol 635-637 ◽  
pp. 971-977 ◽  
Author(s):  
Rui Yuan ◽  
Jun Yue ◽  
Hong Xiu Gao

DOA estimation with a four-element square array is studied based on the theory of compressed sensing. Computer simulations were carried out using the matching pursuit and orthogonal matching pursuit algorithms. The results show that DOA estimation based on compressed sensing theory is simple, practical, and has low computational complexity.
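Grid-based DOA estimation of this kind can be sketched as follows. This is a hypothetical toy, not the paper's setup: a 2x2 square array with half-wavelength spacing, a single noiseless source, and a one-step matching-pursuit estimate (pick the grid angle whose steering vector best correlates with the snapshot); the geometry, grid, and angle are illustrative assumptions.

```python
import numpy as np

wavelength = 1.0
d = wavelength / 2
pos = np.array([[0, 0], [d, 0], [0, d], [d, d]])   # 2x2 square array

def steering(theta_deg):
    """Plane-wave steering vector for azimuth theta (elements in a plane)."""
    u = np.array([np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))])
    return np.exp(-2j * np.pi * (pos @ u) / wavelength)

grid = np.arange(0.0, 180.0, 1.0)                   # 1-degree DOA grid
A = np.stack([steering(t) for t in grid], axis=1)   # 4 x 180 dictionary

true_theta = 63.0
y = steering(true_theta)                            # noiseless array snapshot

# One matching-pursuit step: the dictionary column (grid angle) most
# correlated with the snapshot is the DOA estimate.
est = grid[np.argmax(np.abs(A.conj().T @ y))]
print(est)   # 63.0
```

With multiple sources, OMP would repeat this step on the residual after projecting out the atoms already selected, which is where the orthogonal variant's advantage over plain matching pursuit appears.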


Forests ◽  
2019 ◽  
Vol 10 (7) ◽  
pp. 599 ◽  
Author(s):  
Ravaglia ◽  
Fournier ◽  
Bac ◽  
Véga ◽  
Côté ◽  
...  

Terrestrial laser scanners provide accurate and detailed point clouds of forest plots, which can be used as an alternative to destructive measurements during forest inventories. Various specialized algorithms have been developed to provide automatic and objective estimates of forest attributes from point clouds. The STEP (Snakes for Tuboid Extraction from Point cloud) algorithm was developed to estimate both stem diameter at breast height and stem diameters along the bole length. Here, we evaluate the accuracy of this algorithm and compare its performance with two other state-of-the-art algorithms that were designed for the same purpose (i.e., the CompuTree and SimpleTree algorithms). We tested each algorithm against point clouds that incorporated various degrees of noise and occlusion. We applied these algorithms to three contrasting test sites: (1) simulated scenes of coniferous stands in Newfoundland (Canada), (2) test sites of deciduous stands in Phalsbourg (France), and (3) coniferous plantations in Quebec (Canada). In most cases, the STEP algorithm predicted diameter at breast height with higher R2 and lower RMSE than the other two algorithms. The STEP algorithm also achieved greater accuracy when estimating stem diameter in occluded and noisy point clouds, with mean errors in the range of 1.1 cm to 2.28 cm. The CompuTree and SimpleTree algorithms produced errors in the range of 2.62 cm to 6.1 cm and 1.03 cm to 3.34 cm, respectively. Unlike CompuTree or SimpleTree, the STEP algorithm was also able to estimate trunk diameter in the uppermost portions of the trees. Our results show that the STEP algorithm is better suited to extracting DBH and stem diameter automatically from occluded and noisy point clouds. Our study also highlights that SimpleTree and CompuTree require data filtering and result corrections, whereas neither procedure was needed for the STEP algorithm.


2015 ◽  
Vol 9 (1) ◽  
pp. 74-81
Author(s):  
Wang Feng ◽  
Chen Feng-wei ◽  
Wang Jia

Owing to characteristics such as high resolution, large capacity, and sheer volume, efficiently storing and transmitting satellite images remains an unsolved technical problem. Compressed sensing (CS) theory breaks through the limitations of traditional Nyquist sampling: it relies on signal sparsity, the randomness of the measurement matrix, and nonlinear optimization algorithms to perform compressive sampling and reconstruction of a signal. This article first discusses satellite image compression based on compressed sensing theory. It then optimizes the widely used orthogonal matching pursuit algorithm to make it suitable for satellite image processing. Finally, a simulation experiment on the optimized algorithm shows that the approach provides a high compression ratio and a high signal-to-noise ratio, and merits further study.
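The compressive-sampling side of such an image pipeline can be sketched as below. This is a minimal illustration, not the paper's method: a random stand-in image is sampled block-wise with a Gaussian measurement matrix, and the block size and 50% sampling ratio are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in 64x64 "satellite image" (the real imagery is not available here).
img = rng.integers(0, 256, (64, 64)).astype(float)

B, ratio = 8, 0.5                                # 8x8 blocks, 50% sampling
m = int(ratio * B * B)                           # measurements per block
Phi = rng.normal(0, 1 / np.sqrt(m), (m, B * B))  # Gaussian measurement matrix

# Flatten each block and project it down to m random measurements.
blocks = [img[i:i + B, j:j + B].ravel()
          for i in range(0, 64, B) for j in range(0, 64, B)]
measurements = np.stack([Phi @ b for b in blocks])

print(measurements.shape)             # (64, 32): 64 blocks, 32 values each
print(measurements.size / img.size)   # 0.5 compression ratio
```

Reconstruction would then run a sparse solver such as OMP per block against the product of Phi and a sparsifying transform (e.g., a block DCT), which is the stage the article's optimized OMP targets.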


2013 ◽  
Vol 756-759 ◽  
pp. 3785-3788
Author(s):  
Sai Qi Shang ◽  
Min Gang Wang ◽  
Wei Li ◽  
Yao Yang

The expense and scarcity of N-pixel sensor arrays limit the application of terahertz imaging. Compressed sensing theory, a recent breakthrough in the field of signal coding and decoding, makes it possible to recover the original image from far fewer measured values than the number of pixels in the image. In this paper, by comparing measurement matrices under different reconstruction algorithms (Orthogonal Matching Pursuit, Compressive Sampling Matching Pursuit, and minimum L1-norm algorithms), we propose a terahertz imaging method based on a single detector with randomly moving measurement matrices, design the mobile random templates and an automatic template-changing mechanism, construct a single-detector imaging system, and complete single-detector terahertz imaging experiments.
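The single-detector measurement model and a minimum-L1 recovery can be sketched as follows. This is a hypothetical toy, not the paper's system: each randomly moved template is modeled as a 0/1 mask, the detector records one total-intensity value per template position, and recovery of the nonnegative sparse scene is posed as a linear program; scene contents and sizes are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)

N = 16 * 16                    # scene pixels
M = 96                         # template positions (measurements), M < N
scene = np.zeros(N)
scene[[10, 77, 200]] = [1.0, 0.6, 0.8]   # sparse nonnegative test scene

masks = rng.integers(0, 2, (M, N)).astype(float)   # random 0/1 templates
y = masks @ scene              # one scalar detector reading per template

# Minimum-L1 recovery: for a nonnegative scene this reduces to the
# linear program  min sum(x)  s.t.  masks @ x = y,  x >= 0.
res = linprog(c=np.ones(N), A_eq=masks, b_eq=y, bounds=(0, None))
recovered = res.x              # matches the sparse scene
```

In hardware, the mask values come from the physical templates moved in front of the detector, and the same LP (or a greedy solver such as OMP or CoSaMP) recovers the image from the M scalar readings.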

