Point Cloud Inpainting on Graphs from Non-Local Self-Similarity

Author(s):  
Zeqing Fu ◽  
Wei Hu ◽  
Zongming Guo
Author(s):  
Dingkun Zhu ◽  
Honghua Chen ◽  
Weiming Wang ◽  
Haoran Xie ◽  
Gary Cheng ◽  
...  

2021 ◽  
Vol 19 (2) ◽  
pp. 021102
Author(s):  
Pengwei Wang ◽  
Chenglong Wang ◽  
Cuiping Yu ◽  
Shuai Yue ◽  
Wenlin Gong ◽  
...  

2020 ◽  
Vol 12 (18) ◽  
pp. 2979
Author(s):  
Le Sun ◽  
Chengxun He ◽  
Yuhui Zheng ◽  
Songze Tang

During signal sampling and digital imaging, hyperspectral images (HSI) inevitably suffer from contamination by mixed noises, and this degradation considerably reduces the fidelity and efficiency of subsequent applications. Recently, low-rank regularization, a formidable tool for image processing, has been widely extended to the restoration of HSI. Meanwhile, further exploration of the non-local self-similarity of low-rank images has proven useful in exploiting the spatial redundancy of HSI, and better preservation of spatial-spectral features is achieved under both low-rank and non-local regularizations. However, existing methods generally regularize the original space of HSI; exploration of the intrinsic properties in subspace, which leads to better denoising performance, is relatively rare. To address these challenges, a joint method of subspace low-rank learning and non-local 4-D transform filtering, named SLRL4D, is put forward for HSI restoration. Technically, the original HSI is projected into a low-dimensional subspace. Then, both spectral and spatial correlations are explored simultaneously by imposing low-rank learning and non-local 4-D transform filtering on the subspace. An alternating direction method of multipliers (ADMM)-based algorithm is designed to solve the formulated convex signal-noise isolation problem. Finally, experiments on multiple datasets are conducted to illustrate the accuracy and efficiency of SLRL4D.
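The subspace projection step described above can be sketched with a plain eigendecomposition of the spectral covariance; the function names and the SVD-based basis here are illustrative assumptions, not the authors' exact SLRL4D formulation:

```python
import numpy as np

def subspace_project(hsi, k):
    """Project a noisy HSI cube (H, W, B) onto a k-dimensional spectral subspace.
    Illustrative sketch: the basis is taken from an SVD of the spectral
    covariance, which is one common choice, not necessarily the paper's."""
    h, w, b = hsi.shape
    Y = hsi.reshape(-1, b).T              # B x N matrix of pixel spectra
    U, s, _ = np.linalg.svd(Y @ Y.T / Y.shape[1])
    E = U[:, :k]                          # orthogonal spectral basis (B x k)
    Z = E.T @ Y                           # k x N subspace coefficients
    return E, Z.T.reshape(h, w, k)

def reconstruct(E, Z):
    """Map subspace coefficients back to the full spectral space."""
    h, w, k = Z.shape
    return (E @ Z.reshape(-1, k).T).T.reshape(h, w, E.shape[0])
```

Low-rank learning and non-local 4-D filtering would then operate on the small coefficient cube `Z` rather than the full-band HSI, which is where the efficiency gain comes from.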


2019 ◽  
Vol 367 ◽  
pp. 1-12 ◽  
Author(s):  
Xiao-Tong Li ◽  
Xi-Le Zhao ◽  
Tai-Xiang Jiang ◽  
Yu-Bang Zheng ◽  
Teng-Yu Ji ◽  
...  

2021 ◽  
Vol 40 (5) ◽  
pp. 1-14
Author(s):  
Gal Metzer ◽  
Rana Hanocka ◽  
Raja Giryes ◽  
Daniel Cohen-Or

We introduce a novel technique for neural point cloud consolidation which learns from only the input point cloud. Unlike other point up-sampling methods, which analyze shapes via local patches, this work learns from global subsets. We repeatedly self-sample the input point cloud with global subsets that are used to train a deep neural network. Specifically, we define source and target subsets according to the desired consolidation criteria (e.g., generating sharp points or points in sparse regions). The network learns a mapping from source to target subsets, and implicitly learns to consolidate the point cloud. During inference, the network is fed with random subsets of points from the input, which it displaces to synthesize a consolidated point set. We leverage the inductive bias of neural networks to eliminate noise and outliers, a notoriously difficult problem in point cloud consolidation. The shared weights of the network are optimized over the entire shape, learning non-local statistics and exploiting the recurrence of local-scale geometries. Specifically, the network encodes the distribution of the underlying shape surface within a fixed set of local kernels, yielding the best explanation of that surface. We demonstrate the ability to consolidate point sets from a variety of shapes, while eliminating outliers and noise.
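The global self-sampling of source and target subsets might look like the following sketch; the density-based target criterion (favouring points in sparse regions) and all names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def self_sample_pair(points, subset_size, seed=None):
    """Draw one (source, target) pair of global subsets from a point cloud
    (N, 3). Source points are sampled uniformly; target points are sampled
    with probability proportional to a crude local-sparsity score, so the
    network is pushed to fill sparse regions. Both choices are assumptions."""
    rng = np.random.default_rng(seed)
    n = len(points)
    source = points[rng.choice(n, subset_size, replace=False)]

    # local-sparsity score: mean distance to the k nearest neighbours
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    k = min(8, n - 1)
    sparsity = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)

    probs = sparsity / sparsity.sum()     # favour isolated (sparse) points
    target = points[rng.choice(n, subset_size, replace=False, p=probs)]
    return source, target
```

A training loop would repeatedly draw such pairs and optimize the network to displace each source subset toward the statistics of the target subsets.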


Tecnura ◽  
2020 ◽  
Vol 24 (66) ◽  
pp. 62-75
Author(s):  
Edwin Vargas ◽  
Kevin Arias ◽  
Fernando Rojas ◽  
Henry Arguello

Objective: Hyperspectral (HS) imaging systems are commonly used in a diverse range of applications involving detection and classification tasks. However, the low spatial resolution of hyperspectral images may limit the performance of those tasks. In recent years, fusing the information of an HS image with high-spatial-resolution multispectral (MS) or panchromatic (PAN) images has been widely studied to enhance the spatial resolution. Image fusion has been formulated as an inverse problem whose solution is an HS image that is assumed to be sparse in an analytic or learned dictionary. This work proposes a non-local centralized sparse representation model on a set of learned dictionaries to regularize the conventional fusion problem.

Methodology: The dictionaries are learned from the estimated abundance data, taking advantage of the depth correlation between abundance maps and the non-local self-similarity over the spatial domain. Then, conditioned on these dictionaries, the fusion problem is solved by an alternating iterative numerical algorithm.

Results: Experimental results with real data show that the proposed method outperforms state-of-the-art methods under different quantitative assessments.

Conclusions: In this work, we propose a hyperspectral and multispectral image fusion method based on a non-local centralized sparse representation on abundance maps. This model allows us to include the non-local redundancy of abundance maps in the fusion problem using spectral unmixing and to improve the performance of sparsity-based fusion approaches.
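The sparse-coding machinery such fusion models rely on can be illustrated with a generic ISTA solver over a fixed dictionary. This is a hedged sketch of sparse representation in general, not the authors' centralized non-local model; the dictionary, step size, and threshold are illustrative:

```python
import numpy as np

def ista_sparse_code(D, y, lam=0.1, n_iter=100):
    """Solve min_x 0.5 * ||D x - y||^2 + lam * ||x||_1 by iterative
    soft-thresholding (ISTA). D is a (signal_dim, n_atoms) dictionary."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)           # gradient of the data-fit term
        z = x - grad / L                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

In a fusion setting, such codes would be computed per abundance patch, with the centralized non-local term additionally pulling each code toward an estimate averaged over its similar non-local patches.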

