TNNG: Total Nuclear Norms of Gradients for Hyperspectral Image Prior

2021 ◽  
Vol 13 (4) ◽  
pp. 819
Author(s):  
Ryota Yuzuriha ◽  
Ryuji Kurihara ◽  
Ryo Matsuoka ◽  
Masahiro Okuda

We introduce a novel regularization function for hyperspectral images (HSIs) based on the nuclear norms of gradient images. Unlike conventional low-rank priors, we achieve a gradient-based low-rank approximation by minimizing the sum of nuclear norms associated with rotated planes in the gradient of an HSI. Our method explicitly and simultaneously exploits correlation in the spectral domain as well as the spatial domain, and it exploits the low-rankness of a global region to enhance the dimensionality reduction achieved by the prior. Since the method considers low-rankness in the gradient domain, it detects anomalous variations more sensitively. It achieves high-fidelity image recovery using a single regularization function, without the explicit use of sparsity-inducing priors such as the ℓ0, ℓ1, and total variation (TV) norms. We also apply this regularization to a gradient-based robust principal component analysis and show its superiority in HSI decomposition. To demonstrate its superior performance, the proposed regularization is validated on a variety of HSI reconstruction/decomposition problems, with performance comparisons to state-of-the-art methods.
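To make the idea of a gradient-domain nuclear-norm prior concrete, here is a minimal NumPy sketch. The use of first-order differences along the three axes and the pixels-by-bands matricization are illustrative assumptions, and the names `nuclear_norm` and `tnng_prior` are hypothetical; the exact set of rotated planes used in the paper may differ.

```python
import numpy as np

def nuclear_norm(mat):
    """Sum of singular values of a 2-D array."""
    return np.linalg.svd(mat, compute_uv=False).sum()

def tnng_prior(hsi):
    """Sum of nuclear norms of gradient images of a hyperspectral cube.

    hsi: ndarray of shape (height, width, bands).
    Each first-order difference (vertical, horizontal, spectral) is
    matricized and its nuclear norm is accumulated.
    """
    grads = [
        np.diff(hsi, axis=0),  # vertical spatial gradient
        np.diff(hsi, axis=1),  # horizontal spatial gradient
        np.diff(hsi, axis=2),  # spectral gradient
    ]
    value = 0.0
    for g in grads:
        # matricize: spatial positions as rows, bands as columns
        value += nuclear_norm(g.reshape(-1, g.shape[-1]))
    return value
```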

2018 ◽  
Vol 10 (12) ◽  
pp. 1956 ◽  
Author(s):  
Le Sun ◽  
Tianming Zhan ◽  
Zebin Wu ◽  
Liang Xiao ◽  
Byeungwoo Jeon

Exploration of multiple priors on observed signals has been demonstrated to be one of the effective ways of recovering underlying signals. In this paper, a new spectral difference-induced total variation and low-rank approximation (termed SDTVLA) method is proposed for hyperspectral mixed denoising. The spectral difference transform, which projects data into spectral difference space (SDS), has been proven to be powerful at changing the structure of noise (especially sparse noise with a specific pattern, e.g., stripes or dead lines present at the same position in a series of bands) in the original hyperspectral image (HSI), thus allowing low-rank techniques to remove mixed noise more efficiently without treating it as a low-rank feature. In addition, because neighboring pixels are highly correlated and the spectra of homogeneous objects in a hyperspectral scene lie in the same low-dimensional manifold, we combine total variation and the nuclear norm to simultaneously exploit the local piecewise smoothness and global low-rankness in SDS for mixed-noise reduction of HSI. Finally, the alternating direction method of multipliers (ADMM) is employed to solve the SDTVLA model effectively. Extensive experiments on three simulated and two real HSI datasets demonstrate that, in terms of quantitative metrics (i.e., the mean peak signal-to-noise ratio (MPSNR), the mean structural similarity index (MSSIM), and the mean spectral angle (MSA)), the proposed SDTVLA method achieves MPSNR values that are, on average, 1.5 dB higher than those of the competing methods, as well as better visual quality.
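As a rough illustration of the spectral difference transform and of a TV plus nuclear-norm regularizer evaluated in SDS, the following NumPy sketch may help. The anisotropic TV and the pixels-by-differences matricization are simplifying assumptions, and `sdtvla_objective` only evaluates the regularizer; it is not the ADMM solver described in the paper.

```python
import numpy as np

def spectral_difference(hsi):
    """Project an HSI cube (H, W, B) into spectral difference space (SDS):
    band-to-band first-order differences along the spectral axis."""
    return np.diff(hsi, axis=2)

def sdtvla_objective(hsi, lam_tv=1.0, lam_nn=1.0):
    """Illustrative value of a TV + nuclear-norm regularizer evaluated in SDS."""
    d = spectral_difference(hsi)
    # anisotropic spatial total variation of the SDS cube
    tv = np.abs(np.diff(d, axis=0)).sum() + np.abs(np.diff(d, axis=1)).sum()
    # nuclear norm of the matricized SDS cube (pixels x difference bands)
    nn = np.linalg.svd(d.reshape(-1, d.shape[-1]), compute_uv=False).sum()
    return lam_tv * tv + lam_nn * nn
```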


2020 ◽  
Vol 12 (14) ◽  
pp. 2264
Author(s):  
Hongyi Liu ◽  
Hanyang Li ◽  
Zebin Wu ◽  
Zhihui Wei

Low-rank tensor models have received increasing attention in hyperspectral image (HSI) recovery. Minimizing the tensor nuclear norm, as a low-rank approximation method, often leads to modeling bias. To achieve an unbiased approximation and improve robustness, this paper develops a non-convex relaxation approach for low-rank tensor approximation. Firstly, a non-convex approximation of the tensor nuclear norm (NCTNN) is introduced for low-rank tensor completion. Secondly, a non-convex tensor robust principal component analysis (NCTRPCA) method is proposed, which aims at exactly recovering a low-rank tensor corrupted by mixed noise. The two proposed models are solved efficiently by the alternating direction method of multipliers (ADMM). Three HSI datasets are employed to exhibit the superiority of the proposed models over low-rank penalization methods in terms of accuracy and robustness.
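One way to picture a non-convex surrogate for the tensor nuclear norm is to replace the sum of singular values with a log-sum penalty over the frontal slices in the Fourier domain (the t-SVD view). The sketch below assumes that particular surrogate for illustration; the paper's NCTNN relaxation may take a different form.

```python
import numpy as np

def nonconvex_tensor_norm(tensor, eps=1e-3):
    """Log-sum surrogate of the tensor nuclear norm under the t-SVD view:
    FFT along the third mode, then sum log(sigma + eps) over the singular
    values of every frontal slice in the Fourier domain."""
    t_hat = np.fft.fft(tensor, axis=2)
    total = 0.0
    for k in range(t_hat.shape[2]):
        sigma = np.linalg.svd(t_hat[:, :, k], compute_uv=False)
        total += np.log(sigma + eps).sum()
    return total / t_hat.shape[2]
```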


2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
E. Zhu ◽  
M. Xu ◽  
D. Pi

In low-rank matrix recovery, the noise component may itself be of low rank or lack sparsity, and the nuclear norm is not an accurate rank approximation of a low-rank matrix. In the present study, to address this problem, a novel nonconvex rank approximation function was proposed. Based on this function, a novel robust principal component analysis model was formulated. The model was solved with the alternating direction method, and its convergence was verified theoretically. Background-separation experiments were then performed on the Wallflower and SBMnet datasets, and the effectiveness of the novel model was further verified by numerical experiments.
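The following sketch shows the generic alternating scheme for robust PCA, with a weighted (non-convex) singular-value shrinkage standing in for the paper's specific rank approximation function; the shrinkage rule, function names, and parameter defaults are illustrative assumptions.

```python
import numpy as np

def weighted_svt(mat, tau, eps=1e-6):
    """Non-convex singular-value shrinkage: small singular values are
    penalized more heavily than by the plain nuclear norm (illustrative)."""
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    s_shrunk = np.maximum(s - tau / (s + eps), 0.0)
    return u @ np.diag(s_shrunk) @ vt

def soft_threshold(mat, tau):
    """Element-wise soft thresholding for the sparse component."""
    return np.sign(mat) * np.maximum(np.abs(mat) - tau, 0.0)

def rpca_nonconvex(m, lam=None, mu=1.0, n_iter=100):
    """Decompose m into low-rank L and sparse S via an alternating scheme."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(m.shape))
    l = np.zeros_like(m)
    s = np.zeros_like(m)
    y = np.zeros_like(m)          # scaled dual variable
    for _ in range(n_iter):
        l = weighted_svt(m - s + y / mu, 1.0 / mu)
        s = soft_threshold(m - l + y / mu, lam / mu)
        y = y + mu * (m - l - s)  # dual ascent on the equality constraint
    return l, s
```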


2019 ◽  
Vol 12 (S10) ◽  
Author(s):  
Junning Gao ◽  
Lizhi Liu ◽  
Shuwei Yao ◽  
Xiaodi Huang ◽  
Hiroshi Mamitsuka ◽  
...  

Abstract
Background: As a standardized vocabulary of phenotypic abnormalities associated with human diseases, the Human Phenotype Ontology (HPO) has been widely used by researchers to annotate the phenotypes of genes/proteins. To save the cost and time spent on experiments, many computational approaches have been proposed. They alleviate the problem to some extent, but their performance is still far from satisfactory.
Method: For inferring large-scale protein-phenotype associations, we propose HPOAnnotator, which incorporates multiple protein-protein interaction (PPI) networks and the hierarchical structure of the HPO. Specifically, we use a dual graph to regularize non-negative matrix factorization (NMF) in a way that information from different sources can be seamlessly integrated. In essence, HPOAnnotator solves the sparsity problem of the protein-phenotype association matrix by using a low-rank approximation.
Results: By combining the hierarchical structure of the HPO and the co-annotations of proteins, our model captures HPO semantic similarities well. Moreover, graph Laplacian regularizations are imposed in the latent space so as to utilize multiple PPI networks. The performance of HPOAnnotator has been validated under cross-validation and an independent test. Experimental results show that HPOAnnotator outperforms the competing methods significantly.
Conclusions: Through extensive comparisons with state-of-the-art methods, we conclude that the proposed HPOAnnotator achieves superior performance as a result of using a low-rank approximation with graph regularization. Our approach is promising in that it can serve as a starting point for studying more efficient matrix factorization-based algorithms.
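For readers unfamiliar with graph-regularized NMF, the sketch below shows basic multiplicative updates with Laplacian smoothing of the factors over a protein graph (e.g., a PPI network) and a phenotype graph (e.g., the HPO hierarchy). It is a simplified stand-in for the dual-graph regularization described above, not the HPOAnnotator implementation; the function name and hyperparameter defaults are assumptions.

```python
import numpy as np

def dual_graph_nmf(y, a_protein, a_pheno, k=50, alpha=0.1, beta=0.1,
                   n_iter=200, eps=1e-9):
    """Graph-regularized NMF sketch for a protein-phenotype matrix y (n x m):
    y ~ w @ h, with Laplacian smoothing of w over a protein adjacency matrix
    a_protein (n x n) and of h over a phenotype adjacency matrix a_pheno (m x m)."""
    n, m = y.shape
    rng = np.random.default_rng(0)
    w = rng.random((n, k))
    h = rng.random((k, m))
    d_protein = np.diag(a_protein.sum(axis=1))
    d_pheno = np.diag(a_pheno.sum(axis=1))
    for _ in range(n_iter):
        # multiplicative updates keep both factors non-negative
        w *= (y @ h.T + alpha * a_protein @ w) / (
            w @ h @ h.T + alpha * d_protein @ w + eps)
        h *= (w.T @ y + beta * h @ a_pheno) / (
            w.T @ w @ h + beta * h @ d_pheno + eps)
    return w, h
```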

