dictionary learning
Recently Published Documents

Total documents: 2697 (last five years: 863)
H-index: 66 (last five years: 14)

BMC Genomics ◽ 2022 ◽ Vol 23 (1)
Author(s): Mona Rams, Tim O.F. Conrad
Abstract

Background: Pseudotime estimation from dynamic single-cell transcriptomic data enables characterisation and understanding of the underlying processes, for example developmental processes. Various pseudotime estimation methods have been proposed in recent years. Typically, these methods start with a dimension reduction step, because the low-dimensional representation is usually easier to analyse. Approaches such as PCA, ICA or t-SNE are among the most widely used dimension reduction methods in pseudotime estimation. However, these methods usually make assumptions about the derived dimensions, which can cause important dataset properties to be missed. In this paper, we propose a new dictionary-learning-based approach, dynDLT, for dimension reduction and pseudotime estimation of dynamic transcriptomic data. Dictionary learning is a matrix factorisation approach that does not restrict the dependence of the derived dimensions. To evaluate its performance, we conduct a large simulation study and analyse 8 real-world datasets.

Results: The simulation studies reveal that, first, dynDLT preserves the simulated patterns in the low-dimensional representation, and the pseudotimes can be derived from that representation. Second, the results show that dynDLT is suitable for detecting genes that exhibit the simulated dynamic patterns, thereby facilitating the interpretation of the compressed representation and thus of the dynamic processes. For the real-world data analysis, we select datasets whose samples were taken at different time points throughout an experiment. The pseudotimes found by dynDLT correlate strongly with the experimental times. We compare the results to other approaches used in pseudotime estimation, or methods closely related to dictionary learning: ICA, NMF, PCA, t-SNE, and UMAP. DynDLT has the best overall performance on the simulated and real-world datasets.

Conclusions: We introduce dynDLT, a method that is suitable for pseudotime estimation. Its main advantages are: (1) it is a model-free approach, meaning that it does not restrict the dependence of the derived dimensions; (2) genes that are relevant to the detected dynamic processes can be identified from the dictionary matrix; (3) by restricting the dictionary entries to positive values, the dictionary atoms are highly interpretable.
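To make the kind of factorisation described above concrete, the following is a minimal sketch of non-negative dictionary learning on a toy expression matrix using scikit-learn; the matrix sizes, the number of atoms, and the simple ordering of cells by their loading on one atom are illustrative assumptions, not the authors' dynDLT implementation.

```python
# Minimal sketch of dictionary-learning-based dimension reduction on a toy
# expression matrix (cells x genes). Illustrative only; not the dynDLT code.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.poisson(2.0, size=(200, 500)).astype(float)   # toy cells x genes matrix

# Restrict dictionary entries to positive values so the atoms stay
# interpretable, as the abstract suggests; 10 atoms is an arbitrary choice.
dl = DictionaryLearning(n_components=10, alpha=1.0, positive_dict=True,
                        random_state=0)
codes = dl.fit_transform(X)     # low-dimensional representation (cells x atoms)
atoms = dl.components_          # dictionary matrix (atoms x genes)

# Genes with large weights in an atom are candidates for the dynamic process
# that the atom captures.
top_genes = np.argsort(atoms[0])[::-1][:20]

# A crude pseudotime proxy: order cells by their loading on one atom.
pseudotime_order = np.argsort(codes[:, 0])
```

In practice a dedicated pseudotime procedure would replace the last line; the point here is only that both the low-dimensional codes and the per-gene weights come directly out of the factorisation.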


Photonics ◽ 2022 ◽ Vol 9 (1) ◽ pp. 35
Author(s): Xuru Li, Xueqin Sun, Yanbo Zhang, Jinxiao Pan, Ping Chen

Spectral computed tomography (CT) divides the collected photons into multiple energy channels and acquires multi-channel projections simultaneously by using photon-counting detectors. However, the reconstructed images usually contain severe noise because of the limited number of photons in each energy channel. Tensor dictionary learning (TDL)-based methods have achieved promising performance, but they tend to lose edge information and fine details, especially for under-sampled datasets. To address this problem, this paper proposes a method termed TDL with an enhanced sparsity constraint for spectral CT reconstruction. The proposed algorithm inherits the strengths of TDL by exploiting the correlation among spectral CT images. Moreover, the method designs a regularization based on the L0-norm of the image gradient, applied simultaneously to the image in each energy channel and to its difference from a prior image, which further improves the ability to preserve edge information and subtle image details. The split-Bregman algorithm is applied to solve the proposed objective minimization model. Several numerical simulations and a realistic preclinical mouse dataset are studied to assess the effectiveness of the proposed algorithm. The results demonstrate that the proposed method improves the quality of spectral CT images in terms of noise suppression, edge preservation, and recovery of image detail compared with several existing competing methods.
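As a rough illustration of the regularizer described above, the sketch below evaluates an L0-type penalty on the gradient of a channel image and on its difference from a prior image; the thresholds, weights, and toy images are placeholders, and the full split-Bregman reconstruction is not shown.

```python
# Hedged sketch of the enhanced sparsity constraint: count (approximately)
# non-zero gradient entries of a channel image and of its difference from a
# prior image. Illustrates only the regularizer, not the split-Bregman solver.
import numpy as np

def l0_of_gradient(img, eps=1e-6):
    """Approximate L0 'norm' of the discrete image gradient."""
    gx = np.diff(img, axis=0)
    gy = np.diff(img, axis=1)
    return np.count_nonzero(np.abs(gx) > eps) + np.count_nonzero(np.abs(gy) > eps)

def enhanced_sparsity_penalty(channel_img, prior_img, lam1=1.0, lam2=1.0):
    """Gradient sparsity of the channel image plus gradient sparsity of its
    difference from a prior image (e.g. a full-spectrum reconstruction)."""
    return (lam1 * l0_of_gradient(channel_img)
            + lam2 * l0_of_gradient(channel_img - prior_img))

# Toy usage with random arrays standing in for one energy channel and a prior.
rng = np.random.default_rng(1)
channel = rng.normal(size=(64, 64))
prior = channel + 0.01 * rng.normal(size=(64, 64))
print(enhanced_sparsity_penalty(channel, prior))
```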


Entropy ◽ 2022 ◽ Vol 24 (1) ◽ pp. 96
Author(s): Shujun Liu, Ningjie Pu, Jianxin Cao, Kui Zhang

Synthetic aperture radar (SAR) images are inherently degraded by speckle noise caused by coherent imaging, which may impair the performance of subsequent image analysis tasks. To address this problem, this article proposes an integrated SAR image despeckling model based on dictionary learning and multi-weighted sparse coding. First, the dictionary is trained on groups composed of similar image patches that share the same structural features. An effective orthogonal dictionary with high sparse representation ability is obtained by introducing a proper tight frame. Furthermore, the data-fidelity term and the regularization terms are constrained by weighting factors. The weighted sparse representation model not only fully exploits the inter-block relevance but also reflects the importance of the various structural groups in the despeckling process. The proposed model is implemented with fast and effective solving steps that simultaneously perform orthogonal dictionary learning, weight parameter updating, sparse coding, and image reconstruction. The solving steps are designed using the alternating minimization method. Finally, the speckle is further suppressed by iterative regularization. In a comparison study with existing methods, our method demonstrated state-of-the-art performance in suppressing speckle noise and preserving image texture details.
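A compact sketch of the alternating steps outlined above, for a single group of similar patches: weighted hard-threshold sparse coding against an orthogonal dictionary, followed by an orthogonal dictionary update via the SVD (Procrustes) solution. The weighting, threshold, and iteration count are illustrative assumptions rather than the paper's exact model.

```python
# Hedged sketch: alternating minimization with an orthogonal dictionary for one
# group of similar patches (columns of Y). Illustrative, not the paper's solver.
import numpy as np

def despeckle_group(Y, n_iters=10, weight=1.0, thresh=0.1):
    d = Y.shape[0]
    D = np.eye(d)                        # start from the identity dictionary
    S = np.zeros_like(Y)
    for _ in range(n_iters):
        # Sparse coding: for an orthogonal D the coefficients are D^T Y,
        # shrunk here by a weighted hard threshold.
        S = D.T @ Y
        S[np.abs(S) < weight * thresh] = 0.0
        # Orthogonal dictionary update (Procrustes): D = U V^T, where
        # U, V come from the SVD of Y S^T.
        U, _, Vt = np.linalg.svd(Y @ S.T)
        D = U @ Vt
    return D @ S                         # reconstructed (despeckled) patches

rng = np.random.default_rng(2)
patches = rng.normal(size=(64, 200))     # e.g. 8x8 patches flattened, 200 per group
clean = despeckle_group(patches)
```

In the full method the weights and the data-fidelity term would also be updated inside the same alternating loop, and the despeckled groups would be aggregated back into the image before iterative regularization.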


Author(s): Jing Dong, Liu Yang, Chang Liu, Xiaoqing Luo, Jian Guan

2022 ◽ pp. 103420
Author(s): S. Akhavan, F. Baghestani, P. Kazemi, A. Karami, H. Soltanian-Zadeh

Author(s): Fangyuan Gao, Xin Deng, Mai Xu, Jingyi Xu, Pier Luigi Dragotti

2021 ◽ Vol 2021 ◽ pp. 1-8
Author(s): Guocan Han, Weifeng Lin, Wei Lin

This study investigated the diagnostic accuracy of magnetic resonance imaging (MRI) based on deep dictionary learning for TNM (tumor, node, and metastasis) staging of renal cell carcinoma. In this study, 82 patients with renal cancer were enrolled. The MRI images were reconstructed with deep dictionary learning to improve image recognition, and TNM staging was performed by professional imaging personnel; the pathological diagnosis was carried out by the hospital's pathology laboratory. The imaging-based staging results were compared with the pathological staging results, and consistency statistics were used to evaluate the diagnostic value. The results showed that T staging was highly consistent with the pathological diagnosis: 2 cases were misdiagnosed, for an accuracy of 97.56%. N staging was less consistent with the pathological diagnosis: 10 cases were misdiagnosed, for an accuracy of 87.80%. M staging was highly consistent with the pathological diagnosis: 4 cases were misdiagnosed, for an accuracy of 95.12%. At laparotomy, 37 patients were found to have tumor emboli and 45 had none, whereas MRI identified emboli in 40 patients and none in 42, for an accuracy of 96.34%. Overall, in the evaluation of TNM staging by deep-dictionary-learning-based MRI in patients with renal cell carcinoma, the T- and M-staging results were highly consistent with the pathological diagnosis, the N-staging results were slightly less accurate, and the overall diagnostic consistency was good. These results support the clinical application of MRI based on deep dictionary learning for the diagnosis of TNM stage in renal cell carcinoma.
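As a small illustration of the consistency analysis mentioned above, the sketch below computes accuracy and Cohen's kappa between MRI-derived and pathological T-stage labels; the label arrays are fabricated placeholders that only demonstrate the calculation, not the study's data.

```python
# Hedged sketch: agreement between MRI-based and pathological T staging,
# reported as accuracy and Cohen's kappa. Labels are placeholders, not study data.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(3)
pathology_t = rng.integers(1, 5, size=82)      # pathological T stage (T1-T4)
mri_t = pathology_t.copy()
flip = rng.choice(82, size=2, replace=False)   # pretend 2 cases were misstaged
mri_t[flip] = (mri_t[flip] % 4) + 1            # move them to a different stage

print("accuracy:", accuracy_score(pathology_t, mri_t))   # 80/82, about 0.9756
print("kappa:   ", cohen_kappa_score(pathology_t, mri_t))
```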


Electronics ◽ 2021 ◽ Vol 10 (23) ◽ pp. 3021
Author(s): Jing Li, Xiao Wei, Fengpin Wang, Jinjia Wang

Inspired by the recent success of the proximal gradient method (PGM) and by recent efforts to develop inertial algorithms, we propose an inertial PGM (IPGM) for convolutional dictionary learning (CDL) that jointly optimizes an ℓ2-norm data-fidelity term and a sparsity term enforcing an ℓ1 penalty. In contrast to other CDL methods, in the proposed approach the dictionary and the needles are updated with an inertial force by the PGM. We derive a novel formula for the gradient of the data-fidelity term with respect to the needles and the dictionary. At the same time, a gradient descent step is designed to add an inertial term. The proximal operation applies a thresholding operation to the needles and projects the dictionary atoms onto the unit-norm sphere. We prove the convergence of the proposed IPGM algorithm in the backtracking case. Simulation results show that the proposed IPGM achieves better performance than the PGM and slice-based methods that share the same structure and are optimized using the alternating direction method of multipliers (ADMM).
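A minimal sketch of the inertial proximal-gradient idea described above, for the sparse-code (needle) update with a fixed generic linear operator: inertial extrapolation, a gradient step on the ℓ2 data term, and soft thresholding as the proximal step for the ℓ1 penalty, plus the unit-norm projection used for dictionary atoms. The operator, step size, and inertia weight are illustrative, not the paper's IPGM implementation.

```python
# Hedged sketch: inertial proximal-gradient updates for sparse codes x with a
# fixed linear operator A standing in for convolution with the dictionary.
# Not the authors' IPGM; step size and inertial weight are illustrative.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (applied to the needles)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def project_unit_norm(D):
    """Dictionary proximal step: project each atom onto the unit sphere.
    Shown for reference; the toy example below keeps A fixed."""
    return D / np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)

def ipg_step(x, x_prev, A, y, lam, step, beta):
    z = x + beta * (x - x_prev)            # inertial extrapolation
    grad = A.T @ (A @ z - y)               # gradient of 0.5 * ||A z - y||^2
    return soft_threshold(z - step * grad, step * lam), x

# Toy usage: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(4)
A = rng.normal(size=(80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[rng.choice(200, size=10, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.normal(size=80)

x = x_prev = np.zeros(200)
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L for the smooth data term
for _ in range(200):
    x, x_prev = ipg_step(x, x_prev, A, y, lam=0.05, step=step, beta=0.5)
```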

