An Image Decomposition Fusion Method for Medical Images

2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Lihong Chang ◽  
Wan Ma ◽  
Yu Jin ◽  
Li Xu

A fusion method for medical images is proposed, based on cartoon+texture decomposition and convolutional sparse representation theory. It proceeds in three steps: first, the cartoon and texture parts are obtained with an improved cartoon-texture decomposition; second, fusion rules based on energy protection and feature extraction are applied to the cartoon part, while convolutional sparse representation is used to fuse the texture part; finally, the fused image is obtained by superimposing the fused cartoon and texture parts. Experiments show that the proposed algorithm is effective.
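The three steps above can be sketched as follows. This is a minimal stand-in, not the authors' method: a box filter replaces the improved cartoon-texture decomposition, a local-energy weighting replaces the energy-protection rule, and a per-pixel choose-max rule replaces the convolutional sparse representation fusion.

```python
import numpy as np

def box_blur(img, r=2):
    # Simple box filter; stands in for the improved cartoon-texture
    # decomposition's low-pass stage.
    pad = np.pad(img, r, mode='edge')
    k = 2 * r + 1
    out = np.zeros(img.shape)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / k**2

def decompose(img):
    cartoon = box_blur(img)        # piecewise-smooth part
    return cartoon, img - cartoon  # residual carries the texture

def fuse_cartoon(c1, c2):
    # Energy-protection rule (sketch): weight each cartoon part by its
    # local energy so the stronger structure dominates.
    e1, e2 = box_blur(c1 ** 2), box_blur(c2 ** 2)
    w = e1 / (e1 + e2 + 1e-12)
    return w * c1 + (1 - w) * c2

def fuse_texture(t1, t2):
    # Choose-max rule stands in for the CSR-based texture fusion.
    return np.where(np.abs(t1) >= np.abs(t2), t1, t2)

def fuse(img1, img2):
    c1, t1 = decompose(img1)
    c2, t2 = decompose(img2)
    return fuse_cartoon(c1, c2) + fuse_texture(t1, t2)
```

Note that fusing an image with itself returns the image unchanged, a quick sanity check for any decompose-fuse-superimpose pipeline.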

2012 ◽  
Vol 424-425 ◽  
pp. 223-226 ◽  
Author(s):  
Zheng Hong Cao ◽  
Yu Dong Guan ◽  
Peng Wang ◽  
Chun Li Ti

This paper focuses on the fusion of visible and infrared images, discusses the existing algorithms in depth, and proposes a novel set of fusion rules. The image is first decomposed into low-frequency and high-frequency coefficients by the nonsubsampled contourlet transform (NSCT), and the characteristics of the visible and infrared images are then taken into account to complete the fusion. Finally, the quality of the images fused by different algorithms is compared using several existing criteria. MATLAB is used for the simulation, and the results demonstrate that this algorithm improves the quality of the fused image effectively without losing image features.
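A rough two-band illustration of this scheme is sketched below. A box-filter low-pass stands in for the NSCT (which is multi-scale and multi-directional), and the averaging/choose-max rules are common baselines, not the paper's exact rules.

```python
import numpy as np

def low_pass(img, r=2):
    # Box blur as a stand-in for the NSCT low-frequency decomposition.
    pad = np.pad(img, r, mode='edge')
    k = 2 * r + 1
    out = np.zeros(img.shape)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / k**2

def fuse_vis_ir(visible, infrared):
    low_v, low_i = low_pass(visible), low_pass(infrared)
    high_v, high_i = visible - low_v, infrared - low_i
    # Average the low-frequency bands; keep the stronger high-frequency
    # detail at each pixel.
    low_f = 0.5 * (low_v + low_i)
    high_f = np.where(np.abs(high_v) >= np.abs(high_i), high_v, high_i)
    return low_f + high_f
```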


2017 ◽  
Vol 56 (28) ◽  
pp. 7969 ◽  
Author(s):  
Lihong Chang ◽  
Xiangchu Feng ◽  
Rui Zhang ◽  
Hua Huang ◽  
Weiwei Wang ◽  
...  

2018 ◽  
Vol 432 ◽  
pp. 516-529 ◽  
Author(s):  
Zhiqin Zhu ◽  
Hongpeng Yin ◽  
Yi Chai ◽  
Yanxia Li ◽  
Guanqiu Qi

Electronics ◽  
2019 ◽  
Vol 8 (3) ◽  
pp. 303 ◽  
Author(s):  
Xiaole Ma ◽  
Shaohai Hu ◽  
Shuaiqi Liu ◽  
Jing Fang ◽  
Shuwen Xu

In this paper, a remote sensing image fusion method is presented, since sparse representation (SR) has been widely used in image processing and especially in image fusion. First, an adaptive dictionary is learned from the source images, and sparse coefficients are obtained by sparsely coding the source images over this dictionary. Then, with the help of an improved hyperbolic tangent function (tanh) and the l0-max rule, these sparse coefficients are fused together, and an initial fused image is obtained by the SR-based fusion. To take full advantage of the spatial information of the source images, a fused image based on the spatial domain (SF) is obtained at the same time. Lastly, the final fused image is reconstructed by guided filtering of the SR-based and SF-based fused images. Experimental results show that the proposed method outperforms some state-of-the-art methods in visual and quantitative evaluations.
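The coefficient-fusion step can be sketched in isolation. The dictionary learning and sparse coding stages are omitted, and both the tanh scaling constant and the column-wise choose-max decision are assumptions, not the paper's exact formulation.

```python
import numpy as np

def fuse_sparse_coeffs(alpha1, alpha2, k=2.0):
    # alpha1, alpha2: (n_atoms, n_patches) sparse coefficient matrices
    # of the two source images over a shared adaptive dictionary.
    # Activity per patch: tanh-compressed l1 energy, standing in for the
    # paper's improved hyperbolic tangent weighting.
    a1 = np.tanh(k * np.abs(alpha1).sum(axis=0))
    a2 = np.tanh(k * np.abs(alpha2).sum(axis=0))
    # l0-max style decision: take the whole coefficient column from the
    # source whose activity is larger.
    return np.where((a1 >= a2)[None, :], alpha1, alpha2)
```

The fused coefficients would then be multiplied by the dictionary to reconstruct the initial SR-based fused image.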


Author(s):  
Liu Xian-Hong ◽  
Chen Zhi-Bin

Background: A multi-scale, multi-directional image fusion method is proposed that introduces the Nonsubsampled Directional Filter Bank (NSDFB) into a multi-scale edge-preserving decomposition based on the fast guided filter. Methods: The proposed method preserves edges and extracts directional information simultaneously. To obtain better fused sub-band coefficients, a Convolutional Sparse Representation (CSR) based fusion rule is introduced for the approximation sub-bands, and a Pulse Coupled Neural Network (PCNN) based fusion strategy, with the New Sum of Modified Laplacian (NSML) as the external input, is presented for the detail sub-bands. Results: Experimental results demonstrate the superiority of the proposed method over conventional methods in terms of visual effects and objective evaluations. Conclusion: Combining the fast guided filter and the nonsubsampled directional filter bank, this paper proposes a multi-scale, directional, edge-preserving image fusion method.
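The NSML activity measure that drives the detail-band fusion can be sketched as below. A plain choose-max on NSML replaces the PCNN firing decision here, and the window size is an assumption.

```python
import numpy as np

def nsml(band, r=1):
    # Modified Laplacian: absolute second differences along rows and columns.
    p = np.pad(band, 1, mode='edge')
    ml = (np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1])
          + np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:]))
    # Windowed sum gives the sum-of-modified-Laplacian activity map.
    pp = np.pad(ml, r, mode='edge')
    out = np.zeros(ml.shape)
    k = 2 * r + 1
    for dy in range(k):
        for dx in range(k):
            out += pp[dy:dy + ml.shape[0], dx:dx + ml.shape[1]]
    return out

def fuse_detail(d1, d2):
    # Choose-max on NSML activity stands in for the PCNN firing decision.
    return np.where(nsml(d1) >= nsml(d2), d1, d2)
```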


2021 ◽  
Vol 13 (6) ◽  
pp. 1143
Author(s):  
Yinghui Quan ◽  
Yingping Tong ◽  
Wei Feng ◽  
Gabriel Dauphin ◽  
Wenjiang Huang ◽  
...  

The fusion of hyperspectral image (HSI) and light detection and ranging (LiDAR) data has a wide range of applications. This paper proposes a novel feature fusion method for urban area classification, relative total variation structure analysis (RTVSA), which combines various features derived from HSI and LiDAR data. In the feature extraction stage, several high-performance methods, including the extended multi-attribute profile, Gabor filters, and local binary patterns, are used to extract features from the input data. Relative total variation is then applied to remove useless texture information from the processed data. Finally, nonparametric weighted feature extraction is adopted to reduce the dimensionality. Random forests and convolutional neural networks are used to evaluate the fused images. Experiments conducted on two urban Houston University datasets (Houston 2012 and the training portion of Houston 2017) demonstrate that the proposed method can extract structural correlations from heterogeneous data, withstand noise well, and improve land cover classification accuracy.
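The stack-then-reduce portion of the pipeline can be sketched as below. This is only a rough analogue: the relative total variation step is omitted, and unsupervised PCA stands in for nonparametric weighted feature extraction, which additionally uses class labels.

```python
import numpy as np

def fuse_features(hsi_feats, lidar_feats, n_components=10):
    # Stack per-pixel feature maps from both sensors: (H, W, D1 + D2).
    stacked = np.concatenate([hsi_feats, lidar_feats], axis=-1)
    h, w, d = stacked.shape
    x = stacked.reshape(-1, d)
    # PCA via SVD stands in for NWFE: center the features and project
    # onto the top principal directions.
    x = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    reduced = x @ vt[:n_components].T
    return reduced.reshape(h, w, n_components)
```

The reduced feature cube would then be fed to a classifier such as a random forest.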


2021 ◽  
pp. 102535
Author(s):  
Rongge Zhao ◽  
Yi Liu ◽  
Zhe Zhao ◽  
Xia Zhao ◽  
Pengcheng Zhang ◽  
...  
