Multifocus image fusion using multiscale transform and convolutional sparse representation

Author(s): Chengfang Zhang

Multifocus image fusion produces a single image with all objects in focus, which aids understanding of the target scene. Multiscale transform (MST) and sparse representation (SR) have been widely used in multifocus image fusion. However, MST-based fusion loses contrast during multiscale reconstruction, while SR-based fusion tends to smooth away fine details. In this paper, we propose a fusion method based on MST and convolutional sparse representation (CSR) to address the inherent defects of both approaches. MST is first performed on each source image to obtain the low-frequency component and the detailed directional components. CSR is then applied to fuse the low-pass bands, while the high-pass bands are fused using the popular "max-absolute" rule as the activity-level measurement. The fused image is finally obtained by performing the inverse MST on the fused coefficients. Experimental results on multifocus images show that the proposed algorithm achieves state-of-the-art performance in terms of definition.
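
As a rough illustration of the pipeline just described, the sketch below substitutes a Laplacian pyramid for the MST and an averaging placeholder for the CSR low-pass fusion (the paper's actual CSR step would use convolutional sparse coding, e.g. via the SPORCO package); only the "max-absolute" high-pass rule is taken directly from the text:

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=3):
    """Decompose a grayscale image into detail bands plus a low-pass residual."""
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    details = []
    for i in range(levels):
        up = cv2.pyrUp(gauss[i + 1], dstsize=(gauss[i].shape[1], gauss[i].shape[0]))
        details.append(gauss[i] - up)
    return details, gauss[-1]

def fuse_mst_csr(imgA, imgB, levels=3):
    detA, lowA = laplacian_pyramid(imgA, levels)
    detB, lowB = laplacian_pyramid(imgB, levels)
    # High-pass bands: the paper's "max-absolute" activity rule.
    fused_det = [np.where(np.abs(a) >= np.abs(b), a, b)
                 for a, b in zip(detA, detB)]
    # Low-pass band: averaging placeholder standing in for the CSR fusion.
    fused_low = 0.5 * (lowA + lowB)
    # Inverse transform: collapse the pyramid back to image space.
    out = fused_low
    for band in reversed(fused_det):
        out = cv2.pyrUp(out, dstsize=(band.shape[1], band.shape[0])) + band
    return np.clip(out, 0, 255).astype(np.uint8)
```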

2021
Author(s): Anuyogam Venkataraman

With the increasing use of X-ray Computed Tomography (CT) in medical diagnosis, obtaining higher-quality images at lower radiation exposure is a highly challenging image-processing task. Sparse-representation-based image fusion is one of the most sought-after fusion techniques among current researchers. This thesis proposes a novel image fusion algorithm based on focused-vector detection. First, the initial fused vector is acquired by combining the common and innovative sparse components of a multi-dosage ensemble using a joint sparse PCA fusion method, with an overcomplete dictionary trained on high-dose images of the same region of interest from different patients. Then, the strongly focused vector is obtained by identifying the pixels of the low-dose and medium-dose vectors that have high similarity to the pixels of the initial fused vector, measured using quantitative metrics. The final fused image is obtained by denoising and simultaneously integrating the strongly focused vector, the initial fused vector, and the source-image vectors in the joint sparse domain, thereby preserving the edges and other critical information needed for diagnosis. The thesis demonstrates the effectiveness of the proposed algorithms on a variety of images, and the qualitative and quantitative results are compared with several widely used image fusion methods.
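
The similarity-selection step can be pictured with the following minimal sketch; the per-pixel metric, the threshold tau, and the fallback rule are all illustrative assumptions, not the thesis's actual quantitative metrics:

```python
import numpy as np

def strongly_focused_vector(low, medium, initial_fused, tau=0.1):
    """Keep low/medium-dose pixels only where they closely match the fusion."""
    lo = low.astype(np.float32).ravel()
    md = medium.astype(np.float32).ravel()
    fu = initial_fused.astype(np.float32).ravel()
    # Per-pixel dissimilarity to the initial fused vector (assumed metric).
    d_lo = np.abs(lo - fu) / (np.abs(fu) + 1e-6)
    d_md = np.abs(md - fu) / (np.abs(fu) + 1e-6)
    out = np.where(d_lo <= d_md, lo, md)     # take the closer source pixel
    weak = np.minimum(d_lo, d_md) > tau      # neither source agrees well
    out[weak] = fu[weak]                     # fall back to the fused value
    return out.reshape(initial_fused.shape)
```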


Sensors, 2019, Vol. 19 (20), pp. 4556
Author(s): Yaochen Liu, Lili Dong, Yuanyuan Ji, Wenhai Xu

In many practical applications, the fused image must contain high-quality details to achieve a comprehensive representation of the real scene. However, existing image fusion methods lose details because errors accumulate across their sequential tasks. This paper proposes a novel fusion method that preserves the details of infrared and visible images by combining a new decomposition, feature extraction, and fusion scheme. For decomposition, unlike most guided-filter decomposition methods, the guidance image contains only the strong edges of the source image and no other interfering information, so rich fine details are decomposed into the detail part. Then, according to the different characteristics of the infrared and visible detail parts, a rough convolutional neural network (CNN) and a sophisticated CNN are designed so that diverse features can be fully extracted. To integrate the extracted features, we also present a multi-layer feature-fusion strategy based on the discrete cosine transform (DCT), which not only highlights significant features but also enhances details. The base parts are fused by a weighting method, and the fused image is obtained by adding the fused detail and base parts. Unlike general image fusion methods, our method not only retains the target region of the source image but also enhances the background in the fused image. Compared with state-of-the-art fusion methods, the proposed method offers (i) better visual quality in subjective evaluation of the fused image and (ii) better objective assessment scores.
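
A hedged sketch of the decomposition step described above: the strong-edge guidance is approximated here with a Canny mask (an assumption; the paper's own edge extraction is not reproduced), and cv2.ximgproc.guidedFilter from opencv-contrib-python performs the filtering:

```python
import cv2
import numpy as np

def strong_edge_decompose(src, radius=8, eps=1e-2):
    """src: uint8 grayscale image -> (base, detail) parts."""
    src_f = src.astype(np.float32) / 255.0
    # Guidance keeps only strong edges, so no texture leaks into it.
    edges = cv2.Canny(src, 50, 150).astype(np.float32) / 255.0
    guidance = src_f * edges
    base = cv2.ximgproc.guidedFilter(guide=guidance, src=src_f,
                                     radius=radius, eps=eps)
    detail = src_f - base      # rich tiny details land in the detail part
    return base, detail
```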


2018, Vol. 2018, pp. 1-12
Author(s): Jingming Xia, Yiming Chen, Aiyue Chen, Yicai Chen

Clinical computer-aided diagnosis places high demands on the visual quality of medical images. However, the low-frequency subband coefficients obtained by NSCT decomposition are not sparse, which is not conducive to preserving the details of the source image. To address this problem, a medical image fusion algorithm combining sparse representation and a pulse-coupled neural network is proposed. First, each source image is decomposed into low- and high-frequency subband coefficients by the NSCT transform. Second, the K-singular value decomposition (K-SVD) method is used to train an overcomplete dictionary D on the low-frequency subband coefficients, and the orthogonal matching pursuit (OMP) algorithm sparsely codes these coefficients to complete the fusion of the low-frequency sparse coefficients. Then, a pulse-coupled neural network (PCNN) is excited by the spatial frequency of the high-frequency subband coefficients, and the fused high-frequency coefficients are selected according to the firing counts. Finally, the fused medical image is reconstructed by the inverse NSCT. Experimental results show that, for gray-scale and color image fusion, the algorithm scores about 34% and 10% higher, respectively, than the comparison algorithms on the edge-information transfer factor QAB/F, and its fusion results outperform existing algorithms.
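
To make the low-frequency path concrete, here is a sketch in which scikit-learn's MiniBatchDictionaryLearning stands in for K-SVD (an assumption; scikit-learn ships no K-SVD) and OMP codes each patch; the per-patch max-l1 selection rule is likewise an illustrative assumption:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import orthogonal_mp

def fuse_lowpass_sparse(patchesA, patchesB, n_atoms=64, n_nonzero=5):
    """patchesA/B: (n_patches, patch_dim) vectorized low-frequency patches."""
    # Dictionary learning as a stand-in for K-SVD.
    dico = MiniBatchDictionaryLearning(n_components=n_atoms)
    D = dico.fit(np.vstack([patchesA, patchesB])).components_  # (n_atoms, dim)
    # OMP sparse coding of each patch against the learned dictionary.
    codesA = orthogonal_mp(D.T, patchesA.T, n_nonzero_coefs=n_nonzero)
    codesB = orthogonal_mp(D.T, patchesB.T, n_nonzero_coefs=n_nonzero)
    # Per-patch "max-l1" activity rule on the sparse codes (assumed rule).
    keepA = np.abs(codesA).sum(axis=0) >= np.abs(codesB).sum(axis=0)
    fused = np.where(keepA, codesA, codesB)
    return (D.T @ fused).T      # fused patches, shape (n_patches, patch_dim)
```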


2012, Vol. 546-547, pp. 806-810
Author(s): Xu Zhang, Yun Hui Yan, Wen Hui Chen, Jun Jun Chen

To suppress the pseudo-Gibbs phenomena that arise around singularities when fusing images of strip-surface defects captured from different angles, a novel image fusion method based on the Bandelet transform and pulse-coupled neural networks (PCNN) is proposed. The low-pass subband coefficients of each source image, obtained by the Bandelet transform, are fed into the PCNN, and coefficients are selected by firing frequency over the neuron iterations. Finally, the fused image is obtained through the inverse Bandelet transform using the selected coefficients and the geometric-flow parameters. Experimental results demonstrate that, for strip-surface defects such as scratches, abrasions, and pits, the fused image effectively combines the defect information of the multiple source images. Compared with the classical wavelet transform and the plain Bandelet transform, the method retains more detailed and comprehensive defect information and is therefore more effective.
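
Since no common Python implementation of the Bandelet transform exists, the sketch below shows only the PCNN selection stage on an arbitrary subband: a simplified PCNN accumulates firing counts, and the coefficient whose neuron fires more often wins (all PCNN parameters here are assumptions):

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_fire_counts(band, iters=30, alpha=0.2, beta=0.3, V=20.0):
    """Simplified PCNN: returns per-coefficient firing counts."""
    F = np.abs(band).astype(np.float32)
    F /= (F.max() + 1e-12)                 # feeding input = normalized |coeff|
    Y = np.zeros_like(F)                   # firing map
    E = np.zeros_like(F)                   # dynamic threshold (fires at start)
    fires = np.zeros_like(F)
    link = np.array([[0.5, 1, 0.5], [1, 0, 1], [0.5, 1, 0.5]], np.float32)
    for _ in range(iters):
        L = convolve(Y, link, mode="constant")   # linking from neighbours
        U = F * (1.0 + beta * L)                 # internal activity
        Y = (U > E).astype(np.float32)           # fire where activity wins
        E = np.exp(-alpha) * E + V * Y           # raise threshold after firing
        fires += Y
    return fires

def fuse_subband(bandA, bandB):
    """Keep, per position, the coefficient whose neuron fired more often."""
    return np.where(pcnn_fire_counts(bandA) >= pcnn_fire_counts(bandB),
                    bandA, bandB)
```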


Author(s): Zhiguang Yang, Youping Chen, Zhuliang Le, Yong Ma

In this paper, a novel multi-exposure image fusion method based on generative adversarial networks, termed GANFuse, is presented. Conventional multi-exposure image fusion methods improve their performance by designing sophisticated activity-level measurements and fusion rules, but they have limited success on complex fusion tasks. Inspired by the recent FusionGAN, which first utilized generative adversarial networks (GANs) to fuse infrared and visible images with promising results, we improve its architecture and customize it for the task of extreme-exposure image fusion. Specifically, to keep the content of both extreme-exposure inputs in the fused image, we increase the number of discriminators, each differentiating the fused image from one of the extreme-exposure inputs, while a generator network is trained to produce the fused image. Through the adversarial relationship between the generator and the discriminators, the fused image retains more information from both extreme-exposure inputs, which yields better fusion performance. In addition, the proposed method is an end-to-end, unsupervised learning model, which avoids hand-crafted features and does not require ground-truth images for training. Qualitative and quantitative experiments on a public dataset show that the proposed model outperforms existing multi-exposure image fusion methods in both visual effect and evaluation metrics.
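
The dual-discriminator idea can be sketched as follows in PyTorch; the toy network bodies and the unweighted loss sum are assumptions, not the GANFuse architecture:

```python
import torch
import torch.nn as nn

# Toy stand-ins for the real GANFuse networks (architectures are assumed).
G = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))

def make_D():
    return nn.Sequential(nn.Conv2d(1, 16, 3, stride=2), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

D_under, D_over = make_D(), make_D()  # one discriminator per exposure extreme
bce = nn.BCEWithLogitsLoss()

def generator_loss(under, over):
    """The generator must fool BOTH discriminators, so the fused image
    keeps content from both extreme-exposure inputs."""
    fused = G(torch.cat([under, over], dim=1))
    real = torch.ones(fused.size(0), 1)
    return bce(D_under(fused), real) + bce(D_over(fused), real)
```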


2015, Vol. 2015, pp. 1-9
Author(s): Peng Geng, Shuaiqi Liu, Shanna Zhuang

Medical image fusion plays an important role in the diagnosis and treatment of diseases, for example in image-guided radiotherapy and surgery. A modified local-contrast measure is proposed for fusing multimodal medical images. First, an adaptive manifold filter is applied to the source images to provide the low-frequency part of the modified local contrast. Second, the modified spatial frequency of the source images serves as its high-frequency part. Finally, the pixel with the larger modified local contrast is selected for the fused image. The presented scheme outperforms the spatial-domain guided-filter method, the dual-tree complex wavelet transform-based method, the nonsubsampled contourlet transform-based method, and four classic fusion methods in terms of visual quality. Furthermore, over the six pairs of source images, the mutual-information values of the presented method are on average 55%, 41%, and 62% higher than those of the three named methods, and its edge-based similarity measure values are on average 13%, 33%, and 14% higher.
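
A minimal sketch of the modified-local-contrast rule, assuming a Gaussian filter as a stand-in for the adaptive manifold filter and a standard four-direction modified spatial frequency:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def modified_spatial_frequency(img):
    """Row, column, and two diagonal gradient energies combined."""
    f = img.astype(np.float32)
    rf = np.zeros_like(f); cf = np.zeros_like(f)
    d1 = np.zeros_like(f); d2 = np.zeros_like(f)
    rf[:, 1:] = np.diff(f, axis=1) ** 2              # row frequency
    cf[1:, :] = np.diff(f, axis=0) ** 2              # column frequency
    d1[1:, 1:] = (f[1:, 1:] - f[:-1, :-1]) ** 2      # main diagonal
    d2[1:, :-1] = (f[1:, :-1] - f[:-1, 1:]) ** 2     # anti-diagonal
    return np.sqrt(rf + cf + d1 + d2)

def fuse_mlc(imgA, imgB, sigma=3.0):
    """Select, per pixel, the source with the larger modified local contrast."""
    lowA = gaussian_filter(imgA.astype(np.float32), sigma)
    lowB = gaussian_filter(imgB.astype(np.float32), sigma)
    mlcA = modified_spatial_frequency(imgA) / (lowA + 1e-6)
    mlcB = modified_spatial_frequency(imgB) / (lowB + 1e-6)
    return np.where(mlcA >= mlcB, imgA, imgB)
```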


2006
Author(s): Dan Mueller

Image fusion provides a mechanism to combine multiple images into a single representation that aids human visual perception and image-processing tasks. Such algorithms endeavour to create a fused image containing the salient information from each source image without introducing artefacts or inconsistencies. Image fusion is applicable in numerous fields, including defence systems, remote sensing and geoscience, robotics and industrial engineering, and medical imaging. In the medical imaging domain, image fusion can aid diagnosis and surgical-planning tasks that require the segmentation, feature extraction, and/or visualisation of multi-modal datasets. This paper discusses the implementation of an image fusion toolkit built upon the Insight Toolkit (ITK). Based on an existing architecture, the proposed framework (GIFT) offers a 'plug-and-play' environment for the construction of n-D multi-scale image fusion methods. We give a brief overview of the toolkit design and demonstrate how to construct image fusion algorithms from low-level components (such as multi-scale methods and feature generators). A number of worked examples for medical applications are presented in Appendix A, including quadrature mirror filter discrete wavelet transform (QMF DWT) image fusion.
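
GIFT itself is a C++/ITK toolkit, but the pipeline it composes (multiscale decomposition, per-band fusion rule, reconstruction) can be illustrated with a short PyWavelets sketch; the averaging and max-absolute rules below are common defaults, not necessarily GIFT's:

```python
import numpy as np
import pywt

def dwt_fuse(imgA, imgB, wavelet="db2", levels=3):
    """DWT fusion: average the approximation, max-abs the detail bands."""
    cA = pywt.wavedec2(imgA.astype(np.float32), wavelet, level=levels)
    cB = pywt.wavedec2(imgB.astype(np.float32), wavelet, level=levels)
    fused = [(cA[0] + cB[0]) / 2.0]              # approximation band
    for a_bands, b_bands in zip(cA[1:], cB[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(a_bands, b_bands)))
    return pywt.waverec2(fused, wavelet)
```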


2019, Vol. 2019, pp. 1-14
Author(s): Ling Tan, Xin Yu

Clinical diagnosis places high requirements on the visual quality of medical images. To obtain rich detail features and clear edges in fused medical images, an image fusion algorithm, FFST-SR-PCNN, based on the fast finite shearlet transform (FFST) and sparse representation is proposed, addressing the poor edge-detail clarity of current algorithms. First, the source image is decomposed into low- and high-frequency coefficients by the FFST. Second, the K-SVD method is used to train an overcomplete dictionary D on the low-frequency coefficients, and the OMP algorithm then sparsely codes them to complete the fusion of the low-frequency coefficients. Next, the high-frequency coefficients excite a pulse-coupled neural network, and the fused high-frequency coefficients are selected according to the firing counts. Finally, the fused low- and high-frequency coefficients are reconstructed into the fused medical image by the inverse FFST. Experimental results show that the proposed algorithm scores about 35% higher than the comparison algorithms on the edge-information transfer factor QAB/F and achieves good results in both subjective visual effect and objective evaluation indicators.
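
Both this algorithm and the NSCT-based one above sparse-code vectorized patches of the low-frequency band; the helper below shows the standard sliding-window extraction and overlap-averaging reassembly such methods rely on (patch size and stride are illustrative assumptions):

```python
import numpy as np

def extract_patches(img, size=8, stride=4):
    """Vectorize overlapping patches; returns (patches, top-left coords)."""
    H, W = img.shape
    patches, coords = [], []
    for i in range(0, H - size + 1, stride):
        for j in range(0, W - size + 1, stride):
            patches.append(img[i:i + size, j:j + size].ravel())
            coords.append((i, j))
    return np.array(patches, dtype=np.float32), coords

def reassemble(patches, coords, shape, size=8):
    """Overlap-average the (possibly fused) patches back into an image."""
    acc = np.zeros(shape, np.float32)
    cnt = np.zeros(shape, np.float32)
    for p, (i, j) in zip(patches, coords):
        acc[i:i + size, j:j + size] += p.reshape(size, size)
        cnt[i:i + size, j:j + size] += 1.0
    return acc / np.maximum(cnt, 1.0)
```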


