A New Robust Adaptive Fusion Method for Double-Modality Medical Image PET/CT

2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Tao Zhou ◽  
Huiling Lu ◽  
Fuyuan Hu ◽  
Hongbin Shi ◽  
Shi Qiu ◽  
...  

A new robust adaptive fusion method for double-modality PET/CT medical images is proposed within the Piella framework. The algorithm consists of three steps. First, the registered PET and CT images are decomposed using the nonsubsampled contourlet transform (NSCT). Second, to highlight lesions in the low-frequency image, the low-frequency components are fused by a pulse-coupled neural network (PCNN), which has a higher sensitivity to featured areas with low intensities. For the high-frequency subbands, a Gaussian random matrix is used for compressed measurements, the histogram distance between every pair of corresponding subblocks of the high-frequency coefficients is employed as the match measure, and regional energy is used as the activity measure. The fusion factor d is then calculated from the match and activity measures. The high-frequency measurement values are fused according to the fusion factor, and the high-frequency fusion image is reconstructed from the fused measurements by the orthogonal matching pursuit algorithm. Third, the final image is obtained through the inverse NSCT of the low-frequency fusion image and the reconstructed high-frequency fusion image. To validate the proposed algorithm, four comparative experiments were performed: comparison with other image fusion algorithms, comparison of different activity measures, comparison of different match measures, and PET/CT fusion of lung cancer cases (20 groups). The experimental results showed that the proposed algorithm better retains and displays lesion information and is superior to the other fusion algorithms in both subjective and objective evaluations.
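The decision step above, combining a match measure and an activity measure into a fusion factor, can be sketched in the spirit of the Piella framework. This is an illustrative reconstruction, not the authors' code: the threshold value and the soft-weight formula are assumptions.

```python
def fusion_factor(match, act_a, act_b, threshold=0.75):
    """Piella-style decision factor d for one pair of subblocks.
    `match` in [0, 1] (e.g. a normalised histogram distance turned into
    a similarity); act_a/act_b are regional-energy activity measures.
    Threshold and weight formula are illustrative assumptions."""
    if match < threshold:
        # Dissimilar blocks: winner-take-all on the activity measure.
        return 1.0 if act_a >= act_b else 0.0
    # Similar blocks: soft weight leaning toward the stronger activity.
    w_max = 0.5 + 0.5 * (1.0 - match) / (1.0 - threshold)
    return w_max if act_a >= act_b else 1.0 - w_max

def fuse_block(block_a, block_b, d):
    """Blend two corresponding measurement blocks with factor d."""
    return d * block_a + (1.0 - d) * block_b
```

At `match == threshold` the soft weight degenerates to winner-take-all (`w_max == 1`), and at `match == 1` it becomes a plain average, so the rule transitions smoothly between the two regimes.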

2015 ◽  
Vol 719-720 ◽  
pp. 988-993
Author(s):  
Hui Zhu Ma ◽  
Qi Gui Nie

Traditional fusion rules for multi-focus images largely centre on the fusion rule for high-frequency coefficients, and those rules operate on single pixels, which leads to a serious ringing effect and reduces the visual quality of the fused image. After a wavelet transform, the energy of an image is concentrated in the low-frequency part, and multi-focus images have the characteristic that the vast majority of adjacent pixels belong together either to the clear area or to the blurred area. Based on this analysis, a new fusion method for multi-focus images is presented in this paper. The simulation results show that the proposed method is more effective than common methods in processing multi-focus images.
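The region-based idea, that neighbouring multi-focus pixels are usually all clear or all blurred and so coefficients should be chosen block-wise rather than pixel-wise, might be sketched like this. The block size and the variance-based clarity proxy are assumptions for illustration:

```python
import numpy as np

def fuse_low_freq_blocks(lf_a, lf_b, bs=4):
    """Region-based rule: split the low-frequency band into blocks and
    take each block whole from the source whose block has the higher
    variance (a simple clarity proxy), exploiting the fact that
    neighbouring multi-focus pixels tend to be all-clear or all-blurred."""
    fused = np.empty_like(lf_a)
    h, w = lf_a.shape
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            a = lf_a[i:i + bs, j:j + bs]
            b = lf_b[i:i + bs, j:j + bs]
            fused[i:i + bs, j:j + bs] = a if a.var() >= b.var() else b
    return fused
```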


2019 ◽  
Vol 14 (7) ◽  
pp. 658-666
Author(s):  
Kai-jian Xia ◽  
Jian-qiang Wang ◽  
Jian Cai

Background: Lung cancer is one of the most common malignant tumors. Successful diagnosis of lung cancer depends on the accuracy of the images obtained from medical imaging modalities. Objective: The fusion of CT and PET combines the complementary and redundant information of both images and can ease perception. Since existing fusion methods are not perfect and the fusion effect remains to be improved, this paper proposes a novel method, adaptive PET/CT fusion for lung cancer, in the Piella framework. Methods: The algorithm first adopts the DTCWT to decompose the PET and CT images into different components. In accordance with the characteristics of the low-frequency and high-frequency components and the features of the PET and CT images, five membership functions are combined to determine the fusion weights for the low-frequency components. To fuse the high-frequency components, we select the energy difference of the decomposition coefficients as the match measure and the local energy as the activity measure; in addition, a decision factor is determined for the high-frequency components. Results: The proposed method is compared with several pixel-level spatial-domain image fusion algorithms. The experimental results show that the proposed algorithm is feasible and effective. Conclusion: The proposed algorithm better retains and emphasizes the lesion edge information and lesion texture information in image fusion.
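A hedged sketch of the membership-function weighting for the low-frequency components: the paper combines five membership functions, and the generic sigmoid below is only a stand-in, with centre and slope parameters that are pure assumptions.

```python
import math

def sigmoid_membership(x, c=0.5, k=10.0):
    """A generic sigmoid membership function on a normalised feature
    x in [0, 1]; centre c and slope k are illustrative assumptions,
    not values from the paper (which combines five such functions)."""
    return 1.0 / (1.0 + math.exp(-k * (x - c)))

def fuse_low(pet_coeff, ct_coeff, salience):
    """Weight the PET low-frequency coefficient by the membership value
    of a normalised salience feature; the CT coefficient receives the
    complementary weight."""
    w = sigmoid_membership(salience)
    return w * pet_coeff + (1.0 - w) * ct_coeff
```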


2021 ◽  
Vol 12 (4) ◽  
pp. 78-97
Author(s):  
Hassiba Talbi ◽  
Mohamed-Khireddine Kholladi

In this paper, the authors propose a hybrid particle swarm optimization algorithm with a differential evolution (DE) operator, termed DEPSO, combined with a multi-resolution transform, the dual-tree complex wavelet transform (DTCWT), to solve the problem of multimodal medical image fusion. The hybridization aims to combine the algorithms judiciously, so that the result inherits the strengths of each. The new algorithm decomposes the source images into high-frequency and low-frequency coefficients by the DTCWT, then adopts the absolute-maximum method to fuse the high-frequency coefficients; the low-frequency coefficients are fused by a weighted average whose weights are estimated and refined by the optimization method to obtain optimal results. The authors demonstrate experimentally that this algorithm, besides its simplicity, provides a robust and efficient way to fuse multimodal medical images compared with existing wavelet-transform-based image fusion algorithms.
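The two fusion rules named above can be sketched directly. The fixed weight in the second function stands in for the value that DEPSO would optimise; the optimiser itself is out of scope here.

```python
import numpy as np

def fuse_high_abs_max(hf_a, hf_b):
    """Absolute-maximum rule for detail subbands: at each position keep
    the coefficient with the larger magnitude, since large detail
    coefficients carry edges."""
    return np.where(np.abs(hf_a) >= np.abs(hf_b), hf_a, hf_b)

def fuse_low_weighted(lf_a, lf_b, w):
    """Weighted average of the approximation subbands; in the paper w is
    tuned by the DEPSO optimiser, here it is simply a parameter."""
    return w * lf_a + (1.0 - w) * lf_b
```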


2018 ◽  
Vol 7 (2.31) ◽  
pp. 165
Author(s):  
M Shyamala Devi ◽  
P Balamurugan

Image processing applications often require either the full image or a part of it, selected from the user's point of view (e.g., by the radius of an object). The main purpose of fusion is to reduce the dissimilarity error between the fused image and the input images. For medical diagnosis, the edges and outlines of the objects of interest are more important than other information, so preserving the edge features of the image is worth investigating in image fusion; an image with higher contrast contains more edge-like features. Here we propose a new medical image fusion scheme, Local Energy Match NSCT, based on the discrete contourlet transformation, which is well suited to representing curved edges. It improves the edge information of the fused image by reducing distortion. The transformation decomposes a multimodal image into finer and coarser details, and the finest details are further decomposed into different resolutions at different orientations. The input multimodal images, CT and MRI, are first transformed by the nonsubsampled contourlet transform (NSCT), which decomposes each image into low-frequency and high-frequency elements. In our scheme, the low-frequency coefficients are fused by image averaging and a Gabor filter bank algorithm, and the high-frequency coefficients are fused by image averaging and a gradient-based fusion algorithm. The fused image is then obtained by the inverse NSCT with local-energy-matched coefficients. To evaluate fusion accuracy, the peak signal-to-noise ratio (PSNR), root mean square error (RMSE), and correlation coefficient are used in this work.
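The evaluation metrics listed at the end (PSNR, RMSE, correlation coefficient) have standard definitions, sketched here in NumPy for 8-bit images:

```python
import numpy as np

def rmse(ref, img):
    """Root mean square error between two images."""
    d = ref.astype(float) - img.astype(float)
    return float(np.sqrt(np.mean(d ** 2)))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    e = rmse(ref, img)
    return float('inf') if e == 0.0 else 20.0 * np.log10(peak / e)

def corr_coeff(a, b):
    """Pearson correlation coefficient of two images."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])
```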


2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Zhaisheng Ding ◽  
Dongming Zhou ◽  
Rencan Nie ◽  
Ruichao Hou ◽  
Yanyu Liu

Computed tomography (CT) images show structural features, while magnetic resonance imaging (MRI) images represent brain tissue anatomy but do not contain functional information. How to effectively combine images of the two modalities has become a research challenge. In this paper, a new framework for medical image fusion is proposed that combines convolutional neural networks (CNNs) and the non-subsampled shearlet transform (NSST) to exploit the advantages of both. The method effectively retains the functional information of the CT image and reduces the loss of brain structure information and spatial distortion in the MRI image. In our fusion framework, the initial weights integrate the pixel activity information of the two source images; they are generated by a dual-branch convolutional network and decomposed by the NSST. First, the NSST is applied to the source images and the initial weights to obtain their low-frequency and high-frequency coefficients. Then, the first component of the low-frequency coefficients is fused by a novel strategy that simultaneously handles two key issues in fusion, energy conservation and detail extraction. The second component of the low-frequency coefficients is fused by a strategy designed according to the spatial frequency of the weight map. The high-frequency coefficients are fused using the high-frequency components of the initial weights. Finally, the fused image is reconstructed by the inverse NSST. The effectiveness of the proposed method is verified on pairs of multimodality images, and extensive experiments indicate that the method performs especially well for medical image fusion.
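The spatial frequency used to fuse the second low-frequency component has a standard definition, sketched here; the paper's exact windowing over the weight map is not specified, so this version operates on a whole array:

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency of a 2-D array: root of the summed mean squared
    row-wise and column-wise first differences; higher values indicate
    more detail/activity."""
    img = img.astype(float)
    rf = np.mean(np.diff(img, axis=1) ** 2)  # row-direction activity
    cf = np.mean(np.diff(img, axis=0) ** 2)  # column-direction activity
    return float(np.sqrt(rf + cf))
```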


2010 ◽  
Vol 07 (02) ◽  
pp. 99-107 ◽  
Author(s):  
NEMIR AL-AZZAWI ◽  
WAN AHMED K. WAN ABDULLAH

Medical image fusion has been used to derive useful information from multimodality medical image data. This paper presents a dual-tree complex contourlet transform (DT-CCT) based approach for the fusion of magnetic resonance (MRI) and computed tomography (CT) images. The objective of fusing an MRI and a CT image of the same organ is to obtain a single image containing as much information as possible about that organ for diagnosis. The limited directional information of the dual-tree complex wavelet transform (DT-CWT) is rectified in the DT-CCT by incorporating directional filter banks (DFB) into the DT-CWT. To improve fused image quality, we propose a new fusion rule based on principal component analysis (PCA) that depends on the frequency components of the DT-CCT coefficients (contourlet domain). For the low-frequency coefficients, the PCA method is adopted; for the high-frequency coefficients, salient features are selected based on local energy. The final fused image is obtained by directly applying the inverse dual-tree complex contourlet transform (IDT-CCT) to the fused low- and high-frequency coefficients. The DT-CCT produces images with improved contours and textures while retaining shift invariance. The experimental results showed that the proposed method produces fused images with rich features from the multimodality sources.
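The PCA rule for low-frequency coefficients is commonly implemented by weighting the two bands with the normalised dominant eigenvector of their 2x2 covariance matrix; a minimal sketch of that common rule, not necessarily the authors' exact variant:

```python
import numpy as np

def pca_weights(lf_a, lf_b):
    """Fusion weights from the dominant eigenvector of the 2x2
    covariance of the two low-frequency bands (a standard PCA rule)."""
    cov = np.cov(np.stack([lf_a.ravel(), lf_b.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])  # principal component
    w = v / v.sum()                       # normalise to sum to 1
    return w[0], w[1]

def fuse_pca(lf_a, lf_b):
    wa, wb = pca_weights(lf_a, lf_b)
    return wa * lf_a + wb * lf_b
```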


2014 ◽  
Vol 687-691 ◽  
pp. 3656-3661
Author(s):  
Min Fen Shen ◽  
Zhi Fei Su ◽  
Jin Yao Yang ◽  
Li Sha Sun

Because of the limited depth of field of optical lenses, objects at different distances usually cannot all be in focus in the same picture; multi-focus image fusion can obtain a fused image with all targets clear, improving the utilization of image information and aiding further computer processing. According to the imaging characteristics of multi-focus images, a multi-focus image fusion algorithm based on the redundant wavelet transform is proposed in this paper. The selection principles for the high-frequency and low-frequency coefficients of the redundant wavelet decomposition are discussed separately. The fusion rule is that low-frequency coefficients are selected based on local area energy, while high-frequency coefficients are selected based on local variance combined with a matching threshold. As the simulation results show, the proposed method retains more useful information from the source images and produces a fused image with all targets clear.
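The high-frequency rule, local variance combined with a matching threshold, might look like this block-wise sketch; the match definition, block size, and threshold value are assumptions:

```python
import numpy as np

def fuse_high_local_var(hf_a, hf_b, bs=4, match_thr=0.7):
    """Block-wise high-frequency rule: when two blocks match poorly,
    pick the block with the higher local variance; when they match
    well, average them. Match measure and threshold are illustrative."""
    fused = np.empty_like(hf_a, dtype=float)
    h, w = hf_a.shape
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            a = hf_a[i:i + bs, j:j + bs].astype(float)
            b = hf_b[i:i + bs, j:j + bs].astype(float)
            va, vb = a.var(), b.var()
            # Normalised similarity of the two variances, in [0, 1].
            match = 2.0 * np.sqrt(va * vb) / (va + vb) if va + vb > 0 else 1.0
            if match < match_thr:
                fused[i:i + bs, j:j + bs] = a if va >= vb else b
            else:
                fused[i:i + bs, j:j + bs] = 0.5 * (a + b)
    return fused
```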


Entropy ◽  
2021 ◽  
Vol 23 (10) ◽  
pp. 1362
Author(s):  
Hui Wan ◽  
Xianlun Tang ◽  
Zhiqin Zhu ◽  
Weisheng Li

Multi-focus image fusion is an important method used to combine the focused parts of source multi-focus images into a single all-in-focus image. The key to multi-focus image fusion is to accurately detect the focused regions, especially when the source images captured by cameras exhibit anisotropic blur and misregistration. This paper proposes a new multi-focus image fusion method based on the multi-scale decomposition of complementary information. First, the method uses two structurally complementary groups of large-scale and small-scale decomposition schemes to perform two-scale, double-layer singular value decomposition of each image and obtain low-frequency and high-frequency components. The low-frequency components are then fused by a rule that integrates local image energy with edge energy. The high-frequency components are fused by the parameter-adaptive pulse-coupled neural network (PA-PCNN) model; according to the feature information contained in each decomposition layer of the high-frequency components, different detail features are selected as the external stimulus input of the PA-PCNN. Finally, from the two structurally complementary decompositions of the source images and the fusion of the high- and low-frequency components, two initial decision maps with complementary information are obtained; refining the initial decision maps yields the final fusion decision map, which completes the fusion. The proposed method is compared with 10 state-of-the-art approaches to verify its effectiveness. The experimental results show that it distinguishes focused and non-focused areas more accurately, whether or not the images are pre-registered, and its subjective and objective evaluation indicators are slightly better than those of existing methods.
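The combined local-energy/edge-energy salience used for the low-frequency fusion can be sketched per window; the equal weighting of the two terms is an assumption:

```python
import numpy as np

def local_energy(win):
    """Sum of squared coefficients inside a window."""
    return float(np.sum(win.astype(float) ** 2))

def edge_energy(win):
    """Sum of squared horizontal and vertical first differences inside
    the window, a simple gradient-based edge measure."""
    w = win.astype(float)
    return float(np.sum(np.diff(w, axis=0) ** 2) +
                 np.sum(np.diff(w, axis=1) ** 2))

def pick_low_coeff(win_a, win_b):
    """Choose the window with the higher combined salience; the 1:1
    weighting of energy and edge energy is an illustrative assumption."""
    sa = local_energy(win_a) + edge_energy(win_a)
    sb = local_energy(win_b) + edge_energy(win_b)
    return win_a if sa >= sb else win_b
```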


2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Jingming Xia ◽  
Yiming Chen ◽  
Aiyue Chen ◽  
Yicai Chen

Clinical computer-aided diagnosis places high demands on the visual quality of medical images. However, the low-frequency subband coefficients obtained by NSCT decomposition are not sparse, which is not conducive to preserving the details of the source images. To address this, a medical image fusion algorithm combining sparse representation and a pulse-coupled neural network is proposed. First, the source images are decomposed into low- and high-frequency subband coefficients by the NSCT. Second, the K-singular value decomposition (K-SVD) method is used to train an overcomplete dictionary D on the low-frequency subband coefficients, and the orthogonal matching pursuit (OMP) algorithm is used to sparsely code them, completing the fusion of the sparse low-frequency coefficients. Then, a pulse-coupled neural network (PCNN) is excited by the spatial frequency of the high-frequency subband coefficients, and the fused high-frequency coefficients are selected according to the number of firing times. Finally, the fused medical image is reconstructed by the inverse NSCT. Experimental results and analysis show that for gray-scale and color image fusion the algorithm scores about 34% and 10% higher, respectively, than the comparison algorithms on the edge-information transfer factor (QAB/F) index, and the fused results outperform existing algorithms.
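Orthogonal matching pursuit, used here to sparsely code the low-frequency coefficients over the K-SVD dictionary, admits a compact sketch (generic OMP, not the paper's implementation):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select up to k atoms
    (columns of D, assumed unit-norm) and least-squares refit y on the
    selected atoms at every step. Generic sketch, assumes k >= 1."""
    residual = y.astype(float).copy()
    idx = []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # best-correlated atom
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x
```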


Author(s):  
Mummadi Gowthami Reddy ◽  
Palagiri Veera Narayana Reddy ◽  
Patil Ramana Reddy

In the current era of technological development, medical imaging plays an important role in many applications of medical diagnosis and therapy. In this regard, medical image fusion can be a powerful tool for combining multi-modal images using image processing techniques. However, conventional approaches fail to provide effective image quality and robustness in the fused image. To overcome these drawbacks, this work proposes a three-stage multiscale decomposition (TSMSD) using pulse-coupled neural networks with adaptive arguments (PCNN-AA) for multi-modal medical image fusion. Initially, the nonsubsampled shearlet transform (NSST) is applied to the source images to decompose them into low-frequency and high-frequency bands. Then, the low-frequency bands of the two source images are fused using nonlinear anisotropic filtering with the discrete Karhunen–Loeve transform (NLAF-DKLT). Next, the high-frequency bands obtained from the NSST are fused using the PCNN-AA approach. The fused low-frequency and high-frequency bands are then reconstructed using NSST reconstruction. Finally, a band fusion rule algorithm with pyramid reconstruction is applied to obtain the final fused medical image. Extensive simulation results demonstrate the superiority of the proposed TSMSD with PCNN-AA over state-of-the-art medical image fusion methods in terms of fusion quality metrics such as entropy (E), mutual information (MI), mean (M), standard deviation (STD), correlation coefficient (CC), and computational complexity.
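Of the quality metrics listed, entropy is the simplest to state; a minimal NumPy sketch for 8-bit grey-level images:

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of an image's grey-level histogram, one of
    the fusion-quality metrics listed in the abstract. Assumes pixel
    values in [0, 255]."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is well defined
    return float(-np.sum(p * np.log2(p)))
```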

