Medical Image Fusion Method Based on Coupled Neural P Systems in Nonsubsampled Shearlet Transform Domain

2020 · Vol 31 (01) · pp. 2050050
Author(s): Bo Li, Hong Peng, Xiaohui Luo, Jun Wang, Xiaoxiao Song, ...

Coupled neural P (CNP) systems are a recently developed Turing-universal, distributed, and parallel computing model that combines the spiking and coupling mechanisms of neurons. This paper investigates how CNP systems can be applied to the fusion of multi-modality medical images and proposes a novel image fusion method. Based on two CNP systems with local topology, an image fusion framework in the nonsubsampled shearlet transform (NSST) domain is designed, in which the two CNP systems control the fusion of the low-frequency NSST coefficients. The proposed method is evaluated on 20 pairs of multi-modality medical images and compared with seven earlier fusion methods and two deep-learning-based fusion methods. Quantitative and qualitative experimental results demonstrate the advantages of the proposed method in visual quality and fusion performance.
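The overall framework described above (transform-domain decomposition, a selection rule for low-frequency coefficients, and recombination) can be sketched in a few lines. This is only an illustrative stand-in under loud assumptions: a 3x3 box blur replaces the actual NSST decomposition, and a local-energy comparison replaces the CNP-system firing maps that the paper uses to control the low-frequency fusion.

```python
import numpy as np

def box3(img):
    """3x3 edge-padded box blur, a crude low-pass stand-in for the NSST low band."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def fuse(a, b):
    """Two-scale fusion sketch. The low-frequency rule (local-energy
    comparison) is an assumed stand-in for the paper's CNP-system control;
    the high-frequency rule is the common max-absolute selection."""
    la, lb = box3(a), box3(b)                # low-frequency (base) layers
    ha, hb = a - la, b - lb                  # high-frequency (detail) layers
    ea, eb = box3(la ** 2), box3(lb ** 2)    # local energy of the base layers
    low = np.where(ea >= eb, la, lb)         # pick the more active coefficient
    high = np.where(np.abs(ha) >= np.abs(hb), ha, hb)  # max-abs detail rule
    return low + high
```

Fusing an image with itself returns the image unchanged, which is a quick sanity check that the decomposition and recombination are consistent.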

Optik · 2015 · Vol 126 (20) · pp. 2508-2511
Author(s): Jingjing Wang, Qian Li, Zhenhong Jia, Nikola Kasabov, Jie Yang

Entropy · 2021 · Vol 23 (10) · pp. 1362
Author(s): Hui Wan, Xianlun Tang, Zhiqin Zhu, Weisheng Li

Multi-focus image fusion combines the focused parts of several source multi-focus images into a single all-in-focus image. The key difficulty is accurately detecting the focused regions, especially when the source images exhibit anisotropic blur or are misregistered. This paper proposes a new multi-focus image fusion method based on multi-scale decomposition of complementary information. First, two structurally complementary decomposition schemes, one large-scale and one small-scale, perform a two-scale, double-layer singular value decomposition of each image to obtain low-frequency and high-frequency components. The low-frequency components are then fused by a rule that combines local image energy with edge energy. The high-frequency components are fused by a parameter-adaptive pulse-coupled neural network (PA-PCNN); according to the feature information contained in each decomposition layer of the high-frequency components, different detail features are selected as the external stimulus of the PA-PCNN. Finally, the two structurally complementary decompositions of the source images, together with the fusion of the high- and low-frequency components, yield two initial decision maps with complementary information; refining these initial decision maps produces the final fusion decision map that completes the fusion. The proposed method is compared with 10 state-of-the-art approaches to verify its effectiveness. The experimental results show that it distinguishes focused from unfocused areas more accurately, in both the registered and the misregistered case, and that its subjective and objective evaluation scores are slightly better than those of the existing methods.
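The low-frequency fusion rule mentioned above combines local image energy with edge energy. A minimal sketch of such a rule follows; the `alpha` blend weight and the window size are assumptions for illustration, since the abstract does not state the exact weighting.

```python
import numpy as np

def local_energy(img, k=3):
    """Sum of squared values over a k x k edge-padded neighborhood."""
    h, w = img.shape
    p = np.pad(img, k // 2, mode="edge")
    return sum(p[i:i + h, j:j + w] ** 2 for i in range(k) for j in range(k))

def edge_energy(img):
    """Squared forward-difference gradient magnitude."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]
    gy[:-1, :] = img[1:, :] - img[:-1, :]
    return gx ** 2 + gy ** 2

def fuse_low(a, b, alpha=0.5):
    """Pixel-wise choice of the low-frequency coefficient with the larger
    combined activity. alpha is a hypothetical blend weight between local
    energy and edge energy."""
    sa = local_energy(a) + alpha * edge_energy(a)
    sb = local_energy(b) + alpha * edge_energy(b)
    return np.where(sa >= sb, a, b)
```

An active (textured) layer wins everywhere against a flat one, which matches the intuition that the rule keeps the more informative low-frequency content.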


2019 · Vol 9 (17) · pp. 3612
Author(s): Liao, Chen, Mo

Because the focal length of the optical lens in a conventional camera is limited, it is usually difficult to capture an image in which every object is in focus. Multi-focus image fusion addresses this problem. In this paper, we propose a new multi-focus image fusion method based on a decision map and sparse representation (DMSR). First, we obtain a decision map by analyzing low-scale images with sparse representation, measuring the effective clarity level, and using spatial-frequency methods to resolve uncertain areas. The decision map then determines the transitional area around the focus boundary, and we fuse this transitional area using sparse representation. The experimental results show that the proposed method is superior to five other fusion methods in both visual effect and quantitative evaluation.
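The spatial-frequency measure used above to resolve uncertain areas is a standard block-wise clarity metric, SF = sqrt(RF^2 + CF^2), where RF and CF are the RMS row and column differences. A minimal sketch, with a hypothetical `pick_focused` helper showing how it would label an uncertain block:

```python
import numpy as np

def spatial_frequency(block):
    """Spatial frequency SF = sqrt(RF^2 + CF^2): RMS horizontal (row)
    and vertical (column) first differences of the block."""
    rf2 = np.mean((block[:, 1:] - block[:, :-1]) ** 2)  # row frequency^2
    cf2 = np.mean((block[1:, :] - block[:-1, :]) ** 2)  # column frequency^2
    return np.sqrt(rf2 + cf2)

def pick_focused(block_a, block_b):
    """Hypothetical helper: label an uncertain block by the sharper source."""
    return "A" if spatial_frequency(block_a) >= spatial_frequency(block_b) else "B"
```

A high-contrast block scores higher than a flat (defocused-looking) one, so the sharper source wins in the uncertain region.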

