image fusion method
Recently Published Documents

TOTAL DOCUMENTS: 461 (FIVE YEARS: 144)
H-INDEX: 18 (FIVE YEARS: 4)

Author(s): Hui Zhang, Xinning Han, Rui Zhang

In multimodal image fusion, improving the visual effect of the fused image while preserving energy and extracting detail has attracted increasing attention in recent years. Building on visual saliency and activity-level measurement of the base layer, a multimodal image fusion method based on a guided filter is proposed in this paper. First, guided-filter-based multi-scale decomposition splits the two source images into a small-scale layer, a large-scale layer and a base layer. A maximum-absolute-value fusion rule is applied to the small-scale layer, a weighted fusion rule based on visual parameters to the large-scale layer, and an activity-level-based fusion rule to the base layer. Finally, the three fused layers are recombined into the final fused image. The experimental results show that the proposed method improves edge handling and visual quality in multimodal image fusion.
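
As a rough illustration of this pipeline, the sketch below decomposes two grayscale float32 sources with a guided filter and applies the three per-layer rules. The radii, epsilon, and the local-contrast and local-energy measures are illustrative stand-ins rather than the paper's settings, and cv2.ximgproc requires the opencv-contrib-python package.

    import cv2
    import numpy as np

    def decompose(img, r_small=2, r_large=8, eps=1e-3):
        """Split an image into base, large-scale and small-scale layers."""
        coarse = cv2.ximgproc.guidedFilter(img, img, r_small, eps)      # drop fine detail
        base = cv2.ximgproc.guidedFilter(coarse, coarse, r_large, eps)  # smoothest layer
        return base, coarse - base, img - coarse                        # base, large, small

    def fuse(img_a, img_b):
        base_a, large_a, small_a = decompose(img_a)
        base_b, large_b, small_b = decompose(img_b)

        # Small-scale layer: keep the coefficient with the larger absolute value.
        small = np.where(np.abs(small_a) >= np.abs(small_b), small_a, small_b)

        # Large-scale layer: saliency-style weights from smoothed local contrast
        # (a stand-in for the paper's visual-parameter weights).
        w_a = cv2.GaussianBlur(np.abs(large_a), (11, 11), 0)
        w_b = cv2.GaussianBlur(np.abs(large_b), (11, 11), 0)
        large = (w_a * large_a + w_b * large_b) / (w_a + w_b + 1e-12)

        # Base layer: activity-level measure via local average energy.
        act_a = cv2.blur(base_a ** 2, (31, 31))
        act_b = cv2.blur(base_b ** 2, (31, 31))
        base = np.where(act_a >= act_b, base_a, base_b)

        # Recombine the three fused layers.
        return np.clip(base + large + small, 0.0, 1.0)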


2021, pp. 1-20
Author(s): Yun Wang, Xin Jin, Jie Yang, Qian Jiang, Yue Tang, ...

Multi-focus image fusion is a technique that integrates the focused areas of a pair or set of source images of the same scene into a single fully focused image. Inspired by transfer learning, this paper proposes a novel color multi-focus image fusion method based on deep learning. First, the color multi-focus source images are fed into a VGG-19 network, and the parameters of its convolutional layers are transferred to a neural network containing multiple convolutional layers and skip connections for feature extraction. Second, initial decision maps are generated from the feature maps reconstructed by a deconvolution module. Third, the initial decision maps are refined to obtain second decision maps, according to which the source images are fused into initial fused images. Finally, the final fused image is selected by comparing the QABF metrics of the initial fused images. The experimental results show that the proposed method effectively improves the separation of focused and unfocused areas in the source images, and the fused images are superior in both subjective and objective terms to those of most comparison methods.
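
A hedged sketch of the transfer idea: frozen, pretrained VGG-19 convolutional features vote on which source is sharper at each pixel. The deconvolution module, decision-map refinement and QABF selection from the paper are replaced by a simple feature-activity comparison, so this illustrates the principle rather than the authors' network.

    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg19, VGG19_Weights

    # Transferred backbone: VGG-19 conv layers up to relu2_2, frozen.
    backbone = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:9].eval()

    @torch.no_grad()
    def fuse_pair(a, b):
        """a, b: ImageNet-normalized 1x3xHxW tensors of the same scene."""
        act_a = backbone(a).abs().sum(1, keepdim=True)    # per-pixel feature activity
        act_b = backbone(b).abs().sum(1, keepdim=True)
        act_a = F.interpolate(act_a, size=a.shape[-2:], mode="bilinear")
        act_b = F.interpolate(act_b, size=a.shape[-2:], mode="bilinear")
        decision = (act_a >= act_b).float()               # initial decision map
        return decision * a + (1.0 - decision) * b        # fused image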


Diagnostics, 2021, Vol. 11 (12), pp. 2379
Author(s): Yin Dai, Yumeng Song, Weibin Liu, Wenhe Bai, Yifan Gao, ...

Parkinson’s disease (PD) is a common neurodegenerative disease that has a significant impact on people’s lives. Early diagnosis is imperative, since proper treatment can slow the disease’s progression. With the rapid development of computer-aided diagnosis (CAD), CAD techniques have found numerous applications in the diagnosis of PD. In recent years, image fusion has been applied in various fields and is valuable in medical diagnosis. This paper adopts a multi-focus image fusion method based on deep convolutional neural networks to fuse magnetic resonance imaging (MRI) and positron emission tomography (PET) images into multi-modal images. The study then used the AlexNet, DenseNet, ResNeSt and EfficientNet networks to classify both the single-modal MRI dataset and the multi-modal fused dataset. The test accuracies on the single-modal MRI dataset are 83.31%, 87.76%, 86.37% and 86.44% for AlexNet, DenseNet, ResNeSt and EfficientNet, respectively, while those on the multi-modal fused dataset are 90.52%, 97.19%, 94.15% and 93.39%. For all four networks, the results on the multi-modal dataset surpass those on the single-modal MRI dataset. The experimental results show that the deep-learning-based multi-focus image fusion method can enhance the accuracy of PD image classification.
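
A minimal sketch of the classification stage, assuming the MRI and PET slices have already been fused into 3-channel images. densenet121 with a two-class head and the stand-in data loader are assumptions for illustration; the paper does not specify these exact configurations.

    import torch
    import torch.nn as nn
    from torchvision.models import densenet121, DenseNet121_Weights

    # Transfer ImageNet weights, then swap the head for PD vs. control.
    model = densenet121(weights=DenseNet121_Weights.IMAGENET1K_V1)
    model.classifier = nn.Linear(model.classifier.in_features, 2)

    # Stand-in batch; replace with a DataLoader of fused MRI-PET slices.
    train_loader = [(torch.rand(4, 3, 224, 224), torch.randint(0, 2, (4,)))]

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()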


Diagnostics, 2021, Vol. 11 (12), pp. 2340
Author(s): Cheng-Chun Lee, Kuang-Hsi Chang, Feng-Mao Chiu, Yen-Chuan Ou, Jen-I. Hwang, ...

The intravoxel incoherent motion (IVIM) model may enhance the clinical value of multiparametric magnetic resonance imaging (mpMRI) in the detection of prostate cancer (PCa). However, while past IVIM modeling studies have shown promise, they have also reported inconsistent results and limitations, underscoring the need to further enhance the accuracy of IVIM modeling for PCa detection. Therefore, this study used the control point registration toolbox function in MATLAB to fuse T2-weighted imaging (T2WI) and diffusion-weighted imaging (DWI) images with whole-mount pathology specimen images in order to eliminate potential bias in IVIM calculations. Sixteen PCa patients underwent prostate MRI scans before radical prostatectomy, and the image fusion method was applied when calculating their IVIM parameters. MRI scans were also performed on 22 healthy young volunteers to evaluate how IVIM parameters change with aging. Across the full study cohort, the f parameter increased significantly with age, while the D* parameter decreased significantly. Among the PCa patients, the D and ADC parameters could differentiate PCa tissue from contralateral normal tissue, while the f and D* parameters could not. The presented image fusion method also improved precision when comparing regions of interest side by side. However, further studies with more standardized methods are needed to clarify the benefits of the presented approach and of the different IVIM parameters in PCa characterization.
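
The IVIM parameters above come from the standard bi-exponential model S(b)/S(0) = f·exp(-b·D*) + (1-f)·exp(-b·D), where f is the perfusion fraction, D* the pseudo-diffusion coefficient, and D the tissue diffusion coefficient. Below is a hedged fitting sketch in Python (the study itself worked in MATLAB); the b-values, bounds, and synthetic signal are illustrative only.

    import numpy as np
    from scipy.optimize import curve_fit

    def ivim(b, f, d_star, d):
        """Bi-exponential IVIM signal: S(b)/S(0) = f*exp(-b*D*) + (1-f)*exp(-b*D)."""
        return f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d)

    b_values = np.array([0, 10, 25, 50, 100, 200, 400, 800], dtype=float)  # s/mm^2
    rng = np.random.default_rng(0)
    signal = ivim(b_values, 0.12, 0.02, 0.0012) + rng.normal(0, 0.005, b_values.size)

    params, _ = curve_fit(
        ivim, b_values, signal,
        p0=[0.1, 0.01, 0.001],                          # f, D* (fast), D (slow)
        bounds=([0.0, 0.003, 0.0], [0.5, 0.1, 0.003]),
    )
    f, d_star, d = params
    print(f"f={f:.3f}, D*={d_star:.4f} mm^2/s, D={d:.5f} mm^2/s")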


2021, Vol. 133, pp. 108438
Author(s): Xiang Liu, Julian Frey, Martin Denter, Katarzyna Zielewska-Büttner, Nicole Still, ...

2021, Vol. 2137 (1), pp. 012061
Author(s): Jiebin Zhang, Shangyou Zeng, Ying Wang, Jinjin Wang, Hongyang Chen

Abstract: Since existing commercial imaging equipment cannot meet the requirements of high dynamic range (HDR) imaging, multi-exposure image fusion is an economical and fast way to obtain HDR results. However, existing multi-exposure image fusion algorithms suffer from long fusion times and large storage requirements. We propose a deep-learning-based extreme-exposure image fusion method in which two extreme-exposure image sequences are fed into a network that uses channel and spatial attention mechanisms to automatically learn, optimize, and output the optimal fusion weights. In addition, the model adopts real-value training, using a new custom loss function to bring the output closer to the real values. Experimental results show that this method is superior to existing methods both objectively and subjectively.
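
A sketch of the attention idea only: CBAM-style channel and spatial attention blocks that map an under/over-exposed pair to per-pixel fusion weights. WeightNet, its layer sizes, and the omission of the custom loss are assumptions for illustration, not the paper's architecture.

    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        def __init__(self, ch, r=4):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(ch, ch // r), nn.ReLU(),
                                     nn.Linear(ch // r, ch))
        def forward(self, x):
            avg = self.mlp(x.mean(dim=(2, 3)))            # pooled channel stats
            mx = self.mlp(x.amax(dim=(2, 3)))
            return x * torch.sigmoid(avg + mx)[:, :, None, None]

    class SpatialAttention(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)
        def forward(self, x):
            stats = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
            return x * torch.sigmoid(self.conv(stats))    # per-pixel gating

    class WeightNet(nn.Module):
        """Maps an under/over-exposed pair to a fusion weight map."""
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                ChannelAttention(32), SpatialAttention(),
                nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
            )
        def forward(self, under, over):
            w = self.body(torch.cat([under, over], dim=1))  # learned weights in [0, 1]
            return w * under + (1.0 - w) * over

    fused = WeightNet()(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))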


Optik, 2021, Vol. 248, pp. 168084
Author(s): Jingwen Zhou, Kan Ren, Minjie Wan, Bo Cheng, Guohua Gu, ...
