Multi-source Image Fusion Technology in Different Fields of View

Author(s):  
Si Tian ◽  
Junju Zhang ◽  
Yihui Yuan ◽  
Benkang Chang


2021 ◽  
Author(s):  
Anuyogam Venkataraman

With the increasing utilization of X-ray computed tomography (CT) in medical diagnosis, obtaining higher-quality images with lower radiation exposure is a highly challenging task in image processing. Sparse-representation-based image fusion is one of the most sought-after fusion techniques among current researchers. A novel image fusion algorithm based on focused vector detection is proposed in this thesis. First, the initial fused vector is acquired by combining the common and innovative sparse components of a multi-dosage ensemble using the Joint Sparse PCA fusion method, with an overcomplete dictionary trained on high-dose images of the same region of interest from different patients. Then, the strongly focused vector is obtained by identifying the pixels of the low-dose and medium-dose vectors that have high similarity with the pixels of the initial fused vector, as measured by quantitative metrics. The final fused image is obtained by denoising and simultaneously integrating the strongly focused vector, the initial fused vector, and the source image vectors in the joint sparse domain, thereby preserving the edges and other critical information needed for diagnosis. This thesis demonstrates the effectiveness of the proposed algorithms in experiments on different images; the qualitative and quantitative results are compared with several widely used image fusion methods.
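
The Joint Sparse PCA step is specific to the thesis and not fully specified in the abstract, but the sparse-representation fusion pipeline it builds on (patch extraction, sparse coding over a learned overcomplete dictionary, coefficient selection, reconstruction) is standard. The following is a minimal Python/scikit-learn sketch under that reading; the in-image dictionary training, the max-L1 selection rule, and all parameter values are illustrative assumptions, not the thesis's actual settings.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def sr_fuse(img_a, img_b, patch=8, n_atoms=128):
    """Fuse two pre-registered grayscale images in a learned sparse domain."""
    va = extract_patches_2d(img_a, (patch, patch)).reshape(-1, patch * patch)
    vb = extract_patches_2d(img_b, (patch, patch)).reshape(-1, patch * patch)
    ma, mb = va.mean(1, keepdims=True), vb.mean(1, keepdims=True)
    va, vb = va - ma, vb - mb  # sparse-code the zero-mean patches
    # Overcomplete dictionary learned from the inputs (subsampled for speed);
    # the thesis instead trains on external high-dose images.
    D = MiniBatchDictionaryLearning(
        n_components=n_atoms, alpha=1.0, max_iter=30
    ).fit(np.vstack([va, vb])[::10]).components_
    ca = sparse_encode(va, D, algorithm="omp", n_nonzero_coefs=5)
    cb = sparse_encode(vb, D, algorithm="omp", n_nonzero_coefs=5)
    # Max-L1 activity rule: per patch, keep the code with the larger L1 norm.
    keep_a = (np.abs(ca).sum(1) >= np.abs(cb).sum(1))[:, None]
    cf = np.where(keep_a, ca, cb)
    mf = np.where(keep_a, ma, mb)
    pf = (cf @ D + mf).reshape(-1, patch, patch)
    return reconstruct_from_patches_2d(pf, img_a.shape)  # averages overlaps
```

The max-L1 rule is the usual activity measure in SR fusion; the thesis's joint common/innovation decomposition would replace the independent per-source coding above.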


Author(s):  
Chengfang Zhang

Multifocus image fusion produces a single image with all objects in focus, which aids understanding of the target scene. Multiscale transform (MST) and sparse representation (SR) have been widely used in multifocus image fusion. However, the contrast of the fused image is reduced after multiscale reconstruction, and fine details tend to be smoothed in SR-based fusion. In this paper, we propose a fusion method based on MST and convolutional sparse representation (CSR) to address the inherent defects of both the MST- and SR-based fusion methods. MST is first performed on each source image to obtain the low-frequency components and detailed directional components. Then, CSR is applied to the low-pass fusion, while the high-pass bands are fused using the popular "max-absolute" rule as the activity-level measurement. The fused image is finally obtained by performing the inverse MST on the fused coefficients. Experimental results on multifocus images show that the proposed algorithm exhibits state-of-the-art performance in terms of definition (sharpness).
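
As a concrete illustration of the decompose-fuse-reconstruct structure described above, here is a minimal two-scale sketch in Python. It stands in for the paper's method only loosely: Gaussian smoothing replaces the full MST, and plain averaging replaces the CSR low-pass fusion; only the "max-absolute" high-pass rule matches the description.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_scale_fuse(img_a, img_b, sigma=2.0):
    """Two-scale stand-in for MST fusion of two registered grayscale images."""
    # Low-frequency components (a real MST also yields directional subbands).
    low_a, low_b = gaussian_filter(img_a, sigma), gaussian_filter(img_b, sigma)
    high_a, high_b = img_a - low_a, img_b - low_b  # high-pass bands
    # High-pass fusion: the paper's "max-absolute" activity-level rule.
    high_f = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    # Low-pass fusion: simple averaging here; the paper applies CSR instead.
    low_f = 0.5 * (low_a + low_b)
    return low_f + high_f  # inverse of the two-scale decomposition
```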


2020 ◽  
Vol 10 (3) ◽  
pp. 1171 ◽  
Author(s):  
Chengxi Li ◽  
Andrew Zhu

With the accelerated development of medical imaging equipment and techniques, image fusion technology has been effectively applied to diagnosis, biopsy, and radiofrequency ablation, especially for liver tumors. Tumor treatment that relies on a single medical imaging modality can face challenges due to deeply positioned lesions, prior operations, and the specific background conditions of liver disease. Image fusion technology has been employed to address these challenges: with it, one can obtain real-time anatomical imaging superimposed with functional images of the same plane, facilitating the diagnosis and treatment of liver tumors. This paper presents a review of the key principles of image fusion technology and its application in tumor treatments, particularly liver tumors, and concludes with a discussion of the limitations and prospects of the technology.


2020 ◽  
pp. 028418512093447
Author(s):  
Koji Tokunaga ◽  
Akihiro Furuta ◽  
Yusuke Iizuka ◽  
Hiroyoshi Isoda ◽  
Kaori Togashi

Background Ultrasonography (US) is useful when implanting fiducial markers in the liver. However, the implant position is sometimes lost. Recently, real-time image fusion technology (Volume Navigation [V-navi]; GE Healthcare, Milwaukee, WI, USA) has been introduced as a technique for combining images from different modalities, and its utility for fiducial marker implantation has been hypothesized. Purpose To evaluate the utility of US-guided fiducial marker implantation in the liver using V-navi compared with conventional US. Material and Methods We retrospectively reviewed 35 patients who underwent fiducial marker implantation for stereotactic body radiation therapy of liver tumors in 2013–2018. To avoid artifacts obscuring the tumor, the target point of implantation was set 10 mm cranial or caudal to the tumor. Marker implantation was then performed using US alone (US group, n = 24) or V-navi with computed tomography (CT) or magnetic resonance imaging (V-navi group, n = 11). Postprocedural CT was evaluated to determine technical success, distances between the marker and either the tumor surface or the target point, and whether marker-induced artifacts obscured the tumor. Complications were also evaluated, and results were compared between groups. Results Technical success was obtained in 33 patients. Distance between the tumor and marker showed no significant difference between groups. Distance between the target point and marker was shorter in the V-navi group (P = 0.0093). Tumor-obscuring artifacts were seen in 12 patients (V-navi group, n = 1; US group, n = 11; P = 0.055). The only complication was minor bleeding in the US group (n = 1). Conclusion V-navi appears useful for US-guided fiducial marker implantation in the liver compared with conventional US.


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4556 ◽  
Author(s):  
Yaochen Liu ◽  
Lili Dong ◽  
Yuanyuan Ji ◽  
Wenhai Xu

In many practical applications, the fused image must contain high-quality details to achieve a comprehensive representation of the real scene. However, existing image fusion methods suffer from loss of detail because errors accumulate across sequential tasks. This paper proposes a novel fusion method that preserves the details of infrared and visible images by combining a new decomposition, feature extraction, and fusion scheme. For decomposition, unlike most guided-filter-based decomposition methods, the guidance image contains only the strong edges of the source image and no other interfering information, so that rich fine details can be decomposed into the detail part. Then, according to the different characteristics of the infrared and visible detail parts, a rough convolutional neural network (CNN) and a sophisticated CNN are designed so that diverse features can be fully extracted. To integrate the extracted features, we also present a multi-layer feature fusion strategy based on the discrete cosine transform (DCT), which not only highlights significant features but also enhances details. The base parts are fused by a weighting method, and the fused image is finally obtained by adding the fused detail and base parts. Unlike general image fusion methods, our method not only retains the target regions of the source images but also enhances the background in the fused image. In addition, compared with state-of-the-art fusion methods, the proposed method offers (i) better visual quality in subjective evaluation of the fused images and (ii) better scores in objective assessment.
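
The strong-edge-guided decomposition is the distinctive step here, and the abstract does not give its exact construction. Below is a hedged Python sketch: a textbook guided filter (He et al.) plus a hypothetical guidance image built by masking the source to its strong Sobel edges; the radius, eps, and edge-threshold values are illustrative, not the paper's.

```python
import numpy as np
from scipy.ndimage import uniform_filter, sobel

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Classic guided filter; box windows of size 2*radius + 1, float inputs."""
    box = lambda x: uniform_filter(x, size=2 * radius + 1)
    mean_g, mean_s = box(guide), box(src)
    var_g = box(guide * guide) - mean_g ** 2
    cov_gs = box(guide * src) - mean_g * mean_s
    a = cov_gs / (var_g + eps)  # edge-preserving gain
    b = mean_s - a * mean_g     # local offset
    return box(a) * guide + box(b)

def decompose(img, edge_frac=0.1):
    """Base/detail split with a strong-edge-only guidance image (img in [0, 1])."""
    grad = np.hypot(sobel(img, 0), sobel(img, 1))
    guide = img * (grad > edge_frac * grad.max())  # keep only strong edges
    base = guided_filter(guide, img)
    return base, img - base  # base part, detail part
```

Away from strong edges the guidance is flat, so the filter reduces to box smoothing there and pushes fine texture into the detail part, which matches the rationale given in the abstract.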


Entropy ◽  
2020 ◽  
Vol 22 (12) ◽  
pp. 1423
Author(s):  
Kai Guo ◽  
Xiongfei Li ◽  
Hongrui Zang ◽  
Tiehu Fan

To extract the physiological information and key features of the source images to the maximum extent, improve the visual effect and clarity of the fused image, and reduce computation, a multi-modal medical image fusion framework based on feature reuse is proposed. The framework consists of intuitive fuzzy processing (IFP), a capture-image-details network (CIDN), fusion, and decoding. First, the membership function of the image is redefined to remove redundant features and obtain an image with complete features. Then, inspired by DenseNet, we propose a new encoder to capture all the medical information features in the source images. In the fusion layer, we calculate the weight of each feature map in the fusion coefficients according to the trajectory of the feature map. Finally, the filtered medical information is concatenated and decoded to reproduce the required fused image. In the encoding and image-reconstruction networks, a mixed loss function combining cross entropy and structural similarity (SSIM) is adopted to greatly reduce the information loss in image fusion. To assess performance, we conducted three sets of experiments on grayscale and color medical images. Experimental results show that the proposed algorithm has advantages not only in detail and structure recognition but also in visual effect and time complexity compared with other algorithms.
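
The abstract names the mixed cross-entropy/SSIM loss but not its exact form or weighting. A minimal PyTorch sketch follows, assuming single-channel images normalized to [0, 1], pixel-wise binary cross entropy as the cross-entropy term, a uniform SSIM window, and a hypothetical weight lam; the paper's actual formulation may differ.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """Mean SSIM over a uniform window; x, y of shape (N, 1, H, W) in [0, 1]."""
    kernel = torch.ones(1, 1, window, window, device=x.device) / window ** 2
    pad = window // 2
    mu_x = F.conv2d(x, kernel, padding=pad)
    mu_y = F.conv2d(y, kernel, padding=pad)
    var_x = F.conv2d(x * x, kernel, padding=pad) - mu_x ** 2
    var_y = F.conv2d(y * y, kernel, padding=pad) - mu_y ** 2
    cov = F.conv2d(x * y, kernel, padding=pad) - mu_x * mu_y
    s = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return s.mean()

def mixed_loss(recon, target, lam=0.5):
    """Weighted sum of pixel-wise BCE and SSIM dissimilarity (1 - SSIM)."""
    bce = F.binary_cross_entropy(recon.clamp(1e-7, 1 - 1e-7), target)
    return lam * bce + (1 - lam) * (1 - ssim(recon, target))
```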

