fusion image: Recently Published Documents

Total documents: 166 (45 in the last five years)
H-index: 11 (3 in the last five years)
Entropy, 2021, Vol. 23 (12), pp. 1670
Author(s): Xiaojun Lu, Libo Zhang, Lei Niu, Qing Chen, Jianping Wang

In the era of big data, efficiently retrieving the required images from vast collections is challenging, which makes content-based image retrieval an important research direction. A multi-feature image retrieval system can compensate, to a certain extent, for the shortcomings of any single feature, which is essential for improving retrieval performance; feature selection and feature fusion strategies are therefore critical in the study of multi-feature fusion image retrieval. This paper proposes a multi-feature fusion image retrieval strategy with adaptive feature selection based on information entropy theory. First, we extract the image features, construct a distance function that measures similarity using the proposed information entropy, and obtain the initial retrieval results. Then, we take the precision of single-feature retrieval, obtained through relevance feedback, as the retrieval trust and use it to select effective features automatically. After that, we initialize the weights of the selected features uniformly, construct a probability transfer matrix, and use the PageRank algorithm to update the initial weights into the final weights. Finally, we compute the comprehensive similarity from the final weights and output the retrieval results. This has two advantages: (1) the proposed strategy retrieves with multiple features, which gives better performance and stronger generalization than retrieval based on a single feature; (2) compared with a fixed-feature retrieval strategy, our method selects the best features to fuse for each query, taking full advantage of each feature. Experimental results show that the proposed method outperforms other methods: on the Corel1k, UC Merced Land-Use, and RSSCN7 datasets, the top-10 retrieval precision is 99.55%, 88.02%, and 88.28%, respectively, and on the Holidays dataset the mean average precision (mAP) is 92.46%.
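The PageRank-style weight update described above can be sketched as follows. This is a minimal illustration, not the authors' method: the abstract does not give the formula for the probability transfer matrix, so building it column-wise from the per-feature retrieval trust is an assumption on our part.

```python
import numpy as np

def pagerank_weights(trust, d=0.85, tol=1e-8, max_iter=100):
    """Update feature weights with a PageRank-style iteration.

    trust : per-feature retrieval precision (the "retrieval trust").
    The transfer matrix built here (every column distributes weight in
    proportion to trust) is a plausible choice, not necessarily the
    paper's exact construction.
    """
    trust = np.asarray(trust, dtype=float)
    n = trust.size
    # Column-stochastic probability transfer matrix.
    P = np.tile((trust / trust.sum())[:, None], (1, n))
    w = np.full(n, 1.0 / n)              # uniform initial weights
    for _ in range(max_iter):
        w_new = d * P @ w + (1.0 - d) / n  # damped PageRank step
        if np.abs(w_new - w).sum() < tol:
            break
        w = w_new
    return w / w.sum()
```

With this construction, features with higher retrieval trust end up with proportionally higher final weights, which matches the abstract's intent of letting per-query feedback drive the fusion weights.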


Sensors, 2021, Vol. 21 (23), pp. 7813
Author(s): Xiaoxue Xing, Cong Luo, Jian Zhou, Minghan Yan, Cheng Liu, ...

To obtain more distinct target information and richer texture features, a new fusion method for infrared (IR) and visible (VIS) images combining regional energy (RE) and intuitionistic fuzzy sets (IFS) is proposed. The method proceeds as follows. First, the IR and VIS images are decomposed into low- and high-frequency sub-bands by the non-subsampled shearlet transform (NSST). Second, an RE-based fusion rule is used to obtain the low-frequency pre-fusion image, which preserves the important target information in the result. Starting from this pre-fusion image, an IFS-based fusion rule is then applied to obtain the final low-frequency image, which transfers more of the important texture information to the result. Third, the 'max-absolute' fusion rule is adopted to fuse the high-frequency sub-bands. Finally, the fused image is reconstructed by the inverse NSST. The TNO and RoadScene datasets are used to evaluate the proposed method. The simulation results demonstrate that the fused images of the proposed method have more distinct targets, higher contrast, and richer detail and local features. Qualitative and quantitative analyses show that the presented method is superior to nine other advanced fusion methods.
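The two sub-band fusion rules named above can be sketched as follows: regional energy for the low-frequency pre-fusion image and max-absolute for the high-frequency sub-bands. The 3x3 energy window is an assumed choice, and the NSST decomposition and the IFS refinement step are omitted.

```python
import numpy as np

def regional_energy(band, r=1):
    """Sum of squared coefficients over a (2r+1)x(2r+1) window
    around each pixel (edge-replicated padding)."""
    p = np.pad(band.astype(float) ** 2, r, mode='edge')
    out = np.zeros(band.shape, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy:r + dy + band.shape[0],
                     r + dx:r + dx + band.shape[1]]
    return out

def fuse_low(ir_low, vis_low):
    """Regional-energy rule for the low-frequency pre-fusion image:
    keep the coefficient whose neighbourhood carries more energy.
    (The paper further refines this pre-fusion result with IFS.)"""
    mask = regional_energy(ir_low) >= regional_energy(vis_low)
    return np.where(mask, ir_low, vis_low)

def fuse_high(ir_high, vis_high):
    """Max-absolute rule for the high-frequency sub-bands."""
    return np.where(np.abs(ir_high) >= np.abs(vis_high),
                    ir_high, vis_high)
```

In a full pipeline these rules would be applied per NSST sub-band before the inverse transform reconstructs the fused image.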


2021, Vol. 15
Author(s): Yixiang Lu, Rui Wang, Qingwei Gao, Dong Sun, De Zhu

Multi-modal image fusion integrates different images of the same scene, collected by different sensors, into one image that can be recognized by a computer and easily perceived by human vision. Traditional tensor decomposition is an approximate decomposition method that has been applied to image fusion; as a consequence, image details may be lost when the fused image is reconstructed. To preserve fine image information, this article proposes a multi-modal image fusion method based on tensor matrix product decomposition. First, each source image is initialized as a separate third-order tensor. Then, the tensor is decomposed into matrix product form using singular value decomposition (SVD), and the sigmoid function is used to fuse the features extracted during the decomposition. Finally, the fused image is reconstructed by multiplying all the fused tensor components. Since the algorithm is based on a series of singular value decompositions, a stable closed-form solution can be obtained and the computation is simple. Experimental results show that the quality of the fused images obtained by this algorithm is superior to that of other algorithms in both objective evaluation metrics and subjective evaluation.
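The "decompose, fuse with a sigmoid, multiply back" pattern can be illustrated with a simplified two-dimensional analogue. The paper works on third-order tensors in matrix product form, so this matrix-level sketch only approximates the idea, and applying the sigmoid weight to the singular values is an assumed formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def svd_fuse(img_a, img_b, gain=1.0):
    """Matrix-level sketch of SVD-based fusion: decompose each source
    exactly by SVD, blend the singular values with a sigmoid weight,
    and reconstruct by multiplying the components back together."""
    Ua, sa, Vta = np.linalg.svd(img_a, full_matrices=False)
    Ub, sb, Vtb = np.linalg.svd(img_b, full_matrices=False)
    # Sigmoid weight favours the source with the larger singular value.
    w = sigmoid(gain * (sa - sb))
    # Reconstruct each source's weighted contribution and sum them.
    part_a = (Ua * (w * sa)) @ Vta
    part_b = (Ub * ((1.0 - w) * sb)) @ Vtb
    return part_a + part_b
```

Because the SVD is exact, feeding the same image in twice reconstructs it unchanged (the sigmoid weight is 0.5 everywhere), which is the "fine information is preserved" property the abstract emphasizes.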


Cureus, 2021
Author(s): Ryoma Aoyama, Ukei Anazawa, Hiraku Hotta, Itsuo Watanabe, Yuichiro Takahashi, ...

2021, Vol. 104 (9), pp. 1471-1475

Objective: To determine the accuracy of prostate cancer detection using magnetic resonance imaging-transrectal ultrasound (MRI-TRUS) fusion image-guided prostate biopsy. Materials and Methods: Retrospective data were collected from patients who underwent targeted prostate biopsy guided by MRI-TRUS fusion imaging between January 2017 and October 2018. The data collected included age, serum prostate-specific antigen (PSA) level, PSA density, prostate size, lesion size on multiparametric MRI of the prostate (mpMRI), Prostate Imaging Reporting and Data System (PI-RADS) score, number of targeted biopsy cores, and pathological diagnosis. The prostate cancer detection rate was analyzed. Results: Ninety-five patients with suspected prostate cancer underwent biopsy of 143 lesions. Per-patient analysis showed a better overall detection rate for MRI-TRUS fusion image-guided prostate biopsy than for extended 12-core systematic biopsy, at 49.5% versus 17.9%. Significant prostate cancer, with a Gleason score above 6, was detected by MRI-TRUS fusion image-guided biopsy in 33.7% of patients. Detection rates from MRI-TRUS fusion image-guided biopsy for PI-RADS scores 3, 4, and 5 were 21%, 48%, and 74%, respectively; the detection rate increased significantly with higher PI-RADS score (p<0.001). Conclusion: The present study showed a better prostate cancer detection rate using MRI-TRUS fusion image-guided prostate biopsy, with the rate correlating with higher PI-RADS score.
Keywords: Gleason score; PI-RADS score; mpMRI; MRI-TRUS fusion image-guided prostate biopsy


Author(s): Evgeny A. Semenishchev, Viacheslav Voronin, Andrey Alepko, Salavat Urunov, Aleksandr Zelensky

2021, Vol. 21 (1)
Author(s): Kai Guo, Xiongfei Li, Xiaohan Hu, Jichen Liu, Tiehu Fan

Abstract Background: In the medical diagnosis of brain disease, multi-modal medical image fusion plays an increasingly prominent role. Existing approaches include filter-based layered fusion and newly emerging deep learning algorithms: the former fuses quickly but blurs texture in the fused image; the latter fuses better but demands more computing power. Finding an algorithm that balances image quality, speed, and computing requirements therefore remains a focus for researchers. Methods: We built an end-to-end Hahn-PCNN-CNN. The network is composed of a feature extraction module, a feature fusion module, and an image reconstruction module. We selected 8000 multi-modal brain medical images downloaded from the Harvard Medical School website to train the feature extraction and image reconstruction layers, enhancing the network's ability to reconstruct brain medical images. In the feature fusion module, we use Hahn moments of the feature maps combined with a pulse-coupled neural network, which reduces the information loss caused by convolution in the preceding module and saves time. Results: We chose eight sets of registered multi-modal brain medical images covering four diseases to verify our model. The anatomical images are from MRI, and the functional metabolism images are from SPECT and 18F-FDG. We also selected eight representative fusion models as comparative experiments. For objective quality evaluation, we selected six evaluation metrics in five categories. Conclusions: The fused image obtained by our model retains the effective information of the source images to the greatest extent. Our model is superior to the comparison algorithms on the image fusion evaluation metrics, performs well in computation time, and is stable enough to generalize to multi-modal image fusion of other organs.
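A pulse-coupled neural network of the kind used in the fusion module can be sketched in simplified form. The parameters, the 8-neighbour linking, and the firing-count decision rule below are illustrative textbook choices, not the authors' configuration, and the Hahn-moment computation is omitted entirely.

```python
import numpy as np

def neighbour_sum(Y):
    """Sum of the 8 neighbours of each pixel (zero-padded borders)."""
    p = np.pad(Y, 1)
    s = np.zeros_like(Y)
    for dy in range(3):
        for dx in range(3):
            if dy == 1 and dx == 1:
                continue
            s += p[dy:dy + Y.shape[0], dx:dx + Y.shape[1]]
    return s

def pcnn_fire_counts(S, steps=30, beta=0.2, aL=1.0, aT=0.3,
                     VL=1.0, VT=20.0):
    """Simplified PCNN: how often each neuron fires for stimulus S
    (expected in [0, 1]). All parameter values are illustrative."""
    Y = np.zeros_like(S, dtype=float)       # firing map
    L = np.zeros_like(S, dtype=float)       # linking input
    T = np.ones_like(S, dtype=float)        # dynamic threshold
    counts = np.zeros_like(S, dtype=float)
    for _ in range(steps):
        L = np.exp(-aL) * L + VL * neighbour_sum(Y)  # linking decays, neighbours add
        U = S * (1.0 + beta * L)                     # internal activity
        Y = (U > T).astype(float)                    # fire if above threshold
        T = np.exp(-aT) * T + VT * Y                 # threshold decays, resets on fire
        counts += Y
    return counts

def fuse_feature_maps(fa, fb):
    """Keep, per position, the feature whose neuron fires more often."""
    mask = pcnn_fire_counts(fa) >= pcnn_fire_counts(fb)
    return np.where(mask, fa, fb)
```

Stronger stimuli fire earlier and more often, so the firing count acts as a per-position activity measure for deciding which feature map to keep.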


2021, pp. 1-10
Author(s): Lei Chen, Jun Han, Feng Tian

Fusing infrared (IR) and visible images has many advantages and can be applied to tasks such as target detection and recognition. Color can provide more accurate and distinct features, but the low resolution and low contrast of fused images make this a challenging task. In this paper, we propose a method based on parallel generative adversarial networks (GANs) to address the challenge. We use the IR image, the visible image, and the fusion image as the ground truth for the 'L', 'a', and 'b' channels of the Lab color model. Through the parallel GANs, we obtain Lab data that can be converted to an RGB image. We adopt the TNO and RoadScene datasets to verify our method and compare five objective evaluation metrics against three other deep learning (DL) based methods. The results demonstrate that the proposed approach achieves better performance than state-of-the-art methods.
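Assembling the three channels as Lab and converting to RGB, as described above, can be sketched with a standard CIELAB-to-sRGB conversion (D65 white point). The conversion constants are the usual published ones; treating the raw network outputs directly as L in [0, 100] and a, b in roughly [-128, 127] is an assumption, since the abstract does not specify output ranges.

```python
import numpy as np

def lab_to_rgb(L, a, b):
    """Convert CIELAB channel arrays to sRGB in [0, 1] (D65 white)."""
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0

    def f_inv(t):                      # inverse of the CIELAB transfer function
        d = 6.0 / 29.0
        return np.where(t > d, t ** 3, 3.0 * d * d * (t - 4.0 / 29.0))

    # Lab -> XYZ with the D65 reference white.
    X = 0.95047 * f_inv(fx)
    Y = 1.00000 * f_inv(fy)
    Z = 1.08883 * f_inv(fz)
    xyz = np.stack([X, Y, Z], axis=-1)

    # XYZ -> linear sRGB.
    M = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    rgb_lin = xyz @ M.T

    # Linear sRGB -> gamma-encoded sRGB.
    rgb = np.where(rgb_lin <= 0.0031308,
                   12.92 * rgb_lin,
                   1.055 * np.clip(rgb_lin, 0.0, None) ** (1.0 / 2.4) - 0.055)
    return np.clip(rgb, 0.0, 1.0)
```

In the method described above, the three GAN outputs would be stacked as the L, a, and b arguments to produce the final color fusion result.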

