An Image Fusion Algorithm Based on Pseudo-Color

2012 ◽  
Vol 433-440 ◽  
pp. 5436-5442
Author(s):  
Lei Li

Pseudo-color processing for target identification and tracking is highly valuable, and experimental results show that pseudo-color image fusion is an effective technique. This paper presents a new false-color image fusion method. The grayscale source images are first fused using the wavelet transform; the gray fused image and its differences from each original image are then taken as the l, α and β components of a color fusion image; finally, a color-space transformation yields the false-color fused image. The results show that the color-fused image has more vivid colors and better matches the characteristics of human vision.
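A minimal NumPy sketch of this pipeline, with two stated simplifications: the gray fusion step is a plain average standing in for the paper's wavelet-domain fusion, and the color transform is an illustrative YCbCr-style inverse rather than the paper's exact lαβ transform:

```python
import numpy as np

def pseudo_color_fuse(img_a, img_b):
    # Gray fusion: plain average as a stand-in for the paper's
    # wavelet-domain fusion of the grayscale sources.
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    l = 0.5 * (a + b)                 # luminance component l
    alpha = l - a                     # chroma component from source A
    beta = l - b                      # chroma component from source B
    # Illustrative YCbCr-style inverse transform to RGB; the paper's
    # exact l-alpha-beta color transform is not reproduced here.
    r = l + 1.402 * beta
    g = l - 0.344 * alpha - 0.714 * beta
    bch = l + 1.772 * alpha
    rgb = np.stack([r, g, bch], axis=-1)
    return np.clip(rgb, 0, 255).astype(np.uint8)

a = np.full((4, 4), 100, dtype=np.uint8)
b = np.full((4, 4), 150, dtype=np.uint8)
out = pseudo_color_fuse(a, b)
```

The difference channels carry exactly the information each source lost in the gray fusion, which is what makes the final colors discriminative.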

2017 ◽  
Vol 31 (19-21) ◽  
pp. 1740043 ◽  
Author(s):  
Jinling Zhao ◽  
Junjie Guo ◽  
Wenjie Cheng ◽  
Chao Xu ◽  
Linsheng Huang

A cross-comparison method was used to assess SPOT-6 optical satellite imagery against Chinese GF-1 imagery using three types of indicators: spectral and color quality, fusion effect, and identification potential. More specifically, spectral response function (SRF) curves were used to compare the two types of imagery, showing that the SRF curve shape of SPOT-6 more closely approximates a rectangle than that of GF-1 in the blue, green, red and near-infrared bands. The NNDiffuse image fusion algorithm was used to evaluate the capability of information conservation in comparison with the wavelet transform (WT) and principal component (PC) algorithms. The results show that the NNDiffuse-fused image has an entropy value extremely close to that of the original image (1.849 versus 1.852) and better color quality. In addition, the object-oriented classification toolset (ENVI EX) was used to identify greenlands, comparing the self-fusion image of SPOT-6 with the inter-fusion image between SPOT-6 and GF-1 based on the NNDiffuse algorithm. The overall accuracies are 97.27% and 76.88%, respectively, showing that the self-fused SPOT-6 image has better identification capability.
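The entropy figures quoted above (1.849 versus 1.852) are the standard Shannon entropy of the gray-level histogram; a minimal sketch of that information-conservation metric:

```python
import numpy as np

def image_entropy(img, bins=256):
    # Shannon entropy (in bits) of the image's gray-level histogram.
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))

# A uniform 4-level image carries exactly 2 bits of entropy.
img = np.repeat(np.arange(4), 16).astype(np.uint8)
val = image_entropy(img)
```

A fused image whose entropy stays close to the source's (as NNDiffuse's does here) has conserved most of the source's information content.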


2011 ◽  
Vol 1 (3) ◽  
Author(s):  
T. Sumathi ◽  
M. Hemalatha

Image fusion is the method of combining relevant information from two or more images into a single image that is more informative than any of the initial inputs. Fusion methods include the discrete wavelet transform, the Laplacian-pyramid-based transform and the curvelet-based transform; these demonstrate better spatial and spectral quality in the fused image than other spatial fusion methods. In particular, the wavelet transform has good time-frequency characteristics. However, this characteristic does not extend easily to two or more dimensions, since separable wavelets built by spanning one-dimensional wavelets offer limited directional selectivity. This paper introduces the second-generation curvelet transform and uses it to fuse images. This method is compared against the others previously described to show that useful information can be extracted from the source and fused images, producing fused images that offer clear, detailed information.
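As an illustration of the baseline wavelet approach these methods are compared against, the following NumPy sketch implements single-level 2-D Haar fusion: approximation bands are averaged and the larger-magnitude detail coefficient is kept (a common rule; the paper's exact fusion rules are not reproduced here):

```python
import numpy as np

def haar2(x):
    # Single-level 2-D Haar transform: approximation + 3 detail bands.
    a = (x[0::2] + x[1::2]) / 2       # rows: low-pass
    d = (x[0::2] - x[1::2]) / 2       # rows: high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    # Exact inverse of haar2 (perfect reconstruction).
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2] = a + d; x[1::2] = a - d
    return x

def dwt_fuse(img_a, img_b):
    ca = haar2(img_a.astype(float))
    cb = haar2(img_b.astype(float))
    ll = 0.5 * (ca[0] + cb[0])        # average the approximations
    # Keep the detail coefficient with the larger magnitude.
    dets = [np.where(np.abs(da) >= np.abs(db), da, db)
            for da, db in zip(ca[1:], cb[1:])]
    return ihaar2(ll, *dets)

x = np.arange(16, dtype=float).reshape(4, 4)
fused = dwt_fuse(x, x)                # fusing an image with itself
```

Fusing an image with itself returns the image unchanged, which is a quick sanity check that the transform pair reconstructs perfectly.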


2021 ◽  
Author(s):  
Anuyogam Venkataraman

With the increasing utilization of X-ray Computed Tomography (CT) in medical diagnosis, obtaining higher-quality images at lower radiation exposure is a highly challenging image processing task. Sparse-representation-based image fusion is among the most sought-after fusion techniques in current research. A novel image fusion algorithm based on focused vector detection is proposed in this thesis. First, the initial fused vector is acquired by combining the common and innovative sparse components of a multi-dosage ensemble using a Joint Sparse PCA fusion method, with an overcomplete dictionary trained on high-dose images of the same region of interest from different patients. Then, the strongly focused vector is obtained by determining which pixels of the low-dose and medium-dose vectors have high similarity with the pixels of the initial fused vector, using quantitative metrics. The final fused image is obtained by denoising and simultaneously integrating the strongly focused vector, the initial fused vector and the source image vectors in the joint sparse domain, thereby preserving the edges and other critical information needed for diagnosis. The thesis demonstrates the effectiveness of the proposed algorithms on different images, and the qualitative and quantitative results are compared with some widely used image fusion methods.


2017 ◽  
pp. 711-723
Author(s):  
Vikrant Bhateja ◽  
Abhinav Krishn ◽  
Himanshi Patel ◽  
Akanksha Sahu

Medical image fusion facilitates the retrieval of complementary information from medical images and has been employed widely for computer-aided diagnosis of life-threatening diseases. Fusion has been performed using various approaches, such as pyramidal, multi-resolution and multi-scale methods. Each fusion approach captures only a particular kind of feature (i.e. the information content or the structural properties of an image). Therefore, this paper presents a comparative analysis and evaluation of multi-modal medical image fusion methodologies employing the wavelet transform as a multi-resolution approach and the ridgelet transform as a multi-scale approach. The current work highlights the utility of these approaches according to the features required in the fused image. A Principal Component Analysis (PCA) based fusion algorithm has been employed in both the ridgelet and wavelet domains to minimise redundancy. Simulations have been performed on different sets of MR and CT-scan images taken from 'The Whole Brain Atlas'. The performance evaluation has been carried out using image quality parameters: Entropy (E), Fusion Factor (FF), Structural Similarity Index (SSIM) and Edge Strength (QFAB). The outcome of this analysis highlights the trade-off between the retrieval of information content and of morphological detail in the final fused image in the wavelet and ridgelet domains.
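The PCA fusion rule used in both domains can be sketched in a few lines: the fusion weights are taken from the leading eigenvector of the 2×2 covariance of the two sources, so the source carrying more variance contributes more. This is the standard PCA weighting scheme, shown here on raw pixels rather than in the wavelet or ridgelet domain the paper uses:

```python
import numpy as np

def pca_fuse(img_a, img_b):
    a = img_a.astype(np.float64).ravel()
    b = img_b.astype(np.float64).ravel()
    cov = np.cov(np.stack([a, b]))            # 2x2 covariance
    vals, vecs = np.linalg.eigh(cov)          # ascending eigenvalues
    v = np.abs(vecs[:, -1])                   # leading eigenvector
    w = v / v.sum()                           # normalized fusion weights
    fused = w[0] * img_a.astype(np.float64) + w[1] * img_b.astype(np.float64)
    return fused, w

flat = np.zeros((4, 4))                       # zero-variance source
tex = np.arange(16, dtype=float).reshape(4, 4)
fused, w = pca_fuse(flat, tex)
```

In this toy case the flat source carries no information, so the weighting collapses onto the textured source.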


2018 ◽  
Vol 7 (2.31) ◽  
pp. 165
Author(s):  
M Shyamala Devi ◽  
P Balamurugan

Image processing often requires either the full image or a user-specified part of it (for example, a region defined by the radius of an object). The main purpose of fusion is to minimise the dissimilarity error between the fused image and the input images. For medical diagnosis, the edges and outlines of the objects of interest are more important than other information, so preserving the edge features of the image makes image fusion worth investigating. An image with higher contrast contains more edge-like features. Here we propose a new medical image fusion scheme, Local Energy Match NSCT, based on the contourlet transform, which is well suited to capturing the details of curved edges and improves the edge information of the fused image by reducing distortion. The transformation decomposes the multimodal images into finer and coarser details, and the finest details are further decomposed into different resolutions at different orientations. The input multimodal images, CT and MRI, are first transformed by the Nonsubsampled Contourlet Transform (NSCT), which decomposes each image into low-frequency and high-frequency elements. In our system, the low-frequency coefficients are fused by image averaging and a Gabor filter bank algorithm, while the processed high-frequency coefficients are fused by image averaging and a gradient-based fusion algorithm. The fused image is then obtained by the inverse NSCT with local-energy-match-based coefficients. To evaluate the image fusion accuracy, Peak Signal-to-Noise Ratio (PSNR), Root Mean Square Error (RMSE) and Correlation Coefficient parameters are used in this work.


2018 ◽  
Vol 13 ◽  
pp. 174830181879151
Author(s):  
Qiang Yang ◽  
Huajun Wang

To address the high time and space complexity of traditional image fusion algorithms, this paper develops a framework for image fusion based on compressive sensing theory. A new image fusion algorithm based on an improved K-singular value decomposition (K-SVD) and a Hadamard measurement matrix is proposed. The algorithm acts only on the small amount of measurement data obtained after compressive-sensing sampling, which greatly reduces the number of pixels involved in the fusion and thus its time and space complexity. In fusion experiments on full-color with multispectral images, infrared with visible-light images, and multispectral with full-color images, the proposed algorithm achieved good results on the evaluation parameters of information entropy, standard deviation, average gradient and mutual information.
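The measurement step can be illustrated in NumPy: build a Hadamard matrix by the Sylvester construction, keep a random subset of its (orthonormalized) rows, and project the signal onto them. The row-selection scheme here is an illustrative choice, not necessarily the paper's:

```python
import numpy as np

def hadamard(n):
    # Sylvester construction of an n x n Hadamard matrix (n a power of 2).
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

def cs_measure(signal, m, seed=0):
    # Compressive measurements: project the signal onto m randomly
    # chosen rows of the normalized Hadamard matrix.
    n = signal.size
    phi = hadamard(n) / np.sqrt(n)            # orthonormal rows
    rng = np.random.default_rng(seed)
    rows = rng.choice(n, size=m, replace=False)
    return phi[rows] @ signal, phi[rows]

y, phi_m = cs_measure(np.arange(8.0), 4)      # 8 samples -> 4 measurements
```

Fusion then operates on the m-dimensional measurement vectors instead of full images, which is the source of the complexity savings the paper reports.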


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 90760-90778 ◽  
Author(s):  
Shuaiqi Liu ◽  
Jian Ma ◽  
Lu Yin ◽  
Hailiang Li ◽  
Shuai Cong ◽  
...  

2011 ◽  
Vol 135-136 ◽  
pp. 341-346
Author(s):  
Na Ding ◽  
Jiao Bo Gao ◽  
Jun Wang

A novel system implementing target identification with a hyperspectral imaging system based on an acousto-optic tunable filter (AOTF) is proposed. The system consists of a lens, the AOTF, an AOTF driver, a CCD and an image collection installation. Owing to its high spatial and spectral resolution, the system can operate in the spectral range from visible light to the near-infrared band. An experiment on detecting and recognizing two different kinds of camouflage armets against a background is presented. At the characteristic spectral bands of 680 nm and 750 nm, the two camouflage armets exhibit different spectral characteristics; the target armets in the hyperspectral images are distinct from the background, and the contrast between armets and background is increased. Image fusion, target segmentation and extraction of the images with particular spectral characteristics were realized by the hyperspectral imaging system. The 600 nm, 680 nm and 750 nm images were processed by a pseudo-color fusion algorithm, so the camouflage armets are more easily observed by the naked eye. Experimental results confirm that the AOTF hyperspectral imaging system can acquire high-contrast images and can detect and identify camouflaged objects.
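Pseudo-color fusion of three narrow bands amounts to mapping each band to one display channel so that spectral differences become color differences. A minimal sketch, in which the band-to-channel assignment (750 nm→R, 680 nm→G, 600 nm→B) and the per-band contrast stretch are illustrative choices rather than the paper's exact procedure:

```python
import numpy as np

def pseudo_color(band_600, band_680, band_750):
    def stretch(b):
        # Per-band min-max contrast stretch to [0, 1].
        b = b.astype(np.float64)
        lo, hi = b.min(), b.max()
        if hi == lo:
            return np.zeros_like(b)
        return (b - lo) / (hi - lo)
    # Assign bands to display channels: 750 nm -> R, 680 nm -> G, 600 nm -> B.
    rgb = np.stack([stretch(band_750), stretch(band_680), stretch(band_600)],
                   axis=-1)
    return (rgb * 255).astype(np.uint8)

band = np.arange(4, dtype=np.uint8).reshape(2, 2)
rgb = pseudo_color(band, band, band)
```

A pixel whose reflectance differs between 680 nm and 750 nm (as the two armets' do) ends up with a distinct hue, which is what makes the targets visible to the naked eye.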


2014 ◽  
Vol 687-691 ◽  
pp. 3656-3661
Author(s):  
Min Fen Shen ◽  
Zhi Fei Su ◽  
Jin Yao Yang ◽  
Li Sha Sun

Because of the limited depth of field of optical lenses, objects at different distances usually cannot all be in focus in the same picture. Multi-focus image fusion can obtain a fused image with all objects clear, improving the utilization of image information and aiding further computer processing. According to the imaging characteristics of multi-focus images, a multi-focus image fusion algorithm based on the redundant wavelet transform is proposed in this paper. The selection principles for high-frequency and low-frequency coefficients are discussed separately for the different frequency subbands of the redundant wavelet decomposition. The fusion rule is that low-frequency coefficients are selected based on local area energy, while high-frequency coefficients are selected based on local variance combined with a matching threshold. As can be seen from the simulation results, the proposed method retains more useful information from the source images and yields a fused image with all objects clear.
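The two coefficient-selection rules can be sketched with a plain box filter. This is a simplified illustration: the matching-threshold check the paper combines with local variance is omitted, and window size is a free parameter:

```python
import numpy as np

def box_sum(x, r=1):
    # Sum over a (2r+1) x (2r+1) window via zero padding.
    p = np.pad(x, r)
    out = np.zeros_like(x, dtype=np.float64)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out

def fuse_low(la, lb, r=1):
    # Low-frequency rule: keep the coefficient with larger local energy.
    return np.where(box_sum(la**2, r) >= box_sum(lb**2, r), la, lb)

def fuse_high(ha, hb, r=1):
    # High-frequency rule: keep the coefficient with larger local
    # variance (matching-threshold refinement omitted here).
    n = (2 * r + 1) ** 2
    var_a = box_sum(ha**2, r) / n - (box_sum(ha, r) / n) ** 2
    var_b = box_sum(hb**2, r) / n - (box_sum(hb, r) / n) ** 2
    return np.where(var_a >= var_b, ha, hb)

la, lb = np.ones((4, 4)), np.zeros((4, 4))
low = fuse_low(la, lb)
```

In a multi-focus pair, the in-focus region has both higher local energy and higher local variance, so each rule selects coefficients from whichever source is sharp at that location.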


2013 ◽  
Vol 401-403 ◽  
pp. 1381-1384 ◽  
Author(s):  
Zi Juan Luo ◽  
Shuai Ding

It is often difficult to get an image that contains all relevant objects in focus, because of the limited depth of focus of optical lenses. Multifocus image fusion can solve this problem effectively. The Nonsubsampled Contourlet Transform (NSCT) offers varying directions and multiple scales; when it is applied to image fusion, the characteristics of the original images are better captured and more information is available for fusion. A new multi-focus image fusion method based on NSCT with a region-statistics fusion rule is proposed in this paper. First, the differently focused images are decomposed using NSCT. Then the low-frequency bands are combined using a weighted average, and the high-frequency bands are combined using the region-statistics rule. Next, the fused image is obtained by the inverse NSCT. Finally, the experimental results are presented and compared with those of a Contourlet-transform-based method. Experiments show that the approach achieves better results than the Contourlet-based method.

