An Image Fusion Method Based on Shearlet and Multi-Decision

2014 ◽  
Vol 530-531 ◽  
pp. 390-393
Author(s):  
Yong Wang

Image processing is the basis of computer vision. To address several problems of traditional image fusion algorithms, a novel algorithm based on the shearlet transform and multi-decision rules is proposed. Multi-focus image fusion is discussed first; the source images are then decomposed by the shearlet transform, and multi-decision rules are used to fuse the high-frequency coefficients of the decomposition. Finally, the fused image is obtained through the inverse shearlet transform. Experimental results show that, compared with traditional image fusion algorithms, the proposed approach retains more image detail and produces a clearer result.
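As a rough sketch of the fusion stage described in this abstract (the shearlet transform itself is assumed to come from an external toolbox, and a max-magnitude rule stands in for the paper's multi-decision scheme), the coefficient fusion could look like the following Python fragment.

```python
import numpy as np

def fuse_shearlet_bands(low_a, low_b, high_a, high_b):
    """Fuse shearlet coefficients of two registered source images.

    low_a, low_b   : 2D lowpass bands produced by a shearlet toolbox
    high_a, high_b : lists of 2D directional highpass bands (same layout)
    The fused coefficients are then passed to the toolbox's inverse
    shearlet transform to obtain the fused image.
    """
    # Lowpass band: simple averaging (a common default, assumed here).
    low_f = 0.5 * (low_a + low_b)

    # Highpass bands: keep the larger-magnitude coefficient, standing in
    # for the paper's multi-decision rule.
    high_f = [np.where(np.abs(ha) >= np.abs(hb), ha, hb)
              for ha, hb in zip(high_a, high_b)]
    return low_f, high_f
```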

2014 ◽  
Vol 889-890 ◽  
pp. 1103-1106
Author(s):  
Xin Zheng ◽  
Ai Ping Cai

Image fusion is an important and useful subject in image processing and computer vision. Traditional image fusion algorithms often fail to provide satisfactory results. To address this problem, this paper proposes an algorithm based on the shearlet transform and multi-decision rules. The application of the shearlet transform is discussed first; difference-based decision rules are then used to fuse the high-frequency coefficients of the decomposition. Finally, the fused image is obtained through the inverse shearlet transform. Experimental results show that, compared with traditional image fusion algorithms, the proposed approach provides a more satisfactory fusion outcome.


2010 ◽  
Vol 121-122 ◽  
pp. 373-378 ◽  
Author(s):  
Jia Zhao ◽  
Li Lü ◽  
Hui Sun

According to the different frequency bands produced by the shearlet transform, selection principles for the lowpass and highpass subbands are discussed separately. The lowpass subband coefficients of the fused image are obtained with a fusion rule based on region variance, while the highpass subband coefficients are selected with a fusion rule based on region energy. Experimental results show that, compared with traditional image fusion algorithms, the proposed approach provides a more satisfactory fusion outcome.
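The two window-based rules named above can be sketched as follows; the 3×3 window and the hard per-coefficient selection are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def region_variance(band, size=3):
    """Local variance of a subband over a size x size window."""
    mean = uniform_filter(band, size)
    return uniform_filter(band * band, size) - mean * mean

def region_energy(band, size=3):
    """Local energy (mean of squares) of a subband over a window."""
    return uniform_filter(band * band, size)

def fuse_lowpass(low_a, low_b, size=3):
    """Keep the lowpass coefficient from the source with larger region variance."""
    return np.where(region_variance(low_a, size) >= region_variance(low_b, size),
                    low_a, low_b)

def fuse_highpass(high_a, high_b, size=3):
    """Keep the highpass coefficient from the source with larger region energy."""
    return np.where(region_energy(high_a, size) >= region_energy(high_b, size),
                    high_a, high_b)
```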


2014 ◽  
Vol 14 (2) ◽  
pp. 102-108 ◽  
Author(s):  
Yong Yang ◽  
Shuying Huang ◽  
Junfeng Gao ◽  
Zhongsheng Qian

Abstract In this paper, by considering the main objective of multi-focus image fusion and the physical meaning of wavelet coefficients, a discrete wavelet transform (DWT) based fusion technique with a novel coefficient selection algorithm is presented. After the source images are decomposed by the DWT, two different window-based fusion rules are employed separately to combine the low-frequency and high-frequency coefficients. In the low-frequency domain, the coefficients with the maximum sharpness focus measure are selected as coefficients of the fused image, and a maximum neighboring energy based fusion scheme is proposed to select the high-frequency sub-band coefficients. To guarantee the homogeneity of the resultant fused image, a consistency verification procedure is applied to the combined coefficients. The performance of the proposed method was assessed on both synthetic and real multi-focus images. Experimental results demonstrate that the proposed method achieves better visual quality and objective evaluation indexes than several existing fusion methods, and is thus an effective multi-focus image fusion method.
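A compact sketch of a DWT fusion pipeline of this kind is given below using PyWavelets; the gradient-energy focus measure and the 3×3 majority-vote consistency check are simplified stand-ins for the paper's sharpness measure and verification procedure.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter, sobel

def local_energy(x, size=3):
    """Window-based neighbouring energy."""
    return uniform_filter(x * x, size)

def majority(decision, size=3):
    """Consistency verification: each coefficient follows its neighbourhood's vote."""
    return uniform_filter(decision.astype(float), size) > 0.5

def dwt_fuse(img_a, img_b, wavelet="db2", level=3):
    """Fuse two registered, same-sized multi-focus images in the DWT domain."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)

    # Low-frequency band: pick the source with larger local gradient energy,
    # a simple stand-in for the paper's sharpness focus measure.
    grad_a = sobel(ca[0], 0) ** 2 + sobel(ca[0], 1) ** 2
    grad_b = sobel(cb[0], 0) ** 2 + sobel(cb[0], 1) ** 2
    low = np.where(majority(local_energy(grad_a) >= local_energy(grad_b)),
                   ca[0], cb[0])

    fused = [low]
    # High-frequency sub-bands: maximum neighbouring energy, with a
    # majority-vote consistency check on the binary decision map.
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        bands = []
        for a, b in zip((ha, va, da), (hb, vb, db)):
            take_a = majority(local_energy(a) >= local_energy(b))
            bands.append(np.where(take_a, a, b))
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet)
```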


2018 ◽  
Vol 7 (2.31) ◽  
pp. 165
Author(s):  
M Shyamala Devi ◽  
P Balamurugan

Image processing technology may require either the full image or only the part of the image to be processed, depending on the user's needs (for example, the radius of an object). The main purpose of fusion is to minimize the dissimilarity between the fused image and the input images. For medical diagnosis, the edges and outlines of the objects of interest are more important than other information, so preserving the edge features of the image makes image fusion worth investigating. An image with higher contrast contains more edge-like features. Here we propose a new medical image fusion scheme, Local Energy Match NSCT, based on the non-subsampled contourlet transform, which is well suited to representing curved edges. It improves the edge information of the fused image by reducing distortion. The transform decomposes a multimodal image into finer and coarser details, and the finest details are further decomposed into different resolutions and orientations. The input multimodal images, namely CT and MRI images, are first transformed by the Non-Subsampled Contourlet Transform (NSCT), which decomposes each image into low-frequency and high-frequency components. In our scheme, the low-frequency coefficients are fused by image averaging and a Gabor filter bank algorithm, and the high-frequency coefficients are fused by image averaging and a gradient-based fusion algorithm. The fused image is then obtained by the inverse NSCT with local-energy-matched coefficients. To evaluate fusion accuracy, the Peak Signal-to-Noise Ratio (PSNR), Root Mean Square Error (RMSE), and correlation coefficient are used in this work.
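The three evaluation measures named at the end of the abstract are standard and can be computed directly; the sketch below assumes same-sized reference and fused images and an 8-bit dynamic range for PSNR.

```python
import numpy as np

def rmse(ref, fused):
    """Root Mean Square Error between a reference image and the fused image."""
    diff = ref.astype(float) - fused.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(ref, fused, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB (max_val = dynamic range of the images)."""
    e = rmse(ref, fused)
    return float("inf") if e == 0 else 20.0 * np.log10(max_val / e)

def correlation_coefficient(ref, fused):
    """Pearson correlation coefficient between the two images."""
    return float(np.corrcoef(ref.ravel(), fused.ravel())[0, 1])
```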


2014 ◽  
Vol 687-691 ◽  
pp. 3656-3661
Author(s):  
Min Fen Shen ◽  
Zhi Fei Su ◽  
Jin Yao Yang ◽  
Li Sha Sun

Because of the limited depth of field of optical lenses, objects at different distances usually cannot all be in focus in the same picture. Multi-focus image fusion can produce a fused image in which all objects are clear, improving the utilization of image information and aiding further computer processing. According to the imaging characteristics of multi-focus images, a multi-focus image fusion algorithm based on the redundant wavelet transform is proposed in this paper. For the different frequency bands of the redundant wavelet decomposition, the selection principles for the high-frequency and low-frequency coefficients are discussed separately. The fusion rule is as follows: the low-frequency coefficients are selected on the basis of local area energy, and the high-frequency coefficients are selected on the basis of local variance combined with a matching threshold. The simulation results show that the proposed method retains more useful information from the source images and yields a fused image in which all objects are clear.
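A sketch of this rule on top of PyWavelets' stationary (redundant) wavelet transform is given below; the window size, the 0.75 matching threshold, and the Burt–Kolczynski-style weights are illustrative assumptions, not the authors' exact parameters.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def local_var(x, size=3):
    m = uniform_filter(x, size)
    return uniform_filter(x * x, size) - m * m

def fuse_detail(a, b, size=3, thr=0.75):
    """High-frequency rule: local variance combined with a matching threshold."""
    va, vb = local_var(a, size), local_var(b, size)
    # Local match measure between the two sub-bands (close to 1 when similar).
    match = 2.0 * uniform_filter(a * b, size) / (uniform_filter(a * a, size) +
                                                 uniform_filter(b * b, size) + 1e-12)
    w_max = 0.5 + 0.5 * (1.0 - match) / (1.0 - thr)   # weight for the higher-variance source
    w_min = 1.0 - w_max
    weighted = np.where(va >= vb, w_max * a + w_min * b, w_min * a + w_max * b)
    selected = np.where(va >= vb, a, b)
    # Poor match: select; good match: weighted average.
    return np.where(match < thr, selected, weighted)

def swt_fuse(img_a, img_b, wavelet="db2", level=2, size=3):
    """Fuse two registered images; image sides must be divisible by 2**level."""
    ca = pywt.swt2(img_a, wavelet, level=level)
    cb = pywt.swt2(img_b, wavelet, level=level)
    fused = []
    for (la, da), (lb, db) in zip(ca, cb):
        # Low-frequency rule: keep the coefficient with larger local area energy.
        low = np.where(uniform_filter(la * la, size) >=
                       uniform_filter(lb * lb, size), la, lb)
        details = tuple(fuse_detail(a, b, size) for a, b in zip(da, db))
        fused.append((low, details))
    return pywt.iswt2(fused, wavelet)
```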


2013 ◽  
Vol 401-403 ◽  
pp. 1381-1384 ◽  
Author(s):  
Zi Juan Luo ◽  
Shuai Ding

It is usually difficult to obtain an image in which all relevant objects are in focus, because of the limited depth of focus of optical lenses. Multi-focus image fusion can solve this problem effectively. The non-subsampled contourlet transform offers multiple scales and varying directions; when it is applied to image fusion, the characteristics of the original images are captured better and more information is available for fusion. A new multi-focus image fusion method based on the non-subsampled contourlet transform (NSCT) with a region-statistics fusion rule is proposed in this paper. First, the differently focused source images are decomposed by the NSCT. Then the low-frequency bands are combined using a weighted average, and the high-frequency bands are combined using the region-statistics rule. Next, the fused image is obtained by the inverse NSCT. Finally, the experimental results are presented and compared with those of a contourlet-transform-based method. The experiments show that the proposed approach achieves better results than the method based on the contourlet transform.


2012 ◽  
Vol 239-240 ◽  
pp. 1432-1436
Author(s):  
Zhuan Zheng Zhao

Image fusion integrates two or more images or video sequences of the same scene, acquired by different sensors at the same time or at different times, to generate a new interpretation of that scene. Its main purpose is to increase reliability or image resolution by reducing uncertainty through the redundancy of different images. In this paper, an image fusion method based on the contourlet transform is presented. The algorithm fuses corresponding information across different resolutions and directions, which makes the fused image clearer and richer in detail. Meanwhile, because fuzzy logic is capable of handling uncertain problems, the method overcomes the drawbacks of the traditional contourlet-based fusion algorithm and integrates as much information as possible into the fused image.
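One simple way to bring fuzzy logic into a transform-domain fusion rule, in the spirit of this abstract, is to map a local-activity difference to a soft fusion weight through a membership function; the sigmoid membership and its slope below are illustrative assumptions, and the contourlet decomposition itself is assumed to come from an external toolbox.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuzzy_fuse_band(band_a, band_b, size=3, slope=4.0):
    """Fuse one contourlet (or other transform-domain) sub-band with a
    fuzzy weight instead of a hard select-max decision."""
    # Local activity of each sub-band.
    act_a = uniform_filter(np.abs(band_a), size)
    act_b = uniform_filter(np.abs(band_b), size)

    # Normalised activity difference in [-1, 1].
    diff = (act_a - act_b) / (act_a + act_b + 1e-12)

    # Sigmoid membership: degree to which "source A is the sharper one" holds.
    w_a = 1.0 / (1.0 + np.exp(-slope * diff))

    # Soft, fuzzy-weighted combination: reduces to averaging when diff ~ 0
    # and approaches hard selection when |diff| is large.
    return w_a * band_a + (1.0 - w_a) * band_b
```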


Entropy ◽  
2021 ◽  
Vol 23 (10) ◽  
pp. 1362
Author(s):  
Hui Wan ◽  
Xianlun Tang ◽  
Zhiqin Zhu ◽  
Weisheng Li

Multi-focus image fusion is an important method for combining the focused parts of several source multi-focus images into a single all-in-focus image. The key to multi-focus image fusion is to detect the focused regions accurately, especially when the source images captured by cameras exhibit anisotropic blur and misregistration. This paper proposes a new multi-focus image fusion method based on multi-scale decomposition of complementary information. First, the method uses two structurally complementary groups of large-scale and small-scale decomposition schemes to perform a two-scale, double-layer singular value decomposition of each image and obtain low-frequency and high-frequency components. The low-frequency components are then fused by a rule that integrates local image energy with edge energy. The high-frequency components are fused by the parameter-adaptive pulse-coupled neural network (PA-PCNN) model, and according to the feature information contained in each decomposition layer of the high-frequency components, different detail features are selected as the external stimulus input of the PA-PCNN. Finally, from the two structurally complementary decompositions of the source images and the fusion of the high- and low-frequency components, two initial decision maps with complementary information are obtained. The final fusion decision map is obtained by refining the initial decision maps, completing the image fusion. The proposed method is compared with 10 state-of-the-art approaches to verify its effectiveness. The experimental results show that the proposed method distinguishes focused from non-focused areas more accurately, whether or not the images are pre-registered, and its subjective and objective evaluation indicators are slightly better than those of the existing methods.
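A heavily simplified pulse-coupled neural network (not the parameter-adaptive PA-PCNN used in the paper) can illustrate how firing behaviour is used to choose between high-frequency coefficients; all parameters below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_times(stimulus, iters=30, alpha_e=0.7, v_e=20.0, beta=0.2):
    """Simplified PCNN: return the iteration at which each neuron first fires.
    `stimulus` is the external input, e.g. |high-frequency coefficient|."""
    s = stimulus / (stimulus.max() + 1e-12)
    link_kernel = np.array([[0.5, 1.0, 0.5],
                            [1.0, 0.0, 1.0],
                            [0.5, 1.0, 0.5]])
    y = np.zeros_like(s)                       # firing state
    e = np.ones_like(s)                        # dynamic threshold
    first_fire = np.full(s.shape, iters + 1, dtype=float)
    for t in range(1, iters + 1):
        l = convolve(y, link_kernel, mode="constant")   # linking input from neighbours
        u = s * (1.0 + beta * l)                        # internal activity
        y = (u > e).astype(float)
        first_fire = np.where((y > 0) & (first_fire > iters), t, first_fire)
        e = np.exp(-alpha_e) * e + v_e * y              # threshold decay + reset on firing
    return first_fire

def fuse_high(band_a, band_b):
    """The earlier-firing (stronger-stimulus) neuron wins the coefficient."""
    ta = pcnn_firing_times(np.abs(band_a))
    tb = pcnn_firing_times(np.abs(band_b))
    return np.where(ta <= tb, band_a, band_b)
```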


2021 ◽  
Vol 51 (2) ◽  
Author(s):  
Yingchun Wu ◽  
Xing Cheng ◽  
Jie Liang ◽  
Anhong Wang ◽  
Xianling Zhao

Traditional light field all-in-focus image fusion algorithms are based on digital refocusing: multi-focus images rendered from a single light field image are fused to compute the all-in-focus image, and the spatial information of the light field is used for sharpness evaluation. Analyzing the 4D light field from another perspective, this paper presents an all-in-focus image fusion algorithm based on angular information. In the proposed method, the 4D light field data are fused directly, and a macro-pixel energy difference function based on angular information is established for sharpness evaluation. The fused 4D data are then guided by the dimension-increased central sub-aperture image to obtain refined 4D data. Finally, the all-in-focus image is computed by integrating the refined 4D light field data. Experimental results show that images fused by the proposed method have higher visual quality, and quantitative evaluation further confirms the performance of the algorithm. With the light field's angular information, the image-feature-based and human-perception-inspired indices of the fused image are improved.
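For a grayscale 4D light field `L[u, v, s, t]` (angular coordinates u, v; spatial coordinates s, t), the idea of letting angular information drive the sharpness decision can be sketched as below; the shear-and-compare loop and the deviation-from-central-view cost are simple stand-ins for the paper's macro-pixel energy difference function, not the authors' algorithm.

```python
import numpy as np

def shear(lf, alpha):
    """Shear L[u, v, s, t] so that the depth plane `alpha` becomes angularly
    aligned (integer pixel shifts only, for simplicity)."""
    nu, nv = lf.shape[:2]
    out = np.empty_like(lf)
    for u in range(nu):
        for v in range(nv):
            ds = int(round(alpha * (u - nu // 2)))
            dt = int(round(alpha * (v - nv // 2)))
            out[u, v] = np.roll(lf[u, v], shift=(ds, dt), axis=(0, 1))
    return out

def all_in_focus(lf, alphas=(-2, -1, 0, 1, 2)):
    """Fuse the 4D data per macro-pixel across shears, then integrate over
    the angular dimensions to obtain the all-in-focus image (lf is float)."""
    nu, nv = lf.shape[:2]
    fused_4d, best_cost = None, None
    for alpha in alphas:
        sheared = shear(lf, alpha)
        central = sheared[nu // 2, nv // 2]
        # Energy of each macro-pixel's deviation from its central view:
        # small where the pixel is in focus at this shear.
        cost = np.sum((sheared - central) ** 2, axis=(0, 1))
        if fused_4d is None:
            fused_4d, best_cost = sheared.copy(), cost
        else:
            better = cost < best_cost
            fused_4d = np.where(better[None, None], sheared, fused_4d)
            best_cost = np.minimum(cost, best_cost)
    return fused_4d.mean(axis=(0, 1))
```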

