A Multi-Focus Image Fusion Based on Wavelet and Block-Dividing

2015 ◽  
Vol 719-720 ◽  
pp. 988-993
Author(s):  
Hui Zhu Ma ◽  
Qi Gui Nie

Traditional fusion rules for multi-focus images center largely on the fusion rule for the high-frequency coefficients, and those rules all operate on single pixels, which leads to serious ringing artifacts and reduces the visual quality of the fused image. After a wavelet transform, the energy of an image is concentrated in the low-frequency part, and multi-focus images have the characteristic that the vast majority of adjacent pixels belong either to the clear area or to the blurred area. Based on the above analysis, a new fusion method for multi-focus images is presented in this paper. The simulation results show that the proposed method handles multi-focus images more effectively than common methods.
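The abstract gives no implementation details, but the block-dividing idea can be illustrated with a minimal sketch. The following hypothetical example (the function name and the variance-based clarity measure are assumptions, and the wavelet stage is omitted) copies each block of the fused result from whichever source block looks sharper:

```python
import numpy as np

def fuse_blocks(img_a, img_b, block=16):
    """Block-dividing fusion sketch for two grayscale source images:
    for each block, keep the one whose local variance (a simple
    clarity measure) is larger."""
    fused = np.empty_like(img_a, dtype=np.float64)
    h, w = img_a.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            a = img_a[i:i+block, j:j+block].astype(np.float64)
            b = img_b[i:i+block, j:j+block].astype(np.float64)
            fused[i:i+block, j:j+block] = a if a.var() >= b.var() else b
    return fused
```

In a wavelet-and-block method such as this paper's, the same comparison would presumably be applied per block to wavelet coefficients rather than to raw pixels.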

2014 ◽  
Vol 687-691 ◽  
pp. 3656-3661
Author(s):  
Min Fen Shen ◽  
Zhi Fei Su ◽  
Jin Yao Yang ◽  
Li Sha Sun

Because of the limited depth of field of an optical lens, objects at different distances usually cannot all be in focus in the same picture, but multi-focus image fusion can obtain a fused image with all targets clear, improving the utilization of image information, which is helpful for further computer processing. According to the imaging characteristics of multi-focus images, a multi-focus image fusion algorithm based on the redundant wavelet transform is proposed in this paper. For the different frequency domains of the redundant wavelet decomposition, the selection principles for the high-frequency and low-frequency coefficients are discussed separately. The fusion rule is as follows: low-frequency coefficients are selected based on local area energy, and high-frequency coefficients are selected based on local variance combined with a matching threshold. As can be seen from the simulation results, the method given in this paper retains more useful information from the source images and produces a fused image with all targets clear.
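A rough sketch of the two rules described above, not the authors' code: the window size, threshold, and the simplified variance-derived blending weight are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_lowfreq(ca, cb, win=3):
    """Low-frequency rule sketch: pick the coefficient with larger
    local area energy (windowed mean of squared coefficients)."""
    ea = uniform_filter(ca * ca, win)
    eb = uniform_filter(cb * cb, win)
    return np.where(ea >= eb, ca, cb)

def fuse_highfreq(da, db, win=3, thr=0.6):
    """High-frequency rule sketch: local variance with a matching
    threshold. Poorly matched windows take the higher-variance source;
    well-matched windows are blended with variance-derived weights."""
    ma, mb = uniform_filter(da, win), uniform_filter(db, win)
    va = uniform_filter((da - ma) ** 2, win)
    vb = uniform_filter((db - mb) ** 2, win)
    # normalized match measure between the two local neighborhoods
    match = 2 * uniform_filter((da - ma) * (db - mb), win) / (va + vb + 1e-12)
    w = 0.5 + 0.5 * (va - vb) / (va + vb + 1e-12)  # weight toward higher variance
    pick = np.where(va >= vb, da, db)
    return np.where(match < thr, pick, w * da + (1 - w) * db)
```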


Entropy ◽  
2021 ◽  
Vol 23 (10) ◽  
pp. 1362
Author(s):  
Hui Wan ◽  
Xianlun Tang ◽  
Zhiqin Zhu ◽  
Weisheng Li

Multi-focus image fusion is an important method used to combine the focused parts of source multi-focus images into a single all-in-focus image. The key to multi-focus image fusion is how to accurately detect the focused regions, especially when the source images captured by cameras exhibit anisotropic blur and mis-registration. This paper proposes a new multi-focus image fusion method based on multi-scale decomposition of complementary information. Firstly, the method uses two structurally complementary groups of large-scale and small-scale decomposition schemes to perform a two-scale, double-layer singular value decomposition of each image and obtain low-frequency and high-frequency components. Then, the low-frequency components are fused by a rule that integrates local image energy with edge energy. The high-frequency components are fused by the parameter-adaptive pulse-coupled neural network (PA-PCNN) model, and, according to the feature information contained in each decomposition layer of the high-frequency components, different detailed features are selected as the external stimulus input of the PA-PCNN. Finally, from the two structurally complementary two-scale decompositions of the source images and the fusion of the high- and low-frequency components, two initial decision maps with complementary information are obtained. The final fusion decision map is obtained by refining the initial decision maps, completing the image fusion. The proposed method is compared with 10 state-of-the-art approaches to verify its effectiveness. The experimental results show that it distinguishes focused from non-focused areas more accurately for both pre-registered and mis-registered images, and its subjective and objective evaluation indicators are slightly better than those of the existing methods.
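The PA-PCNN stage is hard to picture from prose alone. Below is a heavily simplified PCNN firing-count sketch; the fixed parameters here are placeholders standing in for the paper's adaptively estimated ones, and the linking kernel is a common textbook choice rather than the paper's.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_fire_times(stimulus, iters=110, alpha_f=0.1, alpha_e=1.0,
                    beta=0.2, v_e=20.0):
    """Simplified PCNN sketch: count how often each neuron fires when
    driven by an external stimulus (e.g., a high-frequency feature map).
    In a parameter-adaptive PCNN these constants would be derived from
    the stimulus itself; here they are fixed placeholders."""
    s = stimulus / (stimulus.max() + 1e-12)   # normalized stimulus
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])      # linking weights to neighbors
    u = np.zeros_like(s)    # internal activity
    e = np.ones_like(s)     # dynamic threshold
    y = np.zeros_like(s)    # firing state
    fires = np.zeros_like(s)
    for _ in range(iters):
        link = convolve(y, kernel, mode='constant')
        u = np.exp(-alpha_f) * u + s * (1 + beta * link)
        y = (u > e).astype(s.dtype)
        e = np.exp(-alpha_e) * e + v_e * y
        fires += y
    return fires
```

A fusion rule would then keep, at each position, the high-frequency coefficient whose stimulus produced the larger firing count.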


2013 ◽  
Vol 834-836 ◽  
pp. 1011-1015
Author(s):  
Nian Yi Wang ◽  
Wei Lan Wang ◽  
Xiao Ran Guo

A new image fusion algorithm based on the nonsubsampled contourlet transform and the spiking cortical model is proposed in this paper. Considering the characteristics of the human visual system, two different fusion rules are used to fuse the low- and high-frequency sub-bands of the nonsubsampled contourlet transform, respectively. A new maximum-selection rule is defined to fuse the low-frequency coefficients, while spatial frequency drives the fusion rule for the high-frequency coefficients. Experimental results demonstrate the effectiveness of the proposed fusion method.
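Spatial frequency is a standard clarity measure, so it can be stated concretely; the sketch below uses the usual definition (RMS of row and column first differences), which the paper presumably follows.

```python
import numpy as np

def spatial_frequency(x):
    """Spatial frequency SF = sqrt(RF^2 + CF^2), where RF and CF are the
    RMS of horizontal and vertical first differences; larger SF indicates
    more detail, so the sub-band with larger SF would be selected."""
    x = x.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(x, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(x, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)
```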


2014 ◽  
Vol 530-531 ◽  
pp. 394-402
Author(s):  
Ze Tao Jiang ◽  
Li Wen Zhang ◽  
Le Zhou

At present, image fusion commonly suffers from blurred edges and sparse texture. To address this problem, this study proposes an image fusion method that combines the lifting wavelet with the median filter, adopting different fusion rules per band. For the low-frequency coefficients, the low-frequency scale coefficients are convolved and squared to enhance the edges of the fused image, and the detail information of the original images is then extracted by measuring regional characteristics. For the high-frequency coefficients, the high-frequency parts are first denoised by the median filter, and a fusion rule based on neighborhood spatial frequency with consistency verification is then applied to fuse the detail sub-images. Compared with the weighted-average and regional-energy methods, experimental results show that the proposed method retains the most edge and texture information. The method alleviates blurred edges and sparse texture to a certain degree and has strong practical value in image fusion.
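As a minimal sketch of the two building blocks named here, the following shows one level of the 1-D Haar lifting scheme and median filtering of a detail band; the paper's 2-D lifting wavelet and its specific filter sizes are not given, so these are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def haar_lift_1d(x):
    """One level of the 1-D Haar lifting scheme: split, predict, update
    (assumes, for brevity, an even number of samples)."""
    x = np.asarray(x, dtype=np.float64)
    even, odd = x[0::2], x[1::2]
    detail = odd - even            # predict step: odd samples from even ones
    approx = even + detail / 2.0   # update step: preserves the running mean
    return approx, detail

def denoise_detail(detail, size=3):
    """Median-filter a detail (high-frequency) band before fusion,
    as the method does to suppress noise in the detail sub-images."""
    return median_filter(detail, size=size)
```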


2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Tao Zhou ◽  
Huiling Lu ◽  
Fuyuan Hu ◽  
Hongbin Shi ◽  
Shi Qiu ◽  
...  

A new robust adaptive fusion method for dual-modality PET/CT medical images is proposed within the Piella framework. The algorithm consists of the following three steps. Firstly, the registered PET and CT images are decomposed using the nonsubsampled contourlet transform (NSCT). Secondly, in order to highlight the lesions in the low-frequency image, the low-frequency components are fused by a pulse-coupled neural network (PCNN), which has a higher sensitivity to featured areas with low intensities. For the high-frequency subbands, a Gaussian random matrix is used for compressed measurements, the histogram distance between every pair of corresponding sub-blocks of the high-frequency coefficients is employed as the match measure, and regional energy is used as the activity measure. The fusion factor d is then calculated from the match measure and the activity measure. The high-frequency measurements are fused according to the fusion factor, and the high-frequency fusion image is reconstructed from the fused measurements by the orthogonal matching pursuit algorithm. Thirdly, the final image is obtained through the inverse NSCT of the low-frequency fusion image and the reconstructed high-frequency fusion image. To validate the proposed algorithm, four comparative experiments were performed: a comparison with other image fusion algorithms, comparisons of different activity measures and of different match measures, and PET/CT fusion of lung cancer cases (20 groups). The experimental results showed that the proposed algorithm better retains and displays lesion information, and is superior to the other fusion algorithms in both subjective and objective evaluations.
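The compressed-sensing step (Gaussian measurement plus OMP reconstruction) can be sketched for a single sub-block; the sizes, sparsity, and the scikit-learn solver choice below are assumptions, and the fusion factor d would in the paper combine measurements of both modalities before reconstruction.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 64, 32, 5                 # coefficient length, measurements, sparsity
x = np.zeros(n)                      # a sparse block of high-frequency coefficients
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

phi = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian random measurement matrix
y = phi @ x                                      # compressed measurements
# In the paper, y_fused = d * y_pet + (1 - d) * y_ct would be formed here.

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(phi, y)                       # greedy sparse recovery from measurements
x_rec = omp.coef_                     # reconstructed high-frequency coefficients
```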


Multi-focus image fusion is the process of integrating pictures of the same scene, each with different targets in focus, into one image. Directly capturing an all-in-focus image of a 3D scene is challenging, so many multi-focus image fusion techniques generate it from several images focused at diverse depths. The two important factors in image fusion are activity-level measurement and the fusion rule. Activity-level measurement is usually implemented with local filters that extract high-frequency details, and elaborately designed rules then weigh the clarity information of the different source images to obtain a clarity/focus map. Earlier fusion algorithms thus extract high-frequency detail with neighborhood filters and adopt various fusion conventions to produce the fused image; however, the performance of the prevailing techniques is hardly adequate. Convolutional neural networks have recently been used to solve the problem of multi-focus image fusion. This paper proposes a two-stage boundary-aware deep network to address the issue: (1) a deep network is suggested for extracting the overall defocus information of the two source images; (2) Inception-ResNet-v2 is used to handle patches far from and close to the focused/defocused boundary. The results illustrate that the proposed approach yields an agreeable fused image that is superior to some advanced fusion algorithms in both visual and objective evaluations.
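Whatever network produces the clarity/focus map, the final blending step it feeds is generic and can be shown directly; this sketch is that last step only, not the paper's network.

```python
import numpy as np

def fuse_with_map(img_a, img_b, focus_map):
    """Given a focus/decision map in [0, 1] (1 where img_a is in focus),
    blend the two registered sources pixel-wise. A binary map yields a
    hard selection; a soft map feathers the focused/defocused boundary."""
    focus_map = focus_map.astype(np.float64)
    return focus_map * img_a + (1.0 - focus_map) * img_b
```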


2013 ◽  
Vol 427-429 ◽  
pp. 1589-1592
Author(s):  
Zhong Jie Xiao

This study proposes an improved NSCT fusion method based on the characteristics of infrared and visible light images and the fusion requirements. The paper improves the fusion rules for both the high-frequency and low-frequency coefficients: the low-frequency sub-band images adopt a pixel feature-energy weighted fusion rule, and the high-frequency sub-band images adopt a neighborhood-variance feature-information fusion rule. The fusion experiments show that the algorithm is robust and can effectively extract edge and texture information. The fused images have abundant scene information and clear targets, so the algorithm is an effective infrared and visible image fusion method.
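A plausible reading of the two rules, sketched below with assumed window sizes (the NSCT decomposition itself is omitted): energy-weighted averaging for the low-frequency band and neighborhood-variance selection for the high-frequency bands.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def energy_weighted_low(la, lb, win=3):
    """Low-frequency rule sketch: weight each source by its local
    pixel feature energy rather than selecting one outright."""
    ea = uniform_filter(la * la, win)
    eb = uniform_filter(lb * lb, win)
    w = ea / (ea + eb + 1e-12)
    return w * la + (1 - w) * lb

def variance_select_high(ha, hb, win=3):
    """High-frequency rule sketch: keep the coefficient whose
    neighborhood variance (local contrast) is larger."""
    va = uniform_filter(ha * ha, win) - uniform_filter(ha, win) ** 2
    vb = uniform_filter(hb * hb, win) - uniform_filter(hb, win) ** 2
    return np.where(va >= vb, ha, hb)
```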


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7813
Author(s):  
Xiaoxue Xing ◽  
Cong Luo ◽  
Jian Zhou ◽  
Minghan Yan ◽  
Cheng Liu ◽  
...  

To obtain more salient target information and more texture features, a new fusion method for infrared (IR) and visible (VIS) images combining regional energy (RE) and intuitionistic fuzzy sets (IFS) is proposed; the method proceeds in the following steps. Firstly, the IR and VIS images are decomposed into low- and high-frequency sub-bands by the non-subsampled shearlet transform (NSST). Secondly, an RE-based fusion rule is used to obtain the low-frequency pre-fusion image, which allows the important target information to be preserved in the resulting image. Based on the pre-fusion image, an IFS-based fusion rule is introduced to produce the final low-frequency image, which enables more important texture information to be transferred to the resulting image. Thirdly, the 'max-absolute' fusion rule is adopted to fuse the high-frequency sub-bands. Finally, the fused image is reconstructed by the inverse NSST. The TNO and RoadScene datasets are used to evaluate the proposed method. The simulation results demonstrate that the fused images of the proposed method have more salient targets, higher contrast, and more plentiful detail and local features. Qualitative and quantitative analyses show that the presented method is superior to nine other advanced fusion methods.
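Of the three rules, the 'max-absolute' rule for the high-frequency sub-bands is unambiguous and worth pinning down; the RE and IFS rules depend on details the abstract does not give, so only the third step is sketched.

```python
import numpy as np

def max_abs_rule(ha, hb):
    """'Max-absolute' rule: at each position keep the high-frequency
    coefficient with the larger magnitude (sign preserved)."""
    return np.where(np.abs(ha) >= np.abs(hb), ha, hb)
```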


Author(s):  
Cheng Zhao ◽  
Yongdong Huang

The rolling guidance filter (RGF) has the useful property of smoothing texture while preserving edges, and the non-subsampled shearlet transform (NSST) offers translation invariance and directional selectivity; based on these, a new infrared and visible image fusion method is proposed. Firstly, the rolling guidance filter is used to decompose the infrared and visible images into base and detail layers. Then, the NSST is applied to the base layers to obtain high-frequency and low-frequency coefficients. The low-frequency coefficients are fused using a visual saliency map as the fusion rule, while the high-frequency subband coefficients are fused using gradient domain guided filtering (GDGF) combined with the improved Laplacian sum. Finally, the detail layers are fused by a rule combining phase congruency and gradient domain guided filtering. As a result, the proposed method not only extracts the infrared targets but also fully preserves the background information of the visible images. Experimental results indicate that our method achieves superior performance compared with other fusion methods in both subjective and objective assessments.
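The RGF-based base/detail split can be approximated compactly. The sketch below is a simplified rolling guidance filter (iterated joint bilateral step guided by the previous iterate); the parameters are assumptions, intensities are assumed scaled to [0, 1], and np.roll gives wrap-around borders that a real implementation would handle properly.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rolling_guidance(img, sigma_s=3.0, sigma_r=0.05, iters=4, radius=4):
    """Simplified rolling guidance filter: start from a Gaussian-smoothed
    image and iteratively apply a joint bilateral step guided by the
    previous iterate, removing small textures while recovering edges.
    Returns the base layer and the detail layer (img - base)."""
    img = img.astype(np.float64)
    guide = gaussian_filter(img, sigma_s)       # iteration 1: pure smoothing
    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1)]
    for _ in range(iters - 1):
        num = np.zeros_like(img)
        den = np.zeros_like(img)
        for dy, dx in offsets:
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            g_shift = np.roll(np.roll(guide, dy, axis=0), dx, axis=1)
            # spatial weight (offset distance) times range weight (guide gap)
            w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                       - (g_shift - guide) ** 2 / (2 * sigma_r ** 2))
            num += w * shifted
            den += w
        guide = num / den
    base = guide
    return base, img - base
```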


Author(s):  
Yahui Zhu ◽  
Li Gao

To overcome the shortcomings of traditional image fusion algorithms based on multiscale transforms, an infrared and visible image fusion method based on compound decomposition and intuitionistic fuzzy sets is proposed. Firstly, the non-subsampled contourlet transform is used to decompose the source images into low-frequency and high-frequency coefficients. Then, the latent low-rank representation model is used to decompose the low-frequency coefficients into basic sub-bands and salient sub-bands. For the basic sub-bands, a weighted summation with the visual saliency map as the weighting coefficient is used as the fusion rule; for the salient sub-bands, the maximum absolute value is used as the fusion rule; the two results are superimposed to obtain the low-frequency fusion coefficients. Intuitionistic fuzzy entropy is used as the fusion rule to measure the texture and edge information of the high-frequency coefficients. Finally, the fused infrared and visible image is obtained with the inverse non-subsampled contourlet transform. Comparisons on the objective and subjective evaluation of several sets of fused images show that our method effectively preserves the edge information and rich content of the source images, producing better visual quality and objective scores than other image fusion methods.
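The composition of the low-frequency fusion coefficients from the two superimposed rules can be written down directly; the sketch below assumes the basic/salient sub-bands and visual saliency maps have already been computed by the earlier stages, which are not shown.

```python
import numpy as np

def fuse_low(base_a, base_b, sal_a, sal_b, vsm_a, vsm_b):
    """Low-frequency composition sketch: visual-saliency-weighted sum of
    the basic sub-bands, plus max-absolute selection of the salient
    sub-bands, superimposed. All inputs are equal-shape arrays produced
    by the (omitted) NSCT and latent low-rank decomposition stages."""
    w = vsm_a / (vsm_a + vsm_b + 1e-12)          # saliency-derived weights
    fused_base = w * base_a + (1 - w) * base_b   # rule 1: weighted summation
    fused_sal = np.where(np.abs(sal_a) >= np.abs(sal_b), sal_a, sal_b)  # rule 2
    return fused_base + fused_sal                # superimpose the two rules
```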

