Boosting of Denoising Effect with Fusion Strategy

2020 ◽  
Vol 10 (11) ◽  
pp. 3857
Author(s):  
Fangjia Yang ◽  
Shaoping Xu ◽  
Chongxi Li

Image denoising, a fundamental step in image processing, has been studied widely for several decades. Denoising methods can be classified as internal or external, depending on whether they exploit an internal prior or external noisy-clean image priors to reconstruct the latent image. Each of these two kinds of methods has its own merits and demerits, and improving on existing methods within a single denoising model remains a challenge. In this paper, we propose a method for boosting the denoising effect via an image fusion strategy. This study aims to boost the performance of two typical denoising methods: nonlocally centralized sparse representation (NCSR) and residual learning of deep CNN (DnCNN). These two methods have complementary strengths and are chosen to represent internal and external denoising methods, respectively. The boosting process is formulated as an adaptive weight-based image fusion problem that preserves the details of the initial denoised images output by NCSR and DnCNN. Specifically, we design two kinds of weights that adaptively reflect the influence of pixel intensity changes and of the global gradient of the initial denoised images; a linear combination of these two kinds of weights determines the final weight. The initial denoised images are integrated into the fusion framework to produce our denoising results. Extensive experiments show that the proposed method significantly outperforms both NCSR and DnCNN, quantitatively and visually, when they are considered as individual methods; it likewise outperforms several other state-of-the-art denoising methods.
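A minimal sketch of the adaptive weight-based fusion idea described above, in Python with NumPy/SciPy. The two cues used here (absolute deviation from a Gaussian-smoothed copy as the intensity-change weight, and Sobel gradient magnitude as the gradient weight) and the mixing parameter `alpha` are illustrative assumptions; the abstract does not give the exact weight formulas.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def fusion_weights(img, alpha=0.5, sigma=2.0):
    """Combine an intensity-change cue and a gradient cue into one weight map."""
    # Intensity-change cue: deviation from a smoothed version (assumed proxy).
    intensity_change = np.abs(img - gaussian_filter(img, sigma))
    # Gradient cue: Sobel gradient magnitude.
    grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    # Linear combination of the two cues gives the final per-pixel weight.
    return alpha * intensity_change + (1 - alpha) * grad

def fuse(denoised_a, denoised_b, eps=1e-8):
    """Fuse two pre-denoised images (e.g., NCSR and DnCNN outputs) per pixel."""
    wa = fusion_weights(denoised_a)
    wb = fusion_weights(denoised_b)
    return (wa * denoised_a + wb * denoised_b) / (wa + wb + eps)
```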

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Chuangeng Tian ◽  
Lu Tang ◽  
Xiao Li ◽  
Kaili Liu ◽  
Jian Wang

This paper proposes a perceptual medical image fusion framework based on morphological component analysis combining convolutional sparsity and a pulse-coupled neural network, called MCA-CS-PCNN for short. Source images are first decomposed into cartoon components and texture components by morphological component analysis, and a convolutional sparse representation of the cartoon and texture layers is computed with prelearned dictionaries. The convolutional sparsity then serves as a stimulus that drives the PCNN's processing of the cartoon and texture layers. Finally, the fused medical image is computed by combining the fused cartoon and texture layers. Experimental results verify that the MCA-CS-PCNN model is superior to state-of-the-art fusion strategies.
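A rough sketch of the layer-wise pipeline described above. Both the convolutional sparse coding and the PCNN firing stage are replaced here by simple stand-ins (a Gaussian blur for the cartoon/texture split, and local energy as the fusion activity measure), so this only illustrates the decompose-fuse-recombine structure, not the actual MCA-CS-PCNN model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, sigma=3.0):
    """Crude cartoon/texture split standing in for morphological component analysis."""
    cartoon = gaussian_filter(img, sigma)
    texture = img - cartoon
    return cartoon, texture

def fuse_layers(layer_a, layer_b, win=2.0):
    """Choose-max rule driven by local energy, standing in for the
    convolutional-sparsity-stimulated PCNN firing maps."""
    act_a = gaussian_filter(layer_a ** 2, win)
    act_b = gaussian_filter(layer_b ** 2, win)
    return np.where(act_a >= act_b, layer_a, layer_b)

def mca_style_fuse(src_a, src_b):
    """Fuse cartoon and texture layers separately, then recombine."""
    ca, ta = decompose(src_a)
    cb, tb = decompose(src_b)
    return fuse_layers(ca, cb) + fuse_layers(ta, tb)
```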


Image fusion of multi-focused images using the Frequency Partition Discrete Cosine Transform (FP-DCT) with a Modified Principal Component Analysis (MPCA) technique is performed and reported in this paper. Decomposition at fixed levels has been a critical limitation of earlier image fusion techniques; the frequency partitioning approach used in this study instead selects the decomposition levels based on pixel intensity and clarity. This paper also presents the modified PCA technique, which provides dimensionality reduction. A wide range of quality evaluation metrics was computed to compare fusion performance on five images. Alternative techniques, namely PCA, wavelet transform with PCA, Multiresolution Singular Value Decomposition (MSVD) with PCA, Multiresolution DCT (MRDCT) with PCA, and Frequency Partition DCT (FP-DCT) with PCA, were evaluated for comparison with the proposed FP-DCT with MPCA technique. Images obtained by the proposed fusion method show enhanced visual quality, negligible information loss, and fewer discontinuities compared to the other state-of-the-art methods.
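A minimal sketch of DCT-domain fusion with a PCA-derived weighting, under stated assumptions: the abstract does not specify the frequency partition rule or the "Modified" PCA details, so this uses a simple triangular low-frequency mask with an assumed `cutoff`, and the classic PCA fusion rule (weights from the leading eigenvector of the 2x2 covariance of the two inputs) for the low band, with max-absolute selection for the high band.

```python
import numpy as np
from scipy.fft import dctn, idctn

def pca_weights(a, b):
    """Classic PCA fusion rule: weights from the leading eigenvector of the
    2x2 covariance of the two (flattened) source images."""
    cov = np.cov(np.stack([a.ravel(), b.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])
    return v / v.sum()

def fpdct_fuse(img_a, img_b, cutoff=0.15):
    """Fuse two registered multi-focus images in the DCT domain."""
    A = dctn(img_a, norm='ortho')
    B = dctn(img_b, norm='ortho')
    h, w = A.shape
    yy, xx = np.ogrid[:h, :w]
    low = (yy / h + xx / w) < cutoff  # assumed triangular low-frequency partition
    wa, wb = pca_weights(img_a, img_b)
    fused = np.where(low,
                     wa * A + wb * B,                          # PCA-weighted low band
                     np.where(np.abs(A) >= np.abs(B), A, B))   # max-abs high band
    return idctn(fused, norm='ortho')
```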


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Lei Yan ◽  
Qun Hao ◽  
Jie Cao ◽  
Rizvi Saad ◽  
Kun Li ◽  
...  

Image fusion integrates information from multiple images (of the same scene) to generate a (more informative) composite image suitable for human and computer vision perception. Multiscale decomposition is one of the most common fusion approaches. In this study, a new fusion framework based on the octave Gaussian pyramid principle is proposed. In comparison with conventional multiscale decomposition, the proposed octave Gaussian pyramid framework retrieves more information by decomposing an image into two scale spaces (octave and interval spaces). Unlike traditional multiscale decomposition, which yields one set of detail and base layers, the proposed method decomposes an image into multiple sets of detail and base layers, efficiently retaining high- and low-frequency information from the original image. Qualitative and quantitative comparisons with five existing methods (on publicly available image databases) demonstrate that the proposed method has better visual effects and scores highest in objective evaluation.
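A minimal sketch of octave/interval Gaussian decomposition and a max-abs detail fusion rule, assuming two registered grayscale inputs. For simplicity this sketch keeps full resolution across octaves (no downsampling) and doubles the blur scale per octave; the octave/interval counts and `sigma0` are illustrative assumptions. The decomposition telescopes, so the image equals the sum of its detail layers plus the base.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def octave_decompose(img, n_octaves=3, n_intervals=3, sigma0=1.0):
    """Decompose into multiple sets of detail layers plus one base layer,
    mimicking the octave/interval scale spaces described above."""
    details, current = [], img.astype(float)
    sigma = sigma0
    for _ in range(n_octaves):
        prev = current
        for i in range(1, n_intervals + 1):
            blurred = gaussian_filter(current, sigma * 2 ** (i / n_intervals))
            details.append(prev - blurred)   # difference of adjacent scales
            prev = blurred
        current = prev                       # most-blurred level feeds next octave
        sigma *= 2
    return details, current                  # detail layers + final base layer

def fuse(img_a, img_b):
    """Max-abs rule for detail layers, averaging for the base layer."""
    da, base_a = octave_decompose(img_a)
    db, base_b = octave_decompose(img_b)
    fused = [np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(da, db)]
    return sum(fused) + 0.5 * (base_a + base_b)
```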


2017 ◽  
Vol 9 (4) ◽  
pp. 61 ◽  
Author(s):  
Guanqiu Qi ◽  
Jinchuan Wang ◽  
Qiong Zhang ◽  
Fancheng Zeng ◽  
Zhiqin Zhu

Author(s):  
M. Rothermel ◽  
N. Haala ◽  
D. Fritsch

Due to their good scalability, systems for image-based dense surface reconstruction often employ stereo or multi-baseline stereo methods. These types of algorithms represent the scene by a set of depth or disparity maps, which eventually have to be fused to extract a consistent, non-redundant surface representation. Generally, the single depth observations across the maps vary in quality. Within the fusion process, not only preservation of precision and detail but also density and robustness with respect to outliers are desirable. Since the individual depth observations are prone to outliers, in this article we propose a local median-based algorithm for the fusion of depth maps, eventually representing the scene as a set of oriented points. With scalability in mind, points induced by each of the available depth maps are streamed to cubic tiles, which can then be filtered in parallel. Arguing that the triangulation uncertainty is larger in the direction of image rays, we define these rays as the main filter direction. Within an additional strategy, we define the surface normals as the principal direction for median filtering/integration. The presented approach is straightforward to implement, since it employs standard octree and kd-tree structures enhanced by nearest-neighbor queries optimized for cylindrical neighborhoods. We show that the presented method, in combination with the MVS of Rothermel et al. (2012), produces surfaces comparable to the results of the Middlebury MVS benchmark and compares favorably to a state-of-the-art algorithm on the Fountain dataset (Strecha et al., 2008). Moreover, we demonstrate its capability of depth map fusion for city-scale reconstructions derived from large-frame airborne imagery.
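A minimal sketch of the robustness argument above: per-pixel median fusion of redundant depth observations, with a deviation-based outlier rejection pass. The real method filters along viewing rays or surface normals over streamed point tiles; this sketch assumes the maps are already resampled onto a common grid, with invalid pixels marked as NaN, and the relative deviation threshold `max_dev` is an assumption.

```python
import numpy as np

def fuse_depth_maps(depth_stack, max_dev=0.05):
    """Median-fuse a stack of registered depth maps, shape (n_maps, H, W).

    Observations far from the per-pixel median are rejected as outliers
    before re-fusing, echoing the median's robustness to gross errors.
    """
    med = np.nanmedian(depth_stack, axis=0)          # robust initial estimate
    dev = np.abs(depth_stack - med)
    cleaned = np.where(dev <= max_dev * med, depth_stack, np.nan)
    return np.nanmedian(cleaned, axis=0)             # fused depth map (H, W)
```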


2020 ◽  
Vol 34 (07) ◽  
pp. 11061-11068 ◽  
Author(s):  
Weiting Huang ◽  
Pengfei Ren ◽  
Jingyu Wang ◽  
Qi Qi ◽  
Haifeng Sun

In this paper, we propose an adaptive weighting regression (AWR) method to leverage the advantages of both detection-based and regression-based methods. Hand joint coordinates are estimated as a discrete integration over all pixels of a dense representation, guided by adaptive weight maps. This learnable aggregation process introduces both dense and joint supervision, allows end-to-end training, and brings adaptability to the weight maps, making the network more accurate and robust. Comprehensive exploration experiments validate the effectiveness and generality of AWR under various experimental settings, especially its usefulness for different types of dense representation and input modality. Our method outperforms other state-of-the-art methods on four publicly available datasets: NYU, ICVL, MSRA, and HANDS 2017.
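A minimal NumPy sketch of the aggregation step described above: a joint coordinate recovered as a weight-guided discrete integration (softmax-weighted average) over pixel positions. In the paper this runs inside a trainable network; here plain NumPy stands in, and the softmax normalization is an assumption about how the weight map is turned into a distribution.

```python
import numpy as np

def aggregate_joint(weight_map, coord_grid):
    """Discrete integration of a dense representation guided by a weight map:
    the joint coordinate is the softmax-weighted average pixel position,
    which is differentiable and hence supports end-to-end training."""
    w = np.exp(weight_map - weight_map.max())   # stable softmax over all pixels
    w /= w.sum()
    # coord_grid: (H, W, 2) array of per-pixel (x, y) positions
    return (w[..., None] * coord_grid).sum(axis=(0, 1))

# Toy usage on a random weight map.
H, W = 64, 64
ys, xs = np.mgrid[0:H, 0:W]
coords = np.stack([xs, ys], axis=-1).astype(float)
joint_xy = aggregate_joint(np.random.randn(H, W), coords)  # -> (x, y) estimate
```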


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 98290-98305 ◽  
Author(s):  
Li Yin ◽  
Mingyao Zheng ◽  
Guanqiu Qi ◽  
Zhiqin Zhu ◽  
Fu Jin ◽  
...  

2020 ◽  
Vol 2020 ◽  
pp. 1-7
Author(s):  
Ahmed Jawad A. AlBdairi ◽  
Zhu Xiao ◽  
Mohammed Alghaili

Interest in face recognition studies has grown rapidly in the last decade. One of the most important problems in face recognition is identifying a person's ethnicity. In this study, a new deep convolutional neural network is designed to create a model that can recognize people's ethnicity from their facial features. The new ethnicity dataset consists of 3141 images collected from three different nationalities. To the best of our knowledge, this is the first image dataset collected for ethnicity recognition, and it will be made available to the research community. The new model was compared with two state-of-the-art models, VGG and Inception V3, and the validation accuracy was calculated for each convolutional neural network. The generated models were tested on several images of people, and the results show that the best performance was achieved by our model, with a verification accuracy of 96.9%.
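A minimal PyTorch sketch of a three-class face classifier of the kind described above. The abstract does not specify the architecture, so every layer size here is an illustrative assumption, not the authors' network.

```python
import torch
import torch.nn as nn

class EthnicityNet(nn.Module):
    """Small CNN with three output classes, one per nationality in the dataset."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling -> (N, 128, 1, 1)
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One RGB face crop -> class logits over the three nationalities.
logits = EthnicityNet()(torch.randn(1, 3, 224, 224))
```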

