Deep Convolutional Neural Network Based on Multi-Scale Feature Extraction for Image Denoising

Author(s):  
Jing Zhang ◽  
Liu Sang ◽  
Zekang Wan ◽  
Yuchen Wang ◽  
Yunsong Li
Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 319

With the rapid development of deep learning and artificial intelligence techniques, denoising via neural networks has drawn great attention for its flexibility and excellent performance. However, in most convolutional network denoising methods the convolution kernels operate at only a single scale, and features of distinct scales are neglected. Moreover, in the convolution operation all channels are treated equally, and the relationships among channels are not considered. In this paper, we propose a multi-scale feature extraction-based normalized attention neural network (MFENANN) for image denoising. In MFENANN, we define a multi-scale feature extraction block to extract and combine features at distinct scales of the noisy image. In addition, we propose a normalized attention network (NAN) to learn the relationships among channels, which smooths the optimization landscape and speeds up convergence when training an attention model. We introduce the NAN into convolutional network denoising so that each channel receives its own gain and channels can play different roles in the subsequent convolution. To verify the effectiveness of the proposed MFENANN, we ran experiments on both grayscale and color image sets with noise levels ranging from 0 to 75. The experimental results show that, compared with several state-of-the-art denoising methods, MFENANN restores images with higher peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) values and a better overall appearance.
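The abstract does not give the NAN's exact formulation, but the idea of assigning each channel a gain from a normalized channel descriptor can be sketched in the style of generic channel attention (squeeze, normalize, sigmoid excitation). All function names here are hypothetical, and the normalization step stands in for whatever the paper's NAN actually computes:

```python
# Illustrative sketch only: per-channel gains from normalized channel
# descriptors, NOT the paper's actual NAN implementation.
import math

def channel_gains(feature_maps):
    """feature_maps: list of 2-D lists, one per channel.
    Returns one multiplicative gain per channel."""
    # Squeeze: global average pooling gives one descriptor per channel.
    means = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
             for ch in feature_maps]
    # Normalize descriptors across channels (zero mean, unit variance);
    # in the paper this kind of normalization smooths the optimization
    # landscape and speeds up convergence.
    mu = sum(means) / len(means)
    var = sum((m - mu) ** 2 for m in means) / len(means)
    std = math.sqrt(var) + 1e-8
    normed = [(m - mu) / std for m in means]
    # Excitation: a sigmoid maps each descriptor to a gain in (0, 1).
    return [1.0 / (1.0 + math.exp(-x)) for x in normed]

def apply_gains(feature_maps, gains):
    """Scale every pixel of each channel by that channel's gain."""
    return [[[v * g for v in row] for row in ch]
            for ch, g in zip(feature_maps, gains)]
```

With this shape, a high-activation channel ends up with a larger gain than a low-activation one, so channels contribute unequally to the next convolution.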


2021 ◽  
pp. 1-10
Author(s):  
Chien-Cheng Lee ◽  
Zhongjian Gao ◽  
Xiu-Chi Huang

This paper proposes a Wi-Fi-based indoor human detection system using a deep convolutional neural network. The system detects different human states in various situations, covering different environments and propagation paths. Its main improvement is that no overhead cameras or mounted sensors are required. The system captures useful amplitude information from the channel state information (CSI) and converts it into an image-like two-dimensional matrix. This matrix is then used as the input to a deep convolutional neural network (CNN) to distinguish human states. In this work, a deep residual network (ResNet) architecture performs human state classification with hierarchical topological feature extraction. Several combinations of datasets for different environments and propagation paths are used in this study. ResNet's powerful inference simplifies feature extraction and improves the accuracy of human state classification. The experimental results show that a fine-tuned ResNet-18 model performs well in indoor human detection across three states: no person present, a person still, and a person moving. Compared with traditional machine learning using handcrafted features, this method is simple and effective.
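The preprocessing step described here, turning CSI amplitudes into an image-like two-dimensional matrix, could look roughly like the following. This is a minimal sketch under the assumption of one amplitude vector per packet (rows) and one value per subcarrier (columns); the paper's actual scaling and layout may differ:

```python
# Hypothetical sketch: pack per-packet CSI amplitude vectors into an
# image-like 2-D matrix scaled to the 0..255 range expected by a CNN.
def csi_to_matrix(csi_amplitudes):
    """csi_amplitudes: list of amplitude vectors, one row per packet,
    one column per subcarrier. Returns a 2-D matrix of ints in 0..255."""
    lo = min(min(row) for row in csi_amplitudes)
    hi = max(max(row) for row in csi_amplitudes)
    span = (hi - lo) or 1.0  # avoid division by zero on constant input
    # Min-max scale every amplitude to a pixel intensity.
    return [[round(255 * (v - lo) / span) for v in row]
            for row in csi_amplitudes]
```

The resulting matrix can then be treated exactly like a grayscale image by a ResNet-style classifier.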


2021 ◽  
Vol 21 (01) ◽  
pp. 2150005
Author(s):  
ARUN T NAIR ◽  
K. MUTHUVEL

Nowadays, retinal image analysis remains one of the more challenging areas of study. Numerous retinal diseases can be recognized by analyzing the variations taking place in the retina; however, the main disadvantage of existing studies is their limited recognition accuracy. The proposed framework includes four phases: (i) blood vessel segmentation, (ii) feature extraction, (iii) optimal feature selection, and (iv) classification. Initially, the input fundus image undergoes blood vessel segmentation, from which two binary thresholded images (one from a High-Pass Filter (HPF) and the other from top-hat reconstruction) are acquired. These two images are compared: the areas common to both are taken as the major vessels, and the leftover regions are fused to form a vessel sub-image. The vessel sub-images are classified with a Gaussian Mixture Model (GMM) classifier, and the result is combined with the major vessels to form the segmented blood vessels. The segmented images are then subjected to feature extraction, where features such as the proposed Local Binary Pattern (LBP), Gray-Level Co-occurrence Matrix (GLCM), and Gray-Level Run Length Matrix (GLRM) features are extracted. Since the curse of dimensionality is a major issue, it is important to select the appropriate features from those extracted for classification. In this paper, a new improved optimization algorithm, Moth Flame with New Distance Formulation (MF-NDF), is introduced for selecting the optimal features. Finally, the selected optimal features are fed to a Deep Convolutional Neural Network (DCNN) model for classification. Furthermore, to make the diagnosis precise, the weights of the DCNN are optimally tuned by the same optimization algorithm. The performance of the proposed algorithm is compared against conventional algorithms in terms of positive and negative measures.
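The mask-combination step in the segmentation phase, keeping pixels common to both thresholded images as major vessels and collecting the remaining pixels into a sub-image, can be sketched with simple bitwise operations. This is an illustrative reading of the abstract (the function name is hypothetical, and the sub-image here is just the symmetric difference of the two masks; the paper's GMM classification of that sub-image is not reproduced):

```python
# Illustrative sketch only: combine two binary vessel masks as the
# abstract describes, not the authors' actual implementation.
def combine_thresholded(hpf_mask, tophat_mask):
    """Both masks: 2-D lists of 0/1 with identical shape.
    Pixels present in BOTH masks are taken as major vessels; pixels in
    exactly one mask form the vessel sub-image that the paper later
    classifies with a GMM before merging back into the final result."""
    major = [[a & b for a, b in zip(r1, r2)]
             for r1, r2 in zip(hpf_mask, tophat_mask)]
    sub = [[a ^ b for a, b in zip(r1, r2)]
           for r1, r2 in zip(hpf_mask, tophat_mask)]
    return major, sub
```

Every foreground pixel of the two inputs ends up in exactly one of the two outputs, which matches the abstract's split into "major vessels" and "vessel sub-image".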

