GPR B-Scan Image Denoising via Multi-Scale Convolutional Autoencoder with Data Augmentation

Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1269
Author(s):  
Jiabin Luo ◽  
Wentai Lei ◽  
Feifei Hou ◽  
Chenghao Wang ◽  
Qiang Ren ◽  
...  

Ground-penetrating radar (GPR), as a non-invasive instrument, has been widely used in civil engineering. GPR B-scan images may contain random noise caused by the environment and the equipment hardware, which obscures the useful information. Many methods have been proposed to eliminate or suppress this random noise, but the existing methods perform unsatisfactorily when the image is severely contaminated. This paper proposes a multi-scale convolutional autoencoder (MCAE) to denoise GPR data. At the same time, to address the insufficiency of the training dataset, we designed a data augmentation strategy based on a Wasserstein generative adversarial network (WGAN) to enlarge the training set of the MCAE. Experimental results on simulated, generated, and field datasets demonstrate that the proposed scheme has promising denoising performance. In terms of three indexes, the peak signal-to-noise ratio (PSNR), the time cost, and the structural similarity index (SSIM), the proposed scheme suppresses random noise better than state-of-the-art competing methods (e.g., CAE, BM3D, WNNM).
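Several of the abstracts below report the PSNR metric. As a minimal illustration (not the authors' code), PSNR can be computed in a few lines of NumPy:

```python
import numpy as np

def psnr(clean, denoised, max_val=255.0):
    """Peak signal-to-noise ratio between two images (higher is better)."""
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a constant image corrupted at a single pixel.
clean = np.full((4, 4), 128.0)
noisy = clean.copy()
noisy[0, 0] += 10.0
print(round(psnr(clean, noisy), 2))  # → 40.17
```

The `max_val` parameter is the dynamic range of the image (255 for 8-bit data); the metric diverges to infinity as the denoised image approaches the clean reference.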

Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 319
Author(s):  
Yi Wang ◽  
Xiao Song ◽  
Guanghong Gong ◽  
Ni Li

Owing to the rapid development of deep learning and artificial intelligence techniques, denoising via neural networks has drawn great attention for its flexibility and excellent performance. However, in most convolutional denoising networks the convolution kernel is only one layer deep, and features of distinct scales are neglected. Moreover, the convolution operation treats all channels equally and ignores the relationships between them. In this paper, we propose a multi-scale feature extraction-based normalized attention neural network (MFENANN) for image denoising. In MFENANN, we define a multi-scale feature extraction block to extract and combine features at distinct scales of the noisy image. In addition, we propose a normalized attention network (NAN) to learn the relationships between channels, which smooths the optimization landscape and speeds up convergence when training an attention model. We further introduce the NAN into convolutional denoising, where each channel receives its own gain so that channels can play different roles in the subsequent convolution. To verify the effectiveness of the proposed MFENANN, we experimented on both grayscale and color image sets with noise levels ranging from 0 to 75. The results show that, compared with some state-of-the-art denoising methods, the images restored by MFENANN have larger peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) values and a better overall appearance.
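The idea of giving each channel its own gain can be sketched with a squeeze-and-excitation-style gating in NumPy. This is an illustrative simplification, not the paper's NAN; the weight shapes and the reduction ratio are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_attention(feat, w1, w2):
    """Channel gating: each channel of the feature map gets its own gain.

    feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) are the
    bottleneck weights of the small gating network.
    """
    squeeze = feat.mean(axis=(1, 2))              # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)        # ReLU bottleneck
    gains = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # sigmoid gains in (0, 1)
    return feat * gains[:, None, None]            # rescale each channel

feat = rng.standard_normal((8, 5, 5))
w1 = rng.standard_normal((2, 8)) * 0.1
w2 = rng.standard_normal((8, 2)) * 0.1
out = channel_attention(feat, w1, w2)
print(out.shape)  # → (8, 5, 5)
```

Because the sigmoid keeps every gain in (0, 1), the gating can only attenuate channels here; a trained network learns which channels to keep near full strength.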


2021 ◽  
Vol 1 (2(57)) ◽  
pp. 6-11
Author(s):  
Oleg Vasylchenkov ◽  
Igor Liberg ◽  
Mykhailo Mozhaiev ◽  
Dmytro Salnikov

The object of research is the adaptive switching weighted median image filter (ASWM) algorithm, one of the most effective algorithms for impulse noise suppression. The computational complexity and algorithmic features of this adaptive nonlinear filter make it impossible to implement a real-time version on modern PLD microcircuits. The most problematic part of the algorithm is the weight coefficient estimation cycle, which has no limit on the number of iterations and contains a large number of division operations; this prevents an efficient PLD implementation of the filter. In the course of the research, a software model of the filter in Python was used. The performance of the algorithm was assessed using the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) metrics. Modeling made it possible to determine empirically the number of iterations of the weight estimation cycle at different noise densities and to estimate the effect of artificially limiting the maximum number of iterations on filter performance. Regardless of the noise intensity, the algorithm performs fewer than 40 iterations of the estimation cycle. The operation of the algorithm was also simulated with different implementations of the division module; the paper considers the main options and selects the best trade-off between accuracy and hardware cost. Thus, a modified algorithm was proposed that does not have these disadvantages. Thanks to these modifications, it is possible to implement a pipelined ASWM image filter on modern PLDs. The filter was synthesized for the main families of Intel PLDs.
The implementation, which matches the original algorithm in SSIM and PSNR, requires fewer than 65,000 FPGA logic cells and filters monochrome FullHD images at 48 frames/s at a clock frequency of 100 MHz.
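A much-simplified switching median filter conveys the core idea behind ASWM-style filters: only pixels flagged as impulses are replaced, so clean pixels pass through untouched. This sketch uses a naive extreme-value detector rather than the adaptive weighted detection of the actual ASWM algorithm:

```python
import numpy as np

def switching_median(img, lo=0, hi=255):
    """Simplified switching median filter: pixels flagged as impulses
    (salt/pepper extremes) are replaced by their 3x3 neighbourhood median;
    all other pixels are left unchanged."""
    padded = np.pad(img, 1, mode="edge")
    out = img.copy()
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            if img[y, x] == lo or img[y, x] == hi:   # naive impulse detector
                window = padded[y:y + 3, x:x + 3]
                out[y, x] = np.median(window)
    return out

img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255  # a single salt impulse
filtered = switching_median(img)
print(filtered[2, 2])  # → 100, the neighbourhood median
```

The selectivity is what makes switching filters attractive for hardware: the expensive median computation runs only on detected impulses, and the detection itself is what ASWM makes adaptive via its iterative weight estimation.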


2017 ◽  
pp. 711-723
Author(s):  
Vikrant Bhateja ◽  
Abhinav Krishn ◽  
Himanshi Patel ◽  
Akanksha Sahu

Medical image fusion facilitates the retrieval of complementary information from medical images and has been employed widely for computer-aided diagnosis of life-threatening diseases. Fusion has been performed using various approaches, such as pyramidal, multi-resolution, and multi-scale methods. Each fusion approach captures only a particular feature (i.e., the information content or the structural properties of an image). Therefore, this paper presents a comparative analysis and evaluation of multi-modal medical image fusion methodologies employing the wavelet as a multi-resolution approach and the ridgelet as a multi-scale approach. The current work highlights the utility of these approaches according to the features required in the fused image. A Principal Component Analysis (PCA) based fusion algorithm has been employed in both the ridgelet and wavelet domains to minimise redundancy. Simulations have been performed on different sets of MR and CT-scan images taken from 'The Whole Brain Atlas'. The performance evaluation has been carried out using image quality parameters such as Entropy (E), Fusion Factor (FF), Structural Similarity Index (SSIM), and Edge Strength (QFAB). The outcome of this analysis highlights the trade-off between the retrieval of information content and morphological detail in the finally fused image in the wavelet and ridgelet domains.
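A common formulation of PCA-based fusion, sketched here in NumPy under the assumption of two co-registered single-channel sources (the paper applies it to subband coefficients in the wavelet/ridgelet domains, not to raw pixels as shown):

```python
import numpy as np

def pca_fusion(img1, img2):
    """PCA-weighted fusion of two co-registered source images: the fusion
    weights come from the leading eigenvector of the 2x2 covariance of the
    paired pixel values."""
    data = np.stack([img1.ravel(), img2.ravel()]).astype(np.float64)
    cov = np.cov(data)                       # 2x2 covariance of the sources
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    principal = eigvecs[:, -1]               # leading principal component
    w = principal / principal.sum()          # normalise to fusion weights
    return w[0] * img1 + w[1] * img2

rng = np.random.default_rng(1)
mr = rng.random((8, 8))   # stand-ins for the MR and CT sources
ct = rng.random((8, 8))
fused = pca_fusion(mr, ct)
print(fused.shape)  # → (8, 8)
```

The source with the larger variance (more information content) receives the larger weight, which is the redundancy-minimising behaviour the abstract refers to.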


Author(s):  
Shenghan Mei ◽  
Xiaochun Liu ◽  
Shuli Mei

Locust slice images exhibit strong self-similarity, piecewise smoothness, and nonlinear texture structure. The multi-scale interpolation operator is an effective tool for describing such structures, but it cannot overcome the influence of noise on images. Therefore, this research designed the Shannon–Cosine wavelet, which possesses the excellent properties of interpolation, smoothness, compact support, and normalization, and then constructed a multi-scale wavelet interpolation operator that can decompose and reconstruct images adaptively. Combining this operator with local filter operators (mean and median), a multi-scale Shannon–Cosine wavelet denoising algorithm based on cell filtering is constructed. The algorithm overcomes the disadvantage of the multi-scale interpolation wavelet, which is only suitable for describing smooth signals, and realizes multi-scale noise reduction of locust slice images. The experimental results show that the proposed method preserves all kinds of texture structure in the locust slice images. In the experiments, locust slice images with mixed Gaussian and salt–pepper noise are taken as examples to compare the proposed method with other typical denoising methods. The Peak Signal-to-Noise Ratio (PSNR) of the images denoised by the proposed method is greater by 27.3%, 24.6%, 2.94%, and 22.9% than that of the Wiener filter, the wavelet transform method, median filtering, and average filtering, respectively; the Structural Similarity Index (SSIM) is greater by 31.1%, 31.3%, 15.5%, and 10.2% than that of the same four methods. As the variance of the Gaussian white noise increases from 0.02 to 0.1, the PSNR and SSIM values obtained by the proposed method decrease by only 11.94% and 13.33%, respectively, far less than for the other four methods. This shows that the proposed method possesses stronger adaptability.
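The mixed Gaussian plus salt–pepper corruption used in these experiments is straightforward to reproduce; the sketch below assumes images normalized to [0, 1], with the noise parameters chosen for illustration:

```python
import numpy as np

def add_mixed_noise(img, sigma=0.05, sp_ratio=0.05, rng=None):
    """Corrupt an image in [0, 1] with Gaussian noise of std `sigma`, then
    overwrite a fraction `sp_ratio` of pixels with salt (1) or pepper (0)."""
    rng = rng or np.random.default_rng(0)
    noisy = np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)
    mask = rng.random(img.shape)
    noisy[mask < sp_ratio / 2] = 0.0          # pepper impulses
    noisy[mask > 1.0 - sp_ratio / 2] = 1.0    # salt impulses
    return noisy

clean = np.full((32, 32), 0.5)
noisy = add_mixed_noise(clean)
print(noisy.shape)  # → (32, 32)
```

Mixed noise of this kind is what motivates combining a wavelet-domain operator (good against Gaussian noise) with mean/median cell filtering (good against impulses).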


2020 ◽  
Vol 10 (1) ◽  
pp. 375 ◽  
Author(s):  
Zetao Jiang ◽  
Yongsong Huang ◽  
Lirui Hu

The super-resolution generative adversarial network (SRGAN) is a seminal work capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied by unpleasant artifacts. To further enhance the visual quality, we propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between low- and high-resolution images and is based on a depthwise separable convolution super-resolution generative adversarial network (DSCSRGAN). A new depthwise separable convolution dense block (DSC Dense Block) was designed for the generator network, which improved the ability to represent and extract image features while greatly reducing the total number of parameters. For the discriminator network, the batch normalization (BN) layer was discarded, which reduced artifacts. A frequency energy similarity loss function was designed to constrain the generator network to produce better super-resolution images. Experiments on several different datasets showed that the peak signal-to-noise ratio (PSNR) improved by more than 3 dB, the structural similarity index (SSIM) increased by 16%, and the total number of parameters was reduced to 42.8% of the original model's. Combining the objective indicators with subjective visual evaluation, the algorithm was shown to generate richer image details, clearer texture, and lower complexity.
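The parameter savings of depthwise separable convolution can be checked by simple counting. The sketch below compares the standard and separable parameter counts for an assumed 64-to-64-channel layer with 3x3 kernels (the concrete channel counts are illustrative, not taken from the paper):

```python
def conv_params(c_in, c_out, k):
    """Parameter count (ignoring bias) of a standard k x k convolution."""
    return c_in * c_out * k * k

def dsc_params(c_in, c_out, k):
    """Depthwise separable convolution: one k x k depthwise filter per input
    channel, followed by a 1 x 1 pointwise convolution mixing channels."""
    return c_in * k * k + c_in * c_out

# Example: a 64 -> 64 channel layer with 3 x 3 kernels.
std = conv_params(64, 64, 3)   # 64 * 64 * 9   = 36864
dsc = dsc_params(64, 64, 3)    # 64 * 9 + 4096 = 4672
print(std, dsc, round(dsc / std, 3))  # → 36864 4672 0.127
```

Splitting spatial filtering from channel mixing is what lets the DSC-based generator keep representational power while shrinking the model, consistent with the parameter reduction the abstract reports.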


Author(s):  
Liqiong Zhang ◽  
Min Li ◽  
Xiaohua Qiu

To overcome the “staircase effect” while quickly and effectively preserving structural information such as image edges and textures, we propose a compensating total variation image denoising model combining the L1 and L2 norms. A new compensating regular term is designed that can perform both anisotropic and isotropic diffusion during denoising, making up for insufficient diffusion in the total variation model. The algorithm first uses the local standard deviation to distinguish neighborhood types. Then, anisotropic diffusion based on the L1 norm protects edges in strong-edge regions, while anisotropic and isotropic diffusion coexist in smooth regions, so that weak textures are protected while the “staircase effect” is effectively overcome. Simulation experiments show that this method effectively improves the peak signal-to-noise ratio and obtains a higher structural similarity index and a shorter running time.
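For orientation, plain total variation denoising (the baseline this paper compensates) can be sketched as gradient descent on a smoothed ROF energy. This is the classic single-term TV model, not the proposed L1/L2 compensating model; step size and smoothing epsilon are illustrative choices:

```python
import numpy as np

def tv_denoise(noisy, lam=0.15, step=0.1, iters=100, eps=1e-6):
    """Gradient descent on the smoothed ROF energy
    0.5 * ||u - f||^2 + lam * TV(u): the L1-like TV term drives
    edge-preserving (anisotropic) diffusion, the quadratic data term keeps
    u close to the noisy input f."""
    u = noisy.copy()
    for _ in range(iters):
        gx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        gy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)      # smoothed gradient norm
        px, py = gx / mag, gy / mag                 # normalised gradient field
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - noisy) - lam * div)       # descend the energy
    return u

rng = np.random.default_rng(2)
clean = np.zeros((16, 16)); clean[:, 8:] = 1.0      # a sharp step edge
noisy = clean + rng.normal(0.0, 0.1, clean.shape)
denoised = tv_denoise(noisy)
print(denoised.shape)  # → (16, 16)
```

Because the diffusion here is driven purely by the TV term, flat regions tend toward piecewise-constant patches; that is exactly the staircase artifact the compensating L2 term is designed to counteract.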


2020 ◽  
Vol 11 ◽  
Author(s):  
Luning Bi ◽  
Guiping Hu

Traditionally, plant disease recognition has mainly been done visually by humans; it is often biased, time-consuming, and laborious. Machine learning methods based on plant leaf images have been proposed to improve the disease recognition process, and convolutional neural networks (CNNs) have been adopted and proven very effective. Despite the good classification accuracy achieved by CNNs, the issue of limited training data remains: the training dataset is often small because data collection and annotation require significant effort, and in this case CNN methods tend to overfit. In this paper, a Wasserstein generative adversarial network with gradient penalty (WGAN-GP) is combined with label smoothing regularization (LSR) to improve prediction accuracy and address overfitting under limited training data. Experiments show that the proposed WGAN-GP enhanced classification method can improve the overall classification accuracy of plant diseases by 24.4%, compared with 20.2% using classic data augmentation and 22% using synthetic samples without LSR.
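One common formulation of label smoothing regularization (there are minor variants; the uniform eps/K spread below is an assumption, not necessarily the paper's exact form) takes only a few lines:

```python
import numpy as np

def smooth_labels(labels, num_classes, eps=0.1):
    """Label smoothing regularization: replace each one-hot target with
    (1 - eps) on the true class plus eps / num_classes spread uniformly,
    which discourages over-confident predictions on scarce training data."""
    one_hot = np.eye(num_classes)[labels]
    return one_hot * (1.0 - eps) + eps / num_classes

targets = smooth_labels(np.array([0, 2]), num_classes=4, eps=0.1)
print(targets[0])  # → [0.925 0.025 0.025 0.025]
```

Each smoothed row still sums to 1, so it remains a valid target distribution for a cross-entropy loss; the softened targets are particularly useful when some training samples are GAN-generated and therefore noisier than real data.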


2019 ◽  
pp. 22-28
Author(s):  
Suzan J Obaiys ◽  
Hamid A Jalab ◽  
Rabha W Ibrahim

The use of local fractional calculus has increased in various image processing applications. This study proposes a new image denoising algorithm to remove Gaussian noise from digital images. The proposed algorithm is based on the local fractional integral of Chebyshev polynomials. The local fractional windows are constructed from four masks created for the x and y directions. In four directions, a convolution of the input image pixels with the local fractional mask window is performed. Visual perception and the peak signal-to-noise ratio (PSNR) together with the structural similarity index (SSIM) are used as image quality measures. The experiments showed that the resulting filtered images are better than those of the Gaussian filter. Keywords: local fractional; Chebyshev polynomials; image denoising
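The Chebyshev polynomial values underlying such mask constructions are easy to tabulate with NumPy's polynomial module; this sketch only evaluates the polynomials on a small stencil and does not reproduce the paper's fractional-integral weighting:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Evaluate the first Chebyshev polynomials T_0..T_3 on a 5-point stencil in
# [-1, 1]; values like these are one way to populate small directional
# filter windows.
x = np.linspace(-1.0, 1.0, 5)
T = np.stack([C.chebval(x, np.eye(4)[n]) for n in range(4)])
print(T[2])  # T_2(x) = 2x^2 - 1 → [ 1.  -0.5 -1.  -0.5  1. ]
```

Passing the n-th row of an identity matrix as the coefficient vector selects the single basis polynomial T_n, which is the standard way to evaluate Chebyshev basis functions with `chebval`.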


2019 ◽  
Vol 28 (1) ◽  
pp. 25-34
Author(s):  
Grzegorz Wieczorek ◽  
Izabella Antoniuk ◽  
Michał Kruk ◽  
Jarosław Kurek ◽  
Arkadiusz Orłowski ◽  
...  

In this paper we present a new segmentation method for the boost area that remains after removing a tumour using BCT (breast-conserving therapy). The selected area is the region that will later receive radiation treatment. Consequently, an inaccurate designation of this region can result in the treatment missing its target or irradiating healthy breast tissue that could otherwise be spared. Needless to say, exact indication of the boost area is an extremely important aspect of the entire medical procedure: a better definition can optimize the coverage of the target volume and, as a result, spare normal breast tissue. Precise definition of this area has the potential both to improve local control of the disease and to ensure a better cosmetic outcome for the patient. In our approach we use a U-net, along with the Keras and TensorFlow frameworks, to tailor a precise solution for indicating the boost area. During training we utilize a set of CT images, each with a contour assigned by an expert, and aim for a segmentation result as close to the given contour as possible. With a rather small initial dataset, we used data augmentation techniques to increase the number of training examples, and the final outcomes were evaluated according to their similarity to those produced by the experts, by calculating the mean square error and the structural similarity index (SSIM).
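For segmentation tasks, augmentation must transform the image and its expert contour identically. A minimal geometric-augmentation sketch (the specific transforms are an assumption; the paper does not list which augmentations were used):

```python
import numpy as np

def augment(image, mask):
    """Geometric augmentation for image-mask pairs: the eight symmetries of
    the square (4 rotations, each optionally flipped), applied identically
    to the CT slice and its expert contour so the pair stays aligned."""
    pairs = []
    for k in range(4):
        rot_img, rot_msk = np.rot90(image, k), np.rot90(mask, k)
        pairs.append((rot_img, rot_msk))
        pairs.append((np.fliplr(rot_img), np.fliplr(rot_msk)))
    return pairs

image = np.arange(16.0).reshape(4, 4)      # stand-in for a CT slice
mask = (image > 7).astype(np.uint8)        # stand-in for an expert contour
augmented = augment(image, mask)
print(len(augmented))  # → 8 training examples from one annotated slice
```

Applying the same transform to both arrays is the essential point: augmenting the image alone would silently corrupt the supervision signal.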

