Recently Published Documents: "noisy observation"





2021 ◽  
pp. 1-17
Joshua L. Vincent ◽  
Ramon Manzorro ◽  
Sreyas Mohan ◽  
Binh Tang ◽  
Dev Y. Sheth ◽  

A deep convolutional neural network has been developed to denoise atomic-resolution transmission electron microscope image datasets of nanoparticles acquired using direct electron counting detectors, for applications where the image signal is severely limited by shot noise. The network was applied to a model system of CeO2-supported Pt nanoparticles. We leverage multislice image simulations to generate a large and flexible dataset for training the network. The proposed network outperforms state-of-the-art denoising methods on both simulated and experimental test data. Factors contributing to the performance are identified, including (a) the geometry of the images used during training and (b) the size of the network's receptive field. Through a gradient-based analysis, we investigate the mechanisms learned by the network to denoise experimental images. This shows that the network exploits both extended and local information in the noisy measurements, for example, by adapting its filtering approach when it encounters atomic-level defects at the nanoparticle surface. Extensive analysis has been done to characterize the network's ability to correctly predict the exact atomic structure at the nanoparticle surface. Finally, we develop an approach based on the log-likelihood ratio test that provides a quantitative measure of the agreement between the noisy observation and the atomic-level structure in the network-denoised image.
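The closing log-likelihood ratio idea lends itself to a compact illustration. The sketch below is our own construction rather than the authors' exact statistic: it scores the agreement between a Poisson (shot-noise) observation and a candidate denoised image via the Poisson deviance, with an arbitrary 8x8 image and arbitrary intensities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "denoised" image: expected electron counts per pixel.
clean = np.full((8, 8), 5.0)

# Shot-noise-limited observation: Poisson counts around the clean image.
noisy = rng.poisson(clean).astype(float)

def poisson_deviance(obs, model, eps=1e-12):
    """Twice the log-likelihood ratio of the saturated model against a
    candidate denoised image under Poisson noise; smaller = better fit."""
    obs = np.asarray(obs, dtype=float)
    model = np.asarray(model, dtype=float)
    term = obs * np.log((obs + eps) / (model + eps))
    return 2.0 * float(np.sum(term - (obs - model)))

good = poisson_deviance(noisy, clean)        # structure that generated the data
bad = poisson_deviance(noisy, 3.0 * clean)   # wrong structure (too bright)
```

A smaller deviance indicates the noisy counts are statistically consistent with the proposed atomic-level structure, which is the role the test plays in the paper.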

Oryiema Robert ◽  
David Angwenyi ◽  
Kevin Midenyo

There are several functional forms of non-linear dynamical filters. Extended Kalman filters are algorithms used to estimate unknown quantities of internal dynamical systems from a sequence of noisy observations measured over a period of time. This filtering process becomes computationally expensive for high-dimensional data, which degrades filter performance in real time: integrating the equation of evolution of the covariances is extremely costly when the dimension of the problem is large, as it is in numerical weather prediction. This study develops a new filter, the First order Extended Ensemble Filter (FoEEF), with a new extended innovation process to improve on the measurement and estimate the state of high-dimensional systems. We propose to estimate the covariances empirically, which makes the filter amenable to large-dimensional models. The new filter is derived from stochastic state-space models, and its performance is tested on the Lorenz 63 system of ordinary differential equations using Matlab. The performance of the newly developed filter is then compared with that of three other filters: the Bootstrap Particle Filter (BPF), the First order Extended Kalman-Bucy Filter (FoEKBF), and the Second order Extended Kalman-Bucy Filter (SoEKBF). The performance of the FoEEF improves as the ensemble size grows; with as few as 40 ensemble members, the FoEEF performs as well as the FoEKBF and SoEKBF. This shows that the proposed filter can perform well in high-dimensional state-space models.
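The core idea of replacing the covariance-evolution equation with empirical ensemble covariances can be sketched with a standard stochastic ensemble Kalman filter on Lorenz 63. This is not the FoEEF derivation itself (the paper works with an extended innovation process derived from stochastic state-space models); the step size, noise levels, and forward-Euler integrator below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz 63 system."""
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def step(x, dt=0.01):
    """One forward-Euler step (crude, but fine for a sketch)."""
    return x + dt * lorenz63(x)

n_ens, n_steps, obs_std = 40, 200, 1.0
H = np.eye(3)                              # observe the full state, noisily
R = obs_std ** 2 * np.eye(3)

truth = np.array([1.0, 1.0, 1.0])
ensemble = truth + rng.normal(0.0, 2.0, size=(n_ens, 3))

errors = []
for _ in range(n_steps):
    truth = step(truth)
    ensemble = np.array([step(m) for m in ensemble])
    y = H @ truth + rng.normal(0.0, obs_std, 3)    # noisy observation

    # Covariance estimated empirically from the ensemble -- no
    # covariance-evolution ODE has to be integrated.
    mean = ensemble.mean(axis=0)
    A = ensemble - mean
    P = A.T @ A / (n_ens - 1)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    for i in range(n_ens):                 # perturbed-observation update
        yi = y + rng.normal(0.0, obs_std, 3)
        ensemble[i] += K @ (yi - H @ ensemble[i])
    errors.append(np.linalg.norm(ensemble.mean(axis=0) - truth))

rmse = float(np.mean(errors[-50:]))
```

The empirical covariance `P` costs only an outer product over the ensemble, which is what makes this family of filters tractable when the state dimension is huge.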

Geophysics ◽  
2021 ◽  
pp. 1-96
Yapo Abolé Serge Innocent Oboué ◽  
Yangkang Chen

Noise and missing traces usually degrade the quality of multidimensional seismic data. It is, therefore, necessary to estimate the useful signal from its noisy observation. The damped rank-reduction (DRR) method has emerged as an effective way to reconstruct the useful signal matrix from its noisy observation. However, the higher the noise level and the ratio of missing traces, the weaker the DRR operator becomes; consequently, the estimated low-rank signal matrix includes a non-negligible amount of residual noise that affects subsequent processing steps. This paper focuses on the problem of estimating a low-rank signal matrix from its noisy observation. To develop the new algorithm, we formulate an improved proximity function by combining the moving-average filter with the arctangent penalty function. We first apply the proximity function to the level-4 block Hankel matrix before the singular value decomposition (SVD), and then to the singular values during the damped truncated SVD process. The relationship between the new proximity function and the DRR framework leads to an optimization problem with better recovery performance. The proposed algorithm aims to produce an enhanced rank-reduction operator that estimates the useful signal matrix with higher quality. Experiments on synthetic and real 5-D seismic data compare the effectiveness of our approach with that of the DRR approach. The proposed approach performs better: the estimated low-rank signal matrix is cleaner and contains fewer artifacts than that of the DRR algorithm.
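The rank-reduction backbone of the DRR method can be illustrated in one dimension. The sketch below applies a plain (undamped) Hankel-matrix truncated SVD to a synthetic two-sinusoid signal; it omits the paper's level-4 block Hankel structure, damping, and proximity function, and only shows why truncating singular values suppresses noise.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 128
t = np.arange(n)
clean = np.sin(2 * np.pi * 0.05 * t) + 0.5 * np.sin(2 * np.pi * 0.12 * t)
noisy = clean + 0.5 * rng.normal(size=n)

def hankel_denoise(x, rank):
    """Embed the signal in a Hankel matrix, keep only the leading
    singular values, then average anti-diagonals back to a signal."""
    L = len(x) // 2
    K = len(x) - L + 1
    H = np.array([x[i:i + K] for i in range(L)])        # L x K Hankel
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]           # rank-reduced
    out = np.zeros(len(x))
    cnt = np.zeros(len(x))
    for i in range(L):                                  # inverse Hankel
        out[i:i + K] += Hr[i]
        cnt[i:i + K] += 1.0
    return out / cnt

denoised = hankel_denoise(noisy, rank=4)   # two real sinusoids -> rank 4
err_noisy = float(np.linalg.norm(noisy - clean))
err_denoised = float(np.linalg.norm(denoised - clean))
```

A noiseless sum of two real sinusoids yields a Hankel matrix of rank 4, so everything beyond the fourth singular value is (mostly) noise; the paper's contribution is in how the singular values and the Hankel matrix are shrunk before and during this truncation.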

2021 ◽  
Vol 11 (7) ◽  
pp. 2976
Fengfan Qin ◽  
Hui Feng ◽  
Tao Yang ◽  
Bo Hu

Consider the problem of detecting anomalies among multiple stochastic processes. Each anomaly incurs a cost per unit time until it is identified. Due to resource constraints, the decision-maker can select only one process to probe and obtain a noisy observation. Each observation, and each switch between processes, incurs a time delay. Our objective is to find a sequential inference strategy that minimizes the expected cumulative cost incurred by all the anomalies during the entire detection procedure, subject to error constraints. We develop a deterministic policy to solve the problem within the framework of the active hypothesis testing model. We prove that the proposed algorithm is asymptotically optimal in terms of minimizing the expected cumulative cost when the ratio of the single-switching delay to the single-observation delay is much smaller than the declaration threshold, and is order-optimal when the ratio is comparable to the threshold. Not only is the proposed policy optimal in the asymptotic regime; numerical simulations also demonstrate its excellent performance in the finite regime.
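A much simpler policy than the paper's conveys the flavor of sequential probing under noisy observations: accumulate a log-likelihood ratio per process and declare once it crosses a threshold. The Gaussian observation model, the probe-the-weakest-evidence rule, and the threshold value below are our own illustrative assumptions, and the sketch ignores switching delays entirely.

```python
import numpy as np

rng = np.random.default_rng(3)

n_proc = 5
anomalous = {2}                     # hidden ground truth
mu0, mu1, obs_std = 0.0, 1.0, 1.0   # normal vs anomalous observation means
threshold = 12.0                    # declaration threshold on the LLR

llr = np.zeros(n_proc)              # accumulated LLR: anomalous vs normal
decided = {}
steps = 0
while len(decided) < n_proc and steps < 10000:
    # Probe the undecided process whose evidence is currently weakest.
    undecided = [i for i in range(n_proc) if i not in decided]
    i = min(undecided, key=lambda j: abs(llr[j]))
    mu = mu1 if i in anomalous else mu0
    y = rng.normal(mu, obs_std)                    # one noisy observation
    # Gaussian LLR increment for H1 (mean mu1) against H0 (mean mu0).
    llr[i] += (y - (mu0 + mu1) / 2.0) * (mu1 - mu0) / obs_std ** 2
    if llr[i] >= threshold:
        decided[i] = True           # declare anomalous
    elif llr[i] <= -threshold:
        decided[i] = False          # declare normal
    steps += 1
```

The switching-to-observation delay ratio the abstract analyzes would enter here as an extra cost each time `i` changes between iterations, which is exactly what the proposed policy must trade off against evidence gathering.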

2020 ◽  
Vol 29 (06) ◽  
pp. 2050018
A. Belcaid ◽  
M. Douimi

In this paper, we focus on the problem of change point detection in piecewise constant signals. This problem is central to several applications, such as human activity analysis, speech or image analysis, and anomaly detection in genetics. We present a novel sliding-window algorithm for online change point detection. The proposed approach considers a local blanket of a global Markov Random Field (MRF) representing the signal and its noisy observation. For each window, we define and solve a local energy minimization problem to deduce the gradient on each edge of the MRF graph. The gradient is then processed by an activation function to filter out weak features and produce the final jumps. We demonstrate the effectiveness of our method by comparing its running time and several detection metrics with state-of-the-art algorithms.
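A bare-bones version of sliding-window jump detection, without the MRF machinery, can be sketched as follows: compare the means of adjacent windows and pass the resulting score through a hard threshold, standing in for the activation function that filters weak features. The window size, threshold, and synthetic signal are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(4)

# Piecewise constant signal: jumps at samples 50, 100, and 150.
levels = [0.0, 3.0, 1.0, 4.0]
clean = np.repeat(levels, 50).astype(float)
noisy = clean + 0.3 * rng.normal(size=clean.size)

def detect_jumps(x, w=10, thresh=1.0):
    """Score each position by the gap between the means of the adjacent
    left/right windows; a hard threshold plays the role of the
    activation function that filters out weak features."""
    score = np.zeros(x.size)
    for i in range(w, x.size - w):
        score[i] = abs(x[i:i + w].mean() - x[i - w:i].mean())
    active = score > thresh
    jumps = []
    i = 0
    while i < x.size:              # one jump per contiguous activation run
        if active[i]:
            j = i
            while j < x.size and active[j]:
                j += 1
            jumps.append(i + int(np.argmax(score[i:j])))
            i = j
        else:
            i += 1
    return jumps

jumps = detect_jumps(noisy)
```

The MRF formulation replaces this naive mean-difference score with a gradient obtained from a local energy minimization, but the threshold-then-localize structure is the same.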

2020 ◽  
Vol 32 (9) ◽  
pp. 1733-1773
Yuko Kuroki ◽  
Liyuan Xu ◽  
Atsushi Miyauchi ◽  
Junya Honda ◽  
Masashi Sugiyama

We study the problem of stochastic multiple-arm identification, where an agent sequentially explores a size-[Formula: see text] subset of arms (also known as a super arm) from [Formula: see text] given arms and tries to identify the best super arm. Most work so far has considered the semi-bandit setting, where the agent can observe the reward of each pulled arm, or has assumed that each arm can be queried at each round. However, in real-world applications, it is costly or sometimes impossible to observe the rewards of individual arms. In this study, we tackle the full-bandit setting, where only a noisy observation of the total sum of a super arm is given at each pull. Although our problem can be regarded as an instance of best arm identification in linear bandits, a naive approach based on linear bandits is computationally infeasible since the number of super arms [Formula: see text] is exponential. To cope with this problem, we first design a polynomial-time approximation algorithm for a 0-1 quadratic programming problem arising in confidence ellipsoid maximization. Based on our approximation algorithm, we propose a bandit algorithm whose computation time is [Formula: see text](log [Formula: see text]), thereby achieving an exponential speedup over linear bandit algorithms. We provide a sample complexity upper bound that is still worst-case optimal. Finally, we conduct experiments on large-scale data sets with more than 10[Formula: see text] super arms, demonstrating the superiority of our algorithms in terms of both computation time and sample complexity.
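The full-bandit feedback model itself is easy to simulate: each pull reveals only a noisy sum over the chosen super arm. The sketch below recovers the arm means by least squares from random fixed-size pulls; this is the naive linear-bandit-style baseline the abstract contrasts against, not the authors' algorithm, and the arm means are fixed by hand for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

K, k, obs_std = 8, 3, 0.1
theta = np.array([0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.4, 0.5])  # arm means

def pull(indices):
    """Full-bandit feedback: only a noisy SUM over the super arm."""
    return theta[indices].sum() + rng.normal(0.0, obs_std)

# Pull random size-k super arms and recover arm means by least squares.
n_pulls = 400
X = np.zeros((n_pulls, K))
y = np.zeros(n_pulls)
for t in range(n_pulls):
    idx = rng.choice(K, size=k, replace=False)
    X[t, idx] = 1.0
    y[t] = pull(idx)

theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
best_hat = set(int(i) for i in np.argsort(theta_hat)[-k:])
```

With only 8 arms this brute-force design works; the paper's point is that choosing which super arms to pull (confidence ellipsoid maximization) becomes the hard combinatorial problem as the number of super arms grows exponentially.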

Reinhard Heckel ◽  
Wen Huang ◽  
Paul Hand ◽  
Vladislav Voroninski

Deep neural networks provide state-of-the-art performance for image denoising, where the goal is to recover a near noise-free image from a noisy observation. The underlying principle is that neural networks trained on large data sets have empirically been shown to generate natural images well from a low-dimensional latent representation of the image. Given such a generator network, a noisy image can be denoised by (i) finding the closest image in the range of the generator or by (ii) passing it through an encoder-generator architecture (known as an autoencoder). However, there is little theory to justify this success, let alone to predict the denoising performance as a function of the network parameters. In this paper, we consider the problem of denoising an image from additive Gaussian noise using the two generator-based approaches. In both cases, we assume the image is well described by a deep neural network with ReLU activation functions, mapping a $k$-dimensional code to an $n$-dimensional image. In the case of the autoencoder, we show that the feedforward network reduces noise energy by a factor of $O(k/n)$. In the case of optimizing over the range of a generative model, we state and analyze a simple gradient algorithm that minimizes a non-convex loss function and provably reduces noise energy by a factor of $O(k/n)$. We also demonstrate in numerical experiments that this denoising performance is, indeed, achieved by generative priors learned from data.
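The $O(k/n)$ noise-reduction factor has a transparent linear analogue: if the generator's range were a $k$-dimensional linear subspace of an $n$-dimensional space, projecting the noisy observation onto it would keep only the roughly $k/n$ fraction of the noise energy lying in the subspace. A sketch under that (strong) linearity assumption, which the paper's ReLU-network analysis does not make:

```python
import numpy as np

rng = np.random.default_rng(6)

n, k = 512, 8
# Orthonormal basis of a random k-dimensional subspace, standing in
# for the (nonlinear) range of a generator network.
B, _ = np.linalg.qr(rng.normal(size=(n, k)))

x = B @ rng.normal(size=k)           # "image" lying in the range
noise = rng.normal(size=n)
y = x + noise                        # noisy observation

# Denoising = finding the closest point in the range: here, a projection.
x_hat = B @ (B.T @ y)

residual = float(np.sum((x_hat - x) ** 2))   # noise energy left over
injected = float(np.sum(noise ** 2))         # noise energy that came in
reduction = residual / injected              # concentrates around k/n
```

In expectation the residual noise energy is exactly $k$ out of $n$, since an isotropic Gaussian puts $1/n$ of its energy in each orthogonal direction.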

Shreya Shrikant Naik ◽  
Ms Sowmya ◽  
Preethika N K

An image is an object that stores and reflects visual perception; images are also important information carriers today. The acquisition channel and artificial editing are the two main sources of corruption in observed images. The goal of image restoration is to recover the original image from a noisy observation of it; reconstructing a high-quality image from its low-quality observation has many important applications, such as low-level image processing, medical imaging, remote sensing, and surveillance. Image denoising is a common image restoration problem, useful in many industrial and scientific applications. The application processes a single image selected by the user: the noise is removed from the corrupted image and a clear image is recovered. In our project we make use of an autoencoder. Autoencoders require little data pre-processing and are trained end to end, learning a compressed representation that helps remove the noise present in the pictures.
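A minimal stand-in for the denoising autoencoder can be fit in closed form: a linear encoder taken from the top principal directions of the noisy data, and a least-squares decoder trained against clean targets. This replaces the end-to-end gradient training the project describes, and the toy "images" (vectors near a low-dimensional subspace) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "images": vectors lying near a k-dimensional subspace, plus noise.
n, k, n_train = 32, 4, 2000
B, _ = np.linalg.qr(rng.normal(size=(n, k)))
clean = (B @ rng.normal(size=(k, n_train))).T
noisy = clean + 0.3 * rng.normal(size=clean.shape)

# Encoder: compression onto the top-k principal directions of the noisy
# data (fit in closed form here instead of by backpropagation).
_, _, Vt = np.linalg.svd(noisy - noisy.mean(axis=0), full_matrices=False)
encode = Vt[:k].T                                       # n x k

# Decoder: least-squares map from codes back to the CLEAN targets --
# training against clean targets is what makes it a denoising autoencoder.
codes = noisy @ encode
decode, *_ = np.linalg.lstsq(codes, clean, rcond=None)  # k x n

test_clean = (B @ rng.normal(size=(k, 100))).T
test_noisy = test_clean + 0.3 * rng.normal(size=test_clean.shape)
denoised = (test_noisy @ encode) @ decode

err_before = float(np.linalg.norm(test_noisy - test_clean))
err_after = float(np.linalg.norm(denoised - test_clean))
```

The compression bottleneck (here `k` code dimensions) is the data-compression mechanism the project alludes to: noise that does not fit through the bottleneck is discarded.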

2020 ◽  
Vol 34 (04) ◽  
pp. 4140-4149
Zhiwei Hong ◽  
Xiaocheng Fan ◽  
Tao Jiang ◽  
Jianxing Feng

Image denoising is a classic low-level vision problem that attempts to recover a noise-free image from a noisy observation. Recent advances in deep neural networks have outperformed traditional prior-based methods for image denoising. However, the existing methods either require paired noisy and clean images for training or impose certain assumptions on the noise distribution and data types. In this paper, we present an end-to-end unpaired image denoising framework (UIDNet) that denoises images with only unpaired clean and noisy training images. The critical component of our model is a noise learning module based on a conditional Generative Adversarial Network (cGAN). The model learns the noise distribution from the input noisy images and uses it to transform the input clean images to noisy ones without any assumption on the noise distribution and data types. This process results in pairs of clean and pseudo-noisy images. Such pairs are then used to train another denoising network, as in existing denoising methods based on paired images. The noise learning and denoising components are integrated so that they can be trained end-to-end. Extensive experimental evaluation has been performed on both synthetic and real data, including real photographs and computed tomography (CT) images. The results demonstrate that our model outperforms previous models trained on unpaired images, as well as state-of-the-art methods based on paired training data when proper training pairs are unavailable.
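The pseudo-pair construction at the heart of the framework can be sketched without the cGAN: estimate a noise model from the noisy pile alone, then corrupt the clean pile with it. The crude Gaussian noise model and the MAD-based estimator below are our own stand-ins for the learned noise module; real sensor noise is rarely this simple, which is exactly why the paper learns it adversarially.

```python
import numpy as np

rng = np.random.default_rng(8)

def make_scene():
    """A random synthetic 'image': a bright square on a dark background."""
    img = np.zeros((16, 16))
    img[4:12, 4:12] = rng.uniform(0.5, 1.0)
    return img

# Unpaired data: a pile of clean images and a pile of noisy images of
# DIFFERENT scenes -- no clean/noisy correspondence anywhere.
true_std = 0.2
clean_pile = [make_scene() for _ in range(50)]
noisy_pile = [make_scene() + true_std * rng.normal(size=(16, 16))
              for _ in range(50)]

# Step 1 -- learn a noise model from the noisy pile alone (the cGAN's
# job in the paper; here a robust MAD estimate of a Gaussian sigma).
# Horizontal pixel differences are noise-dominated away from edges.
diffs = np.concatenate([(im[:, 1:] - im[:, :-1]).ravel()
                        for im in noisy_pile])
std_hat = np.median(np.abs(diffs)) / 0.6745 / np.sqrt(2.0)

# Step 2 -- build pseudo-pairs by corrupting the clean pile with the
# learned noise model; any paired denoiser can then train on these.
pseudo_pairs = [(im, im + std_hat * rng.normal(size=im.shape))
                for im in clean_pile]
```

In UIDNet, step 1 is a cGAN that can represent structured, signal-dependent noise, and step 2 feeds the resulting pairs into a conventional paired denoising network trained jointly with the noise module.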
