Bayesian Image Restoration Using a Large-Scale Total Patch Variation Prior

2011 ◽  
Vol 2011 ◽  
pp. 1-15 ◽  
Author(s):  
Yang Chen ◽  
Weimin Yu ◽  
Yinsheng Li ◽  
Zhou Yang ◽  
Limin Luo ◽  
...  

Edge-preserving Bayesian restorations using nonquadratic priors are often inefficient in restoring continuous variations and tend to produce block artifacts around edges in ill-posed inverse image restoration. To overcome this, we previously proposed a spatially adaptive (SA) prior with improved performance. However, restoration with this SA prior suffers from high computational cost and lacks a convergence guarantee. To address these issues, this paper proposes a Large-Scale Total Patch Variation (LS-TPV) prior model for Bayesian image restoration. In this model, the prior for each pixel is defined as a singleton conditional probability that takes the form of a mixture of a patch-similarity prior and a weight-entropy prior. A joint MAP estimation is then built to ensure monotonicity of the iteration. The intensive computation of patch distances is greatly alleviated by parallelization with the Compute Unified Device Architecture (CUDA). Experiments with both simulated and real data validate the good performance of the proposed restoration.
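The patch-similarity weighting at the heart of such priors can be sketched as follows: each neighbor's weight decays with the squared distance between image patches. This is a generic nonlocal-weights sketch with an assumed bandwidth parameter `h`, not the exact LS-TPV formulation:

```python
import numpy as np

def patch_weights(image, center, neighbors, patch_radius=2, h=10.0):
    """Gaussian weights from patch distances, as used in nonlocal
    patch-similarity priors (generic sketch; `h` is an assumed bandwidth)."""
    r = patch_radius

    def patch(p):
        y, x = p
        return image[y - r:y + r + 1, x - r:x + r + 1].ravel()

    ref = patch(center)
    d2 = np.array([np.sum((patch(q) - ref) ** 2) for q in neighbors])
    w = np.exp(-d2 / h ** 2)   # similar patches get weights near 1
    return w / w.sum()         # normalize so the weights sum to one
```

Pixels whose surrounding patches resemble the center patch receive large weights, so the prior smooths along structures rather than across edges.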

Geophysics ◽  
2003 ◽  
Vol 68 (2) ◽  
pp. 577-588 ◽  
Author(s):  
Sergey Fomel ◽  
Jon F. Claerbout

Constraining ill-posed inverse problems often requires regularized optimization. We consider two alternative approaches to regularization. The first approach involves a column operator and an extension of the data space. It requires a regularization operator which enhances the undesirable features of the model. The second approach constructs a row operator and expands the model space. It employs a preconditioning operator which enforces a desirable behavior (such as smoothness) of the model. In large-scale problems, when iterative optimization is incomplete, the second method is preferable, because it often leads to faster convergence. We propose a method for constructing preconditioning operators by multidimensional recursive filtering. The recursive filters are constructed by imposing helical boundary conditions. Several examples with synthetic and real data demonstrate an order of magnitude efficiency gain achieved by applying the proposed technique to data interpolation problems.
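The two formulations can be illustrated on a toy least-squares problem: the first stacks a regularization (roughening) operator D below the forward operator and extends the data space with zeros; the second substitutes m = Pp with the preconditioner P = D^{-1} and expands the model space. A minimal NumPy sketch with an assumed first-difference roughener; both reach the same model, and the advantage of the second form shows up only under truncated iteration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
L = rng.normal(size=(15, n))    # toy forward operator (underdetermined)
d = rng.normal(size=15)         # observed data
eps = 0.5                       # regularization weight (assumed)

# Roughening operator D: first differences, upper bidiagonal, invertible.
D = np.eye(n) - np.eye(n, k=1)

# Approach 1 (column operator, extended data space):
# solve min |Lm - d|^2 + eps^2 |Dm|^2 as one stacked least-squares system.
m1 = np.linalg.lstsq(np.vstack([L, eps * D]),
                     np.concatenate([d, np.zeros(n)]), rcond=None)[0]

# Approach 2 (row operator, preconditioning): substitute m = Pp, P = D^{-1},
# and solve min |LPp - d|^2 + eps^2 |p|^2 in the expanded model space.
P = np.linalg.inv(D)
p = np.linalg.lstsq(np.vstack([L @ P, eps * np.eye(n)]),
                    np.concatenate([d, np.zeros(n)]), rcond=None)[0]
m2 = P @ p

# Both solve the same normal equations (L'L + eps^2 D'D) m = L'd,
# but truncated iterative solvers converge at different rates on them.
```

Solved to completion the two answers coincide; the paper's point is that incomplete iteration on the preconditioned form reaches a useful model sooner.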


Author(s):  
Jing Li ◽  
Xiaorun Li ◽  
Liaoying Zhao

The minimization problem of reconstruction error over large hyperspectral image data is one of the most important problems in unsupervised hyperspectral unmixing. A variety of algorithms based on nonnegative matrix factorization (NMF) have been proposed in the literature to solve this minimization problem. One popular optimization method for NMF is projected gradient descent (PGD). However, as the algorithm must compute the full gradient on the entire dataset at every iteration, PGD suffers from high computational cost on large-scale real hyperspectral images. In this paper, we try to alleviate this problem by introducing a mini-batch gradient descent-based algorithm, which has been widely used in large-scale machine learning. In our method, the endmembers can be updated pixel set by pixel set, while the abundances can be updated band set by band set. Thus, the computational cost is lowered to a certain extent. The performance of the proposed algorithm is quantified in experiments on synthetic and real data.
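A minimal sketch of the mini-batch idea, assuming a standard NMF factorization X ≈ WH with nonnegativity enforced by projection. The step size and batch size here are hypothetical, and the paper's exact pixel-set/band-set update schedule may differ:

```python
import numpy as np

def minibatch_pgd_nmf(X, k, batch=64, lr=1e-3, epochs=50, seed=0):
    """Mini-batch projected gradient descent for X ~= W @ H with W, H >= 0.
    X: (bands, pixels) data; k: number of endmembers. Generic sketch."""
    rng = np.random.default_rng(seed)
    bands, pixels = X.shape
    W = rng.random((bands, k))      # endmember matrix
    H = rng.random((k, pixels))     # abundance matrix
    for _ in range(epochs):
        idx = rng.permutation(pixels)
        for start in range(0, pixels, batch):
            j = idx[start:start + batch]            # one pixel set
            R = W @ H[:, j] - X[:, j]               # residual on the batch
            # gradient step, then projection onto the nonnegative orthant
            W = np.maximum(W - lr * R @ H[:, j].T, 0.0)
            H[:, j] = np.maximum(H[:, j] - lr * W.T @ R, 0.0)
    return W, H
```

Each step touches only a subset of pixels, so the per-iteration cost no longer scales with the full image size.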


2013 ◽  
Vol 401-403 ◽  
pp. 1397-1400
Author(s):  
Lei Zhang ◽  
Yue Yun Cao ◽  
Zi Chun Yang

Image restoration is a typical ill-posed inverse problem, which can be solved by the total least squares (TLS) method when not only the observation but also the system matrix is contaminated by additive noise. Since image restoration is in general a large-scale problem, the TLS problem is projected onto a subspace defined by a Lanczos bidiagonalization algorithm, and the truncated TLS (TTLS) method is then applied on that subspace. A novel iterative TTLS method, involving an appropriate choice of the truncation parameter, is thereby proposed. Finally, an image reconstruction example is given to illustrate the effectiveness and robustness of the proposed algorithm.
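The truncated-TLS step can be sketched densely via the SVD of the augmented matrix [A, b], with truncation parameter k. The paper instead applies this on a Lanczos bidiagonalization subspace to keep large-scale problems tractable; this is only the small-scale core of the method:

```python
import numpy as np

def ttls(A, b, k):
    """Truncated total least squares via the SVD of [A, b] (dense sketch).
    Keeps the first k singular directions; the rest are treated as noise."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    V = Vt.T
    V12 = V[:n, k:]    # model-space part of the discarded subspace
    V22 = V[n:, k:]    # data-space part (last row of V)
    # standard TTLS solution: x = -V12 @ pinv(V22)
    return (-V12 @ np.linalg.pinv(V22)).ravel()
```

Choosing k too large lets noise-dominated directions into the solution; too small discards signal, which is why the truncation parameter needs careful selection.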


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 381
Author(s):  
Pia Addabbo ◽  
Mario Luca Bernardi ◽  
Filippo Biondi ◽  
Marta Cimitile ◽  
Carmine Clemente ◽  
...  

The capability of sensors to identify individuals in a specific scenario is a topic of high relevance for sensitive sectors such as public security. A traditional approach involves cameras; however, camera-based surveillance systems lack discretion and have high computational and storage requirements for human identification. Moreover, they are strongly influenced by external factors (e.g., light and weather). This paper proposes an approach based on a temporal convolutional deep neural network classifier applied to radar micro-Doppler signatures in order to identify individuals. Both the sensor and the processing requirements ensure a low size, weight, and power profile, enabling large-scale deployment of discreet human identification systems. The proposed approach is assessed on real data concerning 106 individuals. The results show good accuracy of the classifier (the best obtained accuracy is 0.89 with an F1-score of 0.885) and improved performance when compared to other standard approaches.


2021 ◽  
Vol 647 ◽  
pp. L5
Author(s):  
B. Joachimi ◽  
F. Köhlinger ◽  
W. Handley ◽  
P. Lemos

Summary statistics of the likelihood, such as the Bayesian evidence, offer a principled way of comparing models and assessing tension between, or within, the results of physical experiments. Noisy realisations of the data induce scatter in these model comparison statistics. For a realistic case of cosmological inference from large-scale structure, we show that the logarithm of the Bayes factor attains scatter of order unity, increasing significantly with stronger tension between the models under comparison. We develop an approximate procedure that quantifies the sampling distribution of the evidence at a small additional computational cost and apply it to real data to demonstrate the impact of the scatter, which acts to reduce the significance of any model discrepancies. Data compression is highlighted as a potential avenue to suppressing noise in the evidence to negligible levels, with a proof of concept demonstrated using Planck cosmic microwave background data.
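The effect can be illustrated with a toy Gaussian model comparison where the evidence is analytic (an assumed setup, not the paper's cosmological likelihood): even with data drawn from the simpler model, the log Bayes factor scatters by order unity across noise realisations:

```python
import numpy as np

# Compare M0: mu = 0 against M1: mu ~ N(0, tau^2), data y_i ~ N(mu, sigma^2).
# The evidence ratio is analytic, so the scatter across noisy data
# realisations can be measured directly by Monte Carlo.
rng = np.random.default_rng(1)
n, sigma, tau = 100, 1.0, 1.0
log_bf = []
for _ in range(2000):
    y = rng.normal(0.0, sigma, n)      # a noisy realisation under M0
    s = y.sum()
    A = 1.0 / tau**2 + n / sigma**2    # posterior precision of mu under M1
    # log Bayes factor M1 vs M0, marginalising mu analytically
    log_bf.append(-0.5 * np.log(tau**2 * A) + (s / sigma**2) ** 2 / (2 * A))
scatter = np.std(log_bf)               # of order unity, as in the paper
```

Because a unit shift in the log Bayes factor can move a comparison across conventional evidence thresholds, scatter of this size matters for model selection claims.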


2021 ◽  
Author(s):  
Joel Rabelo ◽  
Yuri Saporito ◽  
Antonio Leitao

Abstract In this article we investigate a family of "stochastic gradient type" methods for solving systems of linear ill-posed equations. The method under consideration is a stochastic version of the projective Landweber-Kaczmarz (PLWK) method in [Leitão/Svaiter, Inv. Probl. 2016] (see also [Leitão/Svaiter, NFAO 2018]). In the exact data case, mean square convergence to zero of the iteration error is proven. In the noisy data case, we couple our method with an a priori stopping rule and characterize it as a regularization method for solving systems of linear ill-posed operator equations. Numerical tests are presented for two linear ill-posed problems: (i) a Hilbert matrix type system with over 10^8 equations; (ii) a Big Data linear regression problem with real data. The obtained results indicate superior performance of the proposed method when compared with other well-established iterations. Our preliminary investigation indicates that the proposed iteration is a promising alternative for computing stable approximate solutions of large-scale systems of linear ill-posed equations.
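A simpler relative of such methods is the classic randomized Kaczmarz iteration, which projects the current iterate onto one randomly sampled equation per step. This is a sketch of the general idea only, not the stochastic PLWK method studied in the article:

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Randomized Kaczmarz for a consistent system Ax = b: at each step,
    sample a row with probability proportional to its squared norm and
    project the iterate onto that row's hyperplane."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A ** 2, axis=1)
    probs = row_norms / row_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # orthogonal projection onto {z : A[i] @ z = b[i]}
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```

Each step touches a single equation, which is what makes this family attractive for systems with very many equations; with noisy data, an early-stopping rule plays the regularizing role described in the abstract.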


Geosciences ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 41
Author(s):  
Tim Jurisch ◽  
Stefan Cantré ◽  
Fokke Saathoff

A variety of studies have recently proved the applicability of different dried, fine-grained dredged materials as replacement material for erosion-resistant sea dike covers. In Rostock, Germany, a large-scale field experiment was conducted, in which different dredged materials were tested with regard to installation technology, stability, turf development, infiltration, and erosion resistance. The infiltration experiments to study the development of a seepage line in the dike body showed unexpected measurement results. Due to the high complexity of the problem, standard geo-hydraulic models proved unable to analyze these results. Therefore, different methods of inverse infiltration modeling were applied, such as the parameter estimation tool (PEST) and the AMALGAM algorithm. In this paper, the two approaches are compared and discussed. A sensitivity analysis confirmed the presumption of non-linear model behavior for the infiltration problem, and the eigenvalue ratio indicates that dike infiltration is an ill-posed problem. Although this complicates the inverse modeling (e.g., termination in local minima), parameter sets close to an optimum were found with both the PEST and the AMALGAM algorithms. Together with the field measurement data, this information supports the rating of the effective material properties of the applied dredged materials used as dike cover material.


2020 ◽  
Author(s):  
Marco Bertoni ◽  
Stephen Gibbons ◽  
Olmo Silva

Abstract We study how demand responds to the rebranding of existing state schools as autonomous ‘academies’ in the context of a radical and large-scale reform to the English education system. The academy programme encouraged schools to opt out of local state control and funding, but provided parents and students with limited information on the expected benefits. We use administrative data on school applications for three cohorts of students to estimate whether this rebranding changes schools’ relative popularity. We find that families – particularly higher-income, White British – are more likely to rank converted schools above non-converted schools on their applications. We also find that it is mainly schools that are high-performing, popular and proximate to families’ homes that attract extra demand after conversion. Overall, the patterns we document suggest that families read academy conversion as a signal of future quality gains – although this signal is in part misleading as we find limited evidence that conversion causes improved performance.


Genetics ◽  
2003 ◽  
Vol 165 (4) ◽  
pp. 2269-2282
Author(s):  
D Mester ◽  
Y Ronin ◽  
D Minkov ◽  
E Nevo ◽  
A Korol

Abstract This article is devoted to the problem of ordering in linkage groups with many dozens or even hundreds of markers. The ordering problem belongs to the field of discrete optimization on a set of all possible orders, amounting to n!/2 for n loci; hence it is considered an NP-hard problem. Several authors attempted to employ the methods developed in the well-known traveling salesman problem (TSP) for multilocus ordering, using the assumption that for a set of linked loci the true order will be the one that minimizes the total length of the linkage group. A novel, fast, and reliable algorithm developed for the TSP and based on evolution-strategy discrete optimization was applied in this study for multilocus ordering on the basis of pairwise recombination frequencies. The quality of derived maps under various complications (dominant vs. codominant markers, marker misclassification, negative and positive interference, and missing data) was analyzed using simulated data with ∼50-400 markers. High performance of the employed algorithm allows systematic treatment of the problem of verification of the obtained multilocus orders on the basis of computing-intensive bootstrap and/or jackknife approaches for detecting and removing questionable marker scores, thereby stabilizing the resulting maps. Parallel calculation technology can easily be adopted for further acceleration of the proposed algorithm. Real data analysis (on maize chromosome 1 with 230 markers) is provided to illustrate the proposed methodology.
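As a stand-in for the evolution-strategy optimizer, the TSP view of marker ordering can be sketched with a simple 2-opt local search over a matrix of pairwise distances (here hypothetical recombination distances; the authors' algorithm is a different, evolution-strategy-based solver):

```python
def two_opt_order(dist):
    """Greedy 2-opt local search minimizing total path length over a
    symmetric distance matrix. For multilocus ordering, dist[i][j] would
    hold the pairwise recombination distance between markers i and j."""
    n = len(dist)
    order = list(range(n))
    improved = True
    while improved:
        improved = False
        for i in range(n - 2):
            for j in range(i + 2, n - 1):
                a, b = order[i], order[i + 1]
                c, d = order[j], order[j + 1]
                # reversing order[i+1..j] replaces edges (a,b),(c,d)
                # with (a,c),(b,d); accept if the path gets shorter
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d] - 1e-12:
                    order[i + 1:j + 1] = order[i + 1:j + 1][::-1]
                    improved = True
    return order
```

Local search of this kind can stall in local minima on hard instances, which is exactly why the article resorts to a more powerful evolution-strategy optimizer and bootstrap/jackknife verification of the resulting orders.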

