Multidimensional recursive filter preconditioning in geophysical estimation problems

Geophysics ◽  
2003 ◽  
Vol 68 (2) ◽  
pp. 577-588 ◽  
Author(s):  
Sergey Fomel ◽  
Jon F. Claerbout

Constraining ill‐posed inverse problems often requires regularized optimization. We consider two alternative approaches to regularization. The first approach involves a column operator and an extension of the data space. It requires a regularization operator which enhances the undesirable features of the model. The second approach constructs a row operator and expands the model space. It employs a preconditioning operator which enforces a desirable behavior (such as smoothness) of the model. In large‐scale problems, when iterative optimization is incomplete, the second method is preferable, because it often leads to faster convergence. We propose a method for constructing preconditioning operators by multidimensional recursive filtering. The recursive filters are constructed by imposing helical boundary conditions. Several examples with synthetic and real data demonstrate an order of magnitude efficiency gain achieved by applying the proposed technique to data interpolation problems.
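The column-operator vs. row-operator distinction can be sketched on a toy 1-D interpolation problem. The operators and constants below are my own illustration (D is a simple roughening filter and P = inv(D) serves as the preconditioner), not the paper's helical recursive filters:

```python
import numpy as np

# Toy 1-D interpolation: a few observed samples, a smooth model wanted.
# Hedged sketch of the two formulations; operator choices are mine.
rng = np.random.default_rng(0)
n = 50
idx = rng.choice(n, 8, replace=False)
F = np.zeros((8, n))
F[np.arange(8), idx] = 1.0                 # sampling (forward) operator
d = np.sin(np.linspace(0, 3, n))[idx]      # observed data

D = np.eye(n) - np.eye(n, k=1)             # roughening operator (invertible)
eps = 0.1

# (1) Regularization: a COLUMN operator extends the DATA space with eps*D*m ~ 0.
A1 = np.vstack([F, eps * D])
b1 = np.concatenate([d, np.zeros(n)])
m_reg = np.linalg.lstsq(A1, b1, rcond=None)[0]

# (2) Preconditioning: a ROW operator m = P p with P = inv(D) (a smoother)
# expands the MODEL space, and the new variable p is penalized directly.
P = np.linalg.inv(D)
A2 = np.vstack([F @ P, eps * np.eye(n)])
p = np.linalg.lstsq(A2, b1, rcond=None)[0]
m_pre = P @ p

# At full convergence the two answers coincide; the paper's point concerns
# convergence SPEED under truncated iteration, not the final solution.
print(np.allclose(m_reg, m_pre, atol=1e-6))
```

The practical difference appears only when the iteration is stopped early, which is exactly the large-scale regime the abstract describes.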

2011 ◽  
Vol 2011 ◽  
pp. 1-15 ◽  
Author(s):  
Yang Chen ◽  
Weimin Yu ◽  
Yinsheng Li ◽  
Zhou Yang ◽  
Limin Luo ◽  
...  

Edge-preserving Bayesian restorations using nonquadratic priors are often inefficient at restoring continuous variations and tend to produce block artifacts around edges in ill-posed image restoration problems. To overcome this, we previously proposed a spatially adaptive (SA) prior with improved performance. However, restoration with this SA prior suffers from high computational cost and unguaranteed convergence. To address these issues, this paper proposes a Large-scale Total Patch Variation (LS-TPV) prior model for Bayesian image restoration. In this model, the prior for each pixel is defined as a singleton conditional probability, which takes the form of a mixture of a patch-similarity prior and a weight-entropy prior. A joint MAP estimation is then built to ensure the monotonicity of the iteration. The intensive calculation of patch distances is greatly alleviated by parallelization with the Compute Unified Device Architecture (CUDA). Experiments with both simulated and real data validate the good performance of the proposed restoration.
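The patch-similarity ingredient can be illustrated with non-local-means-style weights derived from patch distances. This is a simplified stand-in for the LS-TPV mixture prior; the image, bandwidth, and patch size are all assumptions of mine:

```python
import numpy as np

rng = np.random.default_rng(6)
# Horizontally graded image: rows are identical up to noise, columns differ.
img = np.tile(np.linspace(0, 1, 16), (16, 1)) + 0.02 * rng.standard_normal((16, 16))

def patch(im, i, j, r=1):
    """Flattened (2r+1) x (2r+1) patch centered at pixel (i, j)."""
    return im[i - r:i + r + 1, j - r:j + r + 1].ravel()

h = 0.1                                    # similarity bandwidth (assumed)
p0 = patch(img, 8, 8)                      # reference patch
# Similarity weights: large for patches resembling the reference, small otherwise.
w_along = np.exp(-np.sum((patch(img, 2, 8) - p0) ** 2) / h ** 2)   # same column
w_across = np.exp(-np.sum((patch(img, 8, 11) - p0) ** 2) / h ** 2) # 3 columns away

print(w_along > w_across)
```

Patches along the image structure receive high weight while patches across it are suppressed, which is the mechanism that lets patch priors avoid the block artifacts of purely local edge-preserving priors.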


Geophysics ◽  
2012 ◽  
Vol 77 (2) ◽  
pp. R117-R127 ◽  
Author(s):  
Antoine Guitton ◽  
Gboyega Ayeni ◽  
Esteban Díaz

The waveform inversion problem is inherently ill-posed. Traditionally, regularization schemes are used to address this issue. For waveform inversion, where the model is expected to have many details reflecting the physical properties of the Earth, regularization and data fitting can work in opposite directions: the former smoothing and the latter adding details to the model. We propose constraining estimated velocity fields by reparameterizing the model. This technique, also called model-space preconditioning, is based on directional Laplacian filters: it preserves most of the details of the velocity model while smoothing the solution along known geological dips. Preconditioning also yields faster convergence at early iterations. The Laplacian filters can either smooth or annihilate local planar events according to a local dip field. By construction, these filters can be inverted and used in a preconditioned waveform inversion strategy to yield geologically meaningful models. We illustrate with 2D synthetic and field data examples how preconditioning with nonstationary directional Laplacian filters outperforms traditional waveform inversion when sparse data are inverted and when sharp velocity contrasts are present. Adding geological information through preconditioning could benefit full-waveform inversion of real data whenever irregular geometry, coherent noise, and a lack of low frequencies are present.
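The annihilation property that makes such filters dip-selective can be illustrated with a plain finite-difference directional derivative. This is a much-simplified stand-in for the paper's nonstationary Laplacian filters; the dip, grid, and constants below are mine:

```python
import numpy as np

def directional_derivative(u, theta):
    """First derivative of image u along the unit direction (cos t, sin t)."""
    ux = np.gradient(u, axis=1)    # derivative along x (columns)
    uz = np.gradient(u, axis=0)    # derivative along z (rows)
    return np.cos(theta) * ux + np.sin(theta) * uz

x, z = np.meshgrid(np.arange(64), np.arange(64))
dip = np.pi / 4
# A plane event dipping at 45 degrees: constant along (cos dip, sin dip).
u = np.sin(0.3 * (np.sin(dip) * x - np.cos(dip) * z))

aligned = directional_derivative(u, dip)         # annihilated: dip matches
misaligned = directional_derivative(u, dip / 2)  # survives: dip mismatch

# Compare away from the borders (np.gradient is one-sided there).
print(np.abs(aligned[1:-1, 1:-1]).max()
      < 1e-3 * np.abs(misaligned[1:-1, 1:-1]).max())
```

A derivative taken along the local dip kills events with that dip and passes everything else, which is exactly the selectivity a dip-guided preconditioner exploits.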


Geophysics ◽  
2021 ◽  
pp. 1-41
Author(s):  
Nasser Kazemi ◽  
Mauricio D. Sacchi

The conventional Radon transform suffers from a lack of resolution when data kinematics and amplitudes differ from those of the Radon basis functions. In addition, a limited data aperture, missing traces, aliasing, a finite number of scanned ray parameters, noise, residual statics, and amplitude variation with offset (AVO) reduce the de-correlation power of the Radon basis functions. Posing Radon transform estimation as an inverse problem, by searching for a sparse model that fits the data, improves the performance of the algorithm. However, due to averaging along the offset axis, the conventional Radon transform cannot preserve AVO. Accordingly, we modify the Radon basis functions by extending the model domain along the offset direction. Extending the model space helps in fitting the data; however, computing the offset-extended Radon transform is an under-determined and ill-posed problem. To alleviate this shortcoming, we add model-domain sparsity and smoothing constraints to obtain a stable solution. We develop an algorithm that uses offset-extended Radon basis functions with sparsity promotion on offset-stacked Radon images, in conjunction with a smoothing constraint along the offset axis. Because the inverted model is sparse and fits the data, muting common-offset Radon panels based on ray parameter/curvature is sufficient for separating primaries from multiples. We successfully apply the algorithm to suppress multiples in the presence of strong AVO on synthetic data and on a real data example from Mississippi Canyon, Gulf of Mexico. The results show that extending the Radon model space is necessary to improve the separation and suppression of multiples in the presence of strong AVO.
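The sparsity-promoting inversion step can be sketched with plain iterative soft thresholding (ISTA) on a generic under-determined operator standing in for the offset-extended Radon operator. The operator, sizes, and threshold below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

# Minimal ISTA sketch: recover a sparse model from under-determined data.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 120)) / np.sqrt(40)   # stand-in "Radon" operator
m_true = np.zeros(120)
m_true[[5, 40, 77]] = [1.0, -0.8, 0.6]             # sparse model (3 events)
d = A @ m_true                                      # observed data

L = np.linalg.norm(A, 2) ** 2     # Lipschitz constant of the misfit gradient
lam, m = 0.01, np.zeros(120)
for _ in range(1000):
    g = A.T @ (A @ m - d)                                  # misfit gradient
    m = m - g / L                                          # gradient step
    m = np.sign(m) * np.maximum(np.abs(m) - lam / L, 0.0)  # soft threshold

print(np.linalg.norm(m - m_true) < 0.1)
```

Even though the system has three times more unknowns than equations, the sparsity constraint pins down the model, which is the stabilizing role it plays in the offset-extended transform.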


Blind deconvolution, defined as the simultaneous estimation and removal of blur, is an ill-posed problem that must be regularized with well-chosen priors. In this paper we focus on a directional edge prior based on the orientation of gradients. The deconvolution problem is then modeled as an L2-regularized optimization problem whose solution is sought through constrained optimization. The constrained problem is solved in the frequency domain with an Augmented Lagrangian Method (ALM). The proposed algorithm is tested on synthetic as well as real data from various sources, and its performance is compared with other state-of-the-art methods.
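A core building block of such frequency-domain schemes is the closed-form L2-regularized (Tikhonov/Wiener-type) solve. The sketch below shows that step alone, non-blind, with a known kernel chosen well-conditioned for stability; the kernel and constants are my assumptions, not the paper's ALM splitting:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.random((32, 32))                   # "sharp" image
k = np.zeros((32, 32))                     # circular blur kernel (chosen
k[0, 0] = 0.6                              # well-conditioned so the sketch
k[1, 0] = k[-1, 0] = k[0, 1] = k[0, -1] = 0.1   # stays stable)
y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(k)))   # blurred image

K = np.fft.fft2(k)
eps = 1e-4                                 # L2 regularization weight
# Pointwise Tikhonov solve: argmin ||k*x - y||^2 + eps*||x||^2 per frequency.
X = np.conj(K) * np.fft.fft2(y) / (np.abs(K) ** 2 + eps)
x_hat = np.real(np.fft.ifft2(X))

print(np.abs(x_hat - x).max() < 1e-2)
```

In a full ALM scheme this division is executed once per outer iteration, with the prior handled by a separate proximal update on the auxiliary variable.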


2021 ◽  
Author(s):  
Joel Rabelo ◽  
Yuri Saporito ◽  
Antonio Leitao

Abstract In this article we investigate a family of "stochastic gradient type methods" for solving systems of linear ill-posed equations. The method under consideration is a stochastic version of the projective Landweber-Kaczmarz (PLWK) method in [Leitão/Svaiter, Inv. Probl. 2016] (see also [Leitão/Svaiter, NFAO 2018]). In the case of exact data, mean-square convergence to zero of the iteration error is proven. In the noisy data case, we couple our method with an a priori stopping rule and characterize it as a regularization method for solving systems of linear ill-posed operator equations. Numerical tests are presented for two linear ill-posed problems: (i) a Hilbert-matrix-type system with over 10^8 equations; (ii) a big-data linear regression problem with real data. The obtained results indicate superior performance of the proposed method when compared with other well-established iterations. Our preliminary investigation indicates that the proposed iteration is a promising alternative for computing stable approximate solutions of large-scale systems of linear ill-posed equations.
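For intuition, the basic row-action step these stochastic methods build on is the randomized Kaczmarz projection, sketched here on a small consistent system. This is not the authors' PLWK iteration, just its classical ancestor:

```python
import numpy as np

# Randomized Kaczmarz: project the iterate onto one randomly chosen
# hyperplane a_i . x = b_i per step; converges for consistent systems.
rng = np.random.default_rng(3)
A = rng.standard_normal((200, 50))
x_true = rng.standard_normal(50)
b = A @ x_true                      # exact (noise-free) data

x = np.zeros(50)
for _ in range(10000):
    i = rng.integers(200)           # pick a random equation
    a = A[i]
    x += (b[i] - a @ x) / (a @ a) * a   # orthogonal projection step

print(np.linalg.norm(x - x_true) < 1e-6)
```

Each step touches a single equation, which is what makes this family attractive for the very large systems (10^8 equations) mentioned above; with noisy data, the stopping rule takes over the regularizing role.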


Geosciences ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 41
Author(s):  
Tim Jurisch ◽  
Stefan Cantré ◽  
Fokke Saathoff

A variety of studies recently proved the applicability of different dried, fine-grained dredged materials as replacement material for erosion-resistant sea dike covers. In Rostock, Germany, a large-scale field experiment was conducted in which different dredged materials were tested with regard to installation technology, stability, turf development, infiltration, and erosion resistance. The infiltration experiments to study the development of a seepage line in the dike body showed unexpected measurement results. Due to the high complexity of the problem, standard geo-hydraulic models proved unable to explain these results. Therefore, different methods of inverse infiltration modeling were applied, such as the parameter estimation tool (PEST) and the AMALGAM algorithm. In this paper, the two approaches are compared and discussed. A sensitivity analysis confirmed the presumption of non-linear model behavior for the infiltration problem, and the eigenvalue ratio indicates that dike infiltration is an ill-posed problem. Although this complicates the inverse modeling (e.g., termination in local minima), parameter sets close to an optimum were found with both the PEST and AMALGAM algorithms. Together with the field measurement data, this information supports the rating of the effective material properties of the dredged materials used as dike cover material.


2021 ◽  
Author(s):  
Parsoa Khorsand ◽  
Fereydoun Hormozdiari

Abstract Large-scale catalogs of common genetic variants (including indels and structural variants) are being created using data from second- and third-generation whole-genome sequencing technologies. However, the genotyping of these variants in newly sequenced samples is a nontrivial task that requires extensive computational resources. Furthermore, current approaches are mostly limited to specific types of variants and are generally prone to various errors and ambiguities when genotyping complex events. We propose an ultra-efficient approach for genotyping any type of structural variation that is not limited by the shortcomings and complexities of current mapping-based approaches. Our method, Nebula, utilizes changes in k-mer counts to predict the genotype of structural variants. We show that Nebula is not only an order of magnitude faster than mapping-based approaches for genotyping structural variants but also has accuracy comparable to state-of-the-art approaches. Furthermore, Nebula is a generic framework not limited to any specific type of event. Nebula is publicly available at https://github.com/Parsoa/Nebula.
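The k-mer idea can be shown in miniature: k-mers unique to one allele act as mapping-free genotyping signatures, and their counts in the reads reveal which allele the sample carries. The sequences and sizes below are invented for the sketch and are not Nebula's actual model:

```python
from collections import Counter

def kmers(seq, k=5):
    """Multiset of all k-mers of a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

ref = "ACGTACGTTAGGCCAATT"            # reference haplotype (toy)
alt = "ACGTACGTTACCAATT"              # same locus with a 2-bp deletion
reads = [alt[i:i + 10] for i in range(7)]   # reads drawn from the ALT allele

read_counts = Counter()
for r in reads:
    read_counts.update(kmers(r))

# k-mers present in exactly one allele are the genotyping signatures.
ref_only = set(kmers(ref)) - set(kmers(alt))
alt_only = set(kmers(alt)) - set(kmers(ref))

ref_support = sum(read_counts[km] for km in ref_only)
alt_support = sum(read_counts[km] for km in alt_only)
print(alt_support > ref_support)      # reads support the deletion allele
```

No read is ever aligned; only exact k-mer counting is needed, which is why such approaches can be an order of magnitude faster than mapping-based genotypers.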


Genetics ◽  
2003 ◽  
Vol 165 (4) ◽  
pp. 2269-2282
Author(s):  
D Mester ◽  
Y Ronin ◽  
D Minkov ◽  
E Nevo ◽  
A Korol

Abstract This article is devoted to the problem of ordering in linkage groups with many dozens or even hundreds of markers. The ordering problem belongs to the field of discrete optimization on a set of all possible orders, amounting to n!/2 for n loci; hence it is considered an NP-hard problem. Several authors attempted to employ the methods developed in the well-known traveling salesman problem (TSP) for multilocus ordering, using the assumption that for a set of linked loci the true order will be the one that minimizes the total length of the linkage group. A novel, fast, and reliable algorithm developed for the TSP and based on evolution-strategy discrete optimization was applied in this study for multilocus ordering on the basis of pairwise recombination frequencies. The quality of derived maps under various complications (dominant vs. codominant markers, marker misclassification, negative and positive interference, and missing data) was analyzed using simulated data with ∼50-400 markers. High performance of the employed algorithm allows systematic treatment of the problem of verification of the obtained multilocus orders on the basis of computing-intensive bootstrap and/or jackknife approaches for detecting and removing questionable marker scores, thereby stabilizing the resulting maps. Parallel calculation technology can easily be adopted for further acceleration of the proposed algorithm. Real data analysis (on maize chromosome 1 with 230 markers) is provided to illustrate the proposed methodology.
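The size of the search space is easy to make concrete: even a brute-force scan over the n!/2 candidate orders is feasible only for tiny n, which is what motivates TSP heuristics for real maps. In the sketch below, synthetic 1-D positions stand in for markers and their pairwise distances for recombination frequencies (the minimum-length criterion then recovers the true order):

```python
from itertools import permutations
import random

random.seed(4)
pos = sorted(random.uniform(0, 100) for _ in range(8))  # true marker positions
n = len(pos)
dist = [[abs(a - b) for b in pos] for a in pos]         # pairwise "distances"

def length(order):
    """Total map length of a candidate marker order."""
    return sum(dist[order[i]][order[i + 1]] for i in range(n - 1))

# Skip mirror-image duplicates to scan exactly n!/2 distinct orders.
best = min((p for p in permutations(range(n)) if p[0] < p[-1]), key=length)
print(best == tuple(range(n)))
```

At 8 markers this is already 20160 orders; at the 230 markers of the maize example, exhaustive search is hopeless, hence the evolution-strategy TSP solver.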


Author(s):  
Andrew Jacobsen ◽  
Matthew Schlegel ◽  
Cameron Linke ◽  
Thomas Degris ◽  
Adam White ◽  
...  

This paper investigates different vector step-size adaptation approaches for non-stationary, online continual prediction problems. Vanilla stochastic gradient descent can be considerably improved by scaling the update with a vector of appropriately chosen step-sizes. Many methods, including AdaGrad, RMSProp, and AMSGrad, keep statistics about the learning process to approximate a second-order update: a vector approximation of the inverse Hessian. Another family of approaches uses meta-gradient descent to adapt the step-size parameters to minimize prediction error. These meta-descent strategies are promising for non-stationary problems but have not been as extensively explored as quasi-second-order methods. We first derive a general, incremental meta-descent algorithm, called AdaGain, designed to be applicable to a much broader range of algorithms, including those with semi-gradient updates or even accelerations, such as RMSProp. We provide an empirical comparison of methods from both families. We conclude that methods from both families can perform well, but in non-stationary prediction problems the meta-descent methods exhibit advantages. Our method is particularly robust across several prediction problems and is competitive with the state-of-the-art method on a large-scale time-series prediction problem on real data from a mobile robot.
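A minimal vector step-size update in the RMSProp family can be sketched on a toy online linear prediction problem. This is plain RMSProp, not AdaGain, and all constants are assumptions of mine:

```python
import numpy as np

rng = np.random.default_rng(5)
w_true = np.array([2.0, -1.0, 0.5])   # target weights of the prediction task
w = np.zeros(3)
v = np.zeros(3)                       # running mean of squared gradients
eta, beta, eps = 0.01, 0.9, 1e-8

for _ in range(5000):
    x = rng.standard_normal(3)        # one online sample
    err = x @ w - x @ w_true          # prediction error on that sample
    g = err * x                       # stochastic gradient of 0.5 * err**2
    v = beta * v + (1 - beta) * g ** 2
    w -= eta / np.sqrt(v + eps) * g   # a VECTOR of step-sizes, one per weight

print(np.linalg.norm(w - w_true) < 0.2)
```

The per-parameter statistic v rescales each coordinate's step individually; meta-descent methods such as AdaGain instead adapt the step-size vector by descending on the prediction error itself.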


2021 ◽  
Vol 15 (3) ◽  
pp. 1-31
Author(s):  
Haida Zhang ◽  
Zengfeng Huang ◽  
Xuemin Lin ◽  
Zhe Lin ◽  
Wenjie Zhang ◽  
...  

Driven by many real applications, we study the problem of seeded graph matching. Given two graphs G1 and G2 and a small set S of pre-matched node pairs (u, v) with u in G1 and v in G2, the problem is to identify a matching between G1 and G2, growing from S, such that each pair in the matching corresponds to the same underlying entity. Recent studies on efficient and effective seeded graph matching have drawn a great deal of attention, and many popular methods are largely based on exploring the similarity between local structures to identify matching pairs. While these recent techniques work provably well on random graphs, their accuracy is low over many real networks. In this work, we propose to utilize higher-order neighboring information to improve the matching accuracy and efficiency. As a result, we propose a new seeded graph matching framework that employs Personalized PageRank (PPR) to quantify the matching score of each node pair. To further boost the matching accuracy, we propose a novel postponing strategy, which postpones the selection of pairs that have competitors with similar matching scores. We show that the postponing strategy indeed significantly improves the matching accuracy. To improve the scalability of matching large graphs, we also propose efficient approximation techniques based on algorithms for computing PPR heavy hitters. Our comprehensive experimental studies on large-scale real datasets demonstrate that, compared with state-of-the-art approaches, our framework not only increases precision and recall by significant margins but also achieves speedups of more than an order of magnitude.
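The PPR score itself has a compact definition: the stationary distribution of a random walk that restarts at the seed with probability alpha. A minimal power-iteration version on a toy star graph (my own example, not the paper's heavy-hitter approximation):

```python
import numpy as np

def ppr(adj, seed, alpha=0.15, iters=100):
    """Personalized PageRank by power iteration; alpha is the restart prob."""
    n = len(adj)
    P = adj / adj.sum(axis=1, keepdims=True)   # row-stochastic transitions
    e = np.zeros(n)
    e[seed] = 1.0                              # restart distribution
    pi = e.copy()
    for _ in range(iters):
        pi = alpha * e + (1 - alpha) * pi @ P
    return pi

# Star graph: seed node 0 at the center, leaves 1..4.
adj = np.zeros((5, 5))
adj[0, 1:] = adj[1:, 0] = 1.0
scores = ppr(adj, seed=0)
print(scores.argmax() == 0)   # the seed retains the most probability mass
```

Nodes structurally close to the seed accumulate more mass, which is what makes PPR from the pre-matched pairs a natural matching score for candidate node pairs.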

