Foundations of Computational Mathematics
Latest Publications


TOTAL DOCUMENTS: 644 (five years: 134)
H-INDEX: 50 (five years: 5)
Published by: Springer-Verlag
ISSN: 1615-3383, 1615-3375

Author(s): Nicolas Mascot

Abstract: We describe a method to compute mod $\ell$ Galois representations contained in the $\mathrm{H}^2_{\mathrm{\acute{e}t}}$ of surfaces. We apply this method to the case of a representation with values in $\mathrm{GL}_3(\mathbb{F}_9)$ attached to an eigenform over a congruence subgroup of $\mathrm{SL}_3$. We obtain, in particular, a polynomial with Galois group isomorphic to the simple group $\mathrm{PSU}_3(\mathbb{F}_9)$ and ramified at 2 and 3 only.


Author(s): Marco Fasondini, Sheehan Olver, Yuan Xu

Abstract: Orthogonal polynomials in two variables on cubic curves are considered. For an integral with respect to an appropriate weight function defined on a cubic curve, an explicit basis of orthogonal polynomials is constructed in terms of two families of orthogonal polynomials in one variable. We show that these orthogonal polynomials can be used to approximate functions with cubic and square root singularities, and demonstrate their usage for solving differential equations with singular solutions.
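A minimal numerical sketch of the underlying idea (not the paper's explicit basis): restricted to a cubic curve, bivariate polynomials reduce to the form $p(x) + y\,q(x)$, which gives two one-variable families that can be orthogonalised in a discrete weighted inner product on the curve. The specific curve $y^2 = x^3 + 1$, the quadrature, and the uniform weights below are assumptions for illustration only.

```python
import numpy as np

def curve_points(n):
    """Discrete nodes (x, y) on both branches of the cubic y^2 = x^3 + 1, x in [-1, 1]."""
    x = np.linspace(-1.0, 1.0, n)
    y = np.sqrt(x**3 + 1.0)
    xs = np.concatenate([x, x])
    ys = np.concatenate([y, -y])
    w = np.full(xs.size, 2.0 / n)   # crude uniform weights for the discrete inner product
    return xs, ys, w

def basis_matrix(xs, ys, deg):
    """On the curve y^2 = x^3 + 1 every polynomial reduces to p(x) + y*q(x),
    so the columns x^i and x^i * y form a linearly independent low-degree basis."""
    cols = [xs**i for i in range(deg + 1)] + [xs**i * ys for i in range(deg + 1)]
    return np.column_stack(cols)

xs, ys, w = curve_points(400)
V = basis_matrix(xs, ys, deg=4)
# Weighted QR: the columns of Q are discretely orthonormal on the curve.
Q, _ = np.linalg.qr(np.sqrt(w)[:, None] * V)
print("Gram matrix close to the identity:",
      np.allclose(Q.T @ Q, np.eye(Q.shape[1]), atol=1e-10))
```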


Author(s): Albert Cohen, Wolfgang Dahmen, Hans Munthe-Kaas, Martín Sombra, Agnes Szanto

Author(s): Michael Griebel, Helmut Harbrecht

Abstract: In this article, we analyze tensor approximation schemes for continuous functions. We assume that the function to be approximated lies in an isotropic Sobolev space and discuss the cost of approximating this function in the continuous analogue of the Tucker tensor format or of the tensor train format. We show, in particular, that the cost of both approximations is dimension-robust when the Sobolev space under consideration provides appropriate dimension weights.
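For readers unfamiliar with the tensor train format, the following Python sketch shows the standard discrete TT-SVD decomposition of a function sampled on a tensor grid. It is background only: the paper's continuous analogues and cost analysis are not reproduced, and the separable test function is an assumption chosen so that low TT ranks are expected.

```python
import numpy as np

def tt_svd(tensor, tol=1e-10):
    """Decompose a d-way array into tensor-train cores by sequential truncated SVDs."""
    d, shape = tensor.ndim, tensor.shape
    cores, r_prev = [], 1
    C = tensor
    for k in range(d - 1):
        C = C.reshape(r_prev * shape[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(s > tol * s[0])))   # truncated TT rank at this unfolding
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        C = s[:r, None] * Vt[:r]
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

# Discretise f(x) = exp(-(x1 + ... + x4)) on a 20^4 grid; f is separable, so rank 1.
n, d = 20, 4
grid = np.linspace(0.0, 1.0, n)
X = np.meshgrid(*([grid] * d), indexing="ij")
T = np.exp(-sum(X))

cores = tt_svd(T)

# Contract the cores back and check the reconstruction error.
approx = cores[0]
for core in cores[1:]:
    approx = np.tensordot(approx, core, axes=([-1], [0]))
approx = approx.reshape(T.shape)
print("TT ranks:", [c.shape[2] for c in cores[:-1]])
print("max reconstruction error:", np.max(np.abs(approx - T)))
```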


Author(s): Assyr Abdulle, Giacomo Garegnani, Grigorios A. Pavliotis, Andrew M. Stuart, Andrea Zanoni

Abstract: We study the problem of drift estimation for two-scale continuous time series. We set ourselves in the framework of overdamped Langevin equations, for which a single-scale surrogate homogenized equation exists. In this setting, estimating the drift coefficient of the homogenized equation requires pre-processing of the data, often in the form of subsampling; this is because the two-scale equation and the homogenized single-scale equation are incompatible at small scales, generating mutually singular measures on path space. We avoid subsampling and work instead with filtered data, obtained by applying an appropriate kernel function, and compute maximum likelihood estimators based on the filtered process. We show that the estimators we propose are asymptotically unbiased and demonstrate numerically the advantages of our method with respect to subsampling. Finally, we show how our filtered-data methodology can be combined with Bayesian techniques to provide a full uncertainty quantification of the inference procedure.
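As a rough illustration of the filtered-data idea (not the paper's multiscale analysis, and not necessarily its exact estimator), the Python sketch below simulates a single-scale overdamped Langevin (Ornstein-Uhlenbeck) path, filters it with an exponential kernel, and compares the standard drift maximum likelihood estimator with a variant that plugs in the filtered process. All model and filter parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

A_true, sigma = 1.0, 0.5          # dX = -A X dt + sqrt(2 sigma) dW
T, dt = 200.0, 1e-3
n = int(T / dt)

# Euler-Maruyama simulation of the Langevin path.
X = np.empty(n + 1)
X[0] = 0.0
noise = rng.normal(size=n) * np.sqrt(2.0 * sigma * dt)
for k in range(n):
    X[k + 1] = X[k] - A_true * X[k] * dt + noise[k]

# Exponential filter Z_t = (1/delta) * int_0^t exp(-(t - s)/delta) X_s ds,
# computed recursively from dZ = (X - Z)/delta dt.
delta = 0.1
Z = np.empty_like(X)
Z[0] = 0.0
for k in range(n):
    Z[k + 1] = Z[k] + dt * (X[k] - Z[k]) / delta

dX = np.diff(X)
A_mle      = -np.sum(X[:-1] * dX) / (np.sum(X[:-1] ** 2) * dt)   # standard drift MLE
A_filtered = -np.sum(Z[:-1] * dX) / (np.sum(Z[:-1] * X[:-1]) * dt)  # filtered-data variant
print(f"true drift {A_true}, standard MLE {A_mle:.3f}, filtered-data estimator {A_filtered:.3f}")
```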


Author(s): Tiangang Cui, Sergey Dolgov

Abstract: Characterising intractable high-dimensional random variables is one of the fundamental challenges in stochastic computation. The recent surge of transport maps offers a mathematical foundation and new insights for tackling this challenge by coupling intractable random variables with tractable reference random variables. This paper generalises the functional tensor-train approximation of the inverse Rosenblatt transport recently developed by Dolgov et al. (Stat Comput 30:603–625, 2020) to a wide class of high-dimensional non-negative functions, such as unnormalised probability density functions. First, we extend the inverse Rosenblatt transform to enable the transport to general reference measures other than the uniform measure. We develop an efficient procedure to compute this transport from a squared tensor-train decomposition which preserves the monotonicity. More crucially, we integrate the proposed order-preserving functional tensor-train transport into a nested variable transformation framework inspired by the layered structure of deep neural networks. The resulting deep inverse Rosenblatt transport significantly expands the capability of tensor approximations and transport maps to random variables with complicated nonlinear interactions and concentrated density functions. We demonstrate the efficiency of the proposed approach on a range of applications in statistical learning and uncertainty quantification, including parameter estimation for dynamical systems and inverse problems constrained by partial differential equations.
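To make the basic transport mechanism concrete (without the tensor-train factorisation or the deep, layered construction of the paper), the Python sketch below realises an inverse Rosenblatt transform in two dimensions by inverting the marginal and conditional CDFs of a toy unnormalised density on a grid; the density, the grid, and the reference (uniform) measure are assumptions.

```python
import numpy as np

def unnormalised_density(x1, x2):
    """Toy banana-shaped unnormalised density; purely an assumption for the demo."""
    return np.exp(-0.5 * (x1 ** 2 + (x2 - x1 ** 2) ** 2))

n = 400
grid = np.linspace(-4.0, 4.0, n)
X1, X2 = np.meshgrid(grid, grid, indexing="ij")
pi = unnormalised_density(X1, X2)                         # shape (n, n)

# Marginal CDF of x1 and conditional CDF of x2 given x1 on the grid.
marg1 = pi.sum(axis=1)
cdf1 = np.cumsum(marg1) / marg1.sum()
cond_cdf2 = np.cumsum(pi, axis=1) / pi.sum(axis=1, keepdims=True)

def inverse_rosenblatt(u):
    """Map a uniform reference sample u in [0,1]^2 to a sample of the target density."""
    i = min(np.searchsorted(cdf1, u[0]), n - 1)           # invert the marginal CDF of x1
    j = min(np.searchsorted(cond_cdf2[i], u[1]), n - 1)   # invert the conditional CDF of x2 | x1
    return grid[i], grid[j]

rng = np.random.default_rng(1)
samples = np.array([inverse_rosenblatt(u) for u in rng.random((5000, 2))])
print("sample mean of the transported points:", samples.mean(axis=0))
```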


Author(s): Gilad Lerman, Yunpeng Shi

Abstract: We propose a general framework for solving the group synchronization problem, where we focus on the setting of adversarial or uniform corruption and sufficiently small noise. Specifically, we apply a novel message passing procedure that uses cycle consistency information in order to estimate the corruption levels of group ratios and consequently solve the synchronization problem in our setting. We first explain why the group cycle consistency information is essential for effectively solving group synchronization problems. We then establish exact recovery and linear convergence guarantees for the proposed message passing procedure under a deterministic setting with adversarial corruption. These guarantees hold as long as the ratio of corrupted cycles per edge is bounded by a reasonable constant. We also establish the stability of the proposed procedure to sub-Gaussian noise. We further establish exact recovery with high probability under a common uniform corruption model.
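A simplified, one-pass Python illustration of why cycle consistency helps (not the paper's iterative message passing procedure): for angular synchronization on a complete graph, the average inconsistency of 3-cycles through an edge separates corrupted from clean edges. The corruption model, graph, and parameters below are assumptions.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n_nodes, p_corrupt = 12, 0.2

# Ground-truth group elements: angles theta_i; true ratios are theta_i - theta_j (mod 2*pi).
theta = rng.uniform(0, 2 * np.pi, n_nodes)

edges = list(combinations(range(n_nodes), 2))     # complete graph for simplicity
ratios, corrupted = {}, {}
for (i, j) in edges:
    if rng.random() < p_corrupt:                  # corrupted edge: arbitrary ratio
        ratios[(i, j)], corrupted[(i, j)] = rng.uniform(0, 2 * np.pi), True
    else:                                         # clean edge: true group ratio
        ratios[(i, j)], corrupted[(i, j)] = (theta[i] - theta[j]) % (2 * np.pi), False

def ratio(i, j):
    """Oriented edge ratio: g_ji = -g_ij for angles."""
    return ratios[(i, j)] if i < j else (-ratios[(j, i)]) % (2 * np.pi)

def cycle_inconsistency(i, j, k):
    """Deviation of g_ij + g_jk + g_ki from 0 (mod 2*pi) over the 3-cycle (i, j, k)."""
    s = (ratio(i, j) + ratio(j, k) + ratio(k, i)) % (2 * np.pi)
    return min(s, 2 * np.pi - s)

# Corruption-level estimate per edge: mean inconsistency over all 3-cycles through it.
est = {e: np.mean([cycle_inconsistency(e[0], e[1], k)
                   for k in range(n_nodes) if k not in e]) for e in edges}

clean = [est[e] for e in edges if not corrupted[e]]
bad   = [est[e] for e in edges if corrupted[e]]
print(f"mean estimate: clean edges {np.mean(clean):.2f}, corrupted edges {np.mean(bad):.2f}")
```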


Author(s): P. Breiding, F. Sottile, J. Woodcock

Abstract: We initiate a study of the Euclidean distance degree in the context of sparse polynomials. Specifically, we consider a hypersurface $f=0$ defined by a polynomial $f$ that is general given its support, such that the support contains the origin. We show that the Euclidean distance degree of $f=0$ equals the mixed volume of the Newton polytopes of the associated Lagrange multiplier equations. We discuss the implication of our result for computational complexity and give a formula for the Euclidean distance degree when the Newton polytope is a rectangular parallelepiped.
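A small symbolic sketch of the Lagrange multiplier formulation that underlies the Euclidean distance degree, using the unit circle as a toy hypersurface; the polynomial and the data point are assumptions, and the paper's mixed-volume formula for sparse polynomials is not computed here.

```python
import sympy as sp

# ED degree of a hypersurface f = 0: the number of complex critical points of the
# squared distance from a generic data point (u, v) restricted to f = 0. For the
# unit circle this number is 2, which the symbolic solve below confirms.
x, y, lam = sp.symbols("x y lam")
u, v = sp.Rational(3, 7), sp.Rational(2, 5)        # generic rational data point

f = x**2 + y**2 - 1                                # toy hypersurface: the unit circle

# Lagrange multiplier (critical point) equations for (x-u)^2 + (y-v)^2 on f = 0.
eqs = [f,
       (x - u) - lam * sp.diff(f, x),
       (y - v) - lam * sp.diff(f, y)]

solutions = sp.solve(eqs, [x, y, lam], dict=True)
print("number of critical points (ED degree of the circle):", len(solutions))  # expect 2
```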

