Convergence rates of high dimensional Smolyak quadrature

2020 ◽  
Vol 54 (4) ◽  
pp. 1259-1307
Author(s):  
Jakob Zech ◽  
Christoph Schwab

We analyse convergence rates of Smolyak integration for parametric maps u: U → X taking values in a Banach space X and defined on the parameter domain U = [−1,1]^ℕ. For parametric maps which are sparse, as quantified by summability of their Taylor polynomial chaos coefficients, dimension-independent convergence rates superior to N-term approximation rates under the same sparsity are achievable. We propose a concrete Smolyak algorithm that identifies, a priori, integrand-adapted sets of active multiindices (and thereby unisolvent sparse grids of quadrature points) via upper bounds on the integrands’ Taylor gpc coefficients. For so-called “(b,ε)-holomorphic” integrands u with b ∈ ℓp(ℕ) for some p ∈ (0, 1), we prove the dimension-independent convergence rate 2/p − 1 in terms of the number of quadrature points. The proposed Smolyak algorithm is proved to yield (essentially) the same rate in terms of the total computational cost for both nested and non-nested univariate quadrature points. Numerical experiments and a mathematical sparsity analysis accounting for cancellations in the quadratures and in the combination formula demonstrate that the asymptotic rate 2/p − 1 is realized computationally for a moderate number of quadrature points under certain circumstances. A refined analysis of model integrand classes shows that otherwise a generally large preasymptotic range precludes reaching the asymptotic rate 2/p − 1 for practically relevant numbers of quadrature points.
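To make the combination structure concrete, the following Python sketch evaluates a Smolyak quadrature for a given downward-closed multiindex set in a fixed, small dimension, using non-nested Gauss-Legendre rules on [−1,1] (level l mapped to l+1 nodes). It only illustrates the combination formula; the paper's a priori selection of the index set from bounds on the Taylor gpc coefficients is not reproduced, and the example set Lambda and integrand are illustrative choices.

# Minimal sketch of the Smolyak combination formula (not the paper's algorithm).
import itertools
import numpy as np


def univariate_rule(level):
    """Gauss-Legendre rule with level+1 nodes on [-1, 1] (non-nested)."""
    nodes, weights = np.polynomial.legendre.leggauss(level + 1)
    return nodes, weights


def tensor_quadrature(f, index):
    """Full tensor-product quadrature Q_nu f for a multiindex nu."""
    rules = [univariate_rule(l) for l in index]
    total = 0.0
    for combo in itertools.product(*[range(len(r[0])) for r in rules]):
        point = np.array([rules[j][0][k] for j, k in enumerate(combo)])
        weight = np.prod([rules[j][1][k] for j, k in enumerate(combo)])
        total += weight * f(point)
    return total


def smolyak_quadrature(f, Lambda):
    """Combination formula Q_Lambda f = sum_nu c_nu Q_nu f over a
    downward-closed set Lambda of multiindices (tuples of ints)."""
    Lambda = set(Lambda)
    d = len(next(iter(Lambda)))
    result = 0.0
    for nu in Lambda:
        # c_nu = sum over binary shifts e with nu + e still in Lambda
        c = 0
        for e in itertools.product((0, 1), repeat=d):
            if tuple(n + s for n, s in zip(nu, e)) in Lambda:
                c += (-1) ** sum(e)
        if c != 0:
            result += c * tensor_quadrature(f, nu)
    return result


# Example: anisotropic downward-closed index set in d = 2 and an integrand
# with decaying parameter influence, integrated over [-1, 1]^2.
Lambda = {(0, 0), (1, 0), (2, 0), (0, 1), (1, 1)}
f = lambda y: 1.0 / (2.0 + 0.9 * y[0] + 0.3 * y[1])
print(smolyak_quadrature(f, Lambda))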

2018 ◽  
Vol 52 (2) ◽  
pp. 631-657 ◽  
Author(s):  
Peng Chen

In this work, we analyze the dimension-independent convergence of an abstract sparse quadrature scheme for the numerical integration of functions of high-dimensional parameters with Gaussian measure. Under certain assumptions on the exactness and boundedness of the univariate quadrature rules, as well as regularity assumptions on the parametric functions with respect to the parameters, we prove that the convergence of the sparse quadrature error is independent of the number of parameter dimensions. Moreover, we propose both an a priori and an a posteriori scheme for the construction of a practical sparse quadrature rule and perform numerical experiments to demonstrate their dimension-independent convergence rates.
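As an illustration of an a posteriori construction in this spirit (dimension-adaptive sparse quadrature, not the author's code), the following Python sketch builds the index set greedily for a Gaussian measure: margin indices that keep the set downward closed are ranked by the magnitude of their hierarchical-surplus contribution and added one at a time. The normalized Gauss-Hermite rules, the budget, and the test integrand are illustrative.

# Greedy, a posteriori sparse quadrature sketch for a Gaussian measure.
import itertools
import numpy as np


def gauss_hermite(level):
    """Gauss-Hermite rule with level+1 nodes, weights normalized so they
    integrate against the standard normal density."""
    x, w = np.polynomial.hermite_e.hermegauss(level + 1)
    return x, w / np.sqrt(2.0 * np.pi)


def tensor_rule(f, nu):
    rules = [gauss_hermite(l) for l in nu]
    total = 0.0
    for combo in itertools.product(*[range(len(r[0])) for r in rules]):
        y = np.array([rules[j][0][k] for j, k in enumerate(combo)])
        w = np.prod([rules[j][1][k] for j, k in enumerate(combo)])
        total += w * f(y)
    return total


def difference(f, nu):
    """Hierarchical surplus Delta_nu f = sum_e (-1)^{|e|} Q_{nu-e} f
    (terms with a negative entry are dropped)."""
    d = len(nu)
    total = 0.0
    for e in itertools.product((0, 1), repeat=d):
        mu = tuple(n - s for n, s in zip(nu, e))
        if min(mu) >= 0:
            total += (-1) ** sum(e) * tensor_rule(f, mu)
    return total


def adaptive_sparse_quadrature(f, d, budget=20):
    Lambda = {(0,) * d}
    value = tensor_rule(f, (0,) * d)
    while len(Lambda) < budget:
        # admissible candidates: one level up in one coordinate,
        # keeping Lambda downward closed
        candidates = set()
        for nu in Lambda:
            for j in range(d):
                cand = tuple(n + (1 if k == j else 0) for k, n in enumerate(nu))
                if cand not in Lambda and all(
                    tuple(c - (1 if k == i else 0) for k, c in enumerate(cand)) in Lambda
                    for i in range(d) if cand[i] > 0
                ):
                    candidates.add(cand)
        surpluses = {nu: difference(f, nu) for nu in candidates}
        best = max(surpluses, key=lambda nu: abs(surpluses[nu]))
        Lambda.add(best)
        value += surpluses[best]
    return value, Lambda


# Example in d = 4: integrand depending only mildly on later coordinates.
f = lambda y: np.exp(0.5 * y[0] + 0.2 * y[1] + 0.05 * y[2] + 0.01 * y[3])
val, Lambda = adaptive_sparse_quadrature(f, d=4, budget=15)
print(val, sorted(Lambda))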


2018 ◽  
Vol 26 (3) ◽  
pp. 347-380 ◽  
Author(s):  
Stephen Kelly ◽  
Malcolm I. Heywood

Algorithms that learn through environmental interaction and delayed rewards, or reinforcement learning (RL), increasingly face the challenge of scaling to dynamic, high-dimensional, and partially observable environments. Significant attention is being paid to frameworks from deep learning, which scale to high-dimensional data by decomposing the task through multilayered neural networks. While effective, the representation is complex and computationally demanding. In this work, we propose a framework based on genetic programming which adaptively complexifies policies through interaction with the task. We make a direct comparison with several deep reinforcement learning frameworks in the challenging Atari video game environment as well as more traditional reinforcement learning frameworks based on a priori engineered features. Results indicate that the proposed approach matches the quality of deep learning while being a minimum of three orders of magnitude simpler with respect to model complexity. This results in real-time operation of the champion RL agent without recourse to specialized hardware support. Moreover, the approach is capable of evolving solutions to multiple game titles simultaneously with no additional computational cost. In this case, agent behaviours for an individual game as well as single agents capable of playing all games emerge from the same evolutionary run.


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3327
Author(s):  
Vicente Román ◽  
Luis Payá ◽  
Adrián Peidró ◽  
Mónica Ballesta ◽  
Oscar Reinoso

Over the last few years, mobile robotics has developed rapidly thanks to the wide variety of problems that can be addressed with this technology. An autonomous mobile robot must be able to operate in a priori unknown environments, planning its trajectory and navigating to the required target points. To this end, it is crucial to solve the mapping and localization problems accurately and at an acceptable computational cost. Omnidirectional vision systems have emerged as a robust choice thanks to the large amount of information they capture about the environment. The images must be processed to extract the relevant information that permits solving the mapping and localization problems robustly. The classical frameworks to address these problems are based on the extraction, description and tracking of local features or landmarks. More recently, however, a new family of methods has emerged as a robust alternative in mobile robotics. It consists of describing each image as a whole, which leads to conceptually simpler algorithms. While methods based on local features have been extensively studied and compared in the literature, those based on global appearance still require deeper study to characterize their performance. In this work, a comparative evaluation of six global-appearance description techniques in localization tasks is carried out, both in terms of accuracy and computational cost. Sets of images captured in a real environment are used for this purpose, including typical phenomena such as changes in lighting conditions, visual aliasing, partial occlusions and noise.
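The global-appearance pipeline can be summarized with a small Python sketch: every map image is condensed into one holistic descriptor, and a query image is localized by nearest-neighbour search among those descriptors. The block-average descriptor and the synthetic images below are deliberately simple stand-ins for the six descriptors and the real omnidirectional datasets evaluated in the paper.

# Appearance-based localization with a simple global (holistic) descriptor.
import numpy as np


def global_descriptor(image, grid=(8, 16)):
    """Describe the whole image by the mean intensity of each cell of a
    coarse grid, then L2-normalize.  image: 2-D float array."""
    h, w = image.shape
    gh, gw = grid
    cells = image[: h - h % gh, : w - w % gw].reshape(
        gh, (h - h % gh) // gh, gw, (w - w % gw) // gw
    )
    desc = cells.mean(axis=(1, 3)).ravel()
    return desc / (np.linalg.norm(desc) + 1e-12)


def localize(query_image, map_images, map_poses):
    """Return the pose of the map image whose descriptor is closest to the
    query descriptor (Euclidean distance in descriptor space)."""
    q = global_descriptor(query_image)
    dists = [np.linalg.norm(q - global_descriptor(m)) for m in map_images]
    return map_poses[int(np.argmin(dists))]


# Usage with synthetic data (stand-ins for real omnidirectional images):
rng = np.random.default_rng(0)
map_images = [rng.random((128, 512)) for _ in range(10)]
map_poses = list(range(10))
query = map_images[3] + 0.05 * rng.random((128, 512))   # noisy revisit of pose 3
print(localize(query, map_images, map_poses))            # expected: 3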


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 645
Author(s):  
Muhammad Farooq ◽  
Sehrish Sarfraz ◽  
Christophe Chesneau ◽  
Mahmood Ul Hassan ◽  
Muhammad Ali Raza ◽  
...  

Expectiles have gained considerable attention in recent years due to their wide applications in many areas. In this study, a k-nearest neighbours approach combined with the asymmetric least squares loss function, called ex-kNN, is proposed for computing expectiles. Firstly, the effect of various distance measures on ex-kNN is evaluated in terms of test error and computational time. It is found that the Canberra, Lorentzian, and Soergel distance measures lead to the lowest test error, whereas the Euclidean, Canberra, and Average of (L1,L∞) measures lead to a low computational cost. Secondly, the performance of ex-kNN is compared with the existing packages er-boost and ex-svm for computing expectiles on nine real-life examples. Depending on the nature of the data, ex-kNN showed two to ten times better performance than er-boost and comparable performance to ex-svm in terms of test error. Computationally, ex-kNN is found to be two to five times faster than ex-svm and much faster than er-boost, particularly in the case of high-dimensional data.
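A minimal Python sketch of the idea behind ex-kNN (not the authors' package) follows: the conditional tau-expectile at a query point is obtained by collecting the responses of its k nearest neighbours and computing their expectile as the fixed point of an asymmetrically weighted mean, which is the first-order condition of the asymmetric least squares loss. The Euclidean default and the synthetic data are illustrative; the distance argument is where alternatives such as Canberra, Lorentzian, or Soergel would be plugged in.

# k-nearest-neighbour expectile prediction via asymmetric least squares.
import numpy as np


def expectile(y, tau, tol=1e-10, max_iter=100):
    """tau-expectile of a sample: argmin_m sum_i |tau - 1{y_i < m}| (y_i - m)^2."""
    m = y.mean()
    for _ in range(max_iter):
        w = np.where(y >= m, tau, 1.0 - tau)   # asymmetric weights
        m_new = np.sum(w * y) / np.sum(w)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m


def ex_knn_predict(X_train, y_train, x_query, k=10, tau=0.9,
                   distance=lambda a, b: np.linalg.norm(a - b, axis=1)):
    """Predict the conditional tau-expectile at x_query from the responses
    of its k nearest training points."""
    d = distance(X_train, x_query)
    nearest = np.argsort(d)[:k]
    return expectile(y_train[nearest], tau)


# Usage on synthetic heteroscedastic data:
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 3))
y = X[:, 0] + (0.5 + 0.5 * np.abs(X[:, 1])) * rng.normal(size=500)
print(ex_knn_predict(X, y, x_query=np.zeros(3), k=25, tau=0.9))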


Mathematics ◽  
2021 ◽  
Vol 9 (3) ◽  
pp. 222
Author(s):  
Juan C. Laria ◽  
M. Carmen Aguilera-Morillo ◽  
Enrique Álvarez ◽  
Rosa E. Lillo ◽  
Sara López-Taruella ◽  
...  

Over the last decade, regularized regression methods have offered alternatives for performing multi-marker analysis and feature selection in a whole-genome context. The process of defining a list of genes that characterizes an expression profile remains unclear. It currently relies on advanced statistics and can adopt an agnostic point of view or include some a priori knowledge, but overfitting remains a problem. This paper introduces a methodology to deal with the variable selection and model estimation problems in the high-dimensional setting, which can be particularly useful in the whole-genome context. Results are validated using simulated data and a real dataset from a triple-negative breast cancer study.
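For orientation, the following Python snippet illustrates the generic high-dimensional set-up (it is not the paper's methodology): an elastic-net fit with cross-validated penalty on simulated data with far more features than samples, after which the non-zero coefficients are read off as the selected variables.

# Regularized regression for feature selection when p >> n (illustrative).
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)
n, p = 80, 2000                           # n samples, p candidate genes (p >> n)
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [2.0, -1.5, 1.0, 0.8, -0.6]    # only 5 truly active features
y = X @ beta + 0.5 * rng.normal(size=n)

model = ElasticNetCV(l1_ratio=0.9, cv=5, n_alphas=50, max_iter=10000)
model.fit(X, y)

selected = np.flatnonzero(model.coef_)     # indices with non-zero coefficients
print("number of selected features:", selected.size)
print("first selected indices:", selected[:10])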


2021 ◽  
Vol 47 (1) ◽  
Author(s):  
Fabian Laakmann ◽  
Philipp Petersen

We demonstrate that deep neural networks with the ReLU activation function can efficiently approximate the solutions of various types of parametric linear transport equations. For non-smooth initial conditions, the solutions of these PDEs are high-dimensional and non-smooth; their approximation therefore suffers from the curse of dimensionality. We demonstrate that, through their inherent compositionality, deep neural networks can resolve the characteristic flow underlying the transport equations and thereby allow approximation rates independent of the parameter dimension.
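The compositional mechanism is easiest to see in the simplest parametric case (an illustration, not the paper's general setting): for a velocity a(y) that depends on the parameter y but not on (x,t), the method of characteristics writes the solution as the initial condition composed with an affine flow map,

\begin{align*}
  \partial_t u(x,t,y) + a(y)\cdot\nabla_x u(x,t,y) &= 0,
  \qquad u(x,0,y) = u_0(x,y),\\
  \Longrightarrow\quad u(x,t,y) &= u_0\bigl(x - t\,a(y),\, y\bigr)
  = \bigl(u_0(\cdot,y)\circ \Phi_{t,y}\bigr)(x),
  \qquad \Phi_{t,y}(x) = x - t\,a(y).
\end{align*}

A ReLU network emulating u_0, composed with a network emulating the flow map (x,t,y) ↦ x − t a(y), inherits exactly this structure, which is the mechanism that yields approximation rates independent of the parameter dimension.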


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Sansit Patnaik ◽  
Fabio Semperlotti

This study presents the formulation, the numerical solution, and the validation of a theoretical framework based on the concept of variable-order mechanics and capable of modeling dynamic fracture in brittle and quasi-brittle solids. More specifically, the reformulation of the elastodynamic problem via variable- and fractional-order operators enables a unique and extremely powerful approach to modeling the nucleation and propagation of cracks in solids under dynamic loading. The resulting dynamic fracture formulation is fully evolutionary, hence enabling the analysis of complex crack patterns without requiring any a priori assumption on the damage location and the growth path, and without using any algorithm to numerically track the evolving crack surface. The evolutionary nature of the variable-order formalism also removes the need for additional partial differential equations to predict the evolution of the damage field, hence suggesting a conspicuous reduction in complexity and computational cost. Remarkably, the variable-order formulation is naturally capable of capturing extremely detailed features characteristic of dynamic crack propagation, such as crack surface roughening as well as single and multiple branching. The accuracy and robustness of the proposed variable-order formulation are validated by comparing the results of direct numerical simulations with experimental data for typical benchmark problems available in the literature.
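For orientation only (this is a standard member of the operator class, not the authors' specific constitutive model), a variable-order Caputo-type derivative lets the differentiation order be a field α(x,t) ∈ (0,1) that a fracture formulation can tie to the local damage state:

\begin{equation*}
  {}^{C}D_{t}^{\alpha(x,t)} f(t)
  = \frac{1}{\Gamma\bigl(1-\alpha(x,t)\bigr)}
    \int_{0}^{t} (t-s)^{-\alpha(x,t)}\, f'(s)\, \mathrm{d}s,
  \qquad 0 < \alpha(x,t) < 1 .
\end{equation*}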


2021 ◽  
Vol 143 (8) ◽  
Author(s):  
Opeoluwa Owoyele ◽  
Pinaki Pal ◽  
Alvaro Vidal Torreira

The use of machine learning (ML)-based surrogate models is a promising technique to significantly accelerate simulation-driven design optimization of internal combustion (IC) engines, due to the high computational cost of running computational fluid dynamics (CFD) simulations. However, training the ML models requires hyperparameter selection, which is often done using trial-and-error and domain expertise. Another challenge is that the data required to train these models are often unknown a priori. In this work, we present an automated hyperparameter selection technique coupled with an active learning approach to address these challenges. The technique presented in this study involves the use of a Bayesian approach to optimize the hyperparameters of the base learners that make up a super learner model. In addition to performing hyperparameter optimization (HPO), an active learning approach is employed, where the process of data generation using simulations, ML training, and surrogate optimization is performed repeatedly to refine the solution in the vicinity of the predicted optimum. The proposed approach is applied to the optimization of a compression ignition engine with control parameters relating to fuel injection, in-cylinder flow, and thermodynamic conditions. It is demonstrated that by automatically selecting the best values of the hyperparameters, a 1.6% improvement in merit value is obtained, compared to an improvement of 1.0% with default hyperparameters. Overall, the framework introduced in this study reduces the need for technical expertise in training ML models for optimization while also reducing the number of simulations needed for performing surrogate-based design optimization.
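The workflow can be sketched in a few lines of Python (a schematic only): an analytic function stands in for the CFD-evaluated merit function, a small grid search stands in for the Bayesian hyperparameter optimization of the super learner's base learners, and random candidate sampling stands in for the surrogate optimizer. The active-learning loop repeatedly refits the surrogate, evaluates the "simulation" at the surrogate's predicted optimum, and appends the result to the training set.

# Schematic active-learning loop with hyperparameter-tuned surrogate.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)


def merit(x):
    """Cheap analytic stand-in for a CFD-evaluated merit function."""
    return -np.sum((x - 0.3) ** 2) + 0.01 * np.sin(20 * x[0])


def fit_surrogate(X, y):
    """Surrogate with tuned hyperparameters (grid search as a stand-in
    for the Bayesian HPO described in the abstract)."""
    search = GridSearchCV(
        KNeighborsRegressor(),
        {"n_neighbors": [2, 3, 5], "weights": ["uniform", "distance"]},
        cv=3,
    )
    search.fit(X, y)
    return search.best_estimator_


d, n_init, n_rounds = 4, 20, 10
X = rng.uniform(0, 1, size=(n_init, d))          # initial design of experiments
y = np.array([merit(x) for x in X])

for _ in range(n_rounds):
    surrogate = fit_surrogate(X, y)
    candidates = rng.uniform(0, 1, size=(2000, d))
    x_next = candidates[np.argmax(surrogate.predict(candidates))]
    # evaluate the "simulation" at the predicted optimum and refine (active learning)
    X = np.vstack([X, x_next])
    y = np.append(y, merit(x_next))

best = X[np.argmax(y)]
print("best control parameters:", np.round(best, 3), "merit:", round(float(y.max()), 4))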


Author(s):  
Muhammad Hassan ◽  
Benjamin Stamm

In this article, we analyse an integral equation of the second kind that represents the solution for N interacting dielectric spherical particles undergoing mutual polarisation. A traditional analysis cannot quantify the scaling of the stability constants, and thus of the approximation error, with respect to the number N of dielectric spheres involved. We develop a new a priori error analysis that demonstrates N-independent stability of the continuous and discrete formulations of the integral equation. Consequently, we obtain convergence rates that are independent of N.
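In generic notation (illustrative, not taken from the article), the point of the analysis is a stability bound for the second-kind equation that does not degrade with N,

\begin{equation*}
  (\mathrm{I} + \mathcal{K}_N)\,\sigma = f,
  \qquad
  \bigl\|(\mathrm{I} + \mathcal{K}_N)^{-1}\bigr\| \le C
  \quad\text{with } C \text{ independent of } N,
\end{equation*}

so that a Galerkin discretization inherits quasi-optimality, \|\sigma - \sigma_h\| \le C' \inf_{v_h} \|\sigma - v_h\|, with constants, and hence convergence rates, that do not deteriorate as N grows.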


Author(s):  
Jose Carrillo ◽  
Shi Jin ◽  
Lei Li ◽  
Yuhua Zhu

We improve the consensus-based optimization method recently proposed in [R. Pinnau, C. Totzeck, O. Tse and S. Martin, Math. Models Methods Appl. Sci., 27(01):183–204, 2017], a gradient-free optimization method for general nonconvex functions. We first replace the isotropic geometric Brownian motion by a component-wise one, thus removing the dimensionality dependence of the drift rate and making the method more competitive for high-dimensional optimization problems. Secondly, we utilize random mini-batch ideas to reduce the computational cost of calculating the weighted average toward which the individual particles relax. For its mean-field limit, a nonlinear Fokker–Planck equation, we prove, in both the time-continuous and semi-discrete settings, that the convergence of the method, which is exponential in time, is guaranteed under parameter constraints independent of the dimensionality. We also conduct numerical tests on high-dimensional problems to check the success rate of the method.
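A minimal Python sketch of the two modifications follows (the parameters and the Rastrigin test function are illustrative, not the values analysed in the paper): the diffusion acts component-wise on each particle's deviation from the consensus point, and the weighted consensus point itself is computed on a random mini-batch of particles.

# Consensus-based optimization with component-wise noise and mini-batch averaging.
import numpy as np

rng = np.random.default_rng(0)


def rastrigin(x):
    """A standard nonconvex, high-dimensional test function."""
    return 10 * x.shape[-1] + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x), axis=-1)


def cbo_minimize(f, d, n_particles=200, n_steps=2000, batch=50,
                 dt=0.01, lam=1.0, sigma=1.0, beta=30.0):
    X = rng.uniform(-3, 3, size=(n_particles, d))
    for _ in range(n_steps):
        # weighted consensus point computed on a random mini-batch
        idx = rng.choice(n_particles, size=batch, replace=False)
        fb = f(X[idx])
        w = np.exp(-beta * (fb - fb.min()))          # shifted for numerical stability
        m = (w[:, None] * X[idx]).sum(axis=0) / w.sum()
        # drift toward the consensus point plus component-wise (anisotropic) noise
        drift = -lam * (X - m) * dt
        noise = sigma * (X - m) * rng.normal(size=X.shape) * np.sqrt(dt)
        X = X + drift + noise
    return X[np.argmin(f(X))]


x_star = cbo_minimize(rastrigin, d=20)
print("approximate minimizer norm:", float(np.linalg.norm(x_star)))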

