A Block-Sparse Tensor Train Format for Sample-Efficient High-Dimensional Polynomial Regression

Author(s):
Michael Götte
Reinhold Schneider
Philipp Trunschke

Low-rank tensors are an established framework for the parametrization of multivariate polynomials. We propose to extend this framework with the concept of block sparsity to efficiently parametrize homogeneous multivariate polynomials with low-rank tensors. This provides a representation of general multivariate polynomials as a sum of homogeneous multivariate polynomials, each represented by a block-sparse, low-rank tensor. We show that this sum can be concisely represented by a single block-sparse, low-rank tensor. We further identify cases where low-rank tensors are particularly well suited by showing that, for banded symmetric tensors of homogeneous polynomials, the block sizes in the block-sparse multivariate polynomial space can be bounded independently of the number of variables. We showcase this format by applying it to high-dimensional least-squares regression problems, where it demonstrates improved computational resource utilization and sample efficiency.
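To make the tensor-train parametrization concrete, the following minimal sketch evaluates a multivariate polynomial whose coefficient tensor is stored as a tensor train. The monomial features and random cores are illustrative assumptions; the sketch does not reproduce the paper's block-sparse structure, which additionally restricts each core so that the rank index tracks the accumulated polynomial degree.

```python
import numpy as np

def tt_polynomial_eval(cores, x):
    """Evaluate p(x) = sum_i T[i1,...,id] * x1^i1 * ... * xd^id, where the
    coefficient tensor is stored in tensor-train form:
    T[i1,...,id] = cores[0][:, i1, :] @ cores[1][:, i2, :] @ ... @ cores[-1][:, id, :].
    """
    result = np.ones((1, 1))
    for core, xk in zip(cores, x):
        _, m, _ = core.shape
        features = xk ** np.arange(m)                  # monomial features [1, xk, xk^2, ...]
        result = result @ np.einsum("imj,m->ij", core, features)
    return result.item()

# Illustrative rank-2 tensor train in d = 5 variables, degree <= 2 per variable.
rng = np.random.default_rng(0)
d, m, r = 5, 3, 2
cores = [rng.standard_normal((1 if k == 0 else r, m, 1 if k == d - 1 else r))
         for k in range(d)]
print(tt_polynomial_eval(cores, rng.standard_normal(d)))
```

The storage cost grows linearly in the number of variables d for fixed rank r, which is what makes the format attractive in high dimensions.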

2019
Vol. 19 (1)
pp. 39-53
Author(s):
Martin Eigel
Johannes Neumann
Reinhold Schneider
Sebastian Wolf

Abstract This paper examines a completely non-intrusive, sample-based method for the computation of functional low-rank solutions of high-dimensional parametric random PDEs, which have become an area of intensive research in Uncertainty Quantification (UQ). In order to obtain a generalized polynomial chaos representation of the approximate stochastic solution, a novel black-box rank-adapted tensor reconstruction procedure is proposed. The performance of the described approach is illustrated with several numerical examples and compared to (Quasi-)Monte Carlo sampling.
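The rank-adaptive tensor reconstruction itself is beyond a few lines, but the non-intrusive, sample-based setting can be illustrated with an ordinary least-squares fit of generalized polynomial chaos coefficients. The Legendre basis, total-degree truncation, and toy target below are assumptions of this sketch, not the paper's method; a low-rank tensor format would replace the dense coefficient vector.

```python
import numpy as np
from itertools import product
from numpy.polynomial.legendre import legvander

def pce_least_squares(samples, values, degree):
    """Fit polynomial chaos coefficients from point samples by least squares.

    samples: (N, d) points in [-1, 1]^d, values: (N,) model evaluations.
    Uses a total-degree Legendre basis as a stand-in for the stochastic basis.
    """
    N, d = samples.shape
    V = [legvander(samples[:, k], degree) for k in range(d)]   # (N, degree+1) each
    idx = [a for a in product(range(degree + 1), repeat=d) if sum(a) <= degree]
    A = np.stack([np.prod([V[k][:, a[k]] for k in range(d)], axis=0)
                  for a in idx], axis=1)                       # design matrix (N, #basis)
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
    return dict(zip(idx, coeffs))

# Toy surrogate: recover a bivariate quadratic from 200 random samples.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = X[:, 0] ** 2 + 0.5 * X[:, 0] * X[:, 1]
coeffs = pce_least_squares(X, y, degree=2)
```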


2016
Vol. 6 (2)
pp. 109-130
Author(s):
Siu-Long Lei
Xu Chen
Xinhe Zhang

Abstract High-dimensional two-sided space-fractional diffusion equations with variable diffusion coefficients are discussed. The problems can be solved by an implicit finite difference scheme that is proven to be uniquely solvable, unconditionally stable, and first-order convergent in the infinity norm. A nonsingular multilevel circulant preconditioner is proposed to efficiently accelerate the convergence of the Krylov subspace linear system solver. Under certain conditions, the preconditioned matrix is the sum of the identity matrix, a matrix with small norm, and a matrix with low rank, which ensures fast convergence. Moreover, the preconditioner is practical, with an O(N log N) operation cost and O(N) memory requirement. Illustrative numerical examples are also presented.
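As a rough illustration of circulant preconditioning for Krylov solvers, the sketch below builds a Strang-type circulant approximation of a one-dimensional Toeplitz system and applies its inverse via the FFT inside SciPy's GMRES. The 1D setting, the particular stencil, and the Strang choice are assumptions of this sketch, not the paper's multilevel construction.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, gmres

def strang_circulant_eigs(col, row):
    """Eigenvalues of the Strang circulant approximation of a Toeplitz matrix
    with first column `col` and first row `row` (kept central diagonals)."""
    n = len(col)
    c = np.zeros(n)
    c[: n // 2 + 1] = col[: n // 2 + 1]
    c[n // 2 + 1 :] = row[1 : n - n // 2][::-1]
    return np.fft.fft(c)                       # circulants are diagonalized by the FFT

# Toy diagonally dominant Toeplitz system standing in for one level of the
# multilevel case.
n = 256
col = np.zeros(n); col[0], col[1] = 2.5, -1.0
A = toeplitz(col, col)
eigs = strang_circulant_eigs(col, col)

# Apply the circulant inverse in O(n log n) per matvec via the FFT.
M_inv = LinearOperator((n, n),
                       matvec=lambda v: np.real(np.fft.ifft(np.fft.fft(v) / eigs)))

b = np.ones(n)
x, info = gmres(A, b, M=M_inv)
assert info == 0                               # converged
```

Because the circulant captures the dominant Toeplitz structure, the preconditioned spectrum clusters and GMRES converges in few iterations, mirroring the "identity plus small norm plus low rank" decomposition described in the abstract.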


2021
pp. 1-24
Author(s):
G. Kronberger
F. O. de Franca
B. Burlacu
C. Haider
M. Kommenda

Abstract We investigate the addition of constraints on the function image and its derivatives as a means of incorporating prior knowledge in symbolic regression. The approach, called shape-constrained symbolic regression, allows us to enforce, e.g., monotonicity of the function over selected inputs. The aim is to find models that conform to expected behaviour and have improved extrapolation capabilities. We demonstrate the feasibility of the idea and propose and compare two evolutionary algorithms for shape-constrained symbolic regression: (i) an extension of tree-based genetic programming that discards infeasible solutions in the selection step, and (ii) a two-population evolutionary algorithm that separates the feasible from the infeasible solutions. In both algorithms we use interval arithmetic to approximate bounds for models and their partial derivatives. The algorithms are tested on a set of 19 synthetic and 4 real-world regression problems. Both algorithms are able to identify models that conform to the shape constraints, which is not the case for the unmodified symbolic regression algorithms. However, the predictive accuracy of the constrained models is worse on both the training and the test set. Shape-constrained polynomial regression produces the best results for the test set, but also significantly larger models.
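A minimal sketch of the interval-arithmetic feasibility check used to enforce constraints such as monotonicity: if an interval bound on a partial derivative over the whole input box is nonnegative, the candidate model is provably monotone there. The Interval class and the toy model below are illustrative assumptions, not the authors' implementation.

```python
class Interval:
    """Minimal interval arithmetic for bounding expressions over a box."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        other = _as_interval(other)
        return Interval(self.lo + other.lo, self.hi + other.hi)

    __radd__ = __add__

    def __mul__(self, other):
        other = _as_interval(other)
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    __rmul__ = __mul__

def _as_interval(v):
    return v if isinstance(v, Interval) else Interval(v, v)

# Candidate model f(x) = x^2 + 2x with df/dx = 2x + 2 on the box x in [0, 3].
x = Interval(0.0, 3.0)
derivative = 2 * x + 2           # interval enclosure of the partial derivative
print(derivative.lo >= 0)        # True: the model satisfies the monotonicity constraint
```

Interval bounds are conservative, so feasible models are never rejected, but some infeasible-looking models may in fact satisfy the constraint; this pessimism is the usual price of the approach.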

