Fast Linear Interpolation

2021 ◽  
Vol 17 (2) ◽  
pp. 1-15
Author(s):  
Nathan Zhang ◽  
Kevin Canini ◽  
Sean Silva ◽  
Maya Gupta

We present fast implementations of linear interpolation operators for piecewise linear functions and multi-dimensional look-up tables. These operators are commonly used for efficient transformations in image processing and are the core operations needed for lattice models such as deep lattice networks, a popular machine-learning function class for interpretable, shape-constrained learning. We present new strategies for an efficient compiler-based solution using MLIR to accelerate linear interpolation. For real-world machine-learned multi-layer lattice models that use multidimensional linear interpolation, we show these strategies run 5-10× faster on a standard CPU than an optimized C++ interpreter implementation.
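
The 1-D building block referred to here is standard look-up-table interpolation. For reference only, a minimal NumPy sketch of that operator (assuming uniformly spaced keypoints and clamping at the range boundaries; this is not the paper's MLIR-compiled implementation, and the name interp_1d_lut is illustrative) might look like:

```python
import numpy as np

def interp_1d_lut(x, lut, lo=0.0, hi=1.0):
    """Evaluate a piecewise linear function stored as a uniform 1-D look-up table.

    `lut` holds function values at len(lut) evenly spaced keypoints on [lo, hi];
    inputs outside the range are clamped to the endpoints.
    """
    lut = np.asarray(lut, dtype=float)
    n = len(lut) - 1                               # number of cells
    # Map x to a fractional keypoint index, clamped to the table range.
    t = np.clip((np.asarray(x, dtype=float) - lo) / (hi - lo), 0.0, 1.0) * n
    i = np.minimum(t.astype(int), n - 1)           # left keypoint of the cell
    w = t - i                                      # fractional position in the cell
    return (1.0 - w) * lut[i] + w * lut[i + 1]

# Example: a piecewise linear approximation of sqrt on [0, 1] with 5 keypoints.
keypoints = np.sqrt(np.linspace(0.0, 1.0, 5))
print(interp_1d_lut([0.1, 0.5, 0.9], keypoints))
```

The multidimensional lattice case composes the same idea across dimensions, blending the 2^d vertices of the enclosing cell with multilinear weights.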

Author(s):  
Arturo Sarmiento-Reyes ◽  
Luis Hernandez-Martinez ◽  
Miguel Angel Gutierrez de Anda ◽  
Francisco Javier Castro Gonzalez

We describe a sense in which mesh duality is equivalent to Legendre duality. That is, a general pair of meshes, which satisfy a definition of duality for meshes, are shown to be the projection of a pair of piecewise linear functions that are dual to each other in the sense of a Legendre dual transformation. In applications the latter functions can be a tangent plane approximation to a smoother function, and a chordal plane approximation to its Legendre dual. Convex examples include one from meteorology, and also the relation between the Delaunay mesh and the Voronoi tessellation. The latter are shown to be the projections of tangent plane and chordal approximations to the same paraboloid.
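
For context, the Legendre dual transformation referred to here is the standard convex conjugate; stated in its usual form (background, not quoted from the abstract):

```latex
f^{*}(p) \;=\; \sup_{x \in \mathbb{R}^{n}} \bigl( \langle p, x \rangle - f(x) \bigr),
\qquad\text{e.g.}\quad
f(x) = \tfrac{1}{2}\lVert x \rVert^{2}
\;\Longrightarrow\;
f^{*}(p) = \tfrac{1}{2}\lVert p \rVert^{2}.
```

The self-duality of the paraboloid in this example is consistent with the Delaunay/Voronoi pair arising from tangent-plane and chordal approximations to the same surface.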


Author(s):  
Emir Demirovic ◽  
Peter J. Stuckey ◽  
James Bailey ◽  
Jeffrey Chan ◽  
Christopher Leckie ◽  
...  

We study the predict+optimise problem, in which machine learning and combinatorial optimisation must interact to achieve a common goal. These problems are important when optimisation has to be performed on input parameters that are not fully observed and must instead be estimated using machine learning. Our contributions are two-fold: 1) we provide theoretical insight into the properties and computational complexity of predict+optimise problems in general, and 2) we develop a novel framework that, in contrast to related work, is guaranteed to compute the optimal parameters for a linear learning function given any ranking optimisation problem. We illustrate the applicability of our framework for the particular case of the unit-weighted knapsack predict+optimise problem and evaluate it on benchmarks from the literature.
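
As a concrete reading of the unit-weighted knapsack case: with unit weights the optimiser simply selects the `capacity` highest-valued items, and the learning side only supplies the (estimated) values. The sketch below illustrates that problem setting (Python, with illustrative names; it is not the paper's framework for computing optimal parameters):

```python
import numpy as np

def solve_unit_knapsack(values, capacity):
    """Unit-weighted knapsack: every item weighs 1, so the optimum
    simply takes the `capacity` items with the largest values."""
    order = np.argsort(values)[::-1]
    chosen = np.zeros(len(values), dtype=bool)
    chosen[order[:capacity]] = True
    return chosen

def predict_then_optimise(w, features, capacity):
    """Predict item values with a linear model, then optimise over them."""
    predicted_values = features @ w        # linear learning function
    return solve_unit_knapsack(predicted_values, capacity)

# Toy usage: 6 items described by 3 features each, capacity 2.
rng = np.random.default_rng(0)
features = rng.normal(size=(6, 3))
w = np.array([0.5, -0.2, 1.0])             # candidate model parameters
print(predict_then_optimise(w, features, capacity=2))
```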


Algorithms ◽  
2020 ◽  
Vol 13 (7) ◽  
pp. 166
Author(s):  
Andreas Griewank ◽  
Andrea Walther

For piecewise linear functions $f : \mathbb{R}^n \to \mathbb{R}$ we show how their abs-linear representation can be extended to yield simultaneously their decomposition into a convex part $\check f$ and a concave part $\hat f$, including a pair of generalized gradients $\check g, \hat g \in \mathbb{R}^n$. The latter satisfy strict chain rules and can be computed in the reverse mode of algorithmic differentiation, at a small multiple of the cost of evaluating $f$ itself. It is shown how $\check f$ and $\hat f$ can be expressed as a single maximum and a single minimum of affine functions, respectively. The two subgradients $\check g$ and $-\hat g$ are then used to drive DCA algorithms, where the (convex) inner problem can be solved in finitely many steps, e.g., by a Simplex variant or the true steepest descent method. Using a reflection technique to update the gradients of the concave part, one can ensure finite convergence to a local minimizer of $f$, provided the Linear Independence Kink Qualification holds. For piecewise smooth objectives the approach can be used as an inner method for successive piecewise linearization.
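
In symbols, the decomposition described above reads as follows (the affine coefficients $a_i, \alpha_i, b_j, \beta_j$ are generic placeholders, not notation taken from the paper):

```latex
f(x) \;=\; \check f(x) + \hat f(x),
\quad
\check f(x) \;=\; \max_{i}\bigl(a_i^{\top}x + \alpha_i\bigr)\ \text{(convex)},
\quad
\hat f(x) \;=\; \min_{j}\bigl(b_j^{\top}x + \beta_j\bigr)\ \text{(concave)},
```

and the corresponding DCA step, with $\hat g_k$ a generalized gradient of $\hat f$ at the iterate $x_k$, is

```latex
x_{k+1} \;\in\; \operatorname*{arg\,min}_{x}\ \bigl(\check f(x) + \hat g_k^{\top} x\bigr),
```

i.e., each iteration linearizes the concave part at the current iterate and minimizes the remaining convex piecewise linear function, which is the finitely solvable inner problem mentioned in the abstract.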

