Data Interpolation by Near-Optimal Splines with Free Knots Using Linear Programming

Mathematics ◽  
2021 ◽  
Vol 9 (10) ◽  
pp. 1099
Author(s):  
Lakshman S. Thakur ◽  
Mikhail A. Bragin

The problem of obtaining an optimal spline with free knots is tantamount to minimizing derivatives of a nonlinear differentiable function over a Banach space on a compact set. While the problem of data interpolation by quadratic splines has been solved, interpolation by splines of higher orders is far more challenging. In this paper, to overcome difficulties associated with the complexity of the interpolation problem, the interval over which the data points are defined is discretized, and continuous derivatives are replaced by their discrete counterparts. The l∞-norm used for the maximum rth-order curvature (a derivative of order r) is then linearized, and the problem of obtaining a near-optimal spline becomes a linear programming (LP) problem, which can be solved efficiently by standard LP methods, e.g., by the Simplex method implemented in modern software such as CPLEX (interior-point methods additionally guarantee polynomial time). It is shown that, as the mesh of the discretization approaches zero, the resulting near-optimal spline approaches an optimal spline. Splines of the desired accuracy can be obtained by choosing an appropriately fine mesh for the discretization. Using cubic splines as an example, numerical results demonstrate that the LP formulation resulting from the discretization of the interpolation problem can be solved by linear solvers with high computational efficiency, and the resulting spline provides a good approximation to the sought-for optimal spline.
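A minimal sketch of the discretize-and-linearize idea described above, using a uniform grid, the second difference as the discrete curvature, and SciPy's `linprog` in place of CPLEX (the data values, grid size, and use of r = 2 are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data points to interpolate (not from the paper).
x_data = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y_data = np.array([0.0, 0.7, 1.0, 0.7, 0.0])

m = 41                                   # grid size; a finer mesh -> closer to optimal
x = np.linspace(0.0, 1.0, m)
h = x[1] - x[0]
idx = np.array([np.argmin(np.abs(x - xd)) for xd in x_data])  # nearest grid nodes

# Variables: u_0..u_{m-1} (grid values of the spline) and t, a bound on the
# maximum absolute discrete 2nd derivative (the linearized l_inf norm).
n_var = m + 1
cost = np.zeros(n_var)
cost[-1] = 1.0                           # minimize t

rows = []
for j in range(m - 2):
    d2 = np.zeros(n_var)
    d2[j], d2[j + 1], d2[j + 2] = 1.0 / h**2, -2.0 / h**2, 1.0 / h**2
    pos = d2.copy(); pos[-1] = -1.0      #  +D2 u_j - t <= 0
    neg = -d2;       neg[-1] = -1.0      #  -D2 u_j - t <= 0
    rows += [pos, neg]
A_ub = np.array(rows)
b_ub = np.zeros(len(rows))

# Interpolation constraints: grid values match the data at the data nodes.
A_eq = np.zeros((len(idx), n_var))
A_eq[np.arange(len(idx)), idx] = 1.0

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y_data,
              bounds=[(None, None)] * m + [(0, None)])
u, t = res.x[:m], res.x[-1]              # near-optimal values and curvature bound
```

Refining `m` shrinks the mesh, so by the convergence result above `u` approaches the optimal spline's values on the grid.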


Geophysics ◽  
1966 ◽  
Vol 31 (1) ◽  
pp. 253-259 ◽  
Author(s):  
E. L. Dougherty ◽  
S. T. Smith

The procedure used to discover subsurface formations where mineral resources may exist normally requires the accumulation and processing of large amounts of data concerning the earth's fields. Data errors may strongly affect the conclusions drawn from the analysis, so a method of checking for errors is essential. Since the field should be relatively smooth locally, a typical approach is to fit the data to a surface described by a low-order polynomial; deviations of data points from this surface can then be used to detect errors. Frequently a least-squares approximation is used to determine the surface, but because squaring magnifies the influence of large deviations, a single gross error can distort the fitted surface and the results can be misleading. Linear programming can be applied to give more satisfactory results: the sum of the absolute values of the deviations is minimized rather than the sum of their squares as in least squares. This paper describes in detail the formulation of the linear programming problem and cites an example of its application to error detection. Through this formulation, once errors are removed, the results are physically meaningful and, hence, can be used for detecting subsurface phenomena directly.
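The least-absolute-deviations fit via linear programming can be sketched as follows, with a first-degree polynomial surface and SciPy's `linprog` (the synthetic station data and the single gross error are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 30
X = rng.uniform(0.0, 1.0, (n, 2))                # station coordinates
z = 2.0 + 1.5 * X[:, 0] - 0.5 * X[:, 1]          # smooth local field
z[3] += 5.0                                      # one gross data error

# Design matrix for the low-order (here first-degree) polynomial surface.
A = np.column_stack([np.ones(n), X])             # columns: 1, x, y
p = A.shape[1]

# Variables: p coefficients (free) and n slacks e_i >= |residual_i|.
c = np.concatenate([np.zeros(p), np.ones(n)])    # minimize sum of slacks
I = np.eye(n)
A_ub = np.block([[ A, -I],                       #  (A beta) - e <= z
                 [-A, -I]])                      # -(A beta) - e <= -z
b_ub = np.concatenate([z, -z])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * p + [(0, None)] * n)
beta = res.x[:p]
resid = z - A @ beta                             # large |resid| flags bad data
```

Unlike least squares, the L1 fit passes through the clean points and isolates the gross error in a single large residual, which is exactly what error detection needs.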


2015 ◽  
Vol 2015 ◽  
pp. 1-18 ◽  
Author(s):  
Dong Liang ◽  
Chen Qiao ◽  
Zongben Xu

Improving computational efficiency and extending representational capability are two central challenges in global manifold learning. In this paper, a new method called extensive landmark Isomap (EL-Isomap) is presented, addressing both simultaneously. On one hand, originating from landmark Isomap (L-Isomap), which is known for its high computational efficiency, EL-Isomap likewise achieves high efficiency by utilizing a small set of landmarks to embed all data points. On the other hand, EL-Isomap significantly extends the representational capability of L-Isomap and other global manifold learning approaches by utilizing only an available subset of the whole landmark set, instead of all landmarks, to embed each point. In particular, compared with other manifold learning approaches, the new method more successfully unwraps data manifolds with intrinsically low-dimensional concave topologies and essential loops, as shown by simulation results on a series of synthetic and real-world data sets. Moreover, the accuracy, robustness, and computational complexity of EL-Isomap are analyzed, and the relation between EL-Isomap and L-Isomap is discussed theoretically.


Author(s):  
P. Venkataraman

A challenging inverse problem is to identify the smooth function, and the differential equation it represents, from uncertain data. This paper extends the procedure previously developed for smooth data. The approach involves two steps. In the first step the data are smoothed using a recursive Bezier filter; for smooth data a single application of the filter is sufficient. The final set of data points provides a smooth estimate of the solution and, more importantly, also identifies smooth derivatives of the function away from the edges of the domain. In the second step the values of the function and its derivatives are used to establish a specific form of the differential equation from a particular class of candidate equations. Since the function and its derivatives are known, the only unknowns are the parameters describing the structure of the differential equation. These parameters are of two kinds: the exponents of the derivatives and the coefficients of the terms. They can be determined by defining an optimization problem based on the residuals in a reduced domain; to avoid the trivial solution, a discrete global search is used to identify them. An example involving a third-order constant-coefficient linear differential equation is presented, with a basic simulated annealing algorithm used for the global search. Once the differential form is established, the unknown initial and boundary conditions can be obtained by backward and forward numerical integration from the reduced region.
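The recursive filter itself is not reproduced here, but a single Bezier smoothing pass, the building block the first step relies on, can be sketched by treating the noisy samples as Bernstein control points (the test function and noise level are illustrative assumptions, not the paper's data):

```python
import numpy as np
from math import comb

def bezier_eval(points, ts):
    """Evaluate the Bezier curve having `points` as control points, at `ts`."""
    n = len(points) - 1
    # Bernstein basis matrix: B[k, i] = C(n, i) * t_k^i * (1 - t_k)^(n - i)
    B = np.array([[comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)]
                  for t in ts])
    return B @ points

# Noisy samples of a smooth function (hypothetical data).
t = np.linspace(0.0, 1.0, 21)
rng = np.random.default_rng(3)
y = np.sin(np.pi * t) + 0.05 * rng.standard_normal(t.size)

# One smoothing pass: the Bezier curve defined by the data damps the noise
# while matching the endpoints exactly.
y_smooth = bezier_eval(y, t)
```

Because a Bezier curve's derivatives are themselves Bezier curves of the differenced control points, the same construction also yields the smooth derivative estimates the second step consumes.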


1974 ◽  
Vol 18 (4) ◽  
pp. 402-410 ◽  
Author(s):  
M. M. Chawla ◽  
N. Jayarajan

Spitzbart [1] has considered a generalization of Hermite's interpolation formula in one variable and has obtained a polynomial p(x) of degree n + Σ_{j=0}^{n} r_j in x which interpolates the values of a function and its derivatives up to order r_j at x_j, j = 0, 1, ..., n. Ahlin [2] has considered a bivariate generalization of Hermite's interpolation formula. He has developed a bivariate osculatory interpolation polynomial which agrees with f(x, y) and its partial and mixed partial derivatives up to a specified order at each of the nodes of a Cartesian grid. However, the interpolation problem considered by Ahlin assumes that the values of the partial and mixed partial derivatives of the same fixed order k − 1 are available at every point of the rectangular grid. It may also be observed that Ahlin's formula is essentially a Cartesian product of a special case of Spitzbart's formula in one variable.
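SciPy offers a practical counterpart of this kind of osculatory interpolation: `BPoly.from_derivatives` matches prescribed values and derivatives up to order r_j at each node x_j (as a piecewise Hermite construction rather than Spitzbart's single polynomial of degree n + Σ r_j; the nodes and prescribed values below are illustrative assumptions):

```python
import numpy as np
from scipy.interpolate import BPoly

# Nodes x_j with values and derivatives up to order r_j prescribed at each.
xi = [0.0, 1.0, 2.0]
yi = [[0.0, 1.0],         # f(0) = 0, f'(0) = 1               (r_0 = 1)
      [1.0, 0.0, -1.0],   # f(1) = 1, f'(1) = 0, f''(1) = -1  (r_1 = 2)
      [0.0]]              # f(2) = 0                          (r_2 = 0)

# Piecewise polynomial agreeing with every prescribed value and derivative.
p = BPoly.from_derivatives(xi, yi)
```

Evaluating `p(x, nu=k)` returns the kth derivative, so each osculation condition above can be checked directly.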


2020 ◽  
Vol 19 (01) ◽  
pp. 21-42
Author(s):  
Raymond Cheng ◽  
Yuesheng Xu

We consider the minimum norm interpolation problem in the ℓ1 space, aiming at constructing a sparse interpolation solution. The original problem is reformulated in the pre-dual space, thereby inducing a norm in a related finite-dimensional Euclidean space. The dual problem is then transformed into a linear programming problem, which can be solved by existing methods. With that done, the original interpolation problem is solved via an elementary finite-dimensional linear algebra equation. A specific example is presented to illustrate the proposed method, in which a sparse solution in the ℓ1 space is compared to the dense solution in the ℓ2 space. This example shows that a solution of the minimum norm interpolation problem in the ℓ1 space is indeed sparse, while that of the minimum norm interpolation problem in the ℓ2 space is not.
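The sparse-versus-dense contrast can be sketched on a finite-dimensional analogue: minimize the ℓ1 norm of the coefficients subject to interpolation constraints, via the standard positive/negative split and SciPy's `linprog` (the dimensions and random data are illustrative assumptions, not the paper's construction):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
k, n = 4, 20                         # 4 interpolation conditions, 20 coefficients
A = rng.standard_normal((k, n))
y = rng.standard_normal(k)

# min ||c||_1  s.t.  A c = y, with the split c = c_pos - c_neg, both >= 0.
cost = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(cost, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * 2 * n)
c_l1 = res.x[:n] - res.x[n:]

# Minimum l2-norm interpolant for comparison: dense in general.
c_l2 = np.linalg.pinv(A) @ y

nnz_l1 = np.sum(np.abs(c_l1) > 1e-8)   # few nonzeros (at most k at an LP vertex)
nnz_l2 = np.sum(np.abs(c_l2) > 1e-8)   # generically all n nonzero
```

Both solutions satisfy the interpolation conditions exactly, but the ℓ1 minimizer concentrates the mass on a handful of coefficients, mirroring the sparsity result above.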


Geophysics ◽  
1985 ◽  
Vol 50 (12) ◽  
pp. 2831-2848 ◽  
Author(s):  
Pedro Gonzalez‐Casanova ◽  
Roman Alvarez

Modeling and contouring of geophysical data often require distributions of regularly spaced values. Splines have been shown to be the most accurate methods to obtain such distributions. We emphasize the general problem of interpolating random distributions of data on a given surface. Splines are classified into unidimensional, quasi‐bidimensional, and strictly bidimensional; based on this classification, a systematic derivation of the corresponding interpolating techniques is conducted. Two approaches are presented to obtain unidimensional splines: one based on the continuity of the first and second derivatives of the polynomials involved, and the other based on a variational approach. Quasi‐bidimensional splines are constructed based on the unidimensional approach, while strictly bidimensional splines are generated by minimizing the bidimensional curvature. Quasi‐bidimensional splines can be used for processing data distributions along nearly parallel lines; linear projections and parameterization are the techniques used in interpolating this type of distribution. Strictly bidimensional splines minimize curvature through the analytic solution of the Euler‐Lagrange equation or by a finite‐difference algorithm. The maximum error, mean error, and standard deviation between interpolated data and exact field values produced by various prisms show that quasi‐bidimensional splines are 2.7 percent more accurate in the maximum error than strictly bidimensional splines when both techniques are applied to regularly spaced data. However, for irregularly spaced data, three examples containing 300, 600, and 900 random data points show the superiority of the thin‐plate approach over the quasi‐bidimensional splines. 
A comparison between various interpolation densities on regular grids, starting from a set of 327 randomly distributed magnetic stations, illustrates some differences between geophysically meaningful interpolations and interpolations carried out only for contouring purposes.
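The thin-plate approach favored above for irregularly spaced data is available directly in SciPy; a minimal sketch (the 300 synthetic stations and the test field are illustrative assumptions, not the survey data):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 1.0, (300, 2))            # irregularly spaced stations
vals = np.sin(2 * np.pi * pts[:, 0]) * np.cos(2 * np.pi * pts[:, 1])

# Thin-plate spline: the strictly bidimensional, minimum-curvature interpolant.
tps = RBFInterpolator(pts, vals, kernel='thin_plate_spline')

# Resample onto a regular grid, as required for modeling and contouring.
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = tps(np.column_stack([gx.ravel(), gy.ravel()])).reshape(50, 50)
```

With the default zero smoothing the interpolant honors every station value exactly, so deviations on the regular grid reflect only the field between stations, not a misfit at them.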


Author(s):  
M. V. Ignatenko ◽  
L. A. Yanovich

In this paper, we consider the problem of exact and approximate solutions of certain differential equations with variational derivatives of the first and second orders. Some information about variational derivatives and explicit formulas for the exact solutions of the simplest equations with first variational derivatives are given. An interpolation method for solving ordinary differential equations with variational derivatives is demonstrated. The general scheme of an approximate solution of the Cauchy problem for nonlinear differential equations with first-order variational derivatives, based on the operator interpolation apparatus, is presented. The exact solution of a hyperbolic-type differential equation with variational derivatives, similar to the classical d'Alembert solution, is obtained. The Hermite interpolation problem, with conditions of coincidence at the nodes of the interpolated and interpolating functionals as well as their variational derivatives of the first and second orders, is considered for functionals defined on sets of differentiable functions. The explicit representation found for the solution of this interpolation problem is based on an arbitrary Chebyshev system of functions. This solution is generalized to the case of interpolation of functionals in one of two variables and applied to construct an approximate solution of the Cauchy problem for a hyperbolic-type differential equation with variational derivatives. The presentation is illustrated by numerous examples.


2008 ◽  
Vol 24 (4) ◽  
pp. 1010-1043 ◽  
Author(s):  
Susanne M. Schennach

This paper establishes that the availability of instrumental variables enables the identification and the consistent estimation of nonparametric quantile regression models in the presence of measurement error in the regressors. The proposed estimator takes the form of a nonlinear functional of derivatives of conditional expectations and is shown to provide estimated quantile functions that are uniformly consistent over a compact set.

