Data Interpolation by Optimal Splines with Free Knots Using Linear Programming

Author(s): Lakshman Thakur, Mikhail Bragin

Studies have shown that in many practical applications, data interpolation by splines leads to better approximation and higher computational efficiency than data interpolation by a single polynomial. Data interpolation by splines can be significantly improved if the knots are allowed to be free rather than fixed a priori at locations such as the data points. In practical applications, the smallest possible curvature is often desired. Optimal splines are therefore determined by minimizing a derivative of the required order of the continuously differentiable functions comprising the spline. The problem of obtaining an optimal spline is tantamount to minimizing derivatives of a nonlinear differentiable function over a Banach space on a compact set. While data interpolation by quadratic splines has been accomplished analytically, interpolation by splines of higher orders or in higher dimensions is challenging. In this paper, to overcome the difficulties associated with the complexity of the interpolation problem, the interval over which the data points are defined is discretized and the continuous derivatives are replaced by their discrete counterparts. It is shown that, as the mesh of the discretization approaches zero, the resulting near-optimal spline approaches an optimal spline. Splines of the desired accuracy can be obtained by choosing an appropriate mesh of the discretization. Numerical results for cubic splines demonstrate that the linear programming (LP) formulation, which results from the discretization of the interpolation problem, can be solved by linear solvers with high computational efficiency, and that the resulting splines provide a good approximation to the optimal splines.
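As illustrative context for the discretization step, here is a minimal sketch of how the continuous rth derivative can be replaced by an rth-order finite-difference operator on a uniform mesh; the uniform mesh, simple forward differences, and the function name are assumptions of this sketch, not the paper's construction.

```python
# Build the r-th order forward-difference operator on a uniform mesh of n points
# with spacing h; (D @ s) approximates the r-th derivative of a function whose
# samples are stored in the vector s. Illustrative sketch only.
import numpy as np

def finite_difference_operator(n, h, r):
    D = np.eye(n)
    for _ in range(r):
        D = (D[1:, :] - D[:-1, :]) / h   # apply one first difference per pass
    return D                             # shape (n - r, n)

# Example: on a fine mesh, max|D @ s| tracks the sup-norm of the r-th derivative.
x = np.linspace(0.0, 1.0, 201)
D = finite_difference_operator(len(x), x[1] - x[0], 3)
print(np.max(np.abs(D @ np.sin(2 * np.pi * x))))   # close to (2*pi)^3 ≈ 248
```

As the mesh spacing h tends to zero, this discrete maximum converges to the continuous sup-norm of the rth derivative, which is exactly the quantity the optimal spline minimizes.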

Mathematics, 2021, Vol. 9 (10), pp. 1099
Author(s): Lakshman S. Thakur, Mikhail A. Bragin

The problem of obtaining an optimal spline with free knots is tantamount to minimizing derivatives of a nonlinear differentiable function over a Banach space on a compact set. While data interpolation by quadratic splines has been accomplished, interpolation by splines of higher orders is far more challenging. In this paper, to overcome the difficulties associated with the complexity of the interpolation problem, the interval over which the data points are defined is discretized and the continuous derivatives are replaced by their discrete counterparts. The ℓ∞-norm used for the maximum rth-order curvature (a derivative of order r) is then linearized, and the problem of obtaining a near-optimal spline becomes a linear programming (LP) problem, which can be solved in polynomial time by LP methods and, in practice, very efficiently by the Simplex method implemented in modern software such as CPLEX. It is shown that, as the mesh of the discretization approaches zero, the resulting near-optimal spline approaches an optimal spline. Splines of the desired accuracy can be obtained by choosing an appropriately fine mesh of the discretization. Numerical results for cubic splines demonstrate that the LP formulation resulting from the discretization of the interpolation problem can be solved by linear solvers with high computational efficiency, and that the resulting spline provides a good approximation to the sought-for optimal spline.
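The following is a minimal sketch of an LP in the spirit described above: minimize a bound t on the absolute rth finite differences of the grid values, subject to interpolation of the data. It uses SciPy's linprog rather than CPLEX, assumes a uniform grid containing (approximately) the data sites, and returns grid values rather than an explicit spline, so it illustrates the formulation, not the paper's exact construction.

```python
# Discretized near-optimal interpolation as an LP:
#   minimize t  subject to  -t <= (D s)_j <= t  and  s = y at the data sites,
# where D is the r-th order finite-difference operator on the grid.
import numpy as np
from scipy.optimize import linprog

def near_optimal_grid_values(x_data, y_data, n_grid=201, r=3):
    grid = np.linspace(x_data.min(), x_data.max(), n_grid)
    h = grid[1] - grid[0]

    D = np.eye(n_grid)                      # r-th order difference operator
    for _ in range(r):
        D = (D[1:, :] - D[:-1, :]) / h
    m = D.shape[0]

    c = np.zeros(n_grid + 1)                # variables: [s_1, ..., s_n, t]
    c[-1] = 1.0                             # objective: minimize t

    A_ub = np.vstack([np.hstack([ D, -np.ones((m, 1))]),    #  D s - t <= 0
                      np.hstack([-D, -np.ones((m, 1))])])   # -D s - t <= 0
    b_ub = np.zeros(2 * m)

    idx = np.clip(np.searchsorted(grid, x_data), 0, n_grid - 1)   # grid sites at/near the data
    A_eq = np.zeros((len(x_data), n_grid + 1))
    A_eq[np.arange(len(x_data)), idx] = 1.0
    b_eq = y_data

    bounds = [(None, None)] * n_grid + [(0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return grid, res.x[:n_grid], res.x[-1]  # grid, interpolant values, minimized max curvature
```

Refining the grid (a larger n_grid) tightens the discrete problem toward the continuous one, mirroring the convergence result stated above.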


1986, Vol. 108 (1), pp. 86-89
Author(s): Keigo Watanabe

The Weinert–Desai smoother formula is applied to derive new decentralized fixed-interval smoothing algorithms for a decentralized estimation structure consisting of a central processor and M local processors. These algorithms are based on decentralizing the estimates of the global backward information filter and are obtained by applying the superposition principle in a scattering framework. The smoothing update problem is also investigated to illustrate the application of the proposed algorithms. The emphasis is on computational efficiency, independence from local a priori statistics, and flexibility of implementation.
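The smoother formula itself is not reproduced in the abstract; purely as orientation for the decentralized structure (a central processor fusing M local processors), the sketch below shows the standard information-weighted fusion of local estimates. This generic rule, which assumes independent local errors, is an assumption of the sketch and is not the Weinert–Desai construction.

```python
# Generic central fusion of M local estimates by inverse-covariance weighting.
# Illustrative only: assumes independent local estimation errors and is NOT the
# decentralized smoothing algorithm derived in the paper.
import numpy as np

def fuse_local_estimates(estimates, covariances):
    """estimates: list of M state vectors; covariances: list of M covariance matrices."""
    info_total = sum(np.linalg.inv(P) for P in covariances)               # total information
    info_state = sum(np.linalg.inv(P) @ x for x, P in zip(estimates, covariances))
    P_central = np.linalg.inv(info_total)                                 # fused covariance
    x_central = P_central @ info_state                                    # fused estimate
    return x_central, P_central
```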


2021, Vol. 11 (22), pp. 10713
Author(s): Dong-Gyu Lee

Autonomous driving is a safety-critical application that requires a high-level understanding of computer vision with real-time inference. In this study, we focus on computational efficiency, an important factor for practical applications, by improving the running time and performing multiple tasks simultaneously. We propose a fast and accurate multi-task learning-based architecture for joint segmentation of the drivable area and lane lines and classification of the scene. An encoder-decoder architecture efficiently handles input frames through a shared representation. A comprehensive understanding of the driving environment is improved by the generalization and regularization that the different tasks provide. The proposed method is learned end-to-end through multi-task learning on the challenging Berkeley DeepDrive dataset and shows its robustness across the three tasks in autonomous driving. Experimental results show that the proposed method outperforms other multi-task learning approaches in both speed and accuracy. The method runs at over 93.81 fps at inference, enabling real-time execution.
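As a rough illustration of the shared-representation layout described above (not the paper's network; the layer sizes, heads, and class count are assumptions), a multi-task model with one encoder and three task heads can be sketched as follows.

```python
# Shared encoder feeding three heads: drivable-area segmentation, lane-line
# segmentation, and scene classification. Layer choices are illustrative only.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, num_scene_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(                      # shared representation
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.drivable_head = nn.Sequential(                # dense mask logits
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )
        self.lane_head = nn.Sequential(                    # dense mask logits
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )
        self.scene_head = nn.Sequential(                   # image-level class logits
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_scene_classes)
        )

    def forward(self, x):
        feats = self.encoder(x)                            # one pass through the shared encoder
        return self.drivable_head(feats), self.lane_head(feats), self.scene_head(feats)
```

Training such a model end-to-end typically minimizes a weighted sum of the three per-task losses, which is what lets the tasks regularize one another.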


2010, Vol. 108-111, pp. 1439-1445
Author(s): Shahed Shojaeipour, Sallehuddin Mohamed Haris, Ehsan Eftekhari, Ali Shojaeipour, Ronak Daghigh

In this article, the development of an autonomous robot trajectory generation system based on a single eye-in-hand webcam, where the workspace map is not known a priori, is described. The system uses image processing methods to identify the locations of obstacles within the workspace and the quadtree decomposition algorithm to generate collision-free paths. The shortest path is then automatically chosen as the path to be traversed by the robot end-effector. The method was implemented using MATLAB running on a PC and tested on a two-link SCARA robotic arm. The tests were successful and indicate that the method could feasibly be implemented in many practical applications.
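The following Python sketch (the paper's implementation was in MATLAB) illustrates the quadtree decomposition step on a binary obstacle map; the grid representation, the homogeneity test, and the function name are assumptions of the sketch.

```python
# Recursively split a square occupancy grid (1 = obstacle, 0 = free) into
# homogeneous cells; free leaf cells become nodes of the path-planning graph.
import numpy as np

def quadtree(occ, x0=0, y0=0, size=None, min_size=1):
    if size is None:
        size = occ.shape[0]                        # assumes a square, power-of-two map
    block = occ[y0:y0 + size, x0:x0 + size]
    if block.min() == block.max() or size <= min_size:
        return [(x0, y0, size, block.max() == 0)]  # homogeneous (or minimal) leaf; True = free
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree(occ, x0 + dx, y0 + dy, half, min_size)
    return leaves
```

A shortest-path search (e.g., Dijkstra or A*) over adjacent free leaves then yields candidate collision-free paths, from which the shortest is selected for the end-effector.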


Geophysics, 1966, Vol. 31 (1), pp. 253-259
Author(s): E. L. Dougherty, S. T. Smith

The procedure used to discover subsurface formations where mineral resources may exist normally requires the accumulation and processing of large amounts of data concerning the earth's fields. Data errors may strongly affect the conclusions drawn from the analysis, so a method of checking for errors is essential. Since the field should be relatively smooth locally, a typical approach is to fit the data to a surface described by a low-order polynomial; deviations of data points from this surface can then be used to detect errors. Frequently a least-squares approximation is used to determine the surface, but the results can be misleading. Linear programming can be applied to give more satisfactory results: in this approach, the sum of the absolute values of the deviations is minimized rather than the sum of their squares, as in least squares. This paper describes in detail the formulation of the linear programming problem and cites an example of its application to error detection. Through this formulation, once errors are removed, the results are physically meaningful and can therefore be used for detecting subsurface phenomena directly.
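A minimal sketch of the least-absolute-deviations fit posed as an LP is given below; it follows the standard trick of splitting each residual into nonnegative parts and uses SciPy's linprog, a quadratic trend surface, and made-up variable names, so it illustrates the general technique rather than the paper's 1966 formulation.

```python
# L1 (least-absolute-deviations) fit of a quadratic trend surface as an LP:
#   minimize sum(u_i + v_i)  subject to  A c + u - v = z,  u, v >= 0,
# so that u - v is the residual and sum(u + v) equals sum|residual| at the optimum.
import numpy as np
from scipy.optimize import linprog

def l1_trend_surface(x, y, z):
    A = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
    n, p = A.shape
    c = np.concatenate([np.zeros(p), np.ones(2 * n)])   # cost only on u and v
    A_eq = np.hstack([A, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=z, bounds=bounds, method="highs")
    coeffs = res.x[:p]
    deviations = z - A @ coeffs        # unusually large deviations flag suspect data points
    return coeffs, deviations
```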


2002, Vol. 8 (3), pp. 197-205
Author(s): Carlos F. Alastruey, Manuel de la Sen

In this paper, a Lyapunov function candidate is introduced for multivariable systems with inner delays, without assuming a priori stability of the nondelayed subsystem. By using this Lyapunov function, a controller is deduced. Such a controller utilizes an input–output description of the original system, a circumstance that facilitates practical applications of the proposed approach.
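The abstract does not state the candidate function; purely for orientation, a textbook Lyapunov–Krasovskii candidate for a linear system with an inner (state) delay h is shown below. This standard form is an assumption of the sketch, not the functional introduced in the paper.

```latex
% Standard Lyapunov-Krasovskii candidate for  \dot{x}(t) = A x(t) + A_d x(t - h):
V(x_t) \;=\; x(t)^{\top} P\, x(t) \;+\; \int_{t-h}^{t} x(s)^{\top} Q\, x(s)\, \mathrm{d}s,
\qquad P \succ 0, \quad Q \succ 0.
```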


Mathematics, 2020, Vol. 8 (9), pp. 1540
Author(s): Boris Pérez-Cañedo, José Luis Verdegay, Eduardo René Concepción-Morales, Alejandro Rosete

Fuzzy Linear Programming (FLP) has addressed the increasing complexity of real-world decision-making problems that arise in uncertain and ever-changing environments since its introduction in the 1970s. Built upon fuzzy set theory and classical Linear Programming (LP) theory, FLP encompasses an extensive area of theoretical research and algorithmic development. Unlike classical LP, there is no unique model for the FLP problem, since fuzziness can appear in the model components in different ways. Hence, despite fifty years of research, new formulations of FLP problems and solution methods are still being proposed. Among the existing formulations, those using fuzzy numbers (FNs) as parameters and/or decision variables for handling inexactness and vagueness in data have experienced remarkable development in recent years. Here, a long-standing issue has been how to deal with FN-valued objective functions and with constraints whose left- and right-hand sides are FNs. The main objective of this paper is to present an updated review of advances in this particular area. Consequently, the paper briefly examines well-known models and methods for FLP, and expands on methods for fuzzy single- and multi-objective LP that use lexicographic criteria for ranking FNs. A lexicographic approach to the fuzzy linear assignment (FLA) problem is discussed in detail due to its theoretical and practical relevance. For this case, computer codes are provided that can be used to reproduce the results presented in the paper and in practical applications. The paper demonstrates that FLP focused on lexicographic methods is an active area with promising research lines and practical implications.
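As a small illustration of lexicographic ranking of fuzzy numbers (the building block of the lexicographic methods surveyed above), the sketch below orders triangular fuzzy numbers by a tuple of criteria; the particular key (graded mean, then mode, then spread) is an assumption of the sketch, not the criteria used in the paper.

```python
# Lexicographic ranking of triangular fuzzy numbers (TFNs): compare by a primary
# index, breaking ties with secondary and tertiary criteria. Illustrative only.
from dataclasses import dataclass

@dataclass
class TFN:
    a: float   # left endpoint
    m: float   # mode (peak)
    b: float   # right endpoint

def lex_key(t: TFN):
    graded_mean = (t.a + 4 * t.m + t.b) / 6       # primary criterion
    return (graded_mean, t.m, -(t.b - t.a))       # then mode, then narrower spread first

costs = [TFN(1, 2, 4), TFN(0, 2.5, 3.5), TFN(1, 2, 3)]
print(sorted(costs, key=lex_key))                 # ranks candidate fuzzy costs
```

In a lexicographic FLP or FLA model, such a key turns comparisons between fuzzy objective values into ordinary tuple comparisons, which is what makes the resulting problems tractable with standard LP or assignment machinery.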


2015, Vol. 2015, pp. 1-18
Author(s): Dong Liang, Chen Qiao, Zongben Xu

Improving computational efficiency and extending representational capability are two of the most active topics in global manifold learning. In this paper, a new method called extensive landmark Isomap (EL-Isomap) is presented that addresses both topics simultaneously. On the one hand, originating from landmark Isomap (L-Isomap), which is known for its high computational efficiency, EL-Isomap also achieves high computational efficiency by utilizing a small set of landmarks to embed all data points. On the other hand, EL-Isomap significantly extends the representational capability of L-Isomap and other global manifold learning approaches by utilizing, for each point, only an available subset of the landmark set rather than all landmarks. In particular, compared with other manifold learning approaches, the new method more successfully unwraps data manifolds with intrinsically low-dimensional concave topologies and essential loops, as shown by simulation results on a series of synthetic and real-world data sets. Moreover, the accuracy, robustness, and computational complexity of EL-Isomap are analyzed in this paper, and the relation between EL-Isomap and L-Isomap is also discussed theoretically.
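The sketch below illustrates the landmark idea that EL-Isomap builds on: geodesic distances are computed only from a small landmark set and all points are embedded against the landmarks. The landmark selection, neighborhood size, and plain landmark-MDS step are assumptions of the sketch (it is essentially L-Isomap), not the EL-Isomap algorithm itself.

```python
# Landmark-Isomap-style embedding: k-NN graph -> geodesic distances from the
# landmarks only -> classical MDS on the landmarks -> distance-based placement
# of all points. Assumes a connected graph and positive leading eigenvalues.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def landmark_isomap(X, n_landmarks=20, n_neighbors=8, dim=2, seed=0):
    n = X.shape[0]
    landmarks = np.random.default_rng(seed).choice(n, n_landmarks, replace=False)

    graph = kneighbors_graph(X, n_neighbors, mode="distance")
    D = shortest_path(graph, directed=False, indices=landmarks)   # (m, n) geodesics

    Dll2 = D[:, landmarks] ** 2                                   # landmark-landmark block
    J = np.eye(n_landmarks) - np.ones((n_landmarks, n_landmarks)) / n_landmarks
    B = -0.5 * J @ Dll2 @ J                                       # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]

    # Distance-based placement of every point relative to the landmark embedding
    Lsharp = vecs[:, order] / np.sqrt(vals[order])
    delta_mean = Dll2.mean(axis=1)
    return -0.5 * (D ** 2 - delta_mean[:, None]).T @ Lsharp       # (n, dim) embedding
```

EL-Isomap's departure, as described above, is to use only an appropriate subset of the landmarks when placing each point, which is what lets it handle concave topologies and essential loops.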


Author(s): B. L. N. Kennett

A wide range of methods exist for interpolation between spatially distributed points drawn from a single population. Yet often multiple datasets are available with differing distribution, character and reliability. A simple scheme is introduced to allow the fusion of multiple datasets. Each dataset is assigned an a priori spatial influence zone around each point and a relative weight based on its physical character. The composite result at a specific location is a weighted combination of the spatial terms for all the available data points that make a significant contribution. The combination of multiple datasets is illustrated with the construction of a unified Moho surface in part of southern Australia from results exploiting a variety of different styles of analysis.
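A minimal sketch of the weighted spatial combination described above follows; the Gaussian influence kernel, the cutoff, and the field names are assumptions of the sketch, not the scheme's exact weighting.

```python
# Fuse several datasets at a query location: each data point contributes a term
# weighted by its dataset's reliability and by a spatial influence factor that
# decays with distance; insignificant contributions are dropped.
import numpy as np

def fused_value(loc, datasets):
    """datasets: list of dicts with 'points' (k, 2), 'values' (k,),
    'radius' (influence-zone scale), and 'weight' (dataset reliability)."""
    num, den = 0.0, 0.0
    for d in datasets:
        dist = np.linalg.norm(d["points"] - np.asarray(loc), axis=1)
        w = d["weight"] * np.exp(-(dist / d["radius"]) ** 2)   # spatial influence term
        w[dist > 3.0 * d["radius"]] = 0.0                      # ignore negligible contributions
        num += np.sum(w * d["values"])
        den += np.sum(w)
    return num / den if den > 0 else np.nan
```

Evaluating fused_value on a grid of locations yields the composite surface (for example, a unified Moho depth map) from the heterogeneous inputs.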

