Iterative k Data Algorithm for solving both the least squares SVM and the system of linear equations

Author(s):  
Vojislav Kecman
2011, Vol 2011, pp. 1-5
Author(s):  
Czesław Stępniak

The least squares problem appears, among other places, in linear models, and it refers to an inconsistent system of linear equations. A crucial question is how to reduce the least squares solution of such a system to the usual solution of a consistent one. Traditionally, this is achieved by differential calculus. We present a purely algebraic approach to this problem based on some identities for nonhomogeneous quadratic forms.
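
For concreteness, a minimal numerical illustration of this reduction (not the paper's algebraic identities): the least squares solution of an inconsistent system A x = b coincides with the ordinary solution of the consistent normal equations A^T A x = A^T b. The matrix and right-hand side below are illustrative.

```python
import numpy as np

# Inconsistent overdetermined system A x = b (more equations than unknowns).
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# Least squares solution of the inconsistent system ...
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# ... coincides with the ordinary solution of the consistent
# normal equations  A^T A x = A^T b.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

print(np.allclose(x_ls, x_normal))  # True
```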


Author(s):  
Jack-Kang Chan

We show that the well-known least squares (LS) solution of an overdetermined system of linear equations is a convex combination of all the non-trivial solutions, weighted by the squares of the corresponding denominator determinants of Cramer's rule. This Least Squares Decomposition (LSD) gives an alternative statistical interpretation of least squares, as well as another geometric meaning. Furthermore, when the singular values of the matrix of the overdetermined system are not small, the LSD may be able to provide flexible solutions. As an illustration, we apply the LSD to interpret the LS solution in the problem of source localization.
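
As I read the stated decomposition, for an m-by-n overdetermined system each n-by-n subsystem with a non-zero determinant contributes its Cramer's-rule solution, weighted by the square of that determinant. A small numerical sketch under that reading follows; the matrix and right-hand side are illustrative.

```python
import numpy as np
from itertools import combinations

# Overdetermined system A x = b, m = 4 equations, n = 2 unknowns.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [2.0, 1.0]])
b = np.array([1.0, 2.0, 2.0, 1.5])
m, n = A.shape

num = np.zeros(n)
den = 0.0
for rows in combinations(range(m), n):
    As, bs = A[list(rows)], b[list(rows)]
    d = np.linalg.det(As)          # Cramer's-rule denominator determinant
    if abs(d) < 1e-12:
        continue                   # skip singular (trivial) subsystems
    xs = np.linalg.solve(As, bs)   # solution of the n x n subsystem
    num += d**2 * xs               # weight by the squared determinant
    den += d**2

x_lsd = num / den                              # convex combination of subsystem solutions
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)   # ordinary LS solution
print(np.allclose(x_lsd, x_ls))                # True
```

Note that the sketch enumerates all n-out-of-m subsystems, so it serves as an interpretation of the LS solution rather than a computational shortcut.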


2013, Vol 2013, pp. 1-8
Author(s):  
Lo-Chyuan Su
Yue-Dar Jou
Fu-Kun Chen

All-pass filter design can generally be achieved by solving a system of linear equations. The matrices involved in this set of linear equations can be formulated in Toeplitz-plus-Hankel form so that an explicit matrix inversion is avoided. Consequently, the optimal filter coefficients can be obtained using computationally efficient Levinson algorithms or the Cholesky decomposition technique. In this paper, based on trigonometric identities and uniform sampling of the frequency band of interest, the authors propose closed-form expressions for computing the elements of the Toeplitz-plus-Hankel matrix required in the least-squares design of IIR all-pass filters. Simulation results confirm that the proposed method achieves good performance and is effective.
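
The paper's closed-form matrix expressions are not reproduced here; the sketch below only illustrates the general workflow under the standard equation-error least-squares formulation of all-pass phase approximation: sample the band uniformly, form the normal equations (whose entries combine a Toeplitz part and a Hankel part via a product-to-sum identity), and solve by Cholesky factorization. The filter order, band edges, and desired phase are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Hedged sketch: equation-error LS design of an order-N IIR all-pass filter
#   H(z) = z^{-N} D(1/z) / D(z),  D(z) = 1 + a_1 z^{-1} + ... + a_N z^{-N},
# matching a desired phase theta_d(w) on a uniformly sampled band.
N = 6
w = np.linspace(0.05 * np.pi, 0.95 * np.pi, 200)   # uniform grid on the band of interest
theta_d = -5.5 * w                                  # illustrative desired phase (group delay 5.5)
beta = 0.5 * (theta_d + N * w)

# Equation error:  sum_k a_k * sin(k*w - beta) = sin(beta)  at each grid point.
V = np.stack([np.sin(k * w - beta) for k in range(1, N + 1)], axis=1)
rhs = np.sin(beta)

# Normal equations V^T V a = V^T rhs; by the product-to-sum identity the
# (k, l) entry is a sum of cos((k-l)*w) and cos((k+l)*w - 2*beta) terms,
# i.e. a Toeplitz part plus a Hankel part, so structured solvers apply.
G = V.T @ V
g = V.T @ rhs
a = cho_solve(cho_factor(G), g)   # Cholesky solve (G is symmetric positive definite here)
print(a)                          # denominator coefficients a_1 .. a_N
```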


Author(s):  
David Ek
Anders Forsgren

The focus of this paper is interior-point methods for bound-constrained nonlinear optimization, where the systems of nonlinear equations that arise are solved with Newton's method. There is a trade-off between solving Newton systems directly, which gives high-quality solutions, and solving many approximate Newton systems, which is computationally less expensive but gives lower-quality solutions. We propose partial and full approximate solutions to the Newton systems. The specific approximate solution depends on estimates of the active and inactive constraints at the solution. These sets are estimated at each iteration by basic heuristics. The partial approximate solutions are computationally inexpensive, whereas a system of linear equations needs to be solved for the full approximate solution. The size of that system is determined by the estimate of the inactive constraints at the solution. In addition, we motivate and suggest two Newton-like approaches that are based on an intermediate step consisting of the partial approximate solutions. The theoretical setting is introduced and asymptotic error bounds are given. We also give numerical results to investigate the performance of the approximate solutions within and beyond the theoretical framework.
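
As a schematic sketch only, and not the authors' precise scheme: the general idea of estimating active and inactive bounds, using a cheap diagonal approximation for the estimated-active block, and solving a reduced linear system whose size equals the number of estimated-inactive variables. The function, the barrier term, and all parameters below are illustrative assumptions.

```python
import numpy as np

# Schematic sketch: for a condensed interior-point Newton system (H + D) dx = -r,
# with D the diagonal barrier term, split the variables by an active-set estimate.
def approx_newton_step(H, D, r, x, active_tol=1e-5):
    act = x < active_tol              # estimated active bounds
    inact = ~act                      # estimated inactive bounds

    dx = np.zeros_like(r)
    # "Partial" part: for estimated-active variables the barrier diagonal
    # dominates, so a cheap diagonal approximation is used.
    dx[act] = -r[act] / (np.diag(H)[act] + D[act])

    # "Full" part: solve a reduced linear system only for the estimated
    # inactive variables (its size is the number of inactive constraints).
    M = H[np.ix_(inact, inact)] + np.diag(D[inact])
    rhs = -r[inact] - H[np.ix_(inact, act)] @ dx[act]
    dx[inact] = np.linalg.solve(M, rhs)
    return dx

# Tiny usage example with a random SPD Hessian and a positive barrier diagonal.
rng = np.random.default_rng(0)
n = 8
Q = rng.standard_normal((n, n))
H = Q @ Q.T + n * np.eye(n)
x = rng.uniform(0.1, 1.0, n)          # current iterate for bounds x >= 0
x[:3] = 1e-8                          # three variables estimated active
D = 1.0 / x                           # illustrative barrier diagonal
r = rng.standard_normal(n)
print(approx_newton_step(H, D, r, x))
```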


2019, Vol 2019 (1)
Author(s):  
A. Khalid
M. N. Naeem
P. Agarwal
A. Ghaffar
Z. Ullah
...  

In the current paper, the authors propose a computational model based on the cubic B-spline method to solve linear 6th-order boundary value problems (BVPs) arising in astrophysics. The prescribed method transforms the boundary value problem into a system of linear equations. The algorithm developed in this paper not only approximates the solution of the 6th-order BVPs using cubic B-splines but also estimates the 1st- through 6th-order derivatives of the analytic solution at the same time. This novel technique has a lower computational cost than numerous other techniques and is second-order convergent. To show the efficiency of the proposed method, four numerical examples have been tested. The results are presented in error tables and graphs and are compared with results existing in the literature.
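
The authors' 6th-order scheme is not reconstructed here; the sketch below only illustrates the general pattern the abstract describes (a B-spline collocation method turning a linear BVP into a square system of linear equations), applied to a much simpler 2nd-order model problem with a known exact solution. The model problem and all parameters are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

# Hedged sketch: cubic B-spline collocation for the model BVP
#   u''(x) = -pi^2 sin(pi x),  u(0) = u(1) = 0,  exact solution u = sin(pi x).
k = 3
m = 16                                        # number of subintervals
xk = np.linspace(0.0, 1.0, m + 1)             # collocation points = knots
t = np.r_[[0.0] * k, xk, [1.0] * k]           # clamped knot vector
nb = len(t) - k - 1                           # number of B-spline basis functions

def basis(j):
    c = np.zeros(nb)
    c[j] = 1.0
    return BSpline(t, c, k)

# Assemble the square linear system: collocate u'' at the knots,
# plus the two boundary conditions u(0) = u(1) = 0.
A = np.zeros((nb, nb))
rhs = np.zeros(nb)
for j in range(nb):
    Bj = basis(j)
    A[:m + 1, j] = Bj.derivative(2)(xk)
    A[m + 1, j] = Bj(0.0)
    A[m + 2, j] = Bj(1.0)
rhs[:m + 1] = -np.pi**2 * np.sin(np.pi * xk)

coeffs = np.linalg.solve(A, rhs)
u = BSpline(t, coeffs, k)
print(np.max(np.abs(u(xk) - np.sin(np.pi * xk))))   # O(h^2) collocation error
```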


2014, Vol 2014, pp. 1-8
Author(s):  
Mohsen Alipour
Dumitru Baleanu
Fereshteh Babaei

We introduce a new combination of Bernstein polynomials (BPs) and Block-Pulse functions (BPFs) on the interval [0, 1]. These functions are suitable for finding an approximate solution of integral equations of the second kind. We call this method the Hybrid Bernstein Block-Pulse Functions Method (HBBPFM). The method is very simple: an integral equation is reduced to a system of linear equations. A convergence analysis for the method is also discussed. The method is computationally simple and attractive, and numerical examples illustrate its efficiency and accuracy.
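
The hybrid Bernstein Block-Pulse basis is not reconstructed here; the sketch below only illustrates the underlying reduction the abstract describes (a Fredholm integral equation of the second kind turned into a system of linear equations), using a simple Nyström discretization with trapezoidal quadrature. The kernel, right-hand side, and parameters are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: Nystrom discretization of a Fredholm equation of the 2nd kind
#   u(x) - lam * int_0^1 K(x, t) u(t) dt = f(x),   x in [0, 1],
# reduced to the linear system  (I - lam * K_w) u = f  on a quadrature grid.
lam = 0.5
K = lambda x, t: np.exp(-np.abs(x - t))        # illustrative kernel
f = lambda x: np.cos(np.pi * x)                # illustrative right-hand side

n = 101
x = np.linspace(0.0, 1.0, n)
w = np.full(n, x[1] - x[0])                    # trapezoidal quadrature weights
w[0] = w[-1] = 0.5 * (x[1] - x[0])

A = np.eye(n) - lam * K(x[:, None], x[None, :]) * w[None, :]
u = np.linalg.solve(A, f(x))                   # approximate solution on the grid
print(u[:5])
```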

