A Procedure for Systematically Fitting Local Volume Functions Using Orthogonal Polynomials

1970 ◽  
Vol 46 (3) ◽  
pp. 225-228
Author(s):  
L. C. Wensel ◽  
J. Van Roessel

The proposed technique fits orthogonal polynomial equations to existing local volume tables. The method can find the "best" fit of up to 8 powers (terms) such that (1) the function yields volumes that increase with increasing diameter, and (2) the end points of the curves remain within range. The computer program is designed so that it can be used with little or no knowledge of the computational process involved.
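The constrained-fitting idea can be sketched as follows. This is a hedged reconstruction in Python with NumPy, not the authors' program; the function name, synthetic data, and degree-selection rule are illustrative assumptions.

```python
# Hedged sketch of the paper's idea (not the authors' code): fit Chebyshev
# polynomials of increasing degree to a local volume table and keep the
# highest-degree fit that still yields volumes increasing with diameter.
import numpy as np

def fit_monotone_volume(diam, vol, max_terms=8):
    """Return the highest-term Chebyshev fit (up to `max_terms` terms)
    that is non-decreasing over the observed diameter range."""
    best = None
    grid = np.linspace(diam.min(), diam.max(), 200)
    for n_terms in range(2, max_terms + 1):
        cheb = np.polynomial.Chebyshev.fit(diam, vol, deg=n_terms - 1)
        # Constraint (1): volume must increase with diameter.
        if np.all(cheb.deriv()(grid) >= 0):
            best = cheb
    return best

# Synthetic volume table: volumes roughly proportional to D^2.
rng = np.random.default_rng(0)
d = np.linspace(5.0, 50.0, 30)
v = 0.004 * d**2 + rng.normal(0.0, 0.1, d.size)
f = fit_monotone_volume(d, v)
print(f(10.0) < f(40.0))  # larger diameter -> larger fitted volume
```

Fitting in the Chebyshev basis rather than raw powers keeps the linear system well conditioned even at the 8-term limit mentioned in the abstract.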

1992 ◽  
Vol 35 (3) ◽  
pp. 381-389
Author(s):  
William B. Jones ◽  
W. J. Thron ◽  
Nancy J. Wyshinski

Abstract: It is known that the n-th denominators Qn(α, β, z) of a real J-fraction [Formula: see text], where [Formula: see text], form an orthogonal polynomial sequence (OPS) with respect to a distribution function ψ(t) on ℝ. We use separate convergence results for continued fractions to prove the asymptotic formula [Formula: see text], the convergence being uniform on compact subsets of [Formula: see text].
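The abstract's displayed formulas were lost in extraction. For context only, a real J-fraction is conventionally written in the continued-fraction form below; this is an assumption following standard usage, not the paper's exact display (which involves the parameters α, β):

```latex
% Conventional form of a real J-fraction (assumed, not the paper's display):
\[
  \cfrac{1}{z - c_1 - \cfrac{d_2^2}{z - c_2 - \cfrac{d_3^2}{z - c_3 - \dotsb}}},
  \qquad c_n \in \mathbb{R},\quad d_{n+1} \neq 0,
\]
% whose n-th denominators satisfy the three-term recurrence
\[
  Q_n(z) = (z - c_n)\,Q_{n-1}(z) - d_n^2\,Q_{n-2}(z).
\]
```

The three-term recurrence is what makes the denominators an orthogonal polynomial sequence, by Favard's theorem.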


2005 ◽  
Vol 2005 (13) ◽  
pp. 2071-2079 ◽  
Author(s):  
E. Berriochoa ◽  
A. Cachafeiro ◽  
J. M. Garcia-Amor

We obtain a property which characterizes the Chebyshev orthogonal polynomials of first, second, third, and fourth kind. Indeed, we prove that the four Chebyshev sequences are the unique classical orthogonal polynomial families such that their linear combinations, with fixed length and constant coefficients, can be orthogonal polynomial sequences.


1987 ◽  
Vol 109 (1) ◽  
pp. 7-13 ◽  
Author(s):  
Maw-Ling Wang ◽  
Shwu-Yien Yang ◽  
Rong-Yeu Chang

Generalized orthogonal polynomials (GOP), which can represent all types of orthogonal polynomials as well as nonorthogonal Taylor series, are first introduced to estimate the parameters of a dynamic state equation. The integration operation matrix (IOP) and the differentiation operation matrix (DOP) of the GOP are derived. The key idea in deriving the IOP and DOP of these polynomials is that any orthogonal polynomial can be expressed as a power series and vice versa. By applying the IOP approach to the identification of time-invariant systems, computational algorithms are obtained that are effective, simple, and straightforward compared with other orthogonal polynomial approximations. The main advantage of the differentiation operation matrix is that parameter estimation can be carried out starting at an arbitrary time of interest; in addition, its computational algorithm is even simpler than that of the integration operation matrix. Illustrative examples using the IOP and DOP approaches are given, with very satisfactory results.
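The stated key idea, that any orthogonal polynomial can be expressed as a power series and vice versa, can be illustrated with NumPy's change-of-basis routines. This is a minimal sketch using Legendre polynomials; the paper's GOP framework is more general.

```python
# Hedged illustration of the key idea above: an orthogonal-polynomial
# expansion and an ordinary power series describe the same function and
# are related by an invertible change of basis, so operational matrices
# derived in one basis carry over to the other. Shown with Legendre
# polynomials via NumPy.
import numpy as np
from numpy.polynomial import legendre as L

c_leg = np.array([1.0, 2.0, 3.0])   # coefficients in the Legendre basis
c_pow = L.leg2poly(c_leg)           # the same function as a power series
back = L.poly2leg(c_pow)            # ...and converted back again
print(np.allclose(back, c_leg))     # round trip recovers the coefficients
```

Analogous converters exist for the Chebyshev, Hermite, and Laguerre families (`cheb2poly`, `herm2poly`, `lag2poly`, and their inverses).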


Mathematics ◽  
2019 ◽  
Vol 7 (1) ◽  
pp. 59
Author(s):  
Ayşegül Daşcıoğlu ◽  
Serpil Salınan

In this paper, a collocation method based on orthogonal polynomials is presented to solve fractional integral equations. Six numerical examples illustrate the method. The results are compared both with those of other methods in the literature and across the different kinds of polynomials used.
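A collocation approach of this kind can be shown in miniature. The example below is a hedged toy, not the authors' scheme: it assumes an Abel-type (order 1/2) integral equation and a monomial basis rather than the paper's polynomial families.

```python
# Toy collocation solver (illustrative assumption, not the paper's method):
# solve the Abel-type fractional integral equation
#     integral_0^x (x - t)^(-1/2) u(t) dt = f(x)
# by expanding u in a monomial basis and collocating at a few points.
import numpy as np
from math import gamma

def solve_abel_collocation(f, n_basis=4):
    # For u(t) = t^j the left side is B(j+1, 1/2) * x^(j+1/2),
    # where B is the Beta function -- this fills the collocation matrix.
    xs = np.linspace(0.2, 1.0, n_basis)   # collocation points
    A = np.empty((n_basis, n_basis))
    for j in range(n_basis):
        beta = gamma(j + 1) * gamma(0.5) / gamma(j + 1.5)
        A[:, j] = beta * xs ** (j + 0.5)
    return np.linalg.solve(A, f(xs))      # monomial coefficients of u

# f(x) = 2*sqrt(x) corresponds to the exact solution u(t) = 1.
c = solve_abel_collocation(lambda x: 2.0 * np.sqrt(x))
print(c)
```

Replacing the monomial columns with orthogonal-polynomial values changes only the matrix assembly, which is the substitution the paper studies.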


1982 ◽  
Vol 25 (3) ◽  
pp. 291-295 ◽  
Author(s):  
Lance L. Littlejohn ◽  
Samuel D. Shore

Abstract: One of the more popular problems today in the area of orthogonal polynomials is the classification of all orthogonal polynomial solutions to the second-order differential equation [Formula: see text]. In this paper, we show that the Laguerre-type and Jacobi-type polynomials satisfy such a second-order equation.
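The displayed equation is missing from this excerpt. Classification problems of this kind are usually posed for an equation of the general shape below; this is an assumption about the missing display, not the paper's text:

```latex
% Presumed shape of the missing display (an assumption): a second-order
% equation whose coefficients may depend on the degree n.
\[
  a_2(x, n)\,y''(x) + a_1(x, n)\,y'(x) + a_0(x, n)\,y(x) = \lambda_n\, y(x).
\]
```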


Geophysics ◽  
1977 ◽  
Vol 42 (6) ◽  
pp. 1265-1276 ◽  
Author(s):  
Anthony F. Gangi ◽  
James N. Shapiro

An algorithm is described which iteratively solves for the coefficients of successively higher‐order, least‐squares polynomial fits in terms of the results for the previous, lower‐order polynomial fit. The technique takes advantage of the special properties of the least‐squares or Hankel matrix, for which [Formula: see text]. Only the first and last column vectors of the inverse matrix are needed at each stage to continue the iteration to the next higher stage. An analogous procedure may be used to determine the inverse of such least‐squares type matrices. The inverse of each square submatrix is determined from the inverse of the previous, lower‐order submatrix. The results using this algorithm are compared with the method of fitting orthogonal polynomials to data points. While the latter method gives higher accuracy when high‐order polynomials are fitted to the data, it requires many more computations. The increased accuracy of the orthogonal‐polynomial fit is valuable when high precision of fitting is required; however, for experimental data with inherent inaccuracies, the added computations outweigh the possible benefit derived from the more accurate fitting. A Fortran listing of the algorithm is given.
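The comparison drawn in the abstract, a direct power-basis least-squares fit through the Hankel normal-equations matrix versus an orthogonal-polynomial fit, can be reproduced in miniature. This is a hedged sketch in Python, not the paper's iterative Fortran algorithm.

```python
# Hedged sketch of the comparison above: a power-basis least-squares fit,
# whose normal-equations matrix is the Hankel matrix discussed in the
# abstract, versus an orthogonal (Chebyshev) fit of the same degree.
import numpy as np

x = np.linspace(0.0, 1.0, 50)
y = np.sin(2.0 * np.pi * x)
deg = 10

# Power-basis fit via the normal equations A^T A c = A^T y.
A = np.vander(x, deg + 1, increasing=True)
H = A.T @ A                          # entries depend only on i + j: Hankel
c = np.linalg.solve(H, A.T @ y)
err_power = np.max(np.abs(A @ c - y))

# Orthogonal-polynomial fit of the same degree.
cheb = np.polynomial.Chebyshev.fit(x, y, deg)
err_cheb = np.max(np.abs(cheb(x) - y))

# The Hankel system is severely ill-conditioned at high degree, which is
# where the orthogonal fit's extra accuracy (noted in the abstract) matters.
print(np.linalg.cond(H))
```

For noisy experimental data, as the abstract notes, both fits are limited by the data itself, so the cheaper iterative Hankel scheme can be the better trade-off.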


1990 ◽  
Vol 23 (3) ◽  
pp. 507-527 ◽  
Author(s):  
Mohamed A. Korany ◽  
M. Abdel-Hady Elsayed ◽  
Mona Bedair ◽  
Hoda Mahgoub ◽  
Ezzat A. Korany

1993 ◽  
Vol 46 (1-2) ◽  
pp. 174-198 ◽  
Author(s):  
Walter Gautschi ◽  
Shikang Li

1991 ◽  
Vol 14 (2) ◽  
pp. 393-397
Author(s):  
A. McD. Mercer

In this note it is shown that a fairly recent result on the asymptotic distribution of the zeros of generalized polynomials can be deduced from an old theorem of G. Polya, using a minimum of orthogonal polynomial theory.


2018 ◽  
Vol 1 (20) ◽  
pp. 559-572
Author(s):  
ولدان وليد محمود

The subject of orthogonal polynomials cuts across a large piece of mathematics and its applications. In this paper, we give a survey of the orthogonal polynomial solutions of second-order and fourth-order linear ordinary differential equations, based on the generated self-adjoint differential operators.

