On Computational Aspects of Krawtchouk Polynomials for High Orders

2020 ◽  
Vol 6 (8) ◽  
pp. 81 ◽  
Author(s):  
Basheera M. Mahmmod ◽  
Alaa M. Abdul-Hadi ◽  
Sadiq H. Abdulhussain ◽  
Aseel Hussien

Discrete Krawtchouk polynomials are widely utilized in different fields for their remarkable characteristics, specifically the localization property. Discrete orthogonal moments are utilized as a feature descriptor for images and video frames in computer vision applications. In this paper, we present a new method for computing discrete Krawtchouk polynomial coefficients swiftly and efficiently. The presented method uses a new initial value that does not tend to zero as the polynomial size increases. In addition, a combination of the existing recurrence relations in the n- and x-directions is presented; the utilized recurrence relations are developed to reduce the computational cost. The proposed method computes approximately 12.5% of the polynomial coefficients, and symmetry relations are then employed to compute the remaining coefficients. The proposed method is evaluated against existing methods in terms of computational cost and the maximum size that can be generated. In addition, a reconstruction error analysis for images is performed using the proposed method for large signal sizes. The evaluation shows that the proposed method outperforms the existing methods.
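For reference only (this is not the authors' stabilized algorithm), the minimal Python sketch below builds the orthonormal weighted Krawtchouk basis with the classical n-direction three-term recurrence and checks the n↔x symmetry that such methods exploit to avoid recomputing coefficients; the plain recurrence works at moderate sizes but loses accuracy as N grows and as p moves away from 0.5, which is exactly the regime the paper's new initial value and combined recurrences address.

```python
import numpy as np
from scipy.special import gammaln

def krawtchouk(N, p=0.5):
    """Orthonormal (weighted) Krawtchouk coefficients K[n, x] for n, x = 0..N,
    built with the classical three-term recurrence in the n-direction."""
    x = np.arange(N + 1, dtype=float)
    # log of the binomial weight w(x) = C(N, x) p^x (1 - p)^(N - x)
    logw = (gammaln(N + 1) - gammaln(x + 1) - gammaln(N - x + 1)
            + x * np.log(p) + (N - x) * np.log(1.0 - p))
    K = np.zeros((N + 1, N + 1))
    K[0] = np.exp(0.5 * logw)                      # K_0(x) = sqrt(w(x))
    for n in range(N):                             # row n+1 from rows n, n-1
        A = p * (N - n) + n * (1.0 - p)
        c1 = 1.0 / np.sqrt(p * (1.0 - p) * (n + 1) * (N - n))
        c2 = np.sqrt(n * (N - n + 1) / ((n + 1) * (N - n)))
        K[n + 1] = c1 * (A - x) * K[n] - c2 * (K[n - 1] if n > 0 else 0.0)
    return K

K = krawtchouk(64, p=0.5)
print(np.allclose(K @ K.T, np.eye(65)))   # rows are orthonormal
print(np.allclose(K, K.T))                # n <-> x symmetry: half the plane suffices
```

The n↔x symmetry alone halves the number of coefficients that must be computed; the paper combines it with further symmetry relations to reach roughly one-eighth (12.5%) of the plane.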

Entropy ◽  
2021 ◽  
Vol 23 (9) ◽  
pp. 1162
Author(s):  
Khaled A. AL-Utaibi ◽  
Sadiq H. Abdulhussain ◽  
Basheera M. Mahmmod ◽  
Marwah Abdulrazzaq Naser ◽  
Muntadher Alsabah ◽  
...  

Krawtchouk polynomials (KPs) and their moments are promising techniques for applications in information theory, coding theory, and signal processing, owing to the special capabilities of KPs in feature extraction and classification processes. The main challenge in existing KP recurrence algorithms is numerical error, which occurs during the computation of the coefficients for large polynomial sizes, particularly when the KP parameter (p) deviates from 0.5 toward 0 or 1. To this end, this paper proposes a new recurrence relation for computing the coefficients of KPs at high orders. In particular, the paper develops a new algorithm and presents a new mathematical model for computing the initial value of the KP parameter. In addition, a new diagonal recurrence relation is introduced and used in the proposed algorithm; it is derived from the existing n-direction and x-direction recurrence algorithms. The diagonal and existing recurrence algorithms are then exploited to compute the KP coefficients: the coefficients are first computed for one partition after dividing the KP plane into four, and the symmetry relations are exploited to compute the coefficients in the other partitions. The performance of the proposed recurrence algorithm was evaluated through comparisons against state-of-the-art works in terms of reconstruction error, polynomial size, and computational cost. The obtained results indicate that the proposed algorithm is reliable and computes fewer coefficients than the existing algorithms across wide ranges of the parameter p and the polynomial size N. The results also show that the improvement ratio of the computed coefficients ranges from 18.64% to 81.55% in comparison with the existing algorithms. Moreover, the proposed algorithm can generate polynomials of an order ∼8.5 times larger than those generated using state-of-the-art algorithms.
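For context, the two classical relations from which such a diagonal recurrence can be derived are the n-direction three-term recurrence and its x-direction counterpart, the latter following from the symmetry $K_n(x;p,N) = K_x(n;p,N)$ of the hypergeometric form; the paper's specific diagonal relation and initial-value model are not reproduced here.

$$p(N-n)\,K_{n+1}(x) = \bigl[p(N-n) + n(1-p) - x\bigr]K_n(x) - n(1-p)\,K_{n-1}(x),$$
$$p(N-x)\,K_n(x+1) = \bigl[p(N-x) + x(1-p) - n\bigr]K_n(x) - x(1-p)\,K_n(x-1).$$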


2014 ◽  
Vol 2014 ◽  
pp. 1-8
Author(s):  
Peter S. Chami ◽  
Bernd Sing ◽  
Norris Sookoo

We investigate polynomials, called m-polynomials, whose generator polynomial has coefficients that can be arranged in a square matrix; in particular, we consider the case where this matrix is a Hadamard matrix. Orthogonality relations and recurrence relations are established, and coefficients for the expansion of any polynomial in terms of m-polynomials are obtained. We conclude the paper with an implementation of m-polynomials, and of some of the results obtained for them, in Mathematica.


2018 ◽  
Vol 2018 ◽  
pp. 1-15
Author(s):  
Terumasa Aoki ◽  
Van Nguyen

Automatic colorization is generally classified into two groups: propagation-based methods and reference-based methods. In reference-based automatic colorization, one or more color images are used as references to reconstruct the original color of a gray target image. The most important task is to find the best matching pairs for all pixels between the reference and target images in order to transfer color information from reference to target pixels. Many attractive local feature-based image matching methods have been developed over the last two decades. Unfortunately, as far as we know, there is no optimal matching method for automatic colorization, because the requirements for pixel matching in automatic colorization are wholly different from those of traditional image matching. To design an efficient matching algorithm for automatic colorization, clustering pixels at low computational cost and generating a descriptive feature vector are the most important challenges to be solved. In this paper, we present a novel method to address these two problems. In particular, our work concentrates on the second problem (designing a descriptive feature vector); namely, we discuss how to learn a descriptive texture feature using a scaled sparse texture feature combined with a nonlinear transformation to construct an optimal feature descriptor. Our experimental results show that the proposed method outperforms state-of-the-art methods in terms of robustness of color reconstruction for automatic colorization applications.


Author(s):  
Baoquan Wang ◽  
Tonghai Jiang ◽  
Xi Zhou ◽  
Bo Ma ◽  
Fan Zhao ◽  
...  

For anomaly detection in time series data, supervised methods require labeled data. In existing semi-supervised methods, the range of outlier factors varies with the data, the model, and time, so the threshold for determining abnormality is difficult to obtain; moreover, the computational cost of deriving outlier factors from the other data points in the data set is very large. These issues make such methods difficult to apply in practice. This paper proposes a framework named LSTM-VE, which uses clustering combined with a visualization method to roughly label normal data, and then uses the normal data to train a long short-term memory (LSTM) neural network for semi-supervised anomaly detection. The variance error (VE) of the classification probability sequence for the normal-data category is used as the outlier factor. The framework makes deep-learning-based anomaly detection practically applicable, and using VE avoids the shortcomings of existing outlier factors and achieves better performance. In addition, the framework is easy to extend because the LSTM neural network can be replaced with other classification models. Experiments on labeled and real unlabeled data sets show that the framework outperforms replicator neural networks with reconstruction error (RNN-RS) and has good scalability and practicability.
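As a rough illustration only: the abstract does not give the exact formula for VE, so the sliding-window variance of the classifier's normal-class probability sequence used below is a hypothetical stand-in for the outlier factor, and the threshold is chosen by hand for the toy data.

```python
import numpy as np

def ve_score(p_normal, window=32):
    """Hypothetical stand-in for the VE outlier factor: the variance of the
    classifier's normal-class probability over a sliding window. A healthy
    sequence stays confidently near 1 with low variance; anomalous segments
    make the probabilities fluctuate and the score rise."""
    p = np.asarray(p_normal, dtype=float)
    return np.array([np.var(p[max(0, t - window + 1): t + 1])
                     for t in range(len(p))])

# Toy usage with synthetic classifier outputs: steady on normal data,
# erratic on an anomalous segment at the end.
rng = np.random.default_rng(0)
probs = np.concatenate([0.97 + 0.01 * rng.standard_normal(400),
                        0.55 + 0.25 * rng.standard_normal(60)])
flags = ve_score(probs) > 0.01          # hand-picked threshold for the demo
print("first flagged index:", int(np.argmax(flags)))
```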


Author(s):  
HONGQING ZHU ◽  
MIN LIU ◽  
YU LI ◽  
HUAZHONG SHU ◽  
HUI ZHANG

This paper presents two new sets of nonseparable discrete orthogonal Charlier and Meixner moments describing the images with noise and that are noise-free. The basis functions used by the proposed nonseparable moments are bivariate Charlier or Meixner polynomials introduced by Tratnik et al. This study discusses the computational aspects of discrete orthogonal Charlier and Meixner polynomials, including the recurrence relations with respect to variable x and order n. The purpose is to avoid large variation in the dynamic range of polynomial values for higher order moments. The implementation of nonseparable Charlier and Meixner moments does not involve any numerical approximation, since the basis function of the proposed moments is orthogonal in the image coordinate space. The performances of Charlier and Meixner moments in describing images were investigated in terms of the image reconstruction error, and the results of the experiments on the noise sensitivity are given.


Author(s):  
A. YAZDANI ◽  
V. NASSEHI

This paper presents a technique for deriving least-squares-based polynomial bubble functions to enrich the standard linear finite elements employed in the formulation of Galerkin weighted-residual statements. The element-level linear shape functions are enhanced using supplementary polynomial bubble functions with undetermined coefficients. The enhanced shape functions are inserted into the model equation, and the residual functional is constructed and minimized by the method of least squares, resulting in an algebraic system of equations which can be solved to determine the unknown polynomial coefficients in terms of element-level nodal values. The stiffness matrices are subsequently formed using these enriched elements with the standard finite element assembly procedures; the enriched elements require no additional nodes and incur no extra degrees of freedom. Furthermore, the proposed technique is tested on a number of benchmark linear transport equations for which quadratic and cubic bubble functions are derived, and the numerical results are compared against the exact and standard linear element solutions. It is demonstrated that low-order bubble-enriched elements approximate the exact analytical solutions more accurately than standard linear elements at no extra computational cost, even on relatively crude meshes. On the other hand, it is observed that a satisfactory solution of strongly convection-dominated transport problems may require element enrichment with significantly higher-order polynomial bubble functions, in addition to the use of extremely fine computational meshes.
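As a concrete illustration (assuming the classical quadratic bubble on a one-dimensional linear element; the paper derives its bubbles by least squares for general operators), the enriched element-level trial function can be written as

$$u_h(\xi) = (1-\xi)\,u_1 + \xi\,u_2 + c\,\xi(1-\xi), \qquad 0 \le \xi \le 1,$$

where the bubble $\xi(1-\xi)$ vanishes at both nodes, and the coefficient $c$ is obtained at element level by minimizing the least-squares residual functional $\int_0^1 \bigl(\mathcal{L}u_h - f\bigr)^2\,d\xi$ of the model equation $\mathcal{L}u = f$, so no extra global degrees of freedom are introduced.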


10.37236/2013 ◽  
2011 ◽  
Vol 18 (2) ◽  
Author(s):  
William Y.C. Chen ◽  
Qing-Hu Hou ◽  
Hai-Tao Jin

By combining Abel's lemma on summation by parts with Zeilberger's algorithm, we give an algorithm, called the Abel-Zeilberger algorithm, to find recurrence relations for definite summations. The role of Abel's lemma can be extended to the case of linear difference operators with polynomial coefficients. This approach can be used to verify and discover identities involving harmonic numbers and derangement numbers. As examples, we use the Abel-Zeilberger algorithm to prove the Paule-Schneider identities, an identity of Andrews and Paule, and an identity of Calkin.
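For reference, Abel's lemma on summation by parts, which the algorithm combines with Zeilberger's algorithm, can be stated in difference form as

$$\sum_{k=m}^{n} a_k\,\Delta b_k \;=\; a_{n+1}b_{n+1} - a_m b_m \;-\; \sum_{k=m}^{n} b_{k+1}\,\Delta a_k, \qquad \Delta f_k := f_{k+1}-f_k;$$

the paper's extension to general linear difference operators with polynomial coefficients is not reproduced here.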


Geophysics ◽  
2018 ◽  
Vol 83 (1) ◽  
pp. T31-T38 ◽  
Author(s):  
Wim A. Mulder

One way to deal with the storage problem for the forward source wavefield in reverse time migration and full-waveform inversion is the reconstruction of that wavefield during reverse time stepping along with the receiver wavefield. Apart from the final states of the source wavefield, this requires a strip of boundary values for the whole time range in the presence of absorbing boundaries. The width of the stored boundary strip, positioned in between the interior domain of interest and the absorbing boundary region, usually equals about half that of the finite-difference stencil. The required storage in 3D with high frequencies can still lead to a decrease in computational efficiency, despite the substantial reduction in data volume compared with storing the source wavefields at all or at appropriately subsampled time steps. We have developed a method that requires a boundary strip with a width of just one point and has a negligible loss of accuracy. Stored boundary values over time enable the computation of the second and higher even spatial derivatives normal to the boundary, which together with extrapolation from the interior provides stability and accuracy. Numerical tests show that the use of only the boundary values provides at most fourth-order accuracy for the reconstruction error in the source wavefield. The use of higher even normal derivatives, reconstructed from the stored boundary values, allows for higher orders as numerical examples up to order 26 demonstrate. Subsampling in time is feasible with high-order interpolation and provides even more storage reduction but at a higher computational cost.
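A minimal one-dimensional sketch of the basic idea (a second-order stencil with a simple damping sponge, so a one-point boundary strip already gives exact reconstruction; the paper's contribution, recovering high-order accuracy from a one-point strip via even normal derivatives, is not reproduced here):

```python
import numpy as np

nx, nt = 300, 900
c, dx, dt = 1500.0, 2.0, 0.0005
r2 = (c * dt / dx) ** 2                 # squared Courant number (<= 1)
ns = 50                                 # width of each absorbing sponge zone
i0, i1 = ns, nx - 1 - ns                # interior domain of interest
js = nx // 2                            # source position (inside the interior)

damp = np.ones(nx)                      # crude exponential damping sponge
g = np.exp(-(0.02 * np.arange(ns, 0, -1)) ** 2)
damp[:ns], damp[-ns:] = g, g[::-1]

tt = (np.arange(nt) * dt - 0.05) * 300.0
src = (1.0 - 2.0 * tt ** 2) * np.exp(-tt ** 2)   # Ricker-like wavelet

def lap(u):
    out = np.zeros_like(u)
    out[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
    return out

# Forward pass. The full history U is kept here only to measure the error;
# the reconstruction itself uses only the final two snapshots and the strip.
U = np.zeros((nt + 1, nx))
strip = np.zeros((nt + 1, 2))           # one stored point on each side
for k in range(1, nt):
    U[k + 1] = damp * (2.0 * U[k] - U[k - 1] + r2 * lap(U[k]))
    U[k + 1, js] += src[k]
    strip[k + 1] = U[k + 1, [i0, i1]]

# Reverse pass: rebuild the interior wavefield from U[nt], U[nt-1] and the strip.
v_next, v_cur = U[nt].copy(), U[nt - 1].copy()
err = 0.0
for k in range(nt - 1, 1, -1):
    v_prev = np.zeros(nx)
    v_prev[i0 + 1:i1] = (2.0 * v_cur[i0 + 1:i1] - v_next[i0 + 1:i1]
                         + r2 * lap(v_cur)[i0 + 1:i1])
    v_prev[js] += src[k]                 # re-inject the source term
    v_prev[[i0, i1]] = strip[k - 1]      # impose the stored boundary strip
    err = max(err, np.max(np.abs(v_prev[i0:i1 + 1] - U[k - 1, i0:i1 + 1])))
    v_next, v_cur = v_cur, v_prev
print("max interior reconstruction error:", err)   # machine-precision level
```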


2019 ◽  
Vol 2019 ◽  
pp. 1-19 ◽  
Author(s):  
Xiao Wei ◽  
Haichao Chang ◽  
Baiwei Feng ◽  
Zuyuan Liu

To truly reflect ship performance under the influence of uncertainties, uncertainty-based design optimization (UDO) for ships, which fully considers various uncertainties in the early design stage, has gradually received more and more attention. Meanwhile, it also brings high-dimensionality problems, which may result in inefficient and impractical optimization. Sensitivity analysis (SA), which can qualitatively or quantitatively evaluate the influence of the model input uncertainty on the model output, is a feasible way to alleviate this problem: uninfluential uncertain variables can be identified and removed to achieve dimension reduction. In this paper, polynomial chaos expansion (PCE), which has a low computational cost, is chosen to obtain Sobol' global sensitivity indices directly from its polynomial coefficients; that is, once the polynomial expansion of the output variable is established, computing the sensitivity indices is merely postprocessing of the coefficients. In addition, to further reduce the computational cost of solving for the PCE coefficients, an improved probabilistic collocation method (IPCM) based on the linear independence principle and the properties of orthogonal polynomials is proposed to reduce the number of sample points. Finally, the proposed method is applied to UDO of a bulk carrier preliminary design to ensure the robustness and reliability of the ship.
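A minimal sketch of how Sobol' indices fall out of PCE coefficients, using a toy two-variable model and plain least-squares regression for the coefficients (the paper instead uses its IPCM to pick fewer collocation points, and its model is the ship design response, not this toy function):

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)

def model(x1, x2):                        # toy response standing in for the design model
    return x1 + 0.5 * x2**2 + 0.3 * x1 * x2

def psi(n, x):                            # Legendre polynomial, orthonormal w.r.t. U(-1, 1)
    return np.sqrt(2 * n + 1) * L.legval(x, [0] * n + [1])

deg = 3                                   # total polynomial degree of the PCE
alphas = [(i, j) for i in range(deg + 1) for j in range(deg + 1) if i + j <= deg]

# Least-squares regression for the PCE coefficients on random samples.
x = rng.uniform(-1, 1, size=(200, 2))
A = np.column_stack([psi(i, x[:, 0]) * psi(j, x[:, 1]) for i, j in alphas])
c, *_ = np.linalg.lstsq(A, model(x[:, 0], x[:, 1]), rcond=None)

# Sobol' indices are ratios of sums of squared coefficients: pure postprocessing.
var = sum(ck**2 for (i, j), ck in zip(alphas, c) if (i, j) != (0, 0))
S1 = sum(ck**2 for (i, j), ck in zip(alphas, c) if i > 0 and j == 0) / var
S2 = sum(ck**2 for (i, j), ck in zip(alphas, c) if j > 0 and i == 0) / var
print(f"first-order Sobol' indices: S1 = {S1:.3f}, S2 = {S2:.3f}")
```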


Vibration ◽  
2021 ◽  
Vol 4 (1) ◽  
pp. 232-247
Author(s):  
Christopher Van Damme ◽  
Alecio Madrid ◽  
Matthew Allen ◽  
Joseph Hollkamp

High fidelity finite element (FE) models are widely used to simulate the dynamic responses of geometrically nonlinear structures. The high computational cost of running long time duration analyses, however, has made nonlinear reduced order models (ROMs) attractive alternatives. While there are a variety of reduced order modeling techniques, their shared goal is to project the nonlinear response of the system onto a smaller number of degrees of freedom. Implicit Condensation (IC), a popular and non-intrusive technique, identifies the ROM parameters by fitting a polynomial model to static force-displacement data from FE model simulations. A notable drawback of these models, however, is that the number of polynomial coefficients increases cubically with the number of modes included in the basis set of the ROM. As a result, model correlation, updating, and validation become increasingly expensive as the size of the ROM increases. This work presents simultaneous regression and selection as a method for filtering the polynomial coefficients of a ROM based on their contributions to the nonlinear response. In particular, this work utilizes the least absolute shrinkage and selection operator (LASSO) to identify a sparse set of ROM coefficients during the IC regression step. Cross-validation is used to demonstrate the accuracy of the sparse models over a range of loading conditions.
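A small illustration of the regression-and-selection step on synthetic data (the force-displacement samples, the sparse "true" coefficients, and the penalty weight below are all invented for the demo; the paper fits LASSO to static data extracted from an FE model):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)

# Synthetic stand-in for the static force-displacement data used by Implicit
# Condensation: q are modal displacements, f a nonlinear restoring force whose
# true polynomial model is sparse.
q = rng.uniform(-1.0, 1.0, size=(300, 3))
f = 5.0 * q[:, 0]**3 + 2.0 * q[:, 0] * q[:, 1]**2 + 0.01 * rng.standard_normal(300)

# Monomials in the modal coordinates up to cubic order, as in an IC-style ROM.
feats = PolynomialFeatures(degree=3, include_bias=False)
X = feats.fit_transform(q)

# LASSO (l1-penalised least squares) performs regression and selection at once:
# most coefficients are driven exactly to zero, leaving a sparse ROM.
lasso = Lasso(alpha=1e-3, max_iter=50_000).fit(X, f)
kept = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
for name, coef in zip(np.array(feats.get_feature_names_out())[kept], lasso.coef_[kept]):
    print(f"{name:>10s}: {coef: .3f}")
```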

