Reliable Recurrence Algorithm for High-Order Krawtchouk Polynomials

Entropy ◽  
2021 ◽  
Vol 23 (9) ◽  
pp. 1162
Author(s):  
Khaled A. AL-Utaibi ◽  
Sadiq H. Abdulhussain ◽  
Basheera M. Mahmmod ◽  
Marwah Abdulrazzaq Naser ◽  
Muntadher Alsabah ◽  
...  

Krawtchouk polynomials (KPs) and their moments are promising tools for applications in information theory, coding theory, and signal processing, owing to the special capabilities of KPs in feature extraction and classification. The main challenge in existing KP recurrence algorithms is numerical error, which occurs during the computation of the coefficients at large polynomial sizes, particularly when the KP parameter (p) deviates from 0.5 toward 0 or 1. To this end, this paper proposes a new recurrence relation for computing the coefficients of high-order KPs. In particular, this paper discusses the development of a new algorithm and presents a new mathematical model for computing the initial value of the KP parameter. In addition, a new diagonal recurrence relation is introduced and used in the proposed algorithm; it is derived from the existing n-direction and x-direction recurrence algorithms. The diagonal and existing recurrence relations are then combined to compute the KP coefficients: the KP plane is divided into four partitions, the coefficients are computed for one partition, and symmetry relations are exploited to obtain the coefficients in the remaining partitions. The performance of the proposed recurrence algorithm was evaluated through comparisons with state-of-the-art works in terms of reconstruction error, polynomial size, and computation cost. The obtained results indicate that the proposed algorithm is reliable and computes fewer coefficients than the existing algorithms across wide ranges of the parameter p and polynomial size N, with an improvement ratio in the number of computed coefficients ranging from 18.64% to 81.55%. Besides this, the proposed algorithm can generate polynomials of an order ∼8.5 times larger than those generated using state-of-the-art algorithms.

2020 ◽  
Vol 6 (8) ◽  
pp. 81 ◽  
Author(s):  
Basheera M. Mahmmod ◽  
Alaa M. Abdul-Hadi ◽  
Sadiq H. Abdulhussain ◽  
Aseel Hussien

Discrete Krawtchouk polynomials are widely utilized in different fields for their remarkable characteristics, specifically the localization property. Discrete orthogonal moments are utilized as feature descriptors for images and video frames in computer vision applications. In this paper, we present a new method for computing discrete Krawtchouk polynomial coefficients swiftly and efficiently. The presented method proposes a new initial value that does not tend to zero as the polynomial size increases. In addition, a combination of the existing recurrence relations in the n- and x-directions is presented; the utilized recurrence relations are developed to reduce the computational cost. The proposed method computes approximately 12.5% of the polynomial coefficients, and symmetry relations are then employed to compute the rest. The proposed method is evaluated against existing methods in terms of computational cost and the maximum polynomial size that can be generated. In addition, an image reconstruction error analysis is performed using the proposed method for large signal sizes. The evaluation shows that the proposed method outperforms the other existing methods.
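
The moment-descriptor role of these polynomials can be illustrated with the standard separable 2-D transform. Here a QR-derived orthonormal matrix stands in for the normalized Krawtchouk basis; this is a sketch under that assumption, since any orthonormal Q obeys the same forward/inverse algebra:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
# Placeholder orthonormal basis; in practice the rows would be the
# weighted, normalized Krawtchouk polynomials evaluated on 0..N-1.
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))

def moments(f, Q):
    """Forward 2-D orthogonal moment transform M = Q f Q^T of an N x N block."""
    return Q @ f @ Q.T

def reconstruct(M, Q):
    """Inverse transform f = Q^T M Q."""
    return Q.T @ M @ Q
```

With a full set of moments the reconstruction is exact up to floating-point error; truncating M to low orders gives the compact feature descriptors used in computer vision.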


2020 ◽  
Author(s):  
Zhe Yang ◽  
Dejan Gjorgjevikj ◽  
Jian-Yu Long ◽  
Yan-Yang Zi ◽  
Shao-Hui Zhang ◽  
...  

Novelty detection is a challenging task in machinery fault diagnosis. A novel fault diagnostic method is developed that not only diagnoses known types of defects, but also detects novelties, i.e., the occurrence of new types of defects that have never been recorded. To this end, a sparse autoencoder-based multi-head deep neural network (DNN) is presented to jointly learn a shared encoding representation for both unsupervised reconstruction and supervised classification of the monitoring data. The detection of novelties is based on the reconstruction error. Moreover, the computational burden is reduced by directly training the multi-head DNN with the rectified linear unit activation function, instead of performing the pre-training and fine-tuning phases required for classical DNNs. The proposed method is applied to a benchmark bearing case study and to experimental data acquired from a delta 3D printer. The results show that it is able to accurately diagnose known types of defects, as well as to detect unknown defects, outperforming other state-of-the-art methods.
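
The reconstruction-error criterion for novelty detection can be sketched with a linear stand-in for the sparse autoencoder (a PCA encoder/decoder; the names and the thresholding scheme are our assumptions, not the paper's network):

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    """PCA as a linear stand-in for an autoencoder: the encoder is the
    top-k principal directions, the decoder is their transpose."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k]                          # k x d encoder matrix
    return mu, W

def reconstruction_error(x, mu, W):
    """Error between a sample and its encode-decode round trip."""
    z = W @ (x - mu)                    # encode
    x_hat = mu + W.T @ z                # decode
    return np.linalg.norm(x - x_hat)
```

A sample whose reconstruction error exceeds a threshold calibrated on known-class data would be flagged as a novelty, while samples below the threshold are passed on to the classification head.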


Author(s):  
Wenqiang Yuan ◽  
Yusheng Liu

In this work, we present a new multi-objective particle swarm optimization (PSO) algorithm characterized by the use of geometric analysis of the particles. The proposed method, called geometry analysis PSO (GAPSO), first parameterizes the data points of the optimization model of the mechatronic system to obtain their parameter values; a curve or surface is then fitted to these points, and the tangent and normal values at each point are acquired; finally, the particles are guided by these derivative and tangent values to approximate the true Pareto front with a uniform distribution. Our proposed method is compared with two multi-objective metaheuristics representative of the state of the art in this area. The experiments carried out indicate that GAPSO obtains remarkable results in terms of both accuracy and distribution.
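
The Pareto-front machinery that GAPSO targets rests on the standard dominance relation, sketched here for minimization (helper names are ours):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

A multi-objective PSO maintains an archive of such non-dominated solutions; GAPSO's contribution is in how particles are steered toward that front using the fitted curve's tangent and normal information.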


Author(s):  
Liliane do Nascimento Vale ◽  
Marcelo de Almeida Maia

Inadequate documentation of software design has been known to be a barrier for developers. Interestingly, several relevant object-oriented systems have their design documented using key classes, which are meant to represent key concepts of the systems. In order to fill the gap of under-documented design, we present Keecle, an approach for detecting a predefined number of key classes in a semi-automatic way. The main challenge is to reduce the space of potentially thousands of classes to just a few representatives of the main concepts of a system, while maintaining high precision. The approach is evaluated with 13 systems in order to assess its correctness. The ground truth is obtained either from the original documentation, from third parties, or from the respective developers. The results were analyzed in terms of precision and recall and were shown to be superior to the state-of-the-art approach. In order to evaluate whether key classes are more critical from the design point of view, we examined whether they are associated with cohesion and coupling metrics. We found that although key classes are, in general, critical from the design point of view, there are other classes that are also critical, suggesting that awareness of key classes encompasses information not available in structural metrics and could be useful as an additional facet for design assessment.


2012 ◽  
Vol 24 (11) ◽  
pp. 2900-2923 ◽  
Author(s):  
A. Llera ◽  
V. Gómez ◽  
H. J. Kappen

We introduce a probabilistic model that combines a classifier with an extra reinforcement signal (RS) encoding the probability of an erroneous feedback being delivered by the classifier. This representation computes the class probabilities given the task-related features and the reinforcement signal. Estimating the parameter values of this model via expectation maximization (EM) shows that some existing adaptive classifiers are particular cases of such an EM algorithm. Further, we present a new algorithm for adaptive classification, which we call the constrained means adaptive classifier, and show using EEG data and simulated RS that this classifier is able to significantly outperform state-of-the-art adaptive classifiers.
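
The core idea of weighting a classifier's posterior by the reliability of the reinforcement signal can be sketched as a fixed-likelihood Bayes update, assuming a known RS error rate eps; the paper's EM-estimated model is richer than this simplified version:

```python
import numpy as np

def combine(p_class, rs_label, eps):
    """Combine the classifier posterior p_class over C classes with a
    reinforcement signal rs_label that is wrong with probability eps,
    assuming errors are spread uniformly over the other classes."""
    C = len(p_class)
    lik = np.full(C, eps / (C - 1))     # likelihood of the RS given each class
    lik[rs_label] = 1.0 - eps
    post = p_class * lik
    return post / post.sum()            # normalized posterior
```

With eps = 0 the RS overrides the classifier entirely; with eps = (C-1)/C it carries no information and the classifier posterior is returned unchanged.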


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1415
Author(s):  
Dongqi Luo ◽  
Binqiang Si ◽  
Saite Zhang ◽  
Fan Yu ◽  
Jihong Zhu

In this paper, we focus on the bandlimited graph signal sampling problem. To sample graph signals, we need to find a small subset of nodes that minimizes the optimal reconstruction error. We formulate this as a subset selection problem and propose an efficient Pareto Optimization for Graph Signal Sampling (POGSS) algorithm. Since the evaluation of the objective function is very time-consuming, a novel acceleration algorithm is proposed in this paper as well, which speeds up the evaluation of any solution. Theoretical analysis shows that POGSS finds the desired solution in quadratic time while guaranteeing nearly the best known approximation bound. Empirical studies on both Erdős–Rényi graphs and Gaussian graphs demonstrate that our method outperforms the state-of-the-art greedy algorithms.
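
A simple greedy baseline for the same subset-selection problem (not POGSS itself) picks nodes that keep the sampled rows of the bandlimited basis well conditioned, so that the signal can be recovered from its samples; all names here are ours:

```python
import numpy as np

def greedy_sample(U, m):
    """Greedily pick m sample nodes maximizing the smallest singular value
    of the sampled rows of the bandlimited basis U (n x k)."""
    S = []
    for _ in range(m):
        cand = [v for v in range(U.shape[0]) if v not in S]
        S.append(max(cand, key=lambda v: np.linalg.svd(
            U[S + [v]], compute_uv=False)[-1]))
    return S

# Demo: a 10-node path graph, with signals bandlimited to the span of the
# 3 smoothest Laplacian eigenvectors.
n = 10
A = np.diag(np.ones(n - 1), 1)
A = A + A.T                               # path-graph adjacency
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian
_, V = np.linalg.eigh(L)
U = V[:, :3]                              # bandlimited basis
S = greedy_sample(U, 3)
x = U @ np.array([1.0, -2.0, 0.5])        # a bandlimited signal
x_hat = U @ np.linalg.pinv(U[S]) @ x[S]   # recover it from 3 samples
```

Because the signal lies in the span of U and the selected rows are full rank, the least-squares reconstruction from just three samples is exact up to floating-point error.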


Author(s):  
Xiangteng He ◽  
Yuxin Peng ◽  
Junjie Zhao

Fine-grained visual categorization (FGVC) is the discrimination of similar subcategories, whose main challenge is to localize the quite subtle visual distinctions between them. There are two pivotal problems: discovering which region is discriminative and representative, and determining how many discriminative regions are necessary to achieve the best performance. Existing methods generally solve these two problems relying on prior knowledge or experimental validation, which severely restricts the usability and scalability of FGVC. To address the "which" and "how many" problems adaptively and intelligently, this paper proposes a stacked deep reinforcement learning approach (StackDRL). It adopts a two-stage learning architecture, which is driven by a semantic reward function. The two-stage learning localizes the object and its parts in sequence ("which") and determines the number of discriminative regions adaptively ("how many"), which is quite appealing in FGVC. The semantic reward function drives StackDRL to fully learn the discriminative and conceptual visual information, via jointly combining the attention-based reward and the category-based reward. Furthermore, unsupervised discriminative localization avoids the heavy labor of labeling and greatly strengthens the usability and scalability of our StackDRL approach. Compared with ten state-of-the-art methods on the CUB-200-2011 dataset, our StackDRL approach achieves the best categorization accuracy.


Author(s):  
Ridhi Arora ◽  
Vipul Bansal ◽  
Himanshu Buckchash ◽  
Rahul Kumar ◽  
Vinodh J Sahayasheela ◽  
...  

According to the WHO, COVID-19 is an infectious disease with a significant social and economic impact. The main challenge in fighting this disease is its scale. Due to the imminent outbreak, medical facilities are overexhausted and unable to accommodate the piling cases. A quick diagnosis system is required to address these challenges. To this end, a stochastic deep learning model is proposed. The main idea is to constrain the deep representations over a Gaussian prior to reinforce the discriminability in feature space. The model can work on chest X-ray or CT-scan images; it provides a fast diagnosis of COVID-19 and can scale seamlessly. This work presents a comprehensive evaluation of previously proposed approaches for X-ray-based disease diagnosis. Our approach works by learning a latent space over the X-ray image distribution from an ensemble of state-of-the-art convolutional nets, and then linearly regressing the predictions from an ensemble of classifiers which take the latent vector as input. We experimented with publicly available datasets having three classes: COVID-19, normal, and pneumonia. Moreover, for robust evaluation, experiments were performed on a large chest X-ray dataset with five different, very similar diseases. Extensive empirical evaluation shows how the proposed approach advances the state of the art.


2020 ◽  
Vol 15 (3) ◽  
pp. 1-5
Author(s):  
Evelyn Cristina de Oliveira Lima ◽  
André Borges Cavalcante ◽  
João Viana Da Fonseca Neto

One important step in the optimization of analog circuits is to properly size circuit components. Since the quantities that define the specification may compete for different circuit parameter values, the optimization of analog circuits becomes a hard and costly optimization problem. In this work, we propose two contributions to design automation methodologies based on machine learning. Firstly, we propose a probability annealing policy to boost early data collection and restrict electronic simulations later on in the optimization. Secondly, we employ multiple gradient boosted trees to predict design superiority, which reduces overfitting to learned designs. When compared to the state of the art, our approach reduces both the number of electronic simulations and the number of queries made to the machine learning module required to finish the optimization.
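
A probability annealing policy of the kind described can be sketched as a decaying simulate-or-predict coin flip; the schedule shape and all constants here are illustrative assumptions, not the paper's exact policy:

```python
import random

def simulate_probability(t, p0=1.0, decay=0.97, p_min=0.05):
    """Annealed probability of invoking the costly electronic simulator at
    iteration t: near-certain early on (to collect training data for the
    surrogate), decaying toward a small floor once the surrogate is trusted."""
    return max(p_min, p0 * decay ** t)

def should_simulate(t, rng=random):
    """Coin flip deciding between the simulator and the surrogate model."""
    return rng.random() < simulate_probability(t)
```

Candidates that skip simulation are scored by the surrogate (here, the gradient boosted trees), so the expensive simulator budget concentrates on the early, data-poor phase of the search.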


Author(s):  
Ming Xu ◽  
James Yang

The finite element method (FEM) has been used in human facial modeling both in clinical and engineering fields for decades. Applications of human head modeling include the interaction of personal protective equipment with the human head and modeling head impact. In human head modeling, it is critical to have a high fidelity model including accurate thicknesses of each layer and accurate material properties. Various experiments have been performed but do not report consistent results; therefore, it is difficult to find reliable parameter values to create an effective model of the human head. This paper attempts to review and summarize the state of the art of human facial studies including experimental measurements of different layer thicknesses and the mechanical properties of these layers.

