Preference Net: Image Recognition using Ranking Reduction to Classification

Author(s):  
Ayman Elgharabawy ◽  
Mukesh Prasad ◽  
Chin-Teng Lin

Accuracy and computational cost are the main challenges of deep neural networks in image recognition. This paper proposes an efficient ranking-reduction-to-binary-classification approach using a new feed-forward network and feature selection based on ranking the image pixels. Preference net (PN) is a novel deep ranking learning approach based on the Preference Neural Network (PNN), which uses a new ranking objective function and a positive smooth staircase (PSS) activation function to accelerate the ranking of image pixels. PN has a new type of weighted kernel based on Spearman rank correlation, instead of convolution, to build the feature matrix. PN employs multiple kernels of different sizes to partially rank image pixels in order to find the best feature sequence. PN consists of multiple PNNs that share an output layer, with a separate PNN for each ranker kernel. The output results are converted to classification accuracy using a score function. Using a weighted-average ensemble of the PN models for each kernel, PN achieves promising results compared with the latest deep learning (DL) networks on the CIFAR-10 and Fashion-MNIST datasets, in terms of both accuracy and lower computational cost.
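The abstract's ranker kernel replaces the convolution inner product with a Spearman rank correlation between an image patch and a kernel. A minimal sketch of that scoring step, assuming a kernel is represented by a reference pixel ranking (the function names and tie-handling details here are illustrative, not the paper's implementation):

```python
import numpy as np

def rankdata(x):
    """Average 1-based ranks with ties (same convention as scipy.stats.rankdata)."""
    order = np.argsort(x, kind="stable")
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    for v in np.unique(x):          # average the ranks of tied values
        mask = x == v
        ranks[mask] = ranks[mask].mean()
    return ranks

def spearman_kernel(patch, reference):
    """Spearman rank correlation between a flattened image patch and a
    kernel's reference pixel ranking; +1 means identical pixel ordering."""
    r1, r2 = rankdata(patch.ravel()), rankdata(reference.ravel())
    r1 = r1 - r1.mean()
    r2 = r2 - r2.mean()
    denom = np.sqrt((r1 ** 2).sum() * (r2 ** 2).sum())
    return float(r1 @ r2 / denom)
```

Sliding this score over the image, as a convolution would slide a dot product, yields a feature matrix that depends only on the ordering of pixel intensities, not their magnitudes.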

2000 ◽  
Vol 12 (11) ◽  
pp. 2655-2684 ◽  
Author(s):  
Manfred Opper ◽  
Ole Winther

We derive a mean-field algorithm for binary classification with Gaussian processes that is based on the TAP approach originally proposed in the statistical physics of disordered systems. The theory also yields an approximate leave-one-out estimator for the generalization error, which is computed with no extra computational cost. We show that from the TAP approach it is possible to derive both a simpler “naive” mean-field theory and support vector machines (SVMs) as limiting cases. For both mean-field algorithms and support vector machines, simulation results for three small benchmark data sets are presented. They show that one may obtain state-of-the-art performance by using the leave-one-out estimator for model selection, and that the built-in leave-one-out estimators are extremely precise when compared to the exact leave-one-out estimate. The second result is taken as strong support for the internal consistency of the mean-field approach.
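The TAP leave-one-out estimator itself is model-specific, but the idea of a LOO error that costs no more than a single fit has a well-known classical analogue: for ridge regression the exact LOO residuals follow from one hat matrix. The sketch below illustrates that "built-in LOO for model selection" pattern (function names are mine; this is not the authors' estimator):

```python
import numpy as np

def loo_errors_ridge(X, y, lam):
    """Closed-form leave-one-out residuals for ridge regression:
    e_i = (y_i - yhat_i) / (1 - H_ii), with H = X (X^T X + lam I)^{-1} X^T.
    Exact, yet no more expensive than a single fit."""
    n, d = X.shape
    A = X.T @ X + lam * np.eye(d)
    H = X @ np.linalg.solve(A, X.T)        # hat matrix
    resid = y - H @ y                      # in-sample residuals
    return resid / (1.0 - np.diag(H))      # exact LOO residuals

def select_lambda(X, y, grid):
    """Pick the ridge parameter minimizing the LOO mean squared error."""
    scores = [np.mean(loo_errors_ridge(X, y, lam) ** 2) for lam in grid]
    return grid[int(np.argmin(scores))]
```

Using such an estimator for model selection, as the abstract does with its mean-field LOO, avoids refitting the model n times.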


1989 ◽  
Vol 7 (2-4) ◽  
pp. 363-364
Author(s):  
T. Radil ◽  
G. Nyman ◽  
P. Laurinen ◽  
S. Haikonen

2018 ◽  
Vol 7 (3) ◽  
pp. 367-376
Author(s):  
Ayman Al-Rawashdeh ◽  
Ziad Al-Qadi

Digital color images are now one of the most popular data types in the digital processing environment, and color image recognition plays an important role in many vital applications, which makes enhancing image recognition and retrieval systems an important issue. Raw pixel values can be used to recognize or retrieve an image, but the large size of color images requires correspondingly more time and memory to perform recognition and/or retrieval. In the current study, image local contrast was used to create a local contrast vector, which was then used as a key to recognize or retrieve the image. The proposed local contrast method was implemented and tested, and the obtained results demonstrated its efficiency compared with other methods.
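The abstract does not spell out how the local contrast vector is built, so the sketch below assumes one simple definition (per-channel histogram of each pixel's deviation from its neighborhood mean) purely to illustrate the key-then-match retrieval pattern; all names and parameters here are hypothetical:

```python
import numpy as np

def local_contrast_vector(img, win=3, bins=16):
    """Hypothetical local-contrast key: per-channel histogram of
    |pixel - local mean| over a win x win neighborhood."""
    h, w, c = img.shape
    pad = win // 2
    padded = np.pad(img.astype(float), ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    feats = []
    for ch in range(c):
        contrast = np.zeros((h, w))
        for i in range(h):              # a cumulative-sum mean would be faster
            for j in range(w):
                window = padded[i:i + win, j:j + win, ch]
                contrast[i, j] = abs(float(img[i, j, ch]) - window.mean())
        hist, _ = np.histogram(contrast, bins=bins, range=(0, 255))
        feats.append(hist / hist.sum())
    return np.concatenate(feats)

def retrieve(query, database):
    """Return the index of the database image whose key is closest (L2)."""
    qk = local_contrast_vector(query)
    dists = [np.linalg.norm(qk - local_contrast_vector(d)) for d in database]
    return int(np.argmin(dists))
```

The point of such a key is size: a 16-bin, 3-channel vector has 48 numbers regardless of image resolution, so matching keys is far cheaper than matching raw pixels.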


Author(s):  
Jose Carrillo ◽  
Shi Jin ◽  
Lei Li ◽  
Yuhua Zhu

We improve the recently introduced consensus-based optimization method proposed in [R. Pinnau, C. Totzeck, O. Tse and S. Martin, Math. Models Methods Appl. Sci., 27(01):183-204, 2017], which is a gradient-free optimization method for general nonconvex functions. We first replace the isotropic geometric Brownian motion by a component-wise one, thus removing the dimensionality dependence of the drift rate and making the method more competitive for high-dimensional optimization problems. Secondly, we utilize random mini-batch ideas to reduce the computational cost of calculating the weighted average toward which the individual particles relax. For its mean-field limit, a nonlinear Fokker-Planck equation, we prove, in both the time-continuous and semi-discrete settings, that the convergence of the method, which is exponential in time, is guaranteed with parameter constraints independent of the dimensionality. We also conduct numerical tests on high-dimensional problems to check the success rate of the method.
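Both modifications (component-wise noise, mini-batch consensus point) can be seen in a short discretized sketch of the scheme. Parameter values, the uniform initialization, and the function name are illustrative choices of mine, not taken from the paper:

```python
import numpy as np

def cbo_minimize(f, dim, n_particles=50, steps=500, dt=0.01,
                 lam=1.0, sigma=0.7, beta=30.0, batch=20, seed=0):
    """Consensus-based optimization sketch: particles relax toward a
    Gibbs-weighted consensus point computed on a random mini-batch, with
    component-wise (anisotropic) multiplicative noise."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3, 3, size=(n_particles, dim))
    for _ in range(steps):
        idx = rng.choice(n_particles, size=batch, replace=False)
        fb = np.array([f(x) for x in X[idx]])
        w = np.exp(-beta * (fb - fb.min()))              # Gibbs weights
        xbar = (w[:, None] * X[idx]).sum(0) / w.sum()    # mini-batch consensus
        drift = X - xbar
        # component-wise geometric Brownian motion: each coordinate's noise
        # scales with that coordinate's distance to the consensus point
        noise = sigma * drift * rng.normal(size=X.shape) * np.sqrt(dt)
        X = X - lam * drift * dt + noise
    fa = np.array([f(x) for x in X])
    return X[fa.argmin()]
```

Because the noise acts per coordinate rather than through the full Euclidean distance, its scale does not grow with the dimension, which is the source of the dimension-independent parameter constraints claimed above.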


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Ziting Pei ◽  
Xuhui Wang ◽  
Xingye Yue

G-expected shortfall (G-ES), a new type of worst-case expected shortfall (ES), is defined to measure risk under the infinitely many distributions induced by volatility uncertainty. Compared with extant notions of the worst-case ES, the G-ES can be computed using an explicit formula with low computational cost. We also conduct backtests for the G-ES. The empirical analysis demonstrates that the G-ES is a reliable risk measure.
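The abstract does not reproduce the G-ES formula, but the classical notion it generalizes is easy to state: the expected shortfall at level alpha is the average loss beyond the value-at-risk. A minimal sample estimator of that baseline quantity (the function name is mine):

```python
import numpy as np

def expected_shortfall(losses, alpha=0.95):
    """Classical sample expected shortfall: the mean of the losses at or
    beyond the alpha-quantile (value-at-risk). The G-ES described above
    replaces the single distribution by a worst case over a family of
    distributions induced by volatility uncertainty."""
    losses = np.sort(np.asarray(losses, dtype=float))
    var = np.quantile(losses, alpha)     # value-at-risk at level alpha
    tail = losses[losses >= var]
    return tail.mean()
```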


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
LiMin Wang ◽  
ShuangCheng Wang ◽  
XiongFei Li ◽  
BaoRong Chi

Of the numerous proposals to improve the accuracy of naive Bayes (NB) by weakening the conditional independence assumption, the averaged one-dependence estimator (AODE) demonstrates remarkable zero-one loss performance. However, indiscriminately chosen superparent attributes bring both considerable computational cost and a negative effect on classification accuracy. In this paper, to extract the most credible dependencies, we present a new type of semi-naive Bayesian operation, which selects superparent attributes by building a maximum weighted spanning tree and removes highly correlated children attributes by functional dependency and canonical cover analysis. Our extensive experimental comparison on UCI data sets shows that this operation efficiently identifies possible superparent attributes at training time and eliminates redundant children attributes at classification time.
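The maximum-weighted-spanning-tree step can be sketched with empirical mutual information as the (symmetric) edge weight and Prim's algorithm over attribute indices. This is a generic Chow-Liu-style sketch under those assumptions, not the paper's exact weighting or selection criterion:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information between two discrete attribute columns."""
    n = len(xs)
    pxy = Counter(zip(xs, ys)); px = Counter(xs); py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        mi += (c / n) * math.log((c * n) / (px[x] * py[y]))
    return mi

def maximum_spanning_tree(n_attrs, weight):
    """Prim's algorithm on attribute indices 0..n_attrs-1; weight(i, j) is a
    symmetric dependence measure. Attributes on high-weight tree edges are
    the candidate superparents in the spirit of the selection step above."""
    in_tree = {0}
    edges = []
    while len(in_tree) < n_attrs:
        best = max(((i, j) for i in in_tree
                    for j in range(n_attrs) if j not in in_tree),
                   key=lambda e: weight(*e))
        edges.append(best)
        in_tree.add(best[1])
    return edges
```

Restricting superparents to tree edges keeps only the strongest pairwise dependencies, which is what filters out the "indiscriminate" superparents the abstract warns about.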


Author(s):  
Nicole M. W. Poe ◽  
D. Keith Walters ◽  
Edward A. Luke ◽  
Christopher I. Morris

A numerical method is presented for low-dissipation, high-resolution finite-volume CFD simulations of turbulent flow. The convective fluxes in the governing equations are computed using a conventional upwind-biased second-order scheme, with a modified linear reconstruction of face states from neighboring cells. The new scheme, dubbed optimization-based gradient reconstruction (OGRE), incorporates two key enhancements to improve performance. The first is an iterative least-squares gradient computation procedure which minimizes the second-order dissipative error contribution to the face reconstruction on structured Cartesian meshes. The second is a slope limiting scheme which enforces local monotonicity near discontinuities without the detrimental effect of limiting in smooth regions of the flowfield. In addition, for density-based methods employing flux-difference splitting for the convective terms, a recently proposed weighted average is used to obtain the reconstructed face variable values, which improves accuracy in subsonic flow regions and eliminates the need for preconditioning. The new method has been implemented in the Ansys FLUENT and Loci-CHEM flow solvers, and is validated for several test cases by comparison to a conventional linear reconstruction implementation. Results clearly show the advantage of the new scheme over conventional upwind-biased second-order schemes in terms of accuracy, particularly with regard to LES/DNS simulation. The most significant improvement is obtained for Cartesian meshes and low Mach number flows, but all test cases showed some level of improvement using the new scheme. The method is also quantified in terms of increased computational cost versus traditional methods. Based on the results shown here, the method appears to represent a viable alternative to currently used centered and blended schemes in terms of accuracy, robustness, and computational expense.
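The second-order face reconstruction with slope limiting that OGRE modifies can be illustrated in one dimension. The sketch below is a generic MUSCL-type reconstruction with a minmod limiter, not the OGRE scheme itself; it shows the baseline behavior (limited slopes keep face states monotone near a jump, while reproducing exact face values on linear data):

```python
import numpy as np

def minmod(a, b):
    """Return the smaller-magnitude argument when signs agree, else 0."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_face_states(u):
    """Second-order linear reconstruction of left/right states at the
    interior faces of a 1-D mesh with cell averages u, using a minmod
    slope limiter (a generic MUSCL sketch)."""
    du = np.diff(u)                        # one-sided differences
    slope = np.zeros_like(u)
    slope[1:-1] = minmod(du[:-1], du[1:])  # limited cell slopes
    # face i+1/2: left state from cell i, right state from cell i+1
    uL = u[:-1] + 0.5 * slope[:-1]
    uR = u[1:] - 0.5 * slope[1:]
    return uL, uR
```

Schemes such as minmod clip slopes even in smooth regions, which is exactly the "detrimental effect" the abstract's limiter is designed to avoid near smooth extrema.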


Author(s):  
Abha S Bais ◽  
Dennis Kostka

Abstract
Motivation: Single-cell RNA sequencing (scRNA-seq) technologies enable the study of transcriptional heterogeneity at the resolution of individual cells and have an increasing impact on biomedical research. However, it is known that these methods sometimes wrongly consider two or more cells as single cells, and that a number of so-called doublets is present in the output of such experiments. Treating doublets as single cells in downstream analyses can severely bias a study’s conclusions, and therefore computational strategies for the identification of doublets are needed.
Results: With scds, we propose two new approaches for in silico doublet identification: co-expression based doublet scoring (cxds) and binary classification based doublet scoring (bcds). The co-expression based approach, cxds, utilizes binarized (absence/presence) gene expression data and, employing a binomial model for the co-expression of pairs of genes, yields interpretable doublet annotations. bcds, on the other hand, uses a binary classification approach to discriminate artificial doublets from original data. We apply our methods and existing computational doublet identification approaches to four datasets with experimental doublet annotations and find that our methods perform at least as well as the state of the art, at comparably little computational cost. We observe appreciable differences between methods and across datasets and that no approach dominates all others. In summary, scds presents a scalable, competitive approach that allows for doublet annotation of datasets with thousands of cells in a matter of seconds.
Availability and implementation: scds is implemented as a Bioconductor R package (doi: 10.18129/B9.bioc.scds).
Supplementary information: Supplementary data are available at Bioinformatics online.
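The cxds idea (binarize expression, find gene pairs that are co-expressed less often than independence predicts, flag cells that nevertheless co-express them) can be sketched as follows. This is a simplified illustration under a plain independence model, not the exact scds binomial scoring; the function name and the pair-selection cutoff are mine:

```python
import numpy as np

def cxds_like_scores(counts, top_pairs=100):
    """Co-expression-style doublet score sketch: binarize a cells x genes
    count matrix, rank gene pairs by how far their observed co-expression
    falls below the independence expectation (candidate mutually exclusive
    markers), and score each cell by how many such pairs it co-expresses."""
    B = (counts > 0).astype(float)       # presence/absence, cells x genes
    n = B.shape[0]
    p = B.mean(0)                        # per-gene detection frequency
    obs = B.T @ B                        # observed pairwise co-expression
    exp = n * np.outer(p, p)             # expected under independence
    dev = exp - obs                      # large => under-co-expressed pair
    np.fill_diagonal(dev, -np.inf)
    flat = np.argsort(dev, axis=None)[::-1][:2 * top_pairs]  # (i,j) and (j,i)
    pairs = [np.unravel_index(k, dev.shape) for k in flat]
    score = np.zeros(n)
    for i, j in pairs:
        score += B[:, i] * B[:, j]       # cell co-expresses the marker pair
    return score
```

A doublet formed from two distinct cell types carries both types' markers, so it co-expresses many of the selected pairs and receives a high score.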


Aerospace ◽  
2020 ◽  
Vol 7 (8) ◽  
pp. 106
Author(s):  
Bereket Sitotaw Kidane ◽  
Enrico Troiani

Wing shape adaptability during flight is the next step towards the greening of aviation. The shape of the wing is typically designed for one cruise point or a weighted average of several cruise points. However, a wing is subjected to a variety of flight conditions, which results in the aircraft flying sub-optimally during a portion of the flight. Shape adaptability can be achieved by tuning the shape of the winglet during flight. The design challenge is a winglet structure that allows the required shape adaptation while preserving the structural integrity needed to carry the aerodynamic loads; the shape-changing actuators must work against both the structural strains and the aerodynamic loads. Analyzing the full model in the preliminary design phase is computationally expensive; therefore, it is necessary to develop a simplified model. The goal of this paper is to derive an aeroelastic model of a wing and winglet in order to reduce the computational cost and complexity of designing a folding winglet. The static aeroelastic analysis is performed for a regional aircraft wing at sea level and service ceiling conditions, at three-degree and eight-degree angles of attack. The MSC Nastran aeroelastic tool is used to develop a finite element model (FEM), i.e., a beam model, and the aerodynamic loads are calculated based on the doublet lattice method (DLM).


2013 ◽  
Vol 347-350 ◽  
pp. 2580-2585
Author(s):  
Yuan Hua Guo ◽  
Chun Lun Huang

In this paper, a FLANN (functional link ANN) filter is presented for Gaussian noise. A FLANN is a single-layer network with expanded input vectors and has lower computational cost than an MLP (multilayer perceptron). Three types of functional expansion are discussed. Back propagation (BP) is used to train FLANNs with a nonlinear activation function, while matrix calculation is used for the identity activation function. Simulations show that convergence under BP is not guaranteed and depends on the initial weight matrix and the training images, and that a linear FLANN trained by matrix calculation outperforms both a nonlinear FLANN trained by BP and the Wiener filter in detail regions under Gaussian noise.
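The core FLANN trick is to expand each input through fixed nonlinear functions so a single weight layer can realize a nonlinear map, and, with an identity activation, training reduces to a linear least-squares (matrix) solve, as the abstract notes. A minimal sketch using a trigonometric expansion, one of the common functional expansions (the class and function names are mine, and the exact expansions in the paper may differ):

```python
import numpy as np

def trig_expand(x, order=2):
    """Trigonometric functional expansion: each input x_i is mapped to
    [x_i, sin(k*pi*x_i), cos(k*pi*x_i)] for k = 1..order."""
    feats = [x]
    for k in range(1, order + 1):
        feats.append(np.sin(k * np.pi * x))
        feats.append(np.cos(k * np.pi * x))
    return np.concatenate(feats)

class LinearFLANN:
    """Single-layer FLANN with identity activation: fitting is one
    least-squares solve over the expanded features, no iterative BP."""
    def __init__(self, order=2):
        self.order = order
        self.w = None

    def fit(self, X, y):
        Phi = np.array([trig_expand(x, self.order) for x in X])
        self.w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return self

    def predict(self, X):
        Phi = np.array([trig_expand(x, self.order) for x in X])
        return Phi @ self.w
```

The contrast with BP training is visible here: the matrix solve is deterministic and independent of any weight initialization, which matches the abstract's observation that BP convergence depends on the initial weight matrix.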

