A genetic feature weighting scheme for pattern recognition

2007 ◽  
Vol 14 (2) ◽  
pp. 161-171 ◽  
Author(s):  
Heesung Lee ◽  
Euntai Kim ◽  
Mignon Park

2012 ◽  
Vol 97 ◽  
pp. 332-343 ◽  
Author(s):  
Isaac Triguero ◽  
Joaquín Derrac ◽  
Salvador García ◽  
Francisco Herrera

2021 ◽  
pp. 1-12
Author(s):  
K. Seethappan ◽  
K. Premalatha

Although there has been considerable research on the detection of various forms of figurative language, no prior work has addressed the automatic classification of euphemisms. Our primary contribution is a system for the automatic classification of euphemistic phrases in a document. In this research, a large dataset of 100,000 sentences was collected from different resources for identifying euphemistic and non-euphemistic utterances. Several approaches are explored to improve euphemism classification: (1) combinations of lexical n-gram features, (2) three feature-weighting schemes, and (3) deep learning classification algorithms. Four machine learning algorithms (J48, Random Forest, Multinomial Naïve Bayes, and SVM) and three deep learning algorithms (Multilayer Perceptron, Convolutional Neural Network, and Long Short-Term Memory) are investigated with various combinations of features and feature-weighting schemes to classify the sentences. According to our experiments, the Convolutional Neural Network (CNN) achieves 95.43% precision, 95.06% recall, 95.25% F-score, 95.26% accuracy, and a Kappa of 0.905 using a combination of unigram and bigram features with the TF-IDF feature-weighting scheme. These results show that a CNN with combined unigram and bigram features and TF-IDF weighting outperforms the other six classification algorithms in detecting euphemisms in our dataset.
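As a rough illustration of the feature pipeline described in the abstract (not the authors' implementation), the sketch below combines unigram and bigram features with TF-IDF weighting and feeds them to a simple linear classifier; the sentences and labels are toy placeholders.

```python
# A minimal sketch, assuming a scikit-learn setup: unigram + bigram features
# with TF-IDF weighting, handed to a linear SVM baseline (not the paper's CNN).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Toy data; labels are illustrative only (1 = euphemism, 0 = non-euphemism).
sentences = [
    "he passed away last night",        # euphemism for "died"
    "she was let go from her job",      # euphemism for "fired"
    "the meeting starts at nine",       # literal
    "we bought groceries yesterday",    # literal
]
labels = [1, 1, 0, 0]

# ngram_range=(1, 2) yields the unigram + bigram combination;
# TfidfVectorizer applies the TF-IDF feature-weighting scheme.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(sentences)

clf = LinearSVC().fit(X, labels)
print(clf.predict(vectorizer.transform(["he kicked the bucket"])))
```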


Author(s):  
Tobias Sombra ◽  
Rose Santini ◽  
Emerson Morais ◽  
Walmir Couto ◽  
Alex Zissou ◽  
...  

Quantitative evaluation of a dataset can play an important role in pattern recognition for technical-scientific research on behavior and dynamics in social networks; adaptive feature-weighting approaches for the naive Bayes text classification algorithm are one example. This work presents an exploratory, quantitative data analysis that applies pattern recognition to the Mendeley research network in order to identify patterns underlying the popularity of document access. To better analyze the results, the work was divided into four categories, each with three subcategories corresponding to five, three, and two output classes. These categories emerged from the data collection itself, which also included open-access documents and split proceedings and journals into two further categories. On the test examples, the naive Bayes algorithm achieved its lowest error rate for the two-output-class subcategory under the popularity criterion in Mendeley.
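For orientation only, the sketch below shows a generic multinomial naive Bayes text classifier of the kind referred to above; the documents, popularity labels, and two-class binning are hypothetical stand-ins, not the paper's Mendeley pipeline.

```python
# A minimal sketch, assuming bag-of-words counts and a 2-class popularity
# binning (hypothetical); not the paper's Mendeley data or pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

docs = [
    "open access journal article",
    "conference proceedings paper",
    "open access proceedings",
    "subscription journal article",
]
popularity = [1, 0, 1, 0]  # e.g. 1 = frequently accessed, 0 = rarely accessed

X = CountVectorizer().fit_transform(docs)
model = MultinomialNB().fit(X, popularity)
print(accuracy_score(popularity, model.predict(X)))  # training accuracy on toy data
```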


2019 ◽  
Vol 7 (1) ◽  
pp. 238-250
Author(s):  
Adarsh S R

Human-Computer Interaction (HCI) studies the use of computer technology, focusing mainly on the interfaces between human users and computers. Recognizing expressions of emotion is challenging because they appear in plain text as well as in short-messaging language. This paper surveys emotion recognition from text and presents emotion detection methodologies based on a Machine Learning Approach (MLA). It addresses the problem of feature sparseness and substantially improves emotion recognition performance on short texts by pursuing three aims: (i) representing short texts with word-cluster features, (ii) presenting a novel word clustering algorithm, and (iii) employing a new feature-weighting scheme for emotion classification. Experiments were performed on a publicly available dataset, classifying emotions with different features and weighting schemes. Using word clusters in place of unigrams as features, the micro-averaged accuracy improved by more than three percentage points, indicating that the overall accuracy of the text emotion classifier was improved. All macro-averages improved by more than one percentage point, suggesting that word-cluster features can improve the generalization ability of the emotion classifier. The experimental results suggest that word-cluster features and the proposed weighting scheme can moderately alleviate feature sparseness and improve emotion recognition performance.
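To make the word-cluster idea concrete, the sketch below replaces unigram features with a bag-of-clusters representation by clustering word vectors with k-means; the word vectors, cluster count, and clustering method are generic stand-ins, not the paper's proposed algorithm or weighting scheme.

```python
# A minimal sketch of word-cluster features (generic k-means stand-in,
# not the paper's proposed clustering algorithm or weighting scheme).
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical pretrained word vectors (word -> dense vector).
word_vectors = {
    "happy": np.array([0.9, 0.1]), "joyful": np.array([0.85, 0.15]),
    "sad":   np.array([0.1, 0.9]), "gloomy": np.array([0.15, 0.85]),
}
vocab = list(word_vectors)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(
    np.stack([word_vectors[w] for w in vocab]))
word_to_cluster = dict(zip(vocab, kmeans.labels_))

def cluster_features(text, n_clusters=2):
    """Bag-of-clusters: count how many words of the text fall in each cluster."""
    counts = np.zeros(n_clusters)
    for w in text.lower().split():
        if w in word_to_cluster:
            counts[word_to_cluster[w]] += 1
    return counts

print(cluster_features("happy and joyful today"))  # counts per cluster, e.g. [2., 0.]
```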


2016 ◽  
Vol 13 (1) ◽  
pp. 286-293 ◽  
Author(s):  
Longjia Jia ◽  
Tieli Sun ◽  
Fengqin Yang ◽  
Hongguang Sun ◽  
Bangzuo Zhang

Author(s):  
G.Y. Fan ◽  
J.M. Cowley

In recent developments, the ASU HB5 has been modified so that the timing, positioning, and scanning of the finely focused electron probe can be entirely controlled by a host computer. This makes possible an asynchronous handshake between the HB5 STEM and the image processing system, which consists of a host computer (PDP 11/34), a DeAnza image processor (IP 5000) interfaced with a low-light-level TV camera, an array processor (AP 400), and various peripheral devices. This greatly facilitates the pattern recognition technique initiated by Monosmith and Cowley. Software called NANHB5 is under development which, instead of employing a set of photo-diodes to detect strong spots on a TV screen, uses various software techniques, including on-line fast Fourier transform (FFT), to recognize patterns of greater complexity, taking advantage of the sophistication of our image processing system and the flexibility of computer software.
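As a conceptual illustration of replacing hardware spot detection with FFT-based software analysis (not the NANHB5 software itself), the sketch below locates the strongest Fourier peaks in a synthetic image frame; the frame, peak count, and function name are assumptions for the example.

```python
# A minimal sketch: find strong periodic spots in an image frame via a 2-D FFT,
# in place of photo-diode spot detection (illustrative only, not NANHB5).
import numpy as np

def strong_spots(frame, n_spots=5):
    """Return (row, col) indices of the strongest Fourier peaks in a frame."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(frame)))
    spectrum[frame.shape[0] // 2, frame.shape[1] // 2] = 0.0  # suppress DC term
    flat = np.argsort(spectrum, axis=None)[::-1][:n_spots]
    return np.column_stack(np.unravel_index(flat, spectrum.shape))

# Synthetic frame: a periodic pattern plus white noise.
x = np.arange(128)
frame = np.sin(2 * np.pi * x / 8)[None, :] \
    + 0.1 * np.random.default_rng(0).standard_normal((128, 128))
print(strong_spots(frame))
```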


Author(s):  
L. Fei ◽  
P. Fraundorf

Interface structure is of major interest in microscopy. With high-resolution transmission electron microscopes (TEMs) and scanning probe microscopes, it is possible to reveal the structure of interfaces at the unit-cell level, in some cases with atomic resolution. A. Ourmazd et al. proposed quantifying such observations by using vector pattern recognition to map chemical composition changes across the interface in TEM images with unit-cell resolution. The sensitivity of the mapping process, however, is limited by the repeatability of unit-cell images of perfect crystal, and hence by the amount of delocalized noise, e.g. due to ion milling or beam radiation damage. Bayesian removal of noise, based on statistical inference, can be used to reduce the amount of non-periodic noise in images after acquisition. The basic principle of Bayesian phase-model background subtraction, according to our previous study, is that the optimum (rms-error-minimizing) Fourier phases of the noise can be obtained provided the noise amplitudes are given, while the noise amplitude can often be estimated from the image itself.
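The sketch below is only a conceptual stand-in for the idea of using an estimated noise amplitude to suppress non-periodic background in Fourier space; it uses a generic Wiener-style amplitude weighting, not the authors' Bayesian phase-model subtraction, and the noise level is assumed known for the toy example.

```python
# A conceptual stand-in (generic Wiener-style suppression, NOT the authors'
# Bayesian phase-model method): given an estimated noise amplitude, attenuate
# Fourier components whose amplitude is near the noise level.
import numpy as np

def suppress_background(image, noise_amplitude):
    """Down-weight Fourier components dominated by non-periodic noise."""
    F = np.fft.fft2(image)
    amp = np.abs(F)
    signal_power = np.clip(amp**2 - noise_amplitude**2, 0.0, None)
    weight = signal_power / np.maximum(amp**2, 1e-12)  # gain in [0, 1]
    return np.real(np.fft.ifft2(weight * F))

# Toy example: periodic lattice plus white noise; noise level assumed known.
rng = np.random.default_rng(0)
x = np.arange(64)
image = np.cos(2 * np.pi * x / 8)[None, :] + 0.5 * rng.standard_normal((64, 64))
noise_amp = 0.5 * np.sqrt(image.size)  # flat FFT amplitude of the white noise
cleaned = suppress_background(image, noise_amp)
print(image.std(), cleaned.std())
```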

