On the Scalability of Genetic Algorithms to Very Large-Scale Feature Selection

Author(s): Andreas Moser, M. Narasimha Murty

1989, Vol 10 (5), pp. 335-347
Author(s): W. Siedlecki, J. Sklansky

Author(s): Jack Sklansky, Wojciech Siedlecki

2018, Vol 73, pp. 171-178
Author(s): Mohammad K. Ebrahimpour, Hossein Nezamabadi-pour, Mahdi Eftekhari

2012, pp. 352-370
Author(s): Jeremy Kubica, Sameer Singh, Daria Sorokina

2013, Vol 46, pp. 203-233
Author(s): H. Zhao, X. Zhang, C. Kit

Semantic parsing, i.e., the automatic derivation of a meaning representation, such as an instantiated predicate-argument structure, for a sentence, plays a critical role in the deep processing of natural language. Unlike other top semantic dependency parsing systems, which rely on a pipeline framework to chain up a series of submodels, each specialized for a specific subtask, the system presented in this article integrates everything into one model, aiming for the integrity and practicality desirable in real applications while maintaining competitive performance. This integrative approach tackles semantic parsing as a word-pair classification problem using a maximum entropy classifier. We leverage adaptive pruning of argument candidates and large-scale feature selection engineering to enable the largest feature space used so far in this field. The resulting system achieves state-of-the-art performance on the evaluation data set of the CoNLL-2008 shared task, outperforming all but one of the top pipeline systems, confirming its feasibility and effectiveness.
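To make the "word-pair classification" framing concrete, the sketch below is a minimal, hypothetical illustration (not the authors' implementation): each candidate (head, dependent) word pair is mapped to a set of binary indicator features, and a maximum entropy (multinomial logistic) classifier trained by plain gradient ascent assigns it a semantic role label or NONE. The feature templates and toy sentence are invented for the example; the real system used far richer templates selected at large scale.

```python
# Illustrative sketch: semantic dependency parsing as word-pair classification
# with a maximum entropy classifier. Feature templates and data are hypothetical.
import math
from collections import defaultdict

def pair_features(head, dep):
    # Binary indicator features over a (head, dependent) word pair.
    return {
        f"head={head['form']}",
        f"dep={dep['form']}",
        f"head_pos={head['pos']}",
        f"dep_pos={dep['pos']}",
        f"pos_pair={head['pos']}+{dep['pos']}",
        f"dist={dep['idx'] - head['idx']}",
    }

class MaxEnt:
    """Multinomial logistic (maximum entropy) classifier over indicator features."""

    def __init__(self, labels):
        self.labels = labels
        self.w = defaultdict(float)  # weights keyed by (label, feature)

    def probs(self, feats):
        # Softmax over per-label scores, with a max-shift for numerical stability.
        scores = {y: sum(self.w[(y, f)] for f in feats) for y in self.labels}
        m = max(scores.values())
        exps = {y: math.exp(s - m) for y, s in scores.items()}
        z = sum(exps.values())
        return {y: e / z for y, e in exps.items()}

    def train(self, data, epochs=50, lr=0.5):
        # Stochastic gradient ascent on the conditional log-likelihood.
        for _ in range(epochs):
            for feats, gold in data:
                p = self.probs(feats)
                for y in self.labels:
                    grad = (1.0 if y == gold else 0.0) - p[y]
                    for f in feats:
                        self.w[(y, f)] += lr * grad

    def predict(self, feats):
        p = self.probs(feats)
        return max(p, key=p.get)

# Toy sentence and labeled word pairs (hypothetical data).
sent = [
    {"form": "ROOT", "pos": "ROOT", "idx": 0},
    {"form": "cats", "pos": "NN", "idx": 1},
    {"form": "sleep", "pos": "VB", "idx": 2},
]
train_data = [
    (pair_features(sent[2], sent[1]), "A0"),    # sleep -> cats: agent argument
    (pair_features(sent[1], sent[2]), "NONE"),  # cats -> sleep: no dependency
]
clf = MaxEnt(["A0", "NONE"])
clf.train(train_data)
```

In the full system, the set of candidate pairs fed to the classifier would first be reduced by adaptive pruning of argument candidates, and the feature templates themselves would be chosen by the large-scale feature selection procedure the abstract describes.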
