Generalized LR Parsing Algorithm for Grammars with One-Sided Contexts

2016 ◽  
Vol 61 (2) ◽  
pp. 581-605 ◽  
Author(s):  
Mikhail Barash ◽  
Alexander Okhotin
2006 ◽  
Vol 17 (03) ◽  
pp. 629-664 ◽  
Author(s):  
Alexander Okhotin

The generalized LR parsing algorithm for context-free grammars is extended to the case of Boolean grammars, which generalize context-free grammars by adding logical connectives to the formalism of rules. In addition to the standard LR operations, Shift and Reduce, the new algorithm uses a third operation called Invalidate, which reverses a previously made reduction. This operation makes the mathematical justification of the algorithm significantly different from that of its prototype. On the other hand, the changes in the implementation are not very substantial, and the algorithm still works in time O(n⁴).
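The interplay of the three operations can be illustrated with a toy stack model (this is only an illustration of the idea, not the paper's graph-structured-stack algorithm): Reduce records which symbols it consumed, so Invalidate can later restore them.

```python
# Toy illustration (not the paper's algorithm): a parse stack with
# Shift, Reduce, and Invalidate, where Invalidate undoes a reduction.
class ParseStack:
    def __init__(self):
        self.stack = []    # symbols currently on the stack
        self.history = []  # reductions that can still be reversed

    def shift(self, terminal):
        # Shift: push the next input symbol.
        self.stack.append(terminal)

    def reduce(self, nonterminal, arity):
        # Reduce: replace the top `arity` symbols by `nonterminal`,
        # remembering the children so the step can be reversed.
        children = self.stack[-arity:]
        del self.stack[-arity:]
        self.stack.append(nonterminal)
        self.history.append((nonterminal, children))

    def invalidate(self):
        # Invalidate: reverse the most recent reduction, popping the
        # nonterminal and restoring the symbols it was reduced from.
        nonterminal, children = self.history.pop()
        assert self.stack[-1] == nonterminal
        self.stack.pop()
        self.stack.extend(children)
```

For instance, after shifting `a` and `b` and reducing them to `S`, an Invalidate step restores the stack to `['a', 'b']`, as if the reduction had never been made.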


1995 ◽  
Vol 2 (2) ◽  
pp. 59-74 ◽  
Author(s):  
Hozumi Tanaka ◽  
Takenobu Tokunaga ◽  
Michio Aizawa

1991 ◽  
pp. 1-16 ◽  
Author(s):  
Masaru Tomita ◽  
See-Kiong Ng

1995 ◽  
Vol 55 (3-4) ◽  
pp. 135-153 ◽  
Author(s):  
Benjamin R. Seyfarth ◽  
Manuel E. Bermudez

2013 ◽  
Vol 2013 ◽  
pp. 1-7
Author(s):  
Guo-Rong Cai ◽  
Shui-Li Chen

This paper presents an image parsing algorithm based on Particle Swarm Optimization (PSO) and Recursive Neural Networks (RNNs). The state-of-the-art RNN-based parsing strategy uses L-BFGS over the complete data to learn the parameters. However, this can cause problems because the objective function is nondifferentiable. To address this, the PSO algorithm is employed to tune the weights of the RNN so as to minimize the objective. Experimental results obtained on the Stanford background dataset show that the PSO-based training algorithm outperforms the traditional RNN, Pixel CRF, region-based energy, simultaneous MRF, and superpixel MRF methods.
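The appeal of PSO here is that it only evaluates the objective, never its gradient, so nondifferentiability is harmless. A minimal generic PSO sketch (function names and parameter values are illustrative, not taken from the paper):

```python
import random

def pso_minimize(objective, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimizer. Only function values are used,
    so the objective may be nondifferentiable."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Initialize particle positions and velocities.
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + attraction to personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the paper's setting, `objective` would evaluate the RNN parsing loss for a given weight vector; here it can be any black-box function, e.g. the nondifferentiable `abs(x - 2) + abs(y + 1)`.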


Author(s):  
Emily Pitler ◽  
Sampath Kannan ◽  
Mitchell Marcus

Dependency parsing algorithms capable of producing the types of crossing dependencies seen in natural language sentences have traditionally been orders of magnitude slower than algorithms for projective trees. For 95.8–99.8% of dependency parses in various natural language treebanks, whenever an edge is crossed, the edges that cross it all have a common vertex. The optimal dependency tree that satisfies this 1-Endpoint-Crossing property can be found with an O(n⁴) parsing algorithm that recursively combines forests over intervals with one exterior point. 1-Endpoint-Crossing trees also have natural connections to linguistics and another class of graphs that has been studied in NLP.
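The property itself is easy to state operationally: for each edge, collect the edges that cross it and check that some single vertex lies on all of them. A small checker (a sketch of the property, not the O(n⁴) parsing algorithm) makes this concrete:

```python
def crosses(e, f):
    # Two edges over word positions cross iff their endpoints interleave.
    (a, b), (c, d) = sorted(e), sorted(f)
    return a < c < b < d or c < a < d < b

def is_1ec(edges):
    """Check the 1-Endpoint-Crossing property: for every edge, all
    edges that cross it share a common vertex."""
    for e in edges:
        crossers = [f for f in edges if crosses(e, f)]
        if not crossers:
            continue
        # A common vertex, if any, must be an endpoint of any one crosser.
        if not any(all(v in f for f in crossers) for v in crossers[0]):
            return False
    return True
```

For example, the edge set {(2,7), (1,3), (5,8)} violates the property: both (1,3) and (5,8) cross (2,7) but share no vertex, whereas any projective (non-crossing) tree satisfies it trivially.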

