LR Parsers
Recently Published Documents


TOTAL DOCUMENTS

45
(FIVE YEARS 2)

H-INDEX

9
(FIVE YEARS 0)

Author(s):  
Eddie A Santos ◽  
Joshua C Campbell ◽  
Abram Hindle ◽  
José Nelson Amaral

Minor syntax errors are made by novice and experienced programmers alike; however, novice programmers lack the years of intuition that help them resolve these tiny errors. Standard LR parsers typically report syntax errors and their precise locations poorly. We propose a methodology that not only helps locate where syntax errors occur but also suggests possible changes to the token stream that can fix the error identified. This methodology finds syntax errors by checking whether two language models “agree” on each token. If the models disagree, that disagreement indicates a possible syntax error; the methodology then tries to suggest a fix by finding an alternative token sequence obtained from the models. We trained two LSTM (long short-term memory) language models on a large corpus of JavaScript code collected from GitHub. The dual LSTM neural network model predicts the correct location of the syntax error 54.74% of the time within its top four suggestions and produces an exact fix up to 35.50% of the time. The results show that this tool and methodology can locate and suggest corrections for syntax errors. Our methodology is of practical use to all programmers, but will be especially useful to novices frustrated with incomprehensible syntax errors.
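The core idea of the abstract — flag the token where two directional language models are most "surprised" — can be sketched with much simpler models than the paper's LSTMs. The toy below is an assumption-laden illustration, not the authors' implementation: it stands in add-one-smoothed forward and backward bigram models for the two LSTMs, trains them on a few hand-tokenized JavaScript snippets, and flags the interior position with the highest combined negative log-probability.

```python
import math
from collections import defaultdict

def train_bigrams(corpus):
    """Count forward (prev -> cur) and backward (next -> cur) bigrams."""
    fwd = defaultdict(lambda: defaultdict(int))
    bwd = defaultdict(lambda: defaultdict(int))
    vocab = set()
    for seq in corpus:
        vocab.update(seq)
        for a, b in zip(seq, seq[1:]):
            fwd[a][b] += 1   # used for P(b | previous token a)
            bwd[b][a] += 1   # used for P(a | next token b)
    return fwd, bwd, vocab

def prob(table, ctx, tok, V):
    """Add-one-smoothed conditional probability P(tok | ctx)."""
    row = table[ctx]
    return (row[tok] + 1) / (sum(row.values()) + V)

def locate_error(tokens, fwd, bwd, vocab):
    """Return the interior position where both models are most surprised."""
    V = len(vocab)
    best_pos, best_surprise = None, -1.0
    for i in range(1, len(tokens) - 1):
        s = (-math.log(prob(fwd, tokens[i - 1], tokens[i], V))
             - math.log(prob(bwd, tokens[i + 1], tokens[i], V)))
        if s > best_surprise:
            best_pos, best_surprise = i, s
    return best_pos

# Tiny hand-tokenized "JavaScript" training corpus (illustrative only).
corpus = [
    ["if", "(", "x", ")", "{", "}"],
    ["while", "(", "x", ")", "{", "}"],
    ["if", "(", "y", ")", "{", "}"],
]
fwd, bwd, vocab = train_bigrams(corpus)

# Corrupt a sequence: ")" replaced by "}" at index 3. Both models assign
# low probability there, so the combined surprise peaks at that position.
broken = ["if", "(", "x", "}", "{", "}"]
print(locate_error(broken, fwd, bwd, vocab))  # -> 3
```

The paper's LSTMs capture far longer contexts than bigrams, and its fix-suggestion step (searching for an alternative token sequence the models prefer) is omitted here; the sketch only shows why agreement between a forward and a backward model localizes an error better than a single left-to-right parse.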



Author(s):  
Angelo Borsotti ◽  
Luca Breveglieri ◽  
Stefano Crespi Reghizzi ◽  
Angelo Morzenti
2006 ◽  
Vol 148 (2) ◽  
pp. 155-180 ◽  
Author(s):  
François Pottier ◽  
Yann Régis-Gianas