GCNGAN: Translating Natural Language to Programming Language based on GAN

2021, Vol 1873 (1), pp. 012070
Author(s): Hongming Dai, Chen Chen, Yunjing Li, Yanghao Yuan

Author(s): Xiao Liu, Dinghao Wu

Programming remains a dark art for beginners and even professional programmers. Experience indicates that one of the first barriers to learning a new programming language is its rigid and unnatural syntax and semantics. After analyzing research on the language features used by non-programmers to describe problem solving, the authors propose a new program synthesis framework, dialog-based programming, which interprets natural language descriptions into computer programs without forcing a fixed input format. In this chapter, they describe three case studies that demonstrate the functionality of this program synthesis framework and show how natural language alleviates the challenges novice programmers face in software development, scripting, and verification.
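As a rough illustration of the idea (not the authors' actual system), a minimal sketch of interpreting free-form descriptions against a small library of program templates might look as follows; the template names and keywords are assumptions for demonstration only.

```python
# Hypothetical sketch of dialog-based interpretation: free-form natural
# language is matched against program templates instead of a rigid grammar.
# Templates and keywords below are illustrative assumptions, not the
# authors' framework.

TEMPLATES = {
    ("sort", "alphabetical"): "sorted(items)",
    ("sort", "reverse"): "sorted(items, reverse=True)",
    ("count", "words"): "len(text.split())",
}

def interpret(utterance: str) -> str:
    """Return the first program template whose keywords all appear in the utterance."""
    words = set(utterance.lower().split())
    for keywords, program in TEMPLATES.items():
        if all(k in words for k in keywords):
            return program
    return "# no matching template; ask a follow-up question in the dialog"

print(interpret("please sort these names in alphabetical order"))
# -> sorted(items)
```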


Author(s): Yudong Zhang, Wenhao Zheng, Ming Li

Semantic feature learning for natural language and programming language is a preliminary step in many software mining tasks. Many existing methods leverage lexical and syntactic information to learn features for textual data. However, such information is inadequate to represent the full semantics of either a text sentence or a code snippet. This motivates us to propose a new approach that learns semantic features for both languages by extracting three levels of information, namely global, local, and sequential information, from textual data. For tasks involving both modalities, we project both types of data into a uniform feature space so that the complementary knowledge between them can be exploited in their representations. In this paper, we build a novel, general-purpose feature learning framework called UniEmbed that uniformly learns comprehensive semantic representations for both natural language and programming language. Experimental results on three real-world software mining tasks show that UniEmbed outperforms state-of-the-art models in feature learning and demonstrate the capacity and effectiveness of our model.
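To make the notion of a uniform feature space concrete, here is a minimal sketch, not the UniEmbed implementation: feature vectors from the text and code modalities are projected into one shared space with separate linear maps and compared there. The dimensions and random projections are assumptions for demonstration only.

```python
# Illustrative sketch of projecting two modalities into a shared space.
import numpy as np

rng = np.random.default_rng(0)
d_text, d_code, d_shared = 300, 128, 64

W_text = rng.normal(size=(d_shared, d_text))  # projection for natural language features
W_code = rng.normal(size=(d_shared, d_code))  # projection for programming language features

def to_shared(features, W):
    """Map a modality-specific feature vector into the uniform feature space."""
    z = W @ features
    return z / np.linalg.norm(z)

text_vec = rng.normal(size=d_text)  # stand-in for global/local/sequential text features
code_vec = rng.normal(size=d_code)  # stand-in for code-snippet features

similarity = float(to_shared(text_vec, W_text) @ to_shared(code_vec, W_code))
print(f"cross-modal similarity: {similarity:.3f}")
```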


1992, Vol 8 (2), pp. 129-153
Author(s): Judith Segal, Khurshid Ahmad, Margaret Rogers

We report on an investigation into the systematic errors made by a large group of programming language students over a period of two years. The investigation, and the learner-centered longitudinal study of which it formed a part, were both inspired by recent research into second natural language acquisition. The results of the investigation demonstrate that students had major difficulties using the semicolon, the sequencing operator of the programming language ALGOL 68. We argue that this difficulty arose because students did not immediately understand a specific, simply stated rule of syntax introduced in a decontextualized way; rather, their understanding of the rule developed with their increasing experience of using it in different contexts. We suggest that such systematic low-level syntactic errors may be indicative of higher-level misconceptions regarding the structure of the language.


1980, Vol 3 (3), pp. 269-293
Author(s): Teodor Rus

This paper is an attempt to direct present-day research in programming language specification and compiler construction towards a more natural approach. To do so, a language is considered for what it is, namely a communication device between systems. With this in mind, the first section of the paper develops a framework for the natural specification of languages. Section two develops the HAS-Hierarchy as a device to be used in this natural specification. Section three constructs a general model for programming language specification based on the HAS-Hierarchy. Section four is devoted to the problem of compiler construction using the HAS-Hierarchy as a natural tool for programming language specification.


Author(s): Pascual Julián-Iranzo, Fernando Sáenz-Pérez

This paper introduces techniques to integrate WordNet into a Fuzzy Logic Programming system. Since WordNet relates words but does not give graded information on the relations between them, we have implemented standard similarity measures and new directives that allow the proximity equations linking two words to be generated with an approximation degree. Proximity equations are the key syntactic structures which, together with a weak unification algorithm, make a flexible query-answering process possible in this kind of programming language. This addition widens the scope of Fuzzy Logic Programming, allowing certain forms of lexical reasoning and reinforcing Natural Language Processing (NLP) applications.
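A sketch of how graded proximity equations might be derived from WordNet similarity scores, using NLTK's Wu-Palmer measure, is shown below. The "word1 ~ word2 = degree" output syntax is only an illustration of a fuzzy-Prolog-style proximity equation, not the exact notation of the authors' system.

```python
# Requires: pip install nltk; then nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn

def proximity_equation(word1: str, word2: str) -> str:
    """Build a proximity equation with an approximation degree for two words."""
    synsets1, synsets2 = wn.synsets(word1), wn.synsets(word2)
    # Take the best Wu-Palmer similarity over all sense pairs; treat None as 0.
    degree = max(
        (s1.wup_similarity(s2) or 0.0)
        for s1 in synsets1
        for s2 in synsets2
    )
    return f"{word1} ~ {word2} = {degree:.2f}"

print(proximity_equation("car", "automobile"))  # high degree, near-synonyms
print(proximity_equation("car", "banana"))      # low degree, unrelated words
```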


Author(s): Karan Aggarwal, Mohammad Salameh, Abram Hindle

In this paper, we use statistical machine translation to convert Python 2 code to Python 3 code. We use data from two projects and achieve a high BLEU score. We also investigate cross-project training and testing to analyze the errors and to ascertain how they differ from the within-project case. We describe a pilot study on modeling programming languages as natural language in order to build translation models along the lines of those used for natural languages. This work can be extended to translate between versions of a programming language or across programming languages.
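The evaluation idea can be illustrated with a minimal sketch: treat code as a token sequence and score a candidate Python 3 line against a reference with BLEU, as one would for natural language. The toy tokenizer and example lines are assumptions for demonstration; the paper's pipeline is a full SMT system.

```python
# Score a candidate code "translation" against a reference using BLEU.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
import re

def code_tokens(line: str) -> list[str]:
    """Split a line of code into rough word/operator tokens."""
    return re.findall(r"\w+|[^\w\s]", line)

reference = code_tokens('print("hello", file=sys.stderr)')  # ground-truth Python 3
hypothesis = code_tokens('print("hello")')                  # candidate missing the file argument

score = sentence_bleu([reference], hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")  # below 1.0 because the candidate is incomplete
```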


Author(s): Patrick Jeuniaux, Andrew Olney, Sidney D’Mello

This chapter is aimed at students and researchers who are eager to learn about practical programmatic solutions to natural language processing (NLP) problems. In addition to introducing readers to programming basics, programming tools, and complete programs, we also hope to pique their interest in actively exploring the broad and fascinating field of automatic natural language processing. Part I introduces programming basics and the Python programming language. Part II takes a step-by-step approach to illustrating the development of a program that solves an NLP problem. Part III provides hints to help readers initiate their own NLP programming projects.
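In the spirit of Parts I and II, a small, self-contained first NLP program might tokenize a sentence and count word frequencies using only the Python standard library; the sample sentence here is arbitrary.

```python
# A minimal first NLP program: tokenize text and count word frequencies.
from collections import Counter
import re

def word_frequencies(text: str) -> Counter:
    """Lower-case the text, extract word tokens, and count occurrences."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens)

sentence = "The cat sat on the mat, and the cat slept."
for word, count in word_frequencies(sentence).most_common(3):
    print(word, count)
# the 3
# cat 2
# sat 1
```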

