LinGAN: an Advanced Model for Code Generating based on Linformer

2021 ◽  
Vol 2082 (1) ◽  
pp. 012019
Author(s):  
Hongming Dai

Abstract: Parsing natural language into a corresponding programming language has attracted much attention in recent years. Natural Language to SQL (NL2SQL) appears widely in numerous practical Internet applications. Previous solutions converted the input into a heterogeneous graph, which failed to learn good word representations for the question utterance. In this paper, we propose a Relation-Aware framework named LinGAN, which has powerful semantic parsing abilities and can jointly encode the question utterance and the syntax information of the object language. We also propose the pre-norm residual shrinkage unit to address the deep degradation problem of Linformer. Experiments show that LinGAN achieves excellent performance on multiple code generation tasks.
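
The abstract does not give the exact design of the pre-norm residual shrinkage unit; the PyTorch sketch below is one plausible reading, combining a pre-norm residual connection with the soft-thresholding ("shrinkage") idea from deep residual shrinkage networks. The layer sizes, the threshold predictor, and the wrapped sublayer are all assumptions, not the paper's reported architecture.

```python
import torch
import torch.nn as nn

class PreNormResidualShrinkage(nn.Module):
    """Hypothetical pre-norm residual shrinkage unit (illustrative sketch).

    LayerNorm is applied before the wrapped sublayer (pre-norm), and the
    sublayer output is soft-thresholded with a learned, per-channel
    threshold ("shrinkage") before being added back to the residual stream.
    """

    def __init__(self, dim: int, sublayer: nn.Module):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.sublayer = sublayer
        # Assumed threshold predictor, in the spirit of residual shrinkage networks.
        self.threshold_net = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim), nn.Sigmoid()
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.sublayer(self.norm(x))                    # pre-norm sublayer
        scale = h.abs().mean(dim=1, keepdim=True)          # per-channel magnitude
        tau = self.threshold_net(scale) * scale            # learned threshold
        h = torch.sign(h) * torch.relu(h.abs() - tau)      # soft thresholding
        return x + h                                       # residual connection

# Usage sketch: wrap a feed-forward (or Linformer attention) sublayer.
block = PreNormResidualShrinkage(dim=256, sublayer=nn.Linear(256, 256))
out = block(torch.randn(2, 10, 256))                       # (batch, seq, dim)
```

In a Linformer-style encoder, such a unit would presumably wrap each attention and feed-forward sublayer, with the learned threshold suppressing low-magnitude activations in deep stacks.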

Author(s):  
Siva Reddy ◽  
Mirella Lapata ◽  
Mark Steedman

In this paper we introduce a novel semantic parsing approach to query Freebase in natural language without requiring manual annotations or question-answer pairs. Our key insight is to represent natural language via semantic graphs whose topology shares many commonalities with Freebase. Given this representation, we conceptualize semantic parsing as a graph matching problem. Our model converts sentences to semantic graphs using CCG and subsequently grounds them to Freebase guided by denotations as a form of weak supervision. Evaluation experiments on a subset of the Free917 and WebQuestions benchmark datasets show our semantic parser improves over the state of the art.
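
As a purely illustrative toy, the Python sketch below shows the grounding step viewed as graph matching under denotation-based weak supervision: candidate Freebase relations are substituted for an ungrounded edge, and the grounding whose execution best matches the expected answers is kept. The lexicon, the knowledge-base fragment, and the example question are hypothetical; the paper derives the ungrounded graphs from CCG parses.

```python
from itertools import product

# Toy ungrounded semantic graph for "Who directed Inception?" (illustrative only).
ungrounded_edges = [("directed.arg1.arg2", "x", "Inception")]

# Hypothetical lexicon of candidate Freebase relations per ungrounded edge label.
lexicon = {"directed.arg1.arg2": ["film.film.directed_by", "film.director.film"]}

# Tiny stand-in for Freebase; a real system would execute a query against the KB.
toy_kb = {("film.film.directed_by", "Inception"): {"Christopher Nolan"}}

def execute(grounded_graph):
    """Stubbed KB execution returning the denotation of a grounded graph."""
    answers = set()
    for relation, _variable, entity in grounded_graph:
        answers |= toy_kb.get((relation, entity), set())
    return answers

def best_grounding(edges, expected_answers):
    """Enumerate grounded graphs and keep the one whose denotation best matches
    the expected answers (denotations as weak supervision)."""
    options = [[(relation, var, ent) for relation in lexicon.get(label, [])]
               for label, var, ent in edges]
    candidates = [list(combo) for combo in product(*options)]
    return max(candidates, key=lambda g: len(execute(g) & expected_answers))

print(best_grounding(ungrounded_edges, {"Christopher Nolan"}))
# -> [('film.film.directed_by', 'x', 'Inception')]
```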


2018 ◽  
Vol 21 (2) ◽  
Author(s):  
Guido Nuñez ◽  
Daniel Bonhaure ◽  
Magalí González ◽  
Nathalie Aquino ◽  
Luca Cernuzzi

Many Web applications have among their features the possibility of distributing their data and business logic between the client and the server, also allowing asynchronous communication between them. These features, originally associated with the arrival of Rich Internet Applications (RIA), remain particularly relevant and desirable. In the area of RIA, there are few proposals that simultaneously consider these features, adopt Model-Driven Development (MDD), and use implementation technologies based on scripting. In this work, we start from MoWebA, an MDD approach to web application development, and extend it by defining a specific architecture model with RIA functionalities that supports the previously mentioned features. We have defined the necessary metamodels and UML profiles, as well as transformation rules that generate code based on HTML5, JavaScript, jQuery, jQuery DataTables and jQuery UI. The preliminary validation of the proposal shows positive evidence regarding the effectiveness, efficiency and satisfaction of users with respect to the modeling and code generation processes of the proposal.
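
As a rough illustration of the model-to-text step (not the actual MoWebA transformation rules, which operate on metamodels and UML profiles), the Python sketch below renders a hypothetical entity description into the kind of HTML5/jQuery DataTables code the approach targets.

```python
from string import Template

# Hypothetical template for one entity of the model; endpoint and layout are assumptions.
DATATABLE_TEMPLATE = Template("""\
<table id="${entity}Table"><thead><tr>${headers}</tr></thead></table>
<script>
  $$(function () {
    $$('#${entity}Table').DataTable({
      ajax: '/api/${entity}',
      columns: [${columns}]
    });
  });
</script>
""")

def render_datatable(entity, fields):
    """Render one entity of the model into HTML5 + jQuery DataTables code."""
    headers = "".join("<th>%s</th>" % f for f in fields)
    columns = ", ".join("{ data: '%s' }" % f for f in fields)
    return DATATABLE_TEMPLATE.substitute(entity=entity, headers=headers, columns=columns)

print(render_datatable("customer", ["name", "email"]))
```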


Author(s):  
Md. Asifuzzaman Jishan ◽  
Khan Raqib Mahmud ◽  
Abul Kalam Al Azad

We present a learning model that generates natural language descriptions of images. The model exploits the connections between natural language and visual data by producing text-line-based content from a given image. Our Hybrid Recurrent Neural Network model builds on the intricacies of the Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Bi-directional Recurrent Neural Network (BRNN) models. We conducted experiments on three benchmark datasets: Flickr8K, Flickr30K, and MS COCO. Our hybrid model uses the LSTM to encode text lines or sentences independently of object location and the BRNN for word representation; this reduces the computational complexity without compromising the accuracy of the descriptor. The model produced better accuracy in retrieving natural language descriptions on these datasets.
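
The abstract does not specify layer sizes or wiring; the PyTorch sketch below is one minimal reading of such a hybrid design, with a toy CNN image encoder, a BRNN for word representation, and an LSTM over the image-prefixed word sequence. All dimensions and the training setup are assumptions.

```python
import torch
import torch.nn as nn

class HybridCaptioner(nn.Module):
    """Illustrative CNN + BRNN + LSTM captioning sketch; all layer sizes and the
    exact wiring are assumptions, not the paper's reported architecture."""

    def __init__(self, vocab_size: int, embed_dim: int = 256, hidden: int = 512):
        super().__init__()
        self.cnn = nn.Sequential(                          # toy image encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, embed_dim),
        )
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.brnn = nn.RNN(embed_dim, embed_dim // 2,      # BRNN word representation
                           bidirectional=True, batch_first=True)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)  # sentence model
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, images, captions):
        img_feat = self.cnn(images).unsqueeze(1)           # (B, 1, embed_dim)
        words, _ = self.brnn(self.embed(captions))         # (B, T, embed_dim)
        seq = torch.cat([img_feat, words], dim=1)          # prepend image "token"
        states, _ = self.lstm(seq)
        return self.out(states[:, 1:, :])                  # next-word logits

# Usage sketch
model = HybridCaptioner(vocab_size=1000)
logits = model(torch.randn(2, 3, 64, 64), torch.randint(0, 1000, (2, 12)))
```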


Model-Driven Development (MDD) tools for Rich Internet Application (RIA) development are focused on software modeling and relegate automatic code generation to a secondary concern. On the other hand, Rapid Application Development (RAD) tools for RIA development enable developers to save development time and effort by leveraging reusable software components. AlexandRIA is a RAD tool that allows developers to automatically generate both source and native code for multi-device RIAs from a set of preferences selected through a wizard, following the phases of a User Interface (UI) pattern-based code generation approach for multi-device RIAs. In this chapter, the UI design process behind AlexandRIA is demonstrated by means of a sample development scenario addressing the development of a cloud services Application Programming Interface (API)-based cross-platform mobile RIA. This scenario is further revisited in a case study that addresses the automatic generation of an equivalent application using AlexandRIA.


Author(s):  
Sarra Roubi ◽  
Mohammed Erramdani ◽  
Samir Mbarki

Rich Internet Applications (RIAs) combine the simplicity of the hypertext paradigm with the flexibility of desktop interfaces. These applications were proposed as a solution to keep pace with the rapid growth and evolution of graphical user interfaces. However, RIAs are complex applications whose design and implementation are time-consuming, and the available tools are specialized in manual design. In this paper, we present a model-driven approach to generate GUIs for Rich Internet Applications. The approach exploits the new language IFML, recently adopted by the Object Management Group. We used frameworks and technologies well known in Model-Driven Engineering, such as the Eclipse Modeling Framework (EMF) for meta-modeling, Query/View/Transformation (QVT) for model transformations, and Acceleo for code generation. The approach allows a RIA to be generated quickly and efficiently, focusing on the graphical aspect of the application.


Information ◽  
2019 ◽  
Vol 10 (2) ◽  
pp. 66
Author(s):  
Magdalena Kacmajor ◽  
John Kelleher

Open software repositories make large amounts of source code publicly available. Potentially, this source code could be used as training data to develop new, machine learning-based programming tools. For many applications, however, raw code scraped from online repositories does not constitute an adequate training dataset. Building on the recent and rapid improvements in machine translation (MT), one potentially very interesting application is code generation from natural language descriptions. One of the bottlenecks in developing these MT-inspired systems is the acquisition of the parallel text-code corpora required for training code-generative models. This paper addresses the problem of automatically synthesizing parallel text-code corpora in the software testing domain. Our approach is based on the observation that self-documentation through descriptive method names is widely adopted in test automation, in particular for unit testing. Therefore, we propose synthesizing parallel corpora composed of parsed test function names, serving as code descriptions, aligned with the corresponding function bodies. We present the results of applying one of the state-of-the-art MT methods to such a generated dataset. Our experiments show that a neural MT model trained on our dataset can generate syntactically correct and semantically relevant short Java functions from quasi-natural language descriptions of functionality.
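
A minimal sketch of the corpus-synthesis idea, assuming camelCase/underscore test method names: the method name is parsed into a quasi-natural-language description and paired with the method body. The parsing heuristic and the example test are illustrative, not the paper's exact procedure.

```python
import re

def method_name_to_description(name: str) -> str:
    """Turn a descriptive test method name into a quasi-natural-language
    description (illustrative heuristic, not the paper's exact parser)."""
    name = re.sub(r"^test_?", "", name)                      # drop 'test' prefix
    spaced = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", name)    # split camelCase
    return spaced.replace("_", " ").lower().strip()

def build_parallel_corpus(test_functions):
    """Pair each parsed name (source text) with its function body (target code)."""
    return [(method_name_to_description(name), body) for name, body in test_functions]

# Usage sketch with a hypothetical Java unit test
corpus = build_parallel_corpus([
    ("testShouldReturnEmptyListWhenRepositoryIsEmpty",
     "List<Item> items = repo.findAll(); assertTrue(items.isEmpty());"),
])
print(corpus[0][0])   # -> "should return empty list when repository is empty"
```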

