A Model for Naturalistic Programming with Implementation

2019 ◽  
Vol 9 (18) ◽  
pp. 3936 ◽  
Author(s):  
Oscar Pulido-Prieto ◽  
Ulises Juárez-Martínez

While the use of natural language for software development has been proposed since the 1960s, it was limited by the inherent ambiguity of natural languages, which people resolve through reasoning about a text or conversation. Programming languages are formal general-purpose or domain-specific alternatives that are based on mathematical formalism and stand at a remove from natural language. Over the years, various authors have presented studies attempting to use a subset of the English language to solve particular problems. Each author covered particular domains rather than describing general elements that would help others develop general-purpose languages, thereby reinforcing the focus on domain-specific languages. Identifying the common elements in these studies reveals characteristics that enable the design and implementation of general-purpose naturalistic languages, which requires the establishment of a programming model. This article presents a conceptual model that describes the elements required for designing general-purpose programming languages and that integrates abstraction, temporal elements, and indirect references into its grammar. A naturalistic language prototype whose grammar resembles natural language, thus reducing the gap between the problem and solution domains, is also presented, along with three test scenarios that demonstrate its characteristics.
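One feature the abstract highlights is indirect reference, which a naturalistic language must resolve against prior context the way a reader resolves a pronoun. As a rough illustration only (hypothetical command syntax, not the authors' prototype), a minimal interpreter might track the most recent referent so that "it" can be resolved:

```python
# Minimal sketch of resolving an indirect reference ("it") in a
# naturalistic command sequence. The syntax is invented for
# illustration; it is not the authors' prototype language.
def run(commands):
    env = {}     # named values created so far
    last = None  # the current referent for "it"
    for cmd in commands:
        words = cmd.split()
        if words[0] == "create":       # e.g. "create counter"
            env[words[1]] = 0
            last = words[1]
        elif words[0] == "increment":  # e.g. "increment it"
            name = last if words[1] == "it" else words[1]
            env[name] += 1
            last = name
    return env

state = run(["create counter", "increment it", "increment it"])
```

The point of the sketch is only that reference resolution requires interpreter state beyond the current sentence, which is one reason naturalistic grammars are harder to formalize than conventional ones.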

Author(s):  
Liliana María Favre

MDA requires the ability to understand different languages, such as general-purpose languages, domain-specific languages, modeling languages, and programming languages. An underlying principle of MDA is the use of metamodeling techniques to integrate such languages semantically in a unified and interoperable way.


Author(s):  
Didier Verna

Out of a concern for focus and concision, domain-specific languages (DSLs) are usually very different from general-purpose programming languages (GPLs), at both the syntactic and the semantic levels. One approach to DSL implementation is to write a full language infrastructure, including a parser, an interpreter, or even a compiler. Another approach, however, is to ground the DSL in an extensible GPL, which gives control over the host language's syntax and semantics. The DSL may then be designed merely as an extension of the original GPL, and its implementation may boil down to expressing only the differences from it, which considerably eases the task of DSL implementation. The purpose of this chapter is to provide a tour of the features that make a GPL extensible, and to demonstrate how, in this context, the distinction between DSL and GPL can blur, sometimes to the point of complete disappearance.
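The embedded approach can be illustrated with a toy example (sketched here in Python rather than the extensible Lisp the chapter has in mind; the names are invented): the "implementation" of the DSL is just a handful of definitions, while variables, arithmetic, and functions are inherited from the host GPL.

```python
# A tiny embedded DSL for durations, expressed as a thin layer over
# the host language. Only the delta from the GPL is implemented;
# everything else comes for free. Hypothetical example.
class Duration:
    def __init__(self, seconds):
        self.seconds = seconds

    def __add__(self, other):
        # Operator overloading lets DSL expressions reuse host syntax.
        return Duration(self.seconds + other.seconds)

def minutes(n):
    return Duration(60 * n)

def hours(n):
    return Duration(3600 * n)

# Reads like the DSL's surface syntax, but is plain host-language code.
total = hours(1) + minutes(30)
```

Because the DSL is an extension of the GPL, its users keep the full power of the host language wherever the DSL's vocabulary runs out, which is exactly the blurring of the DSL/GPL distinction the chapter describes.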


2018 ◽  
Vol 12 (02) ◽  
pp. 237-260
Author(s):  
Weifeng Xu ◽  
Dianxiang Xu ◽  
Abdulrahman Alatawi ◽  
Omar El Ariss ◽  
Yunkai Liu

A unigram is a fundamental element of the n-gram in natural language processing. However, unigrams collected from a natural language corpus are unsuitable for solving problems in the domain of computer programming languages. In this paper, we analyze the properties of unigrams collected from an ultra-large source code repository. Specifically, we have collected 1.01 billion unigrams from 0.7 million open-source projects hosted at GitHub.com. By analyzing these unigrams, we have discovered statistical properties regarding (1) how developers name variables, methods, and classes, and (2) how developers choose abbreviations. We describe a probabilistic model that relies on these properties to solve a well-known problem in source code analysis: how to expand a given abbreviation to the word originally intended. Our empirical study shows that using the unigrams extracted from the source code repository outperforms using a natural language corpus by 21% when solving these domain-specific problems.
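The core of abbreviation expansion can be sketched as follows (a simplified stand-in for the paper's probabilistic model, with invented toy counts): treat every word that contains the abbreviation's letters in order as a candidate, and rank candidates by their unigram frequency in the corpus.

```python
# Toy abbreviation expansion ranked by unigram frequency.
# The counts below are invented; the paper's model is richer.
def is_subsequence(abbr, word):
    # True if abbr's characters appear in word in the same order.
    it = iter(word)
    return all(ch in it for ch in abbr)

def expand(abbr, unigram_counts):
    # Candidates share the first letter and contain the abbreviation
    # as a subsequence; the most frequent candidate wins.
    candidates = [(count, w) for w, count in unigram_counts.items()
                  if w.startswith(abbr[0]) and is_subsequence(abbr, w)]
    return max(candidates)[1] if candidates else None

counts = {"counter": 900, "count": 400, "container": 350, "cnt": 10}
best = expand("cnt", counts)
```

Here "cnt" is a subsequence of "counter", "count", and "container", and corpus frequency decides among them, which is why source-code unigram counts matter more than natural-language ones for this task.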



1990 ◽  
Vol 4 (2) ◽  
pp. 119-129 ◽  
Author(s):  
Robert R. McCrae

The five-factor model of personality has repeatedly emerged from lexical studies of natural languages. When adjective-based factor scales are correlated with other personality measures, the adequacy and comprehensiveness of the five-factor model are demonstrated at a broad level. However, English language adjectives do not necessarily capture more subtle distinctions within the five factors. In particular, of several facets of the Openness factor, only Openness to Ideas and Values are well represented in single terms. Openness to Fantasy, Aesthetics, Feelings, and Actions can be expressed in phrases, sentences, and literary passages—as excerpts from Bunin's ‘Lika’ illustrate—but not in single words. To maintain its relevance to personality psychology, the study of personality language must continue to examine empirical links to other personality systems and must move beyond the dictionary to analyses of natural language speech and writing.


2014 ◽  
Vol 24 (4) ◽  
pp. 434-473 ◽  
Author(s):  
NEIL SCULTHORPE ◽  
NICOLAS FRISBY ◽  
ANDY GILL

When writing transformation systems, a significant amount of engineering effort goes into setting up the infrastructure needed to direct individual transformations to specific targets in the data being transformed. Strategic programming languages provide general-purpose infrastructure for this task, which the author of a transformation system can use for any algebraic data structure. The Kansas University Rewrite Engine (KURE) is a typed strategic programming language, implemented as a Haskell-embedded domain-specific language. KURE is designed to support typed transformations over typed data, and the main challenge is how to make such transformations compatible with generic traversal strategies that should operate over any type. Strategic programming in a typed setting has much in common with datatype-generic programming. Compared to other approaches to datatype-generic programming, the distinguishing feature of KURE's solution is that the user can configure the behaviour of traversals based on the location of each datum in the tree, beyond their behaviour being determined by the type of each datum. This article describes KURE's approach to assigning types to generic traversals, and the implementation of that approach. We also compare KURE, its design choices, and their consequences, with other approaches to strategic and datatype-generic programming.
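The flavour of strategic programming can be conveyed with a loose, untyped analogue (sketched in Python; this is not KURE's actual API, and KURE itself is a Haskell EDSL): a generic top-down traversal applies a rewrite at every node, while the rewrite itself keys its behaviour on the type of each datum, as in datatype-generic programming.

```python
# Loose analogue of a strategic-programming traversal over a
# nested-list tree. `topdown` is the generic strategy; the rewrite
# it carries decides per-datum what to do. Not KURE's API.
def topdown(rewrite, tree):
    tree = rewrite(tree)                 # apply at the current node
    if isinstance(tree, list):           # then descend into children
        return [topdown(rewrite, child) for child in tree]
    return tree

def double_ints(node):
    # Behaviour is determined by the type of the datum:
    # only integers are rewritten, everything else passes through.
    return node * 2 if isinstance(node, int) else node

rewritten = topdown(double_ints, [1, [2, "x"], 3])
```

KURE's contribution, by contrast, is making this pattern type-safe in Haskell and letting traversal behaviour additionally depend on where a datum sits in the tree, not just on its type.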


Author(s):  
Richard Schumi ◽  
Jun Sun

Compilers are error-prone due to their high complexity. They are relevant not only for general-purpose programming languages but also for many domain-specific languages. Bugs in compilers can potentially put all compiled programs at risk. It is thus crucial that compilers are systematically tested, if not verified. Recently, a number of efforts have been made to formalise and standardise programming language semantics, which can be applied to verify the correctness of the respective compilers. In this work, we present a novel specification-based testing method named SpecTest to better utilise these semantics for testing. By applying an executable semantics as a test oracle, SpecTest can discover deep semantic errors in compilers. Compared to existing approaches, SpecTest is built upon a novel test coverage criterion called semantic coverage, which brings together mutation testing and fuzzing to specifically target less-tested language features. We apply SpecTest to systematically test two compilers, i.e., the Java compiler and the Solidity compiler. SpecTest improves the semantic coverage of both compilers considerably and reveals multiple previously unknown bugs.
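The semantics-as-oracle idea can be sketched in miniature (this is a generic differential-testing sketch, not SpecTest itself): run each fuzzed program through both an executable reference semantics and the compiler under test, and flag any divergence. Here both sides are toy stand-ins for a tiny arithmetic expression language.

```python
# Toy differential test: a reference interpreter (the "executable
# semantics") serves as oracle for a stand-in "compiler". Any
# disagreement on a fuzzed program would indicate a bug.
import ast
import operator
import random

OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

def reference_eval(expr):
    # Executable semantics: evaluate the AST directly.
    def go(n):
        if isinstance(n, ast.Constant):
            return n.value
        return OPS[type(n.op)](go(n.left), go(n.right))
    return go(ast.parse(expr, mode="eval").body)

def compile_and_run(expr):
    # Stand-in for the compiler under test.
    return eval(compile(expr, "<fuzz>", "eval"))

def fuzz_expr(rng, depth=3):
    # Simple expression fuzzer over +, -, *.
    if depth == 0:
        return str(rng.randint(0, 9))
    return "(%s %s %s)" % (fuzz_expr(rng, depth - 1),
                           rng.choice("+-*"),
                           fuzz_expr(rng, depth - 1))

rng = random.Random(0)
mismatches = [e for e in (fuzz_expr(rng) for _ in range(100))
              if reference_eval(e) != compile_and_run(e)]
```

SpecTest goes well beyond this sketch by mutating programs toward less-tested language features and measuring semantic coverage, but the oracle structure is the same: the formal semantics, made executable, decides what the correct output is.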


2004 ◽  
Vol 11 (34) ◽  
Author(s):  
Anders Møller ◽  
Michael I. Schwartzbach

We survey work on statically type checking XML transformations, covering a wide range of notations and ambitions. The concept of type may vary from idealizations of DTD to full-blown XML Schema or even more expressive formalisms. The notion of transformation may vary from clean and simple transductions to domain-specific languages or integration of XML in general-purpose programming languages. Type annotations can be either explicit or implicit, and type checking ranges from exact decidability to pragmatic approximations.

We characterize and evaluate existing tools in this design space, including a recent result of the authors providing practical type checking of full unannotated XSLT 1.0 stylesheets given general DTDs that describe the input and output languages.
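A minimal flavour of the problem (far simpler than the static checkers surveyed, and checking dynamically rather than statically) is verifying that a transformation's output conforms to an output type. The "type" below is a severe idealization of a DTD, and the transformation and schema are invented for illustration.

```python
# Toy output-type check for an XML transformation: the "type" maps
# each element name to the exact sequence of child element names it
# must contain. Hypothetical example, not a surveyed tool.
import xml.etree.ElementTree as ET

OUTPUT_TYPE = {"html": ["head", "body"], "head": [], "body": []}

def conforms(elem, schema):
    # An element conforms if its name is typed, its children match the
    # required sequence, and each child conforms recursively.
    children = [c.tag for c in elem]
    return (elem.tag in schema
            and children == schema[elem.tag]
            and all(conforms(c, schema) for c in elem))

def transform(doc):
    # Trivial "transformation": wrap the document title in a skeleton.
    out = ET.Element("html")
    ET.SubElement(out, "head")
    body = ET.SubElement(out, "body")
    body.text = doc.findtext("title")
    return out

result = transform(ET.fromstring("<doc><title>t</title></doc>"))
```

The surveyed work tackles the much harder static version of this question: proving, for all valid inputs, that the transformation can only ever produce conforming output.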


2015 ◽  
Author(s):  
Karan Aggarwal ◽  
Mohammad Salameh ◽  
Abram Hindle

In this paper, we use statistical machine translation to convert Python 2 code to Python 3 code. We use data from two projects and achieve a high BLEU score. We also investigate cross-project training and testing, analyzing the errors to ascertain how the results differ from the single-project case. We describe a pilot study on modeling programming languages as natural language in order to build translation models along the lines of those used for natural languages. This work can be extended to translate between versions of a programming language or to cross-language code translation.
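At its simplest, the learned translation model resembles a phrase table mapping Python 2 tokens to their Python 3 equivalents, with unknown tokens passed through. The table below is hand-written purely for illustration; the paper's statistical MT system learns such mappings from aligned code rather than hard-coding them.

```python
# Toy "phrase table" translation of Python 2 constructs to Python 3.
# Hand-written for illustration; statistical MT would learn these
# correspondences (and their probabilities) from parallel code.
PHRASE_TABLE = {
    "xrange": "range",
    "iteritems": "items",
    "unicode": "str",
}

def translate(tokens):
    # Replace known Python 2 tokens; pass everything else through.
    return [PHRASE_TABLE.get(tok, tok) for tok in tokens]

translated = translate(["for", "i", "in", "xrange", "(", "10", ")", ":"])
```

Real Python 2→3 migration also needs syntactic rewrites (such as `print` becoming a function call), which is where sequence-level translation models earn their keep over simple token substitution.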


2021 ◽  
Vol 11 (17) ◽  
pp. 7823
Author(s):  
Igor Dejanović ◽  
Mirjana Dejanović ◽  
Jovana Vidaković ◽  
Siniša Nikolić

The majority of studies in psychology are nowadays performed using computers. In the past, access to good-quality software was limited, but in the last two decades things have changed and today we have an array of good, easily accessible open-source software to choose from. However, experiment builders are either GUI-centric or based on general-purpose programming languages that require programming skills. In this paper, we investigate an approach based on domain-specific languages which enables text-based experiment development using domain-specific concepts, allowing practitioners with limited or no programming skills to develop psychology tests. To investigate our approach, we created PyFlies, a domain-specific language for designing experiments in psychology, which we present in this paper. The language is tailored to the domain of psychological studies. The aim is to capture the essence of the experiment design in a concise and highly readable textual form. The editor for the language is built as an extension for Visual Studio Code, one of the most popular programming editors today. From the experiment description, various targets can be generated automatically. In the current version, we provide a code generator for the PsychoPy library; generators for other target platforms are planned. We discuss the language, its concepts, syntax, some current limitations, and development directions. We investigate the language using a case study implementing the Eriksen flanker task.
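To illustrate the general idea of a textual experiment DSL (the syntax below is invented for this sketch and is not PyFlies' grammar), a tiny parser can turn a concise trial description into a data structure that a code generator, such as one targeting PsychoPy, could then consume:

```python
# Hypothetical miniature of a textual experiment DSL: each line is
# "<stimulus> -> <expected response>". Not actual PyFlies syntax;
# the real language is far richer.
def parse_experiment(text):
    trials = []
    for line in text.strip().splitlines():
        stimulus, response = (part.strip() for part in line.split("->"))
        trials.append({"stimulus": stimulus, "response": response})
    return trials

# A flanker-style description: congruent and incongruent arrow rows.
spec = """
<<<<< -> left
>>>>> -> right
"""
trials = parse_experiment(spec)
```

The appeal of this style is exactly what the abstract argues: the description stays readable to a non-programmer, while code generation does the work of producing a runnable experiment.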

