Desain Mesin Compiler untuk Penganalisa Leksikal, Sintaksis, Semantik, Kode Antara dan Error Handling Pada Bahasa Pemrograman Sederhana (Design of a Compiler Machine for Lexical, Syntactic, Semantic and Intermediate-Code Analysis and Error Handling in a Simple Programming Language)

Author(s): Fauziah Fauziah, Andez Apriansyah, Tri Ichsan Saputra, Yunan Fauzi Wijaya

In compilation techniques, the processes and stages carried out relate to translating a source language into a target language (object program). Source languages are high-level programming languages that are easy for humans to understand and learn, while target languages are low-level languages understood only by machines. In this study, a compiler machine called the Automatic LESSIMIC Analyzer is used to perform lexical, syntactic, and semantic analysis. The designed compiler machine can also synthesize intermediate code using assembler codes. The compiler engine produces an analysis of the program code entered by the user in the form of an error message if the code does not conform to the grammar generally applicable in programming languages. In this research, the input is simple C++ program code; the machine successfully performs lexical, semantic, and syntactic analysis and intermediate code generation, and detects errors in the entered program code with a success rate of 99%.
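To make the lexical-analysis stage concrete, the sketch below shows a minimal hand-written tokenizer in C++. It is only an illustration of what a lexical analyzer does, not the paper's Automatic LESSIMIC Analyzer; the token categories and the handled separators are assumptions.

#include <cctype>
#include <iostream>
#include <string>
#include <vector>

// Illustrative token categories; the actual analyzer's categories are not
// specified in the abstract, so these are assumptions.
enum class TokenKind { Identifier, Number, Operator, Separator };

struct Token { TokenKind kind; std::string lexeme; };

// A minimal hand-written scanner: groups characters into tokens,
// which is the job of the lexical-analysis phase.
std::vector<Token> tokenize(const std::string& src) {
    std::vector<Token> out;
    std::size_t i = 0;
    while (i < src.size()) {
        char c = src[i];
        if (std::isspace(static_cast<unsigned char>(c))) { ++i; continue; }
        if (std::isalpha(static_cast<unsigned char>(c)) || c == '_') {
            std::size_t j = i;
            while (j < src.size() && (std::isalnum(static_cast<unsigned char>(src[j])) || src[j] == '_')) ++j;
            out.push_back({TokenKind::Identifier, src.substr(i, j - i)});
            i = j;
        } else if (std::isdigit(static_cast<unsigned char>(c))) {
            std::size_t j = i;
            while (j < src.size() && std::isdigit(static_cast<unsigned char>(src[j]))) ++j;
            out.push_back({TokenKind::Number, src.substr(i, j - i)});
            i = j;
        } else if (c == ';' || c == '(' || c == ')' || c == '{' || c == '}') {
            out.push_back({TokenKind::Separator, std::string(1, c)});
            ++i;
        } else {
            out.push_back({TokenKind::Operator, std::string(1, c)});
            ++i;
        }
    }
    return out;
}

int main() {
    for (const Token& t : tokenize("total = count + 42;"))
        std::cout << static_cast<int>(t.kind) << " " << t.lexeme << '\n';
}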

2012, Vol. 9 (3), pp. 1187-1202
Author(s): Zalán Szűgyi, Márk Török, Norbert Pataki, Tamás Kozsik

Nowadays, one of the most important challenges in programming is the efficient usage of multicore processors. All modern programming languages support multicore programming at native or library level. C++11, the next standard of the C++ programming language, also supports multithreading at a low level. In this paper we argue for some extensions of the C++ Standard Template Library based on the features of C++11. These extensions enhance the standard library to be more powerful in the multicore realm. Our approach is based on functors and lambda expressions, which are major extensions in the language. We contribute three case studies: how to efficiently compose functors in pipelines, how to evaluate boolean operators in parallel, and how to efficiently accumulate over associative functors.
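The kind of lambda- and functor-based building blocks the authors argue for can be sketched in plain C++11. The snippet below is not the authors' proposed STL extension; it only illustrates two of the ideas mentioned, composing functors into a pipeline and accumulating over an associative operation by splitting the work between threads (using T{} as the identity for the second half is an assumption that holds for addition).

#include <future>
#include <iostream>
#include <numeric>
#include <vector>

// Compose two callables into a single functor computing g(f(x)).
// Generic sketch only, not the authors' proposed library interface.
template <typename F, typename G>
struct Composed {
    F f;
    G g;
    template <typename T>
    auto operator()(T x) const -> decltype(g(f(x))) { return g(f(x)); }
};

template <typename F, typename G>
Composed<F, G> compose(F f, G g) { return Composed<F, G>{f, g}; }

// Accumulate over an associative operation by splitting the range in two
// and folding the halves concurrently.
template <typename Iter, typename T, typename Op>
T parallel_accumulate(Iter first, Iter last, T init, Op op) {
    Iter mid = first + (last - first) / 2;
    std::future<T> left = std::async(std::launch::async,
        [=] { return std::accumulate(first, mid, init, op); });
    T right = std::accumulate(mid, last, T{}, op);
    return op(left.get(), right);
}

int main() {
    auto square  = [](int x) { return x * x; };
    auto add_one = [](int x) { return x + 1; };
    auto pipeline = compose(square, add_one);          // add_one(square(x))
    std::cout << pipeline(4) << '\n';                  // prints 17

    std::vector<int> v(1000);
    std::iota(v.begin(), v.end(), 1);                  // 1, 2, ..., 1000
    std::cout << parallel_accumulate(v.begin(), v.end(), 0,
                                     [](int a, int b) { return a + b; })
              << '\n';                                 // prints 500500
}

A production version would split the range recursively and pick the number of tasks from std::thread::hardware_concurrency(); the fixed two-way split keeps the sketch short.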


Author(s): Xiaoqing Wu, Marjan Mernik, Barrett R. Bryant, Jeff Gray

Unlike natural languages, programming languages are strictly stylized entities created to facilitate human communication with computers. To make programming languages recognizable by computers, one of the key challenges is to describe and implement language syntax and semantics such that the program can be translated into machine-readable code. This process is normally considered the front end of a compiler, which is mainly concerned with the programming language rather than the target machine. This article addresses the most important aspects of building a compiler front end, namely syntax and semantic analysis, including related theories, technologies and tools, as well as existing problems and future trends. As the main focus, formal syntax and semantic specifications are discussed in detail. The article provides the reader with a high-level overview of the language implementation process, as well as some commonly used terms and development practices.
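As a small illustration of the syntax-analysis part of a front end, the following sketch implements a recursive-descent parser for a toy expression grammar and evaluates the input while parsing. It is illustrative only; a real front end separates lexing from parsing, builds an abstract syntax tree, and runs semantic checks before code generation.

#include <cctype>
#include <iostream>
#include <string>

// Minimal recursive-descent parser and evaluator for the toy grammar
//   expr   -> term (('+' | '-') term)*
//   term   -> factor (('*' | '/') factor)*
//   factor -> number | '(' expr ')'
struct Parser {
    std::string src;
    std::size_t pos;

    char peek() const { return pos < src.size() ? src[pos] : '\0'; }
    char get()        { return src[pos++]; }

    double factor() {
        if (peek() == '(') {
            get();                       // consume '('
            double v = expr();
            get();                       // consume ')'
            return v;
        }
        std::size_t start = pos;
        while (std::isdigit(static_cast<unsigned char>(peek()))) get();
        return std::stod(src.substr(start, pos - start));
    }
    double term() {
        double v = factor();
        while (peek() == '*' || peek() == '/') {
            char op = get();
            double r = factor();
            v = (op == '*') ? v * r : v / r;
        }
        return v;
    }
    double expr() {
        double v = term();
        while (peek() == '+' || peek() == '-') {
            char op = get();
            double r = term();
            v = (op == '+') ? v + r : v - r;
        }
        return v;
    }
};

int main() {
    Parser p{"2*(3+4)-5", 0};
    std::cout << p.expr() << '\n';   // prints 9
}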


2008, Vol. 5 (1), pp. 41-59
Author(s): Zarko Zivanov, Predrag Rakic, Miroslav Hajdukovic

Today, kiosk automata (kiosks, for short) are used for a variety of services, from kiosks providing information to ticket-payment kiosks and ATMs. Kiosks are usually programmed either in high-level programming languages such as C++, or in HTML in conjunction with a web browser. In this paper, we analyzed a broad range of kiosk automata and derived their common characteristics. We present an approach to programming kiosk applications based on a Domain Specific Language (DSL) designed specifically to meet the needs of developing kiosk applications that are usually written in high-level programming languages and deployed on kiosks with touch-screen monitors. Our goal is to make the development of such kiosk applications more rapid while minimizing the number of programming errors.
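The abstract does not show the DSL's concrete syntax, so the sketch below only illustrates, in plain C++, the kind of screen/button/transition model such a kiosk DSL typically abstracts away; all screen names and labels here are hypothetical.

#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical model of the concepts a kiosk DSL typically captures:
// named screens with touch buttons, each button leading to another screen.
struct Button { std::string label; std::string target_screen; };
struct Screen { std::string title; std::vector<Button> buttons; };

int main() {
    std::map<std::string, Screen> app = {
        {"main",    {"Welcome", {{"Buy ticket", "payment"}, {"Information", "info"}}}},
        {"payment", {"Pay for your ticket", {{"Back", "main"}}}},
        {"info",    {"Timetable and prices", {{"Back", "main"}}}}
    };

    std::string current = "main";
    // Walk a fixed sequence of "touches" instead of reading a real touch screen.
    for (int choice : {0, 0, 0}) {
        const Screen& screen = app[current];
        std::cout << "[" << screen.title << "] -> "
                  << screen.buttons[choice].label << '\n';
        current = screen.buttons[choice].target_screen;
    }
}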


Author(s): A. A. Nedbaylov

The calculations required in project activities for engineering students are commonly performed in electronic spreadsheets. Practice has shown that performing those calculations can be quite difficult for students of other fields. One of the causes of this situation (as well as, in part, of problems observed in Java and C programming courses) lies in the lack of a streamlined structure for laying out both the source data and the end results. A solution can be found in a shared approach to structuring information in spreadsheet and software environments, called "the Book Method", which takes into account engineering-psychology considerations regarding the user friendliness of working with electronic information. This method can be applied at different levels in academic institutions and in teacher training courses.


2021, Vol. 43 (1), pp. 1-46
Author(s): David Sanan, Yongwang Zhao, Shang-Wei Lin, Liu Yang

To make the verification of large and complex concurrent systems feasible and scalable, it is necessary to use compositional techniques even at the highest abstraction layers. When focusing on the lowest software abstraction layers, such as the implementation or the machine code, the high level of detail of those layers makes the direct verification of properties very difficult and expensive. It is therefore essential to use techniques that simplify verification on these layers. One technique to tackle this challenge is top-down verification, where properties verified on the top layers (representing abstract specifications of a system) are propagated down, by means of simulation, to the lowest layers (which are an implementation of the top layers). Needless to say, simulation of concurrent systems implies a greater level of complexity, so compositional techniques for checking simulation between layers are also desirable when seeking both feasibility and scalability of refinement verification. In this article, we present CSim2, a compositional rely-guarantee-based framework for the top-down verification of complex concurrent systems in the Isabelle/HOL theorem prover. CSim2 uses CSimpl, a language with a high degree of expressiveness designed for the specification of concurrent programs. Thanks to its expressiveness, CSimpl is able to model many of the features found in real-world programming languages, such as exceptions, assertions, and procedures. CSim2 provides a framework for the verification of rely-guarantee properties to reason compositionally on CSimpl specifications. Focusing on top-down verification, CSim2 provides a simulation-based framework for the preservation of CSimpl rely-guarantee properties from specifications to implementations. By using the simulation framework, properties proven on the top layers (abstract specifications) are compositionally propagated down to the lowest layers (source or machine code) in each concurrent component of the system. Finally, we show the usability of CSim2 by running a case study over two CSimpl specifications of an ARINC 653 communication service. In this case study, we prove a complex property on a specification, and we use CSim2 to preserve the property on lower abstraction layers.
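Rely-guarantee reasoning itself, independent of CSimpl and Isabelle/HOL, can be illustrated with a toy shared-counter program; the rely and guarantee conditions are stated as comments, and the code below is not CSim2 or CSimpl syntax.

#include <atomic>
#include <iostream>
#include <thread>

// Toy illustration of rely-guarantee reasoning (not CSimpl/CSim2 syntax).
// Shared state: counter.
// Each thread's guarantee: "I only ever increase counter."
// Each thread's rely:      "the environment only ever increases counter."
// Because every thread's guarantee implies the other's rely, the local
// proofs compose, and the global property "counter never decreases" holds.
std::atomic<int> counter{0};

void worker(int increments) {
    for (int i = 0; i < increments; ++i)
        counter.fetch_add(1);   // respects the guarantee: only increases
}

int main() {
    std::thread a(worker, 1000), b(worker, 1000);
    a.join();
    b.join();
    std::cout << counter.load() << '\n';   // prints 2000; never below 0
}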


Semantic Web, 2020, pp. 1-16
Author(s): Francesco Beretta

This paper addresses the issue of interoperability of data generated by historical research and heritage institutions in order to make them reusable for new research agendas according to the FAIR principles. After introducing the symogih.org project's ontology, it proposes a description of the essential aspects of the process of historical knowledge production. It then develops an epistemological and semantic analysis of conceptual data modelling applied to factual historical information, based on the foundational ontologies Constructive Descriptions and Situations and DOLCE, and discusses the reasons for adopting the CIDOC CRM as a core ontology for the field of historical research, but extending it with some relevant, missing high-level classes. Finally, it shows how collaborative data modelling carried out in the ontology management environment OntoME makes it possible to elaborate a communal fine-grained and adaptive ontology of the domain, provided an active research community engages in this process. With this in mind, the Data for History consortium was founded in 2017 and promotes the adoption of a shared conceptualization in the field of historical research.


2014, Vol. 599-601, pp. 1407-1410
Author(s): Xu Liang, Ke Ming Wang, Gui Yu Xin

Compared with other high-level programming languages, C Sharp (C#) is more efficient for software development, while the MATLAB language provides a range of powerful numerical-calculation functions that ease the implementation of algorithms widely applied in blind source separation (BSS). Combining the advantages of the two languages, this paper presents a mixed-programming implementation and the development of a simplified blind signal processing system. Application results show that the system developed through mixed programming is successful.
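The paper pairs MATLAB with C#; purely as an illustration of the mixed-programming pattern (drive MATLAB's numerical routines from a host language), the sketch below uses MATLAB's classic C Engine API from C++. It assumes a local MATLAB installation and linking against MATLAB's engine libraries, and the separation step is replaced by a trivial mean computation.

#include <iostream>
#include "engine.h"   // MATLAB Engine C API header shipped with MATLAB

int main() {
    Engine* ep = engOpen("");                       // start a MATLAB session
    if (!ep) { std::cerr << "cannot start MATLAB\n"; return 1; }

    // Let MATLAB do the numerical work; a real BSS step would go here.
    engEvalString(ep, "mixed = rand(2, 1000); m = mean(mixed(1, :));");

    if (mxArray* m = engGetVariable(ep, "m")) {     // pull the result back
        std::cout << "mean of first channel: " << *mxGetPr(m) << '\n';
        mxDestroyArray(m);
    }
    engClose(ep);
    return 0;
}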


2010, Vol. 19 (01), pp. 65-99
Author(s): Marc Pouly

Computing inference from a given knowledge base is one of the key competences of computer science. Therefore, numerous formalisms and specialized inference routines have been introduced and implemented for this task. Typical examples are Bayesian networks, constraint systems and different kinds of logic. It is known today that these formalisms can be unified under a common algebraic roof called a valuation algebra. Based on this system, generic inference algorithms for the processing of arbitrary valuation algebras can be defined. Researchers benefit from this high level of abstraction to address open problems independently of the underlying formalism. It is therefore all the more astonishing that this theory has not found its way into concrete software projects. Indeed, all modern programming languages provide generic sorting procedures, for example, but generic inference algorithms are still mythical creatures. NENOK breaks new ground and offers an extensive library of generic inference tools based on the valuation algebra framework. All methods are implemented as distributed algorithms that process local and remote knowledge bases in a transparent manner. Besides its main purpose as a software library, NENOK also provides a sophisticated graphical user interface to inspect the inference process and the graphical structures involved. This can be used for educational purposes, but also as a fast prototyping architecture for inference formalisms.
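The point of a generic inference library can be sketched with a deliberately tiny valuation algebra instance: probability potentials over two binary variables, with combination as pointwise multiplication and marginalization as summing a variable out. The generic part relies only on the algebraic operations; this illustrates the idea and is not NENOK's actual (Java) API.

#include <array>
#include <iostream>
#include <vector>

// Toy "probability potential" valuation over two binary variables A and B.
// combine    = pointwise multiplication of tables
// marginal_A = sum out B, keeping a table over A only
struct Potential {
    std::array<double, 4> t;   // t[2*a + b] = value at (A = a, B = b)
};

Potential combine(const Potential& x, const Potential& y) {
    Potential r{};
    for (int i = 0; i < 4; ++i) r.t[i] = x.t[i] * y.t[i];
    return r;
}

std::array<double, 2> marginal_A(const Potential& p) {
    return { p.t[0] + p.t[1], p.t[2] + p.t[3] };   // sum out B
}

// Generic piece: fold a combination operation over any number of valuations;
// it only needs the algebraic operation, not the concrete formalism.
template <typename V, typename Combine>
V combine_all(const std::vector<V>& vs, Combine c) {
    V acc = vs.front();
    for (std::size_t i = 1; i < vs.size(); ++i) acc = c(acc, vs[i]);
    return acc;
}

int main() {
    // P(A) uniform (constant in B), and P(B | A) as a table over (A, B).
    Potential pA {{0.5, 0.5, 0.5, 0.5}};
    Potential pBgivenA {{0.9, 0.1, 0.2, 0.8}};

    Potential joint = combine_all<Potential>({pA, pBgivenA}, combine);
    std::array<double, 2> marg = marginal_A(joint);
    std::cout << "P(A=0) = " << marg[0] << ", P(A=1) = " << marg[1] << '\n';
}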


AI, 2021, Vol. 2 (1), pp. 1-16
Author(s): Juan Cruz-Benito, Sanjay Vishwakarma, Francisco Martin-Fernandez, Ismael Faro

In recent years, the use of deep learning in language models has gained much attention. Some research projects claim that they can generate text that can be interpreted as human writing, enabling new possibilities in many application areas. Among the different areas related to language processing, one of the most notable in applying this type of modeling is programming languages. For years, the machine learning community has been researching this software engineering area, pursuing goals like applying different approaches to auto-complete, generate, fix, or evaluate code written by humans. Considering the increasing popularity of the deep-learning-enabled language model approach, we found a lack of empirical papers that compare different deep learning architectures for creating and using language models based on programming code. This paper compares different neural network architectures, such as Average Stochastic Gradient Descent (ASGD) Weight-Dropped LSTMs (AWD-LSTMs), AWD Quasi-Recurrent Neural Networks (QRNNs), and Transformers, using transfer learning and different forms of tokenization, to see how they behave when building language models over a Python dataset for code generation and mask-filling tasks. Considering the results, we discuss each approach's strengths and weaknesses and the gaps we found in evaluating the language models or applying them in a real programming context.

