Implementation of Programming Languages Syntax and Semantics

Author(s):  
Xiaoqing Wu ◽  
Marjan Mernik ◽  
Barrett R. Bryant ◽  
Jeff Gray

Unlike natural languages, programming languages are strictly stylized entities created to facilitate human communication with computers. To make programming languages recognizable by computers, one of the key challenges is to describe and implement language syntax and semantics so that a program can be translated into machine-readable code. This process is normally considered the front-end of a compiler, which is mainly concerned with the programming language rather than the target machine. This article addresses the most important aspects of building a compiler front-end, namely syntax and semantic analysis, including related theories, technologies, and tools, as well as existing problems and future trends. As the main focus, formal syntax and semantic specifications are discussed in detail. The article provides the reader with a high-level overview of the language implementation process, as well as some commonly used terms and development practices.
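To make the front-end stages concrete, here is a minimal sketch in Python with an invented toy grammar (all names and the grammar itself are illustrative assumptions, not anything from the article): lexical analysis produces tokens, recursive-descent parsing checks the syntax, and a simple evaluation rule attached to each production stands in for the semantics.

```python
# Minimal illustrative front-end for the toy grammar
#   expr   -> term (('+' | '-') term)*
#   term   -> factor (('*' | '/') factor)*
#   factor -> NUMBER | '(' expr ')'
import re

TOKEN_RE = re.compile(r"\s*(?:(\d+)|(.))")

def tokenize(text):
    """Lexical analysis: split the input into NUMBER and operator tokens."""
    tokens = []
    for number, op in TOKEN_RE.findall(text.strip()):
        tokens.append(("NUMBER", int(number)) if number else ("OP", op))
    tokens.append(("EOF", None))
    return tokens

class Parser:
    """Recursive-descent syntax analysis, with evaluation as the 'semantics'."""
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0

    def peek(self):
        return self.tokens[self.pos]

    def expect(self, kind, value=None):
        tok = self.tokens[self.pos]
        if tok[0] != kind or (value is not None and tok[1] != value):
            raise SyntaxError(f"unexpected token {tok} at position {self.pos}")
        self.pos += 1
        return tok

    def expr(self):
        value = self.term()
        while self.peek() in (("OP", "+"), ("OP", "-")):
            op = self.expect("OP")[1]
            rhs = self.term()
            value = value + rhs if op == "+" else value - rhs
        return value

    def term(self):
        value = self.factor()
        while self.peek() in (("OP", "*"), ("OP", "/")):
            op = self.expect("OP")[1]
            rhs = self.factor()
            value = value * rhs if op == "*" else value / rhs
        return value

    def factor(self):
        if self.peek()[0] == "NUMBER":
            return self.expect("NUMBER")[1]
        self.expect("OP", "(")
        value = self.expr()
        self.expect("OP", ")")
        return value

print(Parser(tokenize("2 * (3 + 4)")).expr())  # -> 14
```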

Author(s):  
Fauziah Fauziah ◽  
Andez Apriansyah ◽  
Tri Ichsan Saputra ◽  
Yunan Fauzi Wijaya

In compilation techniques, the processes and stages carried out relate to translating a source language into a target language (object program). Source languages are high-level programming languages that are easy for humans to understand and learn, while target languages are low-level languages understood only by machines. In this study, a compiler engine called the Automatic LESSIMIC Analyzer is used to perform lexical, syntactic, and semantic analysis. The compiler engine can also synthesize intermediate code using assembler-style instructions. It produces an error message whenever the program code entered by the user does not conform to the grammar that generally applies in programming languages. In this research, the input is simple C++ program code, and the engine successfully performs lexical, syntactic, and semantic analysis as well as intermediate code generation, and detects errors in the entered program code with a success rate of 99%.
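As a rough illustration of the intermediate-code synthesis step mentioned above (the AST shape and instruction format below are invented for the example and are not taken from the LESSIMIC tool), this Python sketch walks a tiny expression tree and emits three-address style code:

```python
# Illustrative sketch only: emit three-address style intermediate code
# for a tiny expression tree, in the spirit of the synthesis phase above.
from itertools import count

def gen(node, code, temps):
    """Recursively emit 'tmpN = a OP b' instructions; return the operand name."""
    if isinstance(node, (int, str)):            # leaf: constant or identifier
        return str(node)
    op, left, right = node                      # interior node: (operator, lhs, rhs)
    a = gen(left, code, temps)
    b = gen(right, code, temps)
    tmp = f"t{next(temps)}"
    code.append(f"{tmp} = {a} {op} {b}")
    return tmp

# x = a * b + c   ->   t0 = a * b ; t1 = t0 + c ; x = t1
ast = ("+", ("*", "a", "b"), "c")
code, temps = [], count()
code.append(f"x = {gen(ast, code, temps)}")
print("\n".join(code))
```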


2021 ◽  
Vol 43 (1) ◽  
pp. 1-46
Author(s):  
David Sanan ◽  
Yongwang Zhao ◽  
Shang-Wei Lin ◽  
Liu Yang

To make the verification of large and complex concurrent systems feasible and scalable, compositional techniques are necessary even at the highest abstraction layers. At the lowest software abstraction layers, such as the implementation or the machine code, the high level of detail makes the direct verification of properties very difficult and expensive. It is therefore essential to use techniques that simplify verification on these layers. One technique to tackle this challenge is top-down verification, where properties verified on the top layers (representing abstract specifications of a system) are propagated down, by means of simulation, to the lowest layers (which are an implementation of the top layers). Needless to say, simulation of concurrent systems implies a greater level of complexity, and compositional techniques for checking simulation between layers are also desirable when seeking both feasibility and scalability of the refinement verification. In this article, we present CSim2, a compositional rely-guarantee-based framework for the top-down verification of complex concurrent systems in the Isabelle/HOL theorem prover. CSim2 uses CSimpl, a language with a high degree of expressiveness designed for the specification of concurrent programs. Thanks to this expressiveness, CSimpl is able to model many of the features found in real-world programming languages, such as exceptions, assertions, and procedures. CSim2 provides a framework for the verification of rely-guarantee properties to reason compositionally on CSimpl specifications. Focusing on top-down verification, CSim2 provides a simulation-based framework for the preservation of CSimpl rely-guarantee properties from specifications to implementations. Using the simulation framework, properties proven on the top layers (abstract specifications) are compositionally propagated down to the lowest layers (source or machine code) in each concurrent component of the system. Finally, we show the usability of CSim2 by running a case study over two CSimpl specifications of an ARINC 653 communication service. In this case study, we prove a complex property on a specification, and we use CSim2 to preserve the property on lower abstraction layers.
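For readers unfamiliar with the rely-guarantee style that CSim2 builds on, one common textbook formulation (not CSim2's concrete Isabelle/HOL syntax) is the quintuple judgment and parallel composition rule sketched below: a component satisfies (P, R, G, Q) when, started in a state satisfying P and interrupted only by environment steps satisfying the rely R, every step it takes satisfies the guarantee G and any final state satisfies Q.

```latex
% Common textbook form of rely-guarantee judgments (not CSim2's concrete syntax)
\vdash c \;\mathbf{sat}\; (P, R, G, Q)

% Parallel composition: each component's guarantee is tolerated as part of
% the other's rely, so the proof decomposes component by component.
\frac{\vdash c_1 \;\mathbf{sat}\; (P,\, R \cup G_2,\, G_1,\, Q_1)
      \qquad
      \vdash c_2 \;\mathbf{sat}\; (P,\, R \cup G_1,\, G_2,\, Q_2)}
     {\vdash c_1 \parallel c_2 \;\mathbf{sat}\; (P,\, R,\, G_1 \cup G_2,\, Q_1 \cap Q_2)}
```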


2021 ◽  
Vol 16 (1) ◽  
Author(s):  
Nancy A. Otieno ◽  
Fauzia A. Malik ◽  
Stacy W. Nganga ◽  
Winnie N. Wairimu ◽  
Dominic O. Ouma ◽  
...  

Background: Maternal immunization is a key strategy for reducing morbidity and mortality associated with infectious diseases in mothers and their newborns. Recent developments in the science and safety of maternal vaccines have made possible the development of new maternal vaccines ready for introduction in low- and middle-income countries. Decisions at the policy level remain the entry point for maternal immunization programs. We describe the policy and decision-making process in Kenya for the introduction of new vaccines, with particular emphasis on maternal vaccines, and identify opportunities to improve the vaccine policy formulation and implementation process. Methods: We conducted 29 formal interviews with government officials and policy makers, including high-level officials at the Kenya National Immunization Technical Advisory Group and Ministry of Health officials at national and county levels. All interviews were recorded and transcribed. We analyzed the qualitative data using NVivo 11.0 software. Results: All key informants understood the vaccine policy formulation and implementation processes, although national officials appeared more informed than county officials. County officials reported feeling left out of policy development. The recent health system decentralization had both positive and negative impacts on the policy process; however, the negative impacts outweighed the positive ones. Factors outside the vaccine policy environment, such as rumours, sociocultural practices, and anti-vaccine campaigns, also influenced policy development and implementation. Conclusions: The public policy development process is complex and multifaceted by nature. As Kenya prepares for the introduction of other maternal vaccines, it is important that the identified policy gaps and challenges are addressed.


Semantic Web ◽  
2020 ◽  
pp. 1-16
Author(s):  
Francesco Beretta

This paper addresses the issue of interoperability of data generated by historical research and heritage institutions in order to make them reusable for new research agendas according to the FAIR principles. After introducing the symogih.org project's ontology, it proposes a description of the essential aspects of the process of historical knowledge production. It then develops an epistemological and semantic analysis of conceptual data modelling applied to factual historical information, based on the foundational ontologies Constructive Descriptions and Situations and DOLCE, and discusses the reasons for adopting the CIDOC CRM as a core ontology for the field of historical research, while extending it with some relevant, missing high-level classes. Finally, it shows how collaborative data modelling carried out in the ontology management environment OntoME makes it possible to elaborate a communal fine-grained and adaptive ontology of the domain, provided an active research community engages in this process. With this in mind, the Data for History consortium was founded in 2017 and promotes the adoption of a shared conceptualization in the field of historical research.
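To give a concrete flavour of what CIDOC CRM-based factual information can look like in practice, the following purely illustrative Python/rdflib snippet states that a person participated in an event; the example.org URIs and the particular person and event are invented, while E21_Person, E5_Event and P11_had_participant are genuine CIDOC CRM identifiers.

```python
# Illustrative sketch: expressing one historical "fact" with CIDOC CRM classes
# and properties using rdflib.  The local example.org URIs are invented.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
EX = Namespace("http://example.org/history/")

g = Graph()
g.bind("crm", CRM)

person = EX["person/ada-lovelace"]
event = EX["event/first-program-published"]

g.add((person, RDF.type, CRM["E21_Person"]))
g.add((person, RDFS.label, Literal("Ada Lovelace")))
g.add((event, RDF.type, CRM["E5_Event"]))
g.add((event, CRM["P11_had_participant"], person))

print(g.serialize(format="turtle"))
```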


2014 ◽  
Vol 599-601 ◽  
pp. 1407-1410
Author(s):  
Xu Liang ◽  
Ke Ming Wang ◽  
Gui Yu Xin

Compared with other high-level programming languages, C Sharp (C#) is more efficient for software development, while the MATLAB language provides a series of powerful numerical-computation functions that facilitate the implementation of algorithms widely applied in blind source separation (BSS). Combining the advantages of the two languages, this paper presents an implementation of mixed programming and the development of a simplified blind signal processing system. Application results show that the system developed through mixed programming is successful.
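As an aside on the numerical side of such a system, the sketch below shows, in Python/NumPy rather than MATLAB or C#, the kind of routine (whitening the observed mixtures before separation) that a BSS pipeline typically delegates to a numerical library; it is illustrative only and not part of the paper's implementation.

```python
# Toy NumPy sketch of a typical BSS preprocessing step: whiten two mixed
# signals so their covariance becomes the identity.  Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
sources = np.vstack([np.sin(np.linspace(0, 8 * np.pi, 500)),
                     np.sign(np.sin(np.linspace(0, 5 * np.pi, 500)))])
mixing = np.array([[0.7, 0.3], [0.4, 0.6]])
observed = mixing @ sources + 0.01 * rng.standard_normal((2, 500))  # sensor signals

centered = observed - observed.mean(axis=1, keepdims=True)
cov = np.cov(centered)
eigvals, eigvecs = np.linalg.eigh(cov)
whitening = eigvecs @ np.diag(eigvals ** -0.5) @ eigvecs.T
whitened = whitening @ centered                   # unit covariance, ready for ICA
print(np.round(np.cov(whitened), 3))              # ~ identity matrix
```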


2010 ◽  
Vol 19 (01) ◽  
pp. 65-99 ◽  
Author(s):  
MARC POULY

Computing inference from a given knowledge base is one of the key competences of computer science. Therefore, numerous formalisms and specialized inference routines have been introduced and implemented for this task. Typical examples are Bayesian networks, constraint systems, and different kinds of logic. It is known today that these formalisms can be unified under a common algebraic roof called a valuation algebra. Based on this system, generic inference algorithms for the processing of arbitrary valuation algebras can be defined. Researchers benefit from this high level of abstraction to address open problems independently of the underlying formalism. It is therefore all the more astonishing that this theory has not found its way into concrete software projects. Indeed, virtually all modern programming languages provide generic sorting procedures, but generic inference algorithms are still mythical creatures. NENOK breaks new ground and offers an extensive library of generic inference tools based on the valuation algebra framework. All methods are implemented as distributed algorithms that process local and remote knowledge bases in a transparent manner. Besides its main purpose as a software library, NENOK also provides a sophisticated graphical user interface for inspecting the inference process and the graphical structures involved. This can be used for educational purposes but also as a fast prototyping architecture for inference formalisms.
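To make the "generic inference" idea concrete, the sketch below (an illustrative toy in Python, not NENOK's actual Java API) defines the two valuation-algebra operations, combination and marginalization, as an abstract interface and instantiates them for simple probability tables; a generic inference routine then works for any formalism implementing the interface.

```python
# Toy sketch of the valuation-algebra interface behind generic inference.
from abc import ABC, abstractmethod
from itertools import product

class Valuation(ABC):
    @abstractmethod
    def combine(self, other):          # the combination operation
        ...
    @abstractmethod
    def marginalize(self, variables):  # projection onto a subset of variables
        ...

class Potential(Valuation):
    """Discrete probability table over named binary variables."""
    def __init__(self, variables, table):
        self.variables = tuple(variables)   # e.g. ("A", "B")
        self.table = dict(table)            # {(a, b): value}

    def combine(self, other):
        joint_vars = tuple(dict.fromkeys(self.variables + other.variables))
        table = {}
        for assignment in product([0, 1], repeat=len(joint_vars)):
            env = dict(zip(joint_vars, assignment))
            table[assignment] = (
                self.table[tuple(env[v] for v in self.variables)]
                * other.table[tuple(env[v] for v in other.variables)]
            )
        return Potential(joint_vars, table)

    def marginalize(self, variables):
        keep = tuple(v for v in self.variables if v in variables)
        table = {}
        for assignment, value in self.table.items():
            env = dict(zip(self.variables, assignment))
            key = tuple(env[v] for v in keep)
            table[key] = table.get(key, 0.0) + value
        return Potential(keep, table)

def infer(valuations, query):
    """Generic inference: combine all knowledge, then marginalize to the query."""
    result = valuations[0]
    for v in valuations[1:]:
        result = result.combine(v)
    return result.marginalize(query)

p_a = Potential(["A"], {(0,): 0.4, (1,): 0.6})
p_b_given_a = Potential(["A", "B"], {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8})
print(infer([p_a, p_b_given_a], ["B"]).table)   # marginal distribution of B
```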


2000 ◽  
Vol 22 (6) ◽  
pp. 199-202 ◽  
Author(s):  
Ifte Mahmud ◽  
David Kim

In an environment where cost, timeliness, and quality drive the business, it is essential to look to technology for answers to these challenges. In the Novartis Pharmaceutical Quality Assurance Department, automation and robotics have become just the tools to meet these challenges. Although automation is a relatively new concept in our department, we have fully embraced it within just a few years. As our company went through a merger, there was a significant reduction in the workforce within the Quality Assurance Department through voluntary and involuntary separations. However, the workload remained constant or in some cases actually increased. So even with the reduction in laboratory personnel, we were challenged internally and from headquarters in Basle to improve productivity while maintaining integrity in quality testing. Benchmark studies indicated the Suffern site to be the manufacturing site of choice above other facilities. This is attributed to the Suffern facility employees' commitment to reducing cycle time, improving efficiency, and maintaining a high level of regulatory compliance. One of the stronger contributing factors was automation technology in the laboratories, and this technology will continue to support the site's status in the future. The Automation Group was originally formed about 2 years ago to meet the demands of high quality assurance testing throughput and to bring our testing group up to standard with the industry. Automation began with only two people in the group, and now we have three people who are the next generation of automation scientists. Even with such a small staff, we have made great strides in laboratory automation as we have worked extensively with each piece of equipment brought in. The implementation process of each project was often difficult because the second-generation automation group came from the laboratory without much automation experience. However, with the involvement of the users from the 'get-go', we were able to successfully bring in many automation technologies. Our first experience with automation was SFA/SDAS, then Zymark TPWII, followed by Zymark Multi-dose. The future of product testing lies in automation, and we shall continue to explore the possibilities of improving testing methodologies so that chemists will be less burdened with repetitive and mundane daily tasks and more focused on bringing quality into our products.


Author(s):  
Thanh-Nhan Luong ◽  
Hanh-Phuc Nguyen ◽  
Ninh-Thuan Truong

The software security issue is receiving great attention from the software development community, as security violations have emerged in many forms. Developers often use access control techniques to restrict security breaches of software systems' resources. The addition of authorization constraints to the role-based access control model increases the ability to express access rules in real-world problems. However, the complexity of combining components, libraries, and programming languages during the implementation stage of web systems' access control policies may give rise to flaws that make applications' access control policies inconsistent with their specifications. In this paper, we introduce an approach to review the implementation of these models in web applications written in Java EE according to the MVC architecture with the support of the Spring Security framework. The approach can help developers detect flaws in the implementation of the models' assignments. First, the approach extracts information about users and roles from the database of the web application. We then analyze the policy configuration files to establish the access analysis tree of the application. Next, algorithms are introduced to validate the correctness of the user-role and role-permission assignments implemented in the application system. Lastly, we developed a tool called VeRA to automatically support the verification process. The tool is also evaluated on a number of access violation scenarios in a medical record management system.
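The validation step can be pictured with the toy check below; this is not the VeRA tool, and the data structures (simple dictionaries of sets for the specified and extracted assignments) are assumptions made purely for illustration.

```python
# Illustrative sketch (not the VeRA tool): compare the user-role and
# role-permission assignments extracted from an application against the
# specified access control policy, and report any mismatches.

def validate_assignments(spec, implementation):
    """Return a list of human-readable flaws found in the implementation."""
    flaws = []
    for kind in ("user_roles", "role_permissions"):
        specified = spec[kind]
        implemented = implementation[kind]
        for subject, granted in implemented.items():
            allowed = specified.get(subject, set())
            for item in granted - allowed:
                flaws.append(f"{kind}: '{subject}' was granted '{item}' not in the specification")
        for subject, required in specified.items():
            for item in required - implemented.get(subject, set()):
                flaws.append(f"{kind}: '{subject}' is missing required '{item}'")
    return flaws

spec = {
    "user_roles": {"alice": {"doctor"}, "bob": {"receptionist"}},
    "role_permissions": {"doctor": {"read_record", "write_record"},
                         "receptionist": {"read_schedule"}},
}
implementation = {
    "user_roles": {"alice": {"doctor"}, "bob": {"receptionist", "doctor"}},
    "role_permissions": {"doctor": {"read_record"},
                         "receptionist": {"read_schedule"}},
}
for flaw in validate_assignments(spec, implementation):
    print(flaw)
```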


2013 ◽  
Vol 670 ◽  
pp. 208-215
Author(s):  
Ying Che ◽  
G. Wang ◽  
M. Lv ◽  
B.Y. Ren

The model transformation from the Computation Independent Model (CIM) level to the Platform Independent Model (PIM) level is one of the crucial and difficult points in the implementation process of a model-driven Enterprise Resource Planning (ERP) system. To achieve a semantically conforming transformation between these two abstraction levels of Model Driven Architecture (MDA), a semi-automatic and general model transformation method based on ontology technology was proposed. Firstly, the existing problems of model transformation from the CIM level to the PIM level in current studies were analyzed. Then, a model transformation framework based on ontology was built, the basic concepts relating to the ontology were defined, and the whole architecture was described. After that, the transformation method was studied in two parts, the discovery of mapping rules and the execution of the model transformation, covering similarity-based mapping-rule discovery and the working principles of the model transformation generator. Finally, a model transformation example was provided to validate the practicability and feasibility of the proposed theories.
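To give an intuition for similarity-based mapping-rule discovery, the toy Python sketch below matches CIM-level concept names to PIM-level element names by string similarity; the concept names, threshold, and lexical similarity measure are illustrative assumptions, not the paper's actual ontology-based metric.

```python
# Toy sketch of similarity-based mapping-rule discovery between CIM-level
# concepts and PIM-level model elements.  Names, threshold, and the purely
# string-based similarity measure are illustrative assumptions only.
from difflib import SequenceMatcher

def similarity(a, b):
    """Simple lexical similarity in [0, 1]; a real system would also use ontological context."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def discover_mapping_rules(cim_concepts, pim_elements, threshold=0.6):
    """Propose CIM -> PIM mapping rules whose similarity exceeds the threshold."""
    rules = []
    for concept in cim_concepts:
        best = max(pim_elements, key=lambda element: similarity(concept, element))
        score = similarity(concept, best)
        if score >= threshold:
            rules.append((concept, best, round(score, 2)))
    return rules

cim_concepts = ["Customer Order", "Inventory Item", "Delivery Note"]
pim_elements = ["CustomerOrder", "InventoryItem", "Invoice"]
for rule in discover_mapping_rules(cim_concepts, pim_elements):
    print(rule)
```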

