A modular module system

2000 ◽  
Vol 10 (3) ◽  
pp. 269-303 ◽  
Author(s):  
XAVIER LEROY

A simple implementation of an SML-like module system is presented as a module parameterized by a base language and its type-checker. This implementation is useful both as a detailed tutorial on the Harper–Lillibridge–Leroy module system and its implementation, and as a constructive demonstration of the applicability of that module system to a wide range of programming languages.
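The paper's key idea, a module type-checker built as a functor over an arbitrary base language, can be illustrated in miniature by a loose Python analogy (this sketch is not the paper's ML code; all names here are invented for illustration):

```python
# Toy analogy of a module system parameterized by a base-language
# type-checker, in the spirit of an ML functor.

def make_module_checker(base_typecheck):
    """Return a checker for modules, given a type-checker for base terms.

    A module is a dict mapping names to base-language terms; a signature
    is a dict mapping names to expected types.
    """
    def check_module(module, signature):
        for name, expected in signature.items():
            if name not in module:
                raise TypeError(f"missing component: {name}")
            actual = base_typecheck(module[name])
            if actual != expected:
                raise TypeError(f"{name}: expected {expected}, got {actual}")
        return signature  # the module matches the signature
    return check_module

# A trivial base language: integer and string literals, typed by inspection.
def tiny_typecheck(term):
    return {int: "int", str: "string"}[type(term)]

check = make_module_checker(tiny_typecheck)
check({"x": 1, "greet": "hi"}, {"x": "int", "greet": "string"})
```

The point of the parameterization is that `make_module_checker` never inspects base-language terms itself; swapping in a different `base_typecheck` retargets the same module-level checking to a different language, which is the applicability claim of the abstract.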

Author(s):  
Norihiro Yamada ◽  
Samson Abramsky

Abstract The present work achieves a mathematical, in particular syntax-independent, formulation of dynamics and intensionality of computation in terms of games and strategies. Specifically, we give game semantics of a higher-order programming language that distinguishes programmes with the same value yet different algorithms (or intensionality) and the hiding operation on strategies that precisely corresponds to the (small-step) operational semantics (or dynamics) of the language. Categorically, our games and strategies give rise to a cartesian closed bicategory, and our game semantics forms an instance of a bicategorical generalisation of the standard interpretation of functional programming languages in cartesian closed categories. This work is intended to be a step towards a mathematical foundation of intensional and dynamic aspects of logic and computation; it should be applicable to a wide range of logics and computations.
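For orientation, the "standard interpretation of functional programming languages in cartesian closed categories" that the paper generalises sends a typed λ-abstraction to the currying of its body's interpretation (a standard fact, not a formula taken from the paper):

```latex
\[
\llbracket \Gamma \vdash \lambda x{:}A.\,M : A \Rightarrow B \rrbracket
  \;=\; \Lambda\bigl(\llbracket \Gamma, x{:}A \vdash M : B \rrbracket\bigr)
  \;:\; \llbracket \Gamma \rrbracket \longrightarrow \llbracket B \rrbracket^{\llbracket A \rrbracket}
\]
```

In the bicategorical setting of the paper, equations such as this hold only up to coherent isomorphism rather than on the nose.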


2022 ◽  
Author(s):  
Md Mahbub Alam ◽  
Luis Torgo ◽  
Albert Bifet

Due to the surge in spatio-temporal data volume, the popularity of location-based services and applications, and the importance of knowledge extracted from spatio-temporal data for solving a wide range of real-world problems, a plethora of research and development work has been done in the area of spatial and spatio-temporal data analytics in the past decade. The main goal of existing work has been to develop algorithms and technologies to capture, store, manage, analyze, and visualize spatial or spatio-temporal data. Researchers have contributed either by adding spatio-temporal support to existing systems, by developing new systems from scratch, or by implementing algorithms for processing spatio-temporal data. The existing ecosystem of spatial and spatio-temporal data analytics systems can be categorized into three groups: (1) spatial databases (SQL and NoSQL), (2) big spatial data processing infrastructures, and (3) programming languages and GIS software. Since existing surveys mostly investigated infrastructures for processing big spatial data, this survey explores the whole ecosystem of spatial and spatio-temporal analytics. It also discusses the importance and future of spatial and spatio-temporal data analytics.


2020 ◽  
Vol 23 (5) ◽  
pp. 895-911 ◽  
Author(s):  
Michael Burch ◽  
Elisabeth Melby

Abstract The growing number of students can be a challenge for teaching visualization lectures, supervision, evaluation, and grading. Moreover, designing visualization courses that match the different experiences and skills of the students is a major goal in order to find a common solvable task for all of them. In particular, the given task is important for following a common project goal and for collaborating in small project groups, but also for further experiencing, learning, or extending programming skills. In this article, we survey our experiences from teaching 116 student project groups in 6 bachelor courses on information visualization with varying topics. Two teaching strategies were tried: 2 courses were held without lectures and assignments but with weekly scrum sessions (further denoted by TS1), and 4 courses were guided by weekly lectures and assignments (further denoted by TS2). A total of 687 students took part in these 6 courses. Managing the ever-growing number of students in computer and data science is a big challenge these days; the students typically apply a design-based active learning scenario while being supported by weekly lectures, assignments, or scrum sessions. As a major outcome, we identified regular supervision, either by lectures and assignments or by regular scrum sessions, as important, because the students were relatively inexperienced bachelor students with a wide range of programming skills but nearly no visualization background. In this article, we explain the subsequent stages needed to successfully handle the upcoming problems and describe how much supervision was involved in the development of the visualization projects. The project task description is formulated to have a minimal number of requirements while being extensible in many directions, with most of the decisions, such as programming languages, visualization approaches, or interaction techniques, left to the students.
Finally, we discuss the benefits and drawbacks of both teaching strategies.


Author(s):  
Nikolina Stanić Loknar ◽  
Diana Bratić ◽  
Ana Agić ◽  
...  

Kinetic typography - text in motion - is a method of animating characters that takes the form of video rather than a "static" form such as a picture, poster, or book. The most important element in designing kinetic typography is the choice of font. Furthermore, one should consider the letter cut, the size and color of the characters, and the background color on which the animation takes place. Kinetic typography can be created in various ways, most often using software that applies a multitude of effects to the text or letter characters, creating dynamic solutions. The effects range from the simplest, such as "fade-in" and "fade-out" (text entering and exiting the frame), in which static characters expand, narrow, move slowly or rapidly, grow, and change in a variety of ways, to very complex ones in which the author builds an entire story or promotional video by carefully combining software capabilities. However, each piece of software has its limitations, and for this reason the kinetic typography presented in this paper is programmed in code. From the wide range of available programming languages, Processing was chosen for its simple interface, which does not require advanced programming concepts and gives excellent results in the field of kinetic typography. The Processing programming language is intended for generating and modifying graphics and is based on the Java programming language. The most important difference between Processing and Java is that Processing offers a simple programming interface that does not require advanced constructs such as classes, objects, or animations, while still allowing advanced users to employ them. Processing uses a variety of typography-rendering approaches, both raster and vector, and allows typography to be programmed and displayed on the Web independently of the user's Web browser and font database. Processing enables the use of visual elements in animation, including typographic ones, by introducing interaction for the user. 
The user is no longer a passive observer but actively participates in the performance of the application, whose final appearance is not predefined but arises from the actions of each individual user. For the purposes of this paper, individual letters were created in a font-making program. The letters are of various script classifications and cuts, whose variety contributes to the attractiveness of the animation. The motion typography in this paper was created with the Processing programming language: program code was written that manipulates words, letters, or parts of characters to create visual effects that hold the viewer's attention and convey the desired message or emotion. There are no strict rules or patterns in making kinetic typography: each author determines their own rules and production methods, and no two solutions are the same.
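The "fade-in" effect mentioned above reduces, per frame, to computing an opacity value for the text. A minimal sketch of that computation (illustrative only, in Python rather than the paper's Processing code; the function name and parameters are invented):

```python
# Per-frame alpha value behind a linear "fade-in" effect, the kind of
# kinetic-typography building block described in the abstract.

def fade_alpha(frame, start, duration, max_alpha=255):
    """Return 0 before `start`, then ramp linearly to `max_alpha`."""
    if frame < start:
        return 0
    t = min((frame - start) / duration, 1.0)  # progress in [0, 1]
    return int(t * max_alpha)

# Sampled at a few frames of a fade starting at frame 10, lasting 20 frames.
print([fade_alpha(f, start=10, duration=20) for f in (0, 10, 20, 30, 40)])
# → [0, 0, 127, 255, 255]
```

In a Processing-style draw loop, the returned value would be passed as the alpha component of the fill color each frame before drawing the text; "fade-out" is the same ramp reversed.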


Author(s):  
Cayetano Guerra Artal ◽  
Maria Dolores Afonso Suarez ◽  
Idafen Santana Perez ◽  
Ruben Quesada Lopez

The advance of the Internet towards Web 2.0 reveals its potential in a wide range of areas. The ongoing progress of Web technology and its availability for teaching and learning, together with a student profile increasingly accustomed to managing large amounts of digital information, offers lecturers the opportunity and the challenge of putting Internet-based didactic tools at students' disposal. Programming is one of the essential areas taught in university studies of Computer Science and other engineering degrees. At present, it is a skill acquired through tutorial classes and practice with different programming tools. This paper presents the experience acquired in the development and use of a simple compiler accessible through a Web page. In addition, it presents a teaching proposal for its use in subjects that include programming-language lessons. OLC - On-Line Compiler - is an application which greatly lightens the student's workload at the initial stage of programming: students neither have to deal with the complexities of installing and configuring these types of tools, nor with understanding the multiple options they present. They can therefore concentrate on understanding the programming structures and the programming language being studied.


2019 ◽  
Vol 29 (8) ◽  
pp. 1125-1150
Author(s):  
FERRUCCIO GUIDI ◽  
CLAUDIO SACERDOTI COEN ◽  
ENRICO TASSI

In this paper, we are interested in high-level programming languages for implementing the core components of an interactive theorem prover for a dependently typed language: the kernel, responsible for type-checking closed terms, and the elaborator, which manipulates open terms, that is, terms containing unresolved unification variables. We confirm that λProlog, the language developed by Miller and Nadathur since the 80s, is extremely suitable for implementing the kernel. Indeed, we easily obtain a type checker for the Calculus of Inductive Constructions (CIC). Moreover, we do so in an incremental way, by escalating a checker for a pure type system to the full CIC. We then turn our attention to the elaborator, with the objective of obtaining a simple implementation thanks to the features of the programming language. In particular, we want to use λProlog's unification variables to model those of the object language; in this way, scope checking, carrying of assignments, and occurs checking are handled by the programming language. We observe that the eager generative semantics inherited from Prolog clashes with this plan. We propose an extension to λProlog that makes it possible to control the generative semantics, to suspend goals over flexible terms, turning them into constraints, and finally to manipulate these constraints at the meta-meta level via constraint handling rules. We implement the proposed language extension in the Embedded Lambda Prolog Interpreter system and discuss how it can be used to extend the kernel into an elaborator for CIC.


Author(s):  
Andrey Stolyarov ◽  

The book is aimed at people who learn programming on their own; it considers a wide range of issues, including introductory information, basic concepts and techniques of programming, the capabilities of the operating system kernel and the principles of its functioning, and programming paradigms. Operating systems of the Unix family (including Linux) are used as an end-to-end working and training environment, and a number of programming languages are considered: Pascal, assembly language (NASM), C, C++, Lisp, Scheme, Prolog, Hope, and Tcl. The book includes information about the most important Unix system calls, including those for communication over computer networks; an introduction to the ncurses, FLTK, and Tcl/Tk libraries is also given. The second volume ("Systems and Networks") starts with the fourth part, devoted to the C programming language; it also includes parts on basic Unix programming (input/output, process manipulation, etc.), computer networking, parallel programming and dealing with shared data, and the basics of kernel internals.


2021 ◽  
Vol 263 (5) ◽  
pp. 1164-1175
Author(s):  
Roberto San Millán-Castillo ◽  
Eduardo Latorre-Iglesias ◽  
Martin Glesser ◽  
Salomé Wanty ◽  
Daniel Jiménez-Caminero ◽  
...  

Sound quality metrics provide an objective assessment of the psychoacoustics of sounds. A wide range of metrics has already been standardised, while others remain active research topics. Calculation algorithms are available in commercial equipment or MATLAB scripts. However, they may not come with publicly available documentation and validation procedures. Moreover, these tools might be unaffordable for some students and independent researchers. In recent years, the scientific and technical community has developed countless open-source software projects in several fields of knowledge. The permission to use, study, modify, improve, and distribute open-source software makes it extremely valuable. It encourages collaboration and sharing, and thus transparency and continuous improvement of the code. The Modular Sound Quality Integrated Toolbox (MOSQITO) project relies on one of the most popular high-level free programming languages: Python. The main objective of MOSQITO is to provide a unified and modular framework of key sound quality and psychoacoustic metrics, free and open-source, which supports reproducible testing. Moreover, open-source projects can be efficient learning tools in university degree courses. This paper presents the current structure of the toolbox from a technical point of view and discusses the contribution of open-source development to graduate training.
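The "unified and modular framework" idea can be sketched as a registry of metric functions sharing one calling convention (a hypothetical illustration of the design, not MOSQITO's actual API; all names below are invented, and the stand-in metric is a plain RMS level rather than a real psychoacoustic metric):

```python
# Hypothetical sketch of a modular sound-quality metric framework:
# each metric registers itself under a name, and evaluation runs every
# registered metric on a signal, which keeps results reproducible.

METRICS = {}

def register(name):
    """Decorator: register a metric function under `name`."""
    def wrap(fn):
        METRICS[name] = fn
        return fn
    return wrap

@register("rms_level")
def rms_level(samples):
    # Stand-in for a real metric such as a loudness computation.
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def evaluate(signal):
    """Run every registered metric on a signal."""
    return {name: fn(signal) for name, fn in METRICS.items()}

print(evaluate([0.0, 1.0, 0.0, -1.0]))  # rms_level ≈ 0.7071
```

Keeping metrics decoupled behind one interface is what lets such a toolbox grow metric by metric while each one stays independently testable.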


2004 ◽  
Vol 2004 (3) ◽  
pp. 135-160 ◽  
Author(s):  
F. Tchier

Relations and relational operators can be used to define the semantics of programming languages. The operations ∨ and ∘ serve to give angelic semantics by defining a program to go right when there is a possibility to go right. On the other hand, the demonic operations ⊔ and □ do the opposite: if there is a possibility to go wrong, a program whose semantics is given by these operators will go wrong; it is the demonic semantics. This type of semantics is known at least since Dijkstra's introduction of the language of guarded commands. Recently, there has been a growing interest in demonic relational semantics of sequential programs. Usually, a construct is given an ad hoc semantic definition based on an intuitive understanding of its behavior. In this note, we show how the notion of relational flow diagram (essentially a matrix whose entries are relations on the set of states of the program), introduced by Schmidt, can be used to give a single demonic definition for a wide range of programming constructs. This research had originally been carried out by J. Desharnais and F. Tchier (1996) in the same framework of the binary homogeneous relations. We show that all the results can be generalized by using the monotypes and the residuals introduced by Desharnais et al. (2000).
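The angelic/demonic contrast can be made concrete with relations as sets of pairs (an illustrative sketch using the textbook definitions, not code from the paper): angelic composition keeps a state if some execution path succeeds, while demonic composition discards a state as soon as any path can go wrong, i.e. leads outside the domain of the next relation.

```python
# Angelic vs. demonic composition of binary relations,
# represented as sets of (input, output) pairs.

def compose(q, r):
    """Angelic (ordinary) composition: succeed if some path succeeds."""
    return {(x, z) for (x, y1) in q for (y2, z) in r if y1 == y2}

def demonic_compose(q, r):
    """Demonic composition: keep x only if *every* Q-successor of x
    lies in the domain of R; if anything can go wrong, drop x."""
    dom_r = {y for (y, _) in r}
    ok = {x for (x, _) in q} - {x for (x, y) in q if y not in dom_r}
    return {(x, z) for (x, z) in compose(q, r) if x in ok}

q = {(1, "a"), (1, "b"), (2, "a")}
r = {("a", 10)}              # "b" has no R-successor, so it "goes wrong"
print(compose(q, r))         # angelic: both (1, 10) and (2, 10) survive
print(demonic_compose(q, r)) # demonic: only (2, 10); x = 1 is dropped
```

State 1 is dropped demonically because one of its Q-successors ("b") falls outside the domain of R, exactly the "if there is a possibility to go wrong, the program goes wrong" reading above.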


2005 ◽  
Vol 13 (2) ◽  
pp. 127-135 ◽  
Author(s):  
Ami Marowka

The aim of this paper is to present a qualitative evaluation of three state-of-the-art parallel languages: OpenMP, Unified Parallel C (UPC), and Co-Array Fortran (CAF). OpenMP and UPC are explicit parallel programming languages based on the ANSI standard, while CAF is an implicit programming language. On the one hand, OpenMP is designed for shared-memory architectures and extends the base language with compiler directives that annotate the original source code. On the other hand, UPC and CAF are designed for distributed shared-memory architectures and extend the base language with new parallel constructs. We deconstruct each language into its basic components, show examples, make a detailed analysis, compare them, and finally draw some conclusions.

