ACM SIGAda Ada Letters
Latest Publications


Total documents: 1659 (five years: 42)
H-index: 14 (five years: 1)
Published by: Association for Computing Machinery
ISSN: 1094-3641

2021, Vol 40 (2), pp. 55-58
Author(s): S. Tucker Taft

The OpenMP specification defines a set of compiler directives, library routines, and environment variables that together represent the OpenMP Application Programming Interface; the API is currently defined for C, C++, and Fortran. The forthcoming version of Ada, currently dubbed Ada 202X, includes lightweight parallelism features, in particular parallel blocks and parallel loops. All versions of Ada, since its inception in 1983, have included "tasking," which corresponds to what are traditionally considered "heavyweight" parallelism features, or simply "concurrency" features. Ada "tasks" typically map to what are called "kernel threads," in that the operating system manages and schedules them. However, one of the goals of lightweight parallelism is to reduce overhead by doing more of the management outside the kernel of the operating system, using a lightweight-thread (LWT) scheduler. The OpenMP library routines support both levels of threading, but for Ada 202X, the main interest is in making use of OpenMP for its lightweight thread scheduling capabilities.
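For concreteness, the two lightweight parallelism features mentioned above might look as follows in the proposed Ada 202X syntax (a minimal sketch, assuming a compiler that supports these features; not code from the paper):

    --  Sketch of an Ada 202X parallel loop and parallel block.  Each
    --  chunk or arm can be run by a lightweight thread under an LWT
    --  scheduler, e.g. one backed by the OpenMP runtime.
    procedure Parallel_Demo is
       Data        : array (1 .. 1_000) of Integer := (others => 1);
       Left, Right : Integer := 0;
    begin
       parallel                       --  parallel loop: iterations are
       for I in Data'Range loop       --  split into concurrent chunks
          Data (I) := Data (I) * 2;
       end loop;

       parallel do                    --  parallel block: the two arms
          for I in 1 .. 500 loop      --  may execute concurrently
             Left := Left + Data (I);
          end loop;
       and
          for I in 501 .. 1_000 loop
             Right := Right + Data (I);
          end loop;
       end do;
    end Parallel_Demo;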


2021, Vol 40 (2), pp. 65-69
Author(s): Richard Wai

Modern-day cloud-native applications have become broadly representative of distributed systems in the wild. However, unlike traditional distributed system models with conceptually static designs, cloud-native systems emphasize dynamic scaling and on-line iteration (CI/CD). Cloud-native systems tend to be architected around a networked collection of distinct programs ("microservices") that can be added, removed, and updated in real time. Typically, distinct containerized programs constitute individual microservices that then communicate within the larger distributed application through heavyweight protocols. Common communication stacks exchange JSON or XML objects over HTTP, via TCP/TLS, and incur significant overhead, particularly with small message sizes. Additionally, interpreted/JIT/VM-based languages such as JavaScript (NodeJS/Deno), Java, and Python are dominant in modern microservice programs. These language technologies, along with the high-overhead messaging, can impose superlinear cost increases (hardware demands) on scale-out, particularly towards hyperscale and/or with latency-sensitive workloads.


2021, Vol 40 (2), pp. 59-64
Author(s): Jan Verschelde

Hardware double precision is often insufficient to solve large scientific problems accurately. Computing in higher precision, defined in software, incurs significant computational overhead, and the application of parallel algorithms compensates for that overhead. Newton's method for developing power series expansions of algebraic space curves serves as the use case.
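As a brief reminder of the underlying technique (a standard formulation of Newton's method on power series, not an excerpt from the paper): to expand a solution y(t) of f(y, t) = 0 as a power series in t, each Newton step computes

    % Newton's method on truncated power series: each step operates on
    % series arithmetic and doubles the number of correct coefficients.
    \[
      y_{k+1}(t) \;=\; y_k(t)
        \;-\; \frac{f\bigl(y_k(t),\,t\bigr)}
                   {\dfrac{\partial f}{\partial y}\bigl(y_k(t),\,t\bigr)},
      \qquad
      y_{k+1}(t) \equiv y(t) \pmod{t^{\,2^{\,k+1}}}
    \]

with all arithmetic performed on truncated series. Since each step doubles the number of correct coefficients, an expansion accurate to order 2^m needs only m steps, and the dominant cost is the software-defined higher-precision arithmetic, which is where parallel algorithms can compensate for the overhead.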


2021, Vol 40 (2), pp. 48-50
Author(s): Michael Klemm, Eduardo Quiñones, Tucker Taft, Dirk Ziegenbein, Sara Royuela

OpenMP is traditionally focused on boosting performance in HPC systems. However, other domains are showing an increasing interest in OpenMP by virtue of key aspects introduced in recent versions of the specification: the tasking model, the accelerator model, and other features like the requires and assumes directives, which allow defining certain contracts. One example is the safety-critical embedded domain, where several efforts have been initiated towards the adoption of OpenMP. However, the OpenMP specification states that "application developers are responsible for correctly using the OpenMP API to produce a conforming program", which is not acceptable in high-integrity systems, where aspects such as reliability and resiliency have to be ensured at different levels of criticality. In this context, programming languages like Ada propose a different paradigm: they expose fewer features to the user and instead leave the responsibility of safely exploiting the full underlying architecture to the compiler and the runtime system. The philosophy behind this kind of model is to move the responsibility for producing correct parallel programs from users to vendors. In this panel, actors from different domains involved in the use of parallel programming models for the development of high-integrity systems share their thoughts on this topic.


2021, Vol 40 (2), pp. 70-72
Author(s): Brian Kleinke

When the Federal Aviation Administration (FAA) launched the System Wide Information Management (SWIM) initiative, the FAA had the goal of using the same portable, open infrastructure across all participating systems in the National Airspace System (NAS). Around 2008, for SWIM Segment 1, the FAA chose IONA Technologies' Free/Open Source Software (FOSS) based bundle, which was known and supported under the Fuse brand. The FAA obtained the licenses used by programs, including En Route Automation Modernization (ERAM), through IONA, which was later acquired by Progress, with the Fuse product line subsequently passing to Red Hat.


2021, Vol 40 (2), pp. 51-54
Author(s): Kyle Chard, James Muns, Richard Wai, S. Tucker Taft

Language constructs that support parallel computing are relatively well recognized at this point, with features such as parallel loops (optionally with reduction operators), divide-and-conquer parallelism, and general parallel blocks. But what language features would make distributed computing safer and more productive? Is it helpful to be able to specify on what node a computation should take place, and on what node data should reside, or is that overspecification? We don't normally expect a user of a parallel programming language to specify which core is used for a given iteration of a loop, nor which data should be moved into which core's cache. Generally, the compiler and the runtime manage the allocation of cores, and the hardware worries about the cache. But in a distributed world, communication costs can easily outweigh computation costs in a poorly designed application. This panel will discuss various language features, some of which already exist to support parallel computing, and how they could be enhanced or generalized to support distributed computing safely and efficiently.
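As one concrete instance of an existing parallel feature of the kind the panel considers generalizing (a sketch in Ada 202X syntax, assuming compiler support for parallel reduction expressions; not code from the panel):

    --  A parallel loop with a reduction operator, written as an Ada
    --  202X reduction expression: the value sequence is evaluated in
    --  parallel chunks whose partial results are combined with "+"
    --  (initial value 0).
    function Sum_Of_Squares (N : Natural) return Long_Integer is
      ([parallel for I in 1 .. N => Long_Integer (I) ** 2]'Reduce ("+", 0));

Note that nothing in this expression says which core evaluates which chunk; the open question for distributed computing is whether node placement should stay equally implicit.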


2021, Vol 40 (2), pp. 96-102
Author(s): Luis Miguel Pinho, Sara Royuela, Eduardo Quiñones

The current proposal for the next revision of the Ada language considers the possibility of mapping the language's parallel features to an underlying OpenMP runtime. As presented and discussed in previous workshops, the work on fine-grained parallelism in Ada maps well to the OpenMP tasking model. Nevertheless, although the general integration model and the semantic constructs are already reflected in the proposed revision of the standard, the integration of these new features with the Real-Time Systems Annex of Ada is not yet complete. This paper presents an overview of what is supported and the issues that remain open.
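To make the mapping concrete (our sketch of the general idea, under the assumption that the compiler lowers each chunk of a parallel loop to an OpenMP explicit task, roughly a C "#pragma omp taskloop"; the paper's actual runtime interface may differ):

    --  Sketch: an Ada 202X parallel loop whose chunks the compiler
    --  would hand to the OpenMP runtime as lightweight tasks.
    procedure Scale_Demo is
       type Float_Array is array (Positive range <>) of Float;
       V : Float_Array (1 .. 10_000) := (others => 1.0);
    begin
       parallel
       for I in V'Range loop       --  chunked by the compiler; each
          V (I) := V (I) * 2.0;    --  chunk becomes one OpenMP task
       end loop;
    end Scale_Demo;

The open Real-Time Annex questions concern how such OpenMP-scheduled chunks interact with Ada task priorities and dispatching policies.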


2021, Vol 40 (2), pp. 76-91
Author(s): Patrick Rogers

An effective approach to learning a new programming language is to implement data structures common to computer programming. The approach is effective because the problem to be solved is well understood, allowing one to focus on the language details. Moreover, several different forms of a given data structure are often possible: bounded versus unbounded, sequential versus thread-safe, and so on. These multiple forms likely require a wide range of language features.
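For instance, a bounded, sequential stack in Ada exercises generics, discriminants, private types, and contract aspects (our illustration of the approach, not code from the paper; a thread-safe form would wrap the same operations in a protected type):

    --  Generic bounded stack: spec only, body omitted for brevity.
    generic
       type Element is private;
    package Bounded_Stacks is
       type Stack (Capacity : Positive) is private;

       function Is_Empty (S : Stack) return Boolean;
       function Is_Full  (S : Stack) return Boolean;

       procedure Push (S : in out Stack; X : Element)
         with Pre => not Is_Full (S);
       procedure Pop  (S : in out Stack; X : out Element)
         with Pre => not Is_Empty (S);

    private
       type Element_Array is array (Positive range <>) of Element;
       type Stack (Capacity : Positive) is record
          Items : Element_Array (1 .. Capacity);
          Top   : Natural := 0;   --  0 means empty
       end record;
    end Bounded_Stacks;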


2021, Vol 40 (2), pp. 92-95
Author(s): Jorge Garrido, David Pisonero, Juan Zamorano, Juan A. de la Puente

The paper analyses the vectorization support found in some programming languages and the ways in which it could also be provided in Ada. A proposal for an Ada extension for enhanced vectorization support is included.
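As a point of reference for what is available today (a GNAT-specific sketch, our illustration; the extension proposed in the paper may take a different form):

    --  Requesting loop vectorization with GNAT's implementation-defined
    --  Loop_Optimize pragma, placed immediately inside the loop.
    procedure Saxpy_Demo is
       type Float_Array is array (Positive range <>) of Float;
       X : Float_Array (1 .. 1_024) := (others => 1.0);
       Y : Float_Array (1 .. 1_024) := (others => 2.0);
       A : constant Float := 0.5;
    begin
       for I in X'Range loop
          pragma Loop_Optimize (Vector);  --  hint: vectorize this loop
          X (I) := A * X (I) + Y (I);
       end loop;
    end Saxpy_Demo;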


2021, Vol 40 (2), pp. 73-75
Author(s): Kyle Chard, Yadu Babuji, Anna Woodard, Ben Clifford, Zhuozhao Li, ...

Parsl is a parallel programming library for Python that aims to make it easy to specify parallelism in programs and to realize that parallelism on arbitrary parallel and distributed computing systems. Parsl relies on developers annotating Python functions (wrapping either Python code or external applications) to indicate that these functions may be executed concurrently. Developers can then link functions together via the exchange of data. Parsl establishes a dynamic dependency graph and sends tasks for execution on connected resources when their dependencies are resolved. Parsl's runtime system enables different compute resources to be used, from laptops to supercomputers, without modification to the Parsl program.
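A minimal usage sketch (in Python, Parsl's host language, using the documented python_app decorator; the local-threads configuration stands in for whatever resource a real deployment would target):

    # Minimal Parsl sketch: decorated functions return futures, and
    # passing one future into another app creates a dependency edge.
    import parsl
    from parsl import python_app
    from parsl.configs.local_threads import config

    parsl.load(config)  # local threads here; clusters use other configs

    @python_app
    def double(x):
        return x * 2

    @python_app
    def add(a, b):
        return a + b

    # add() runs only once both double() futures have resolved.
    total = add(double(3), double(4))
    print(total.result())  # -> 14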

