Empirical, Human-Centered Evaluation of Programming and Programming Language Constructs: Controlled Experiments

Author(s):  
Stefan Hanenberg

Cybernetics ◽  
1982 ◽  
Vol 17 (5) ◽  
pp. 590-595
Author(s):  
V. N. Domrachev ◽  
Yu. V. Kapitonova ◽  
L. G. Samoilenko

Cybernetics ◽  
1984 ◽  
Vol 19 (3) ◽  
pp. 325-333
Author(s):  
V. N. Domrachev ◽  
Yu. V. Kapitonova ◽  
L. G. Samoilenko

2016 ◽  
Vol 27 (7) ◽  
pp. 1111-1131
Author(s):  
JEAN-BAPTISTE JEANNIN ◽  
DEXTER KOZEN ◽  
ALEXANDRA SILVA

Theoretical models of recursion schemes have been well studied under the names well-founded coalgebras, recursive coalgebras, corecursive algebras, and Elgot algebras. Much of this work focuses on conditions ensuring unique or canonical solutions, e.g. when the coalgebra is well founded. If the coalgebra is not well founded, there can be multiple solutions. The standard semantics of recursive programs gives a particular solution, typically the least fixpoint of a certain monotone map on a domain whose least element is the totally undefined function; but this solution may not be the desired one. We have recently proposed programming language constructs that allow the specification of alternative solutions and methods to compute them, and we have implemented these constructs as an extension of OCaml. In this paper, we prove some theoretical results characterizing well-founded coalgebras, along with several examples for which this extension is useful. We also give several examples that are not well founded but still have a desired solution. In each case, the function would diverge under the standard semantics of recursion, but it can be specified and computed with the programming language constructs we have proposed.
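The flavor of the idea can be sketched outside the authors' OCaml extension. Consider the minimum of a cyclic linked list: unfolding the recursion min(n) = min(n.value, min(n.next)) never terminates, yet the induced finite system of equations has an obvious desired solution. The hypothetical Python sketch below (the names `Node` and `min_cyclic` are illustrative, not from the paper) computes that solution by fixpoint iteration over the visited nodes instead of by unfolding:

```python
import math

class Node:
    """A node of a possibly cyclic singly linked list."""
    def __init__(self, value):
        self.value = value
        self.next = None

def min_cyclic(head):
    # Collect the distinct nodes reachable from head; on a cyclic list
    # this terminates where naive recursion would not.
    nodes, seen = [], set()
    n = head
    while id(n) not in seen:
        seen.add(id(n))
        nodes.append(n)
        n = n.next
    # Treat min(n) = min(n.value, min(n.next)) as a system of equations,
    # one unknown per node, and iterate from +inf (the top of the
    # (R, min) semilattice) until a fixpoint is reached.
    val = {id(n): math.inf for n in nodes}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            new = min(n.value, val[id(n.next)])
            if new != val[id(n)]:
                val[id(n)] = new
                changed = True
    return val[id(head)]
```

For a cycle 3 → 1 → 2 → 3, `min_cyclic` returns 1, the solution the diverging recursion "means"; this equation-solving view is one way to read the alternative-solution constructs the abstract describes.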


2021 ◽  
Vol 40 (2) ◽  
pp. 51-54
Author(s):  
Kyle Chard ◽  
James Muns ◽  
Richard Wai ◽  
S. Tucker Taft

Language constructs that support parallel computing are relatively well recognized at this point, with features such as parallel loops (optionally with reduction operators), divide-and-conquer parallelism, and general parallel blocks. But what language features would make distributed computing safer and more productive? Is it helpful to be able to specify on what node a computation should take place and on what node data should reside, or is that overspecification? We do not normally expect a user of a parallel programming language to specify which core is used for a given iteration of a loop, nor which data should be moved into which core's cache; generally the compiler and the runtime manage the allocation of cores, and the hardware worries about the cache. In a distributed world, however, communication costs can easily outweigh computation costs in a poorly designed application. This panel will discuss various language features, some of which already exist to support parallel computing, and how they could be enhanced or generalized to support distributed computing safely and efficiently.
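As a point of reference for the "parallel loop with a reduction operator" construct the panel mentions, the hypothetical Python sketch below (the function `parallel_sum` and its `chunks` parameter are illustrative, not from any language discussed) shows the shape such a construct typically lowers to: partition the iteration space, reduce each partition in its own task, then combine the partial results.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, chunks=4):
    # Partition the iteration space into contiguous chunks.
    size = max(1, len(data) // chunks)
    parts = [data[i:i + size] for i in range(0, len(data), size)]
    # Reduce each chunk in its own task, then combine the partials
    # with the same (associative) reduction operator.
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(sum, parts))
    return sum(partials)
```

The programmer states only the loop body and the reduction operator; which worker runs which chunk is left to the runtime, which is precisely the division of labor the panel asks whether distributed languages can replicate for node placement.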

