DENOTATIONAL SEMANTICS OF AN HPF-LIKE DATA-PARALLEL LANGUAGE MODEL

2001 ◽  
Vol 11 (02n03) ◽  
pp. 363-374 ◽  
Author(s):  
MINYI GUO

It is important for programmers to understand the semantics of a programming language. However, little work has been done on the semantic description of HPF-like data-parallel languages. In this paper, we first define a simple language that includes the principal facilities of a data-parallel language such as HPF. Then we present a denotational semantic model of this language. The model is useful for understanding the components of an HPF-like language, such as data alignment and distribution directives and forall data-parallel statements.
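
The formal definitions are not reproduced in this abstract. As a rough illustration of the denotational style only, and not the model defined in the paper, the Python sketch below treats a forall statement as denoting a store-to-store function, with a block distribution recorded alongside the array; the names `Forall`, `distribute_block`, and `sem_stmt` are hypothetical.

```python
# A toy denotational-style model of a forall statement over a block-distributed
# array. This is an illustrative sketch, not the semantics defined in the paper.

from dataclasses import dataclass

@dataclass
class Forall:
    """forall (i = lo:hi) target[i] = body(i) -- an elementwise parallel assignment."""
    target: str
    lo: int
    hi: int
    body: callable          # maps an index to a value

def distribute_block(indices, num_procs):
    """Map each index to an abstract processor, mimicking a BLOCK distribution directive."""
    block = (len(indices) + num_procs - 1) // num_procs
    return {i: i // block for i in indices}

def sem_stmt(stmt, store, num_procs=4):
    """Semantic function: a statement denotes a store-to-store transformation."""
    if isinstance(stmt, Forall):
        indices = list(range(stmt.lo, stmt.hi + 1))
        layout = distribute_block(indices, num_procs)   # where each element "lives"
        updates = {i: stmt.body(i) for i in indices}    # evaluate in the incoming store
        new_array = dict(store.get(stmt.target, {}))
        new_array.update(updates)
        new_store = dict(store)
        new_store[stmt.target] = new_array
        new_store["_layout_" + stmt.target] = layout
        return new_store
    raise NotImplementedError(type(stmt))

# Example: A[i] = i * i for i in 0..7
store1 = sem_stmt(Forall("A", 0, 7, lambda i: i * i), {})
print(store1["A"])
```

Evaluating every right-hand side in the incoming store before writing anything back makes the forall independent of the order in which the (conceptually parallel) index points are processed, which is the property a denotational treatment of data parallelism has to capture.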

2010 ◽  
Vol 20 (5-6) ◽  
pp. 537-576 ◽  
Author(s):  
MATTHEW FLUET ◽  
MIKE RAINEY ◽  
JOHN REPPY ◽  
ADAM SHAW

The increasing availability of commodity multicore processors is making parallel computing ever more widespread. In order to exploit its potential, programmers need languages that make the benefits of parallelism accessible and understandable. Previous parallel languages have traditionally been intended for large-scale scientific computing, and they tend not to be well suited to programming the applications one typically finds on a desktop system. Thus, we need new parallel-language designs that address a broader spectrum of applications. The Manticore project is our effort to address this need. At its core is Parallel ML, a high-level functional language for programming parallel applications on commodity multicore hardware. Parallel ML provides a diverse collection of parallel constructs for different granularities of work. In this paper, we focus on the implicitly threaded parallel constructs of the language, which support fine-grained parallelism. We concentrate on those elements that distinguish our design from related ones, namely, a novel parallel binding form, a nondeterministic parallel case form, and the treatment of exceptions in the presence of data parallelism. These features differentiate the present work from related work on functional data-parallel language designs, which have focused largely on parallel problems with regular structure and the compiler transformations—most notably, flattening—that make such designs feasible. We present detailed examples utilizing various mechanisms of the language and give a formal description of our implementation.
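
Parallel ML's concrete syntax is not shown in this abstract. As a loose analogue only, a parallel binding can be read as a future that evaluates alongside the main computation and whose value (or exception) is demanded where the bound variable is used, and the nondeterministic parallel case can be read as racing several computations and taking whichever finishes first. The Python sketch below uses the standard concurrent.futures module and is not Manticore's implementation; the function `expensive` is invented for illustration.

```python
# A rough analogue of a parallel binding: the bound expression runs as a future,
# and its result is only demanded at the point of use. Exceptions raised by the
# parallel computation surface when the value is demanded, loosely mirroring the
# treatment of exceptions under data parallelism discussed in the paper.

from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def expensive(x):
    return sum(i * i for i in range(x))

with ThreadPoolExecutor() as pool:
    # "parallel binding": start evaluating y in parallel with the main computation
    y = pool.submit(expensive, 100_000)
    z = expensive(50_000)          # main-thread work overlaps with the evaluation of y
    print(z + y.result())          # y's value (or its exception) is demanded here

    # "parallel case": race several computations and take whichever completes first,
    # a nondeterministic choice in the spirit of the parallel case form
    futures = [pool.submit(expensive, n) for n in (30_000, 60_000, 90_000)]
    done, _ = wait(futures, return_when=FIRST_COMPLETED)
    print(next(iter(done)).result())
```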


2011 ◽  
Vol 314-316 ◽  
pp. 2152-2157
Author(s):  
Huan Gu

This paper explains and demonstrates how to construct JDF data agent tools on the .NET LINQ platform. The agent can create a job, add nodes to an existing job, and modify existing nodes. Based on the structure of the JDF standard and its markup definitions, it packages the nodes of each layer, together with their parameters and data types, into objects, forming a programming language model built around JDF markup objects. This reduces the complexity of developing printing digital process software based on the JDF/XML standard and provides a reference for developing similar XML-driven distributed digital systems.
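
The paper targets .NET/LINQ; as a language-neutral sketch of the same idea only, the Python fragment below wraps JDF-like nodes in an agent object that can create a job, add nodes, and modify them. The class name `JdfAgent` and the element and attribute names are simplified placeholders, not a faithful rendering of the JDF schema.

```python
# A simplified sketch of a "JDF agent" that creates a job, adds process nodes,
# and serializes the result as XML. Element and attribute names are placeholders;
# a real agent would follow the JDF schema and, as in the paper, be built on .NET/LINQ.

import xml.etree.ElementTree as ET

class JdfAgent:
    def __init__(self, job_id):
        self.root = ET.Element("JDF", {"ID": job_id, "Type": "Product", "Status": "Waiting"})

    def add_node(self, node_id, node_type, **params):
        """Add a child process node, packaging its parameters as child elements."""
        node = ET.SubElement(self.root, "JDF", {"ID": node_id, "Type": node_type})
        for name, value in params.items():
            ET.SubElement(node, "Parameter", {"Name": name, "Value": str(value)})
        return node

    def modify_node(self, node_id, **attrs):
        """Update attributes of an existing node, located by its ID."""
        for node in self.root.iter("JDF"):
            if node.get("ID") == node_id:
                for name, value in attrs.items():
                    node.set(name, str(value))
                return node
        raise KeyError(node_id)

    def to_xml(self):
        return ET.tostring(self.root, encoding="unicode")

agent = JdfAgent("Job-001")
agent.add_node("N1", "Printing", Copies=500)
agent.modify_node("N1", Status="InProgress")
print(agent.to_xml())
```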


2013 ◽  
Vol 2013 ◽  
pp. 1-5
Author(s):  
Latha S. Warrier

The Abrams-Lloyd quantum algorithm computes an eigenvalue and the corresponding eigenstate of a unitary matrix from an approximate eigenvector Va. The eigenstate is a basis vector in the orthonormal eigenspace. Finding another eigenvalue using a random approximate eigenvector may require many trials, since a trial may repeatedly yield the eigenvalue measured earlier. We present a method that orthogonalizes the eigenstate obtained in one trial and uses it as the Va for the next trial. Because of this orthogonal construction, the Abrams-Lloyd algorithm will not repeat an eigenvalue measured earlier, so all the eigenvalues are obtained in sequence without repetition. An operator that anticommutes with a unitary operator orthogonalizes the eigenvectors of that unitary. We implemented the method in a programming language model of quantum computation and tested it on a unitary matrix representing the time evolution operator of a small spin chain; all the eigenvalues of the operator were obtained sequentially. Another use of the first eigenvector from the Abrams-Lloyd algorithm is preparing a state that is the uniform superposition of all the eigenvectors. This is done by nonorthogonalizing the first eigenvector in all dimensions and then applying the Abrams-Lloyd algorithm steps, stopping short of the final measurement.
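
The key numerical idea, removing previously found eigenstates from the next approximate starting vector so that an already measured eigenvalue cannot recur, can be illustrated classically. In the sketch below, numpy's eigen-solver stands in for the quantum phase-estimation step; this is not an implementation of the quantum algorithm or of the anticommuting-operator construction described in the abstract, and the helper `trial` is invented for illustration.

```python
# Classical illustration of the orthogonalization step: after each "trial"
# recovers an eigenpair, project all recovered eigenvectors out of the next
# starting vector so the same eigenvalue cannot be found again.

import numpy as np

rng = np.random.default_rng(0)
n = 4
H = rng.standard_normal((n, n))
H = (H + H.T) / 2                          # a small Hermitian matrix (toy "spin chain")
eigvals, eigvecs = np.linalg.eigh(H)       # ground truth, used by the stand-in below

def trial(v_approx):
    """Stand-in for one Abrams-Lloyd run: collapse v_approx onto its dominant eigencomponent."""
    overlaps = np.abs(eigvecs.T @ v_approx)
    k = int(np.argmax(overlaps))
    return eigvals[k], eigvecs[:, k]

found_vals, found_vecs = [], []
v = rng.standard_normal(n)
for _ in range(n):
    lam, vec = trial(v)
    found_vals.append(lam)
    found_vecs.append(vec)
    # Next trial vector: start from a random vector and remove every component
    # along the eigenvectors already found, so earlier eigenvalues cannot recur.
    v = rng.standard_normal(n)
    for w in found_vecs:
        v -= (w @ v) * w
    norm = np.linalg.norm(v)
    if norm > 1e-12:
        v /= norm

print(sorted(found_vals))   # all n eigenvalues, each obtained exactly once
print(sorted(eigvals))
```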


1997 ◽  
Vol 6 (4) ◽  
pp. 345-362 ◽  
Author(s):  
Barbara Chapman ◽  
Matthew Haines ◽  
Piyush Mehrotra ◽  
Hans Zima ◽  
John Van Rosendale

Data parallel languages, such as High Performance Fortran, can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are multidisciplinary and heterogeneous in nature, and thus do not fit well into the data parallel paradigm. In this paper we present Opus, a language designed to fill this gap. The central concept of Opus is a mechanism called ShareD Abstractions (SDA). An SDA can be used as a computation server, i.e., a locus of computational activity, or as a data repository for sharing data between asynchronous tasks. SDAs can be internally data parallel, providing support for the integration of data and task parallelism as well as nested task parallelism. They can thus be used to express multidisciplinary applications in a natural and efficient way. In this paper we describe the features of the language through a series of examples and give an overview of the runtime support required to implement these concepts in parallel and distributed environments.
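
Opus's syntax and runtime are not shown here. As a thread-based analogue of the SDA idea only, the Python sketch below models a shared object whose methods execute with exclusive access to its state and which serves as a data repository between two asynchronous tasks; the class and function names are illustrative, and the internal data parallelism of real SDAs is not modeled.

```python
# A thread-based analogue of a ShareD Abstraction (SDA): a shared object whose
# methods run with exclusive access to its state, used as a data repository
# through which two asynchronous "solvers" exchange boundary data.

import threading
from concurrent.futures import ThreadPoolExecutor

class BoundarySDA:
    """A shared repository through which two solver tasks exchange boundary values."""
    def __init__(self):
        self._lock = threading.Lock()
        self._versions = []

    def put(self, values):
        with self._lock:                 # SDA methods execute with exclusive access
            self._versions.append(list(values))

    def latest(self):
        with self._lock:
            return list(self._versions[-1]) if self._versions else None

def flow_solver(sda, steps):
    for t in range(steps):
        sda.put([t * 0.1] * 4)           # publish new boundary values each step

def structure_solver(sda, steps):
    results = []
    for _ in range(steps):
        results.append(sda.latest())     # consume whatever boundary data is current
    return results

sda = BoundarySDA()
with ThreadPoolExecutor() as pool:
    f1 = pool.submit(flow_solver, sda, 5)
    f2 = pool.submit(structure_solver, sda, 5)
    f1.result()
    print(f2.result())
```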


2005 ◽  
Vol 13 (4) ◽  
pp. 277-298 ◽  
Author(s):  
Rob Pike ◽  
Sean Dorward ◽  
Robert Griesemer ◽  
Sean Quinlan

Very large data sets often have a flat but regular structure and span multiple disks and machines. Examples include telephone call records, network logs, and web document repositories. These large data sets are not amenable to study using traditional database techniques, if only because they can be too large to fit in a single relational database. On the other hand, many of the analyses done on them can be expressed using simple, easily distributed computations: filtering, aggregation, extraction of statistics, and so on. We present a system for automating such analyses. A filtering phase, in which a query is expressed using a new procedural programming language, emits data to an aggregation phase. Both phases are distributed over hundreds or even thousands of computers. The results are then collated and saved to a file. The design – including the separation into two phases, the form of the programming language, and the properties of the aggregators – exploits the parallelism inherent in having data and computation distributed across many machines.
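The two-phase structure described above, a filtering phase that emits values to an aggregation phase, can be mimicked at a small scale. The sketch below is a single-machine Python analogue, not the procedural query language or the distributed runtime the paper describes; the record fields and the sum-per-key aggregator are invented for illustration.

```python
# A single-machine analogue of the two-phase design: a filtering phase emits
# (key, value) pairs from each record, and an aggregation phase combines them.
# In the system described above, both phases run across many machines; here a
# thread pool merely stands in for that distribution.

from collections import Counter
from multiprocessing.dummy import Pool   # thread pool as a stand-in for distribution

records = [
    {"country": "de", "bytes": 1200},
    {"country": "us", "bytes": 800},
    {"country": "de", "bytes": 300},
    {"country": "jp", "bytes": 950},
]

def filter_phase(record):
    """Emit (key, value) pairs for records that pass the query's predicate."""
    if record["bytes"] >= 500:
        return [(record["country"], record["bytes"])]
    return []

with Pool() as pool:
    emitted = [pair for chunk in pool.map(filter_phase, records) for pair in chunk]

# Aggregation phase: a sum-per-key aggregator, then collate the results.
totals = Counter()
for key, value in emitted:
    totals[key] += value
print(dict(totals))
```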

