Lazy Evaluation
Recently Published Documents

TOTAL DOCUMENTS: 129 (five years: 16)
H-INDEX: 15 (five years: 0)

2021 ◽  
Author(s):  
Yuxuan Jing ◽  
Rami M. Younis

Abstract Automatic differentiation software libraries augment arithmetic operations with their derivatives, thereby relieving the programmer of deriving, implementing, debugging, and maintaining derivative code. With this encapsulation, however, the responsibility for code optimization falls more heavily on the AD system itself (as opposed to the programmer and the compiler). Moreover, given that there are multiple contexts in reservoir simulation software for which derivatives are required (e.g. property package and discrete operator evaluations), the AD infrastructure must also be adaptable. An Operator Overloading AD design is proposed and tested to provide scalability and computational efficiency seamlessly across memory- and compute-bound applications. This is achieved by 1) use of portable and standard programming language constructs (the C++17 and OpenMP 4.5 standards), 2) adopting a vectorized programming interface, 3) lazy evaluation via expression templates, and 4) multiple memory alignment and layout policies. Empirical analysis is conducted on various kernels spanning a range of arithmetic intensities and working-set sizes. Cache-aware roofline analysis results show that the performance and scalability attained are reliably ideal. In terms of floating-point operations executed per second, the performance of the AD system matches optimized hand-written code. Finally, the implementation is benchmarked using the Automatically Differentiable Expression Templates Library (ADETL).
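The combination this abstract describes, a vectorized interface plus lazy evaluation via expression templates, can be sketched in a few dozen lines of C++17. The types below (Dual, Leaf, AddExpr, MulExpr, assign) are illustrative stand-ins, not ADETL's or the proposed system's actual API: arithmetic operators build lightweight expression nodes, and the forward-mode derivative computation is fused into a single loop only when the result is assigned.

```cpp
#include <cstddef>
#include <vector>

// A dual number: value and one directional derivative.
struct Dual { double v, d; };

// CRTP base identifying expression nodes (keeps the operators constrained).
template <class E>
struct Expr {
    const E& self() const { return static_cast<const E&>(*this); }
};

// Leaf: a view over a vector of duals (the vectorized interface).
struct Leaf : Expr<Leaf> {
    const std::vector<Dual>* data;
    explicit Leaf(const std::vector<Dual>& d) : data(&d) {}
    Dual eval(std::size_t i) const { return (*data)[i]; }
};

// Lazy addition node: nothing is computed until eval() is called.
template <class L, class R>
struct AddExpr : Expr<AddExpr<L, R>> {
    L lhs; R rhs;
    AddExpr(L l, R r) : lhs(l), rhs(r) {}
    Dual eval(std::size_t i) const {
        Dual a = lhs.eval(i), b = rhs.eval(i);
        return { a.v + b.v, a.d + b.d };              // sum rule
    }
};

// Lazy multiplication node carrying the product rule.
template <class L, class R>
struct MulExpr : Expr<MulExpr<L, R>> {
    L lhs; R rhs;
    MulExpr(L l, R r) : lhs(l), rhs(r) {}
    Dual eval(std::size_t i) const {
        Dual a = lhs.eval(i), b = rhs.eval(i);
        return { a.v * b.v, a.d * b.v + a.v * b.d };  // product rule
    }
};

template <class L, class R>
AddExpr<L, R> operator+(const Expr<L>& a, const Expr<R>& b) {
    return { a.self(), b.self() };
}
template <class L, class R>
MulExpr<L, R> operator*(const Expr<L>& a, const Expr<R>& b) {
    return { a.self(), b.self() };
}

// Assignment forces evaluation: one fused loop, no temporary vectors
// allocated for intermediate operations.
template <class E>
void assign(std::vector<Dual>& out, const Expr<E>& e) {
    for (std::size_t i = 0; i < out.size(); ++i) out[i] = e.self().eval(i);
}
```

Because each operator returns only a small node type, the compiler sees the whole expression at the assignment site and can vectorize the single loop, which is what makes this pattern competitive with hand-written derivative code.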


2021 ◽  
Vol 5 (ICFP) ◽  
pp. 1-28
Author(s):  
Yao Li ◽  
Li-yao Xia ◽  
Stephanie Weirich

Lazy evaluation is a powerful tool for functional programmers. It enables the concise expression of on-demand computation and a form of compositionality not available under other evaluation strategies. However, the stateful nature of lazy evaluation makes it hard to analyze a program's computational cost, either informally or formally. In this work, we present a novel and simple framework for formally reasoning about lazy computation costs based on a recent model of lazy evaluation: clairvoyant call-by-value. The key feature of our framework is its simplicity, as expressed by our definition of the clairvoyance monad. This monad is both simple to define (around 20 lines of Coq) and simple to reason about. We show that this monad can be effectively used to mechanically reason about the computational cost of lazy functional programs written in Coq.
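As a loose, informal analogue of the cost-instrumented computation this abstract reasons about (not the paper's Coq development, and omitting the nondeterministic skip-or-force choice that makes the clairvoyance monad clairvoyant), one can pair each value with a tick count and let bind sum the costs:

```cpp
// A value together with the number of "ticks" its computation consumed.
template <class T>
struct Costed { T value; int cost; };

// pure/return: injecting a value costs nothing.
template <class T>
Costed<T> ret(T v) { return { v, 0 }; }

// bind: run the continuation and accumulate both computations' costs.
template <class T, class F>
auto bind(Costed<T> m, F f) {
    auto r = f(m.value);
    r.cost += m.cost;
    return r;
}

// tick: charge one unit of work to a computation.
template <class T>
Costed<T> tick(Costed<T> m) { return { m.value, m.cost + 1 }; }
```

The point of the framework in the paper is that a monad of roughly this shape, once extended with the clairvoyant choice, is small enough (around 20 lines of Coq) to support mechanical cost proofs about lazy programs.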


Author(s):  
Souvik Bhattacherjee ◽  
Gang Liao ◽  
Michael Hicks ◽  
Daniel J. Abadi

2021 ◽  
Vol 11 (1) ◽  
pp. 18-37
Author(s):  
Mehmet Bicer ◽  
Daniel Indictor ◽  
Ryan Yang ◽  
Xiaowen Zhang

Association rule mining is a common technique for discovering interesting frequent patterns in data acquired in various application domains. The search space explodes combinatorially as the size of the data increases. Furthermore, the introduction of new data can invalidate old frequent patterns and introduce new ones. Hence, while finding the association rules efficiently is an important problem, maintaining and updating them is also crucial. Several algorithms have been introduced to find association rules efficiently; one of them is Apriori. There are also algorithms written to update or maintain existing association rules. Update with early pruning (UWEP) is one such algorithm. In this paper, the authors propose that under certain conditions it is preferable to use an incremental algorithm rather than the classic Apriori algorithm. They also propose new implementation techniques and improvements to the original UWEP paper in an algorithm they call UWEP2. These include the use of memoization and lazy evaluation to reduce scans of the dataset.
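A hypothetical sketch of the two techniques named at the end of the abstract: support counts are computed lazily (only when an itemset is actually queried) and memoized (updated in place when new transactions arrive, never recounted from scratch). The data structures are illustrative, not UWEP2's actual implementation.

```cpp
#include <algorithm>
#include <map>
#include <set>
#include <vector>

using Itemset = std::vector<int>;      // sorted item ids
using Transaction = std::set<int>;

struct SupportCache {
    std::vector<Transaction> db;
    std::map<Itemset, int> memo;       // memoized support counts

    // Lazy: the dataset is scanned for an itemset only on first request.
    int support(const Itemset& items) {
        auto it = memo.find(items);
        if (it != memo.end()) return it->second;   // memo hit: no scan
        int count = 0;
        for (const auto& t : db)
            if (std::all_of(items.begin(), items.end(),
                            [&](int x) { return t.count(x) > 0; }))
                ++count;
        return memo[items] = count;
    }

    // Incremental update: cached counts are adjusted for the new
    // transaction rather than recomputed over the whole dataset.
    void add_transaction(const Transaction& t) {
        for (auto& [items, count] : memo)
            if (std::all_of(items.begin(), items.end(),
                            [&](int x) { return t.count(x) > 0; }))
                ++count;
        db.push_back(t);
    }
};
```

The saving is exactly the one the abstract claims: repeated queries and incremental arrivals touch the cache, not the full dataset.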


2021 ◽  
Vol 251 ◽  
pp. 03039
Author(s):  
A. Augusto Alves ◽  
Anton Poctarev ◽  
Ralf Ulrich

This document describes advances in the generation of high-quality random numbers for CORSIKA 8, which is being developed in modern C++17 and is designed to run on modern multi-thread processors and accelerators. CORSIKA 8 is a Monte Carlo simulation framework for modeling ultra-high-energy secondary particle cascades in astroparticle physics. The aspects associated with generating high-quality random numbers on massively parallel platforms, such as multi-core CPUs and GPUs, are reviewed, and the deployment of counter-based engines through an innovative, multi-thread-friendly API is described. The API is based on iterators, a well-known access mechanism in C++, and also supports lazy evaluation. Moreover, an upgraded version of the Squares algorithm with highly efficient internal 128- as well as 256-bit counters is presented in this context. Performance measurements are provided, along with comparisons to conventional designs. Finally, the integration into CORSIKA 8 is discussed.
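A sketch of what a counter-based engine behind an iterator interface can look like. The squares32 round function follows Widynski's published four-round Squares construction (with an ordinary 64-bit counter here, not the upgraded 128/256-bit counters the paper presents), and the iterator wrapper is illustrative rather than CORSIKA 8's actual API. Note that the real generator draws its keys from a curated set for statistical quality; any odd key merely works functionally.

```cpp
#include <cstdint>

// Four-round Squares counter-based RNG (Widynski). The output depends
// only on (counter, key), so any position in the stream can be computed
// directly: ideal for massively parallel platforms.
inline uint32_t squares32(uint64_t ctr, uint64_t key) {
    uint64_t x = ctr * key, y = x, z = y + key;
    x = x * x + y; x = (x >> 32) | (x << 32);         // round 1
    x = x * x + z; x = (x >> 32) | (x << 32);         // round 2
    x = x * x + y; x = (x >> 32) | (x << 32);         // round 3
    return static_cast<uint32_t>((x * x + z) >> 32);  // round 4
}

// Lazy iterator: no random number is generated until dereference.
// Advancing only bumps the counter, which is essentially free.
struct SquaresIterator {
    uint64_t ctr, key;
    uint32_t operator*() const { return squares32(ctr, key); }
    SquaresIterator& operator++() { ++ctr; return *this; }
    bool operator!=(const SquaresIterator& o) const { return ctr != o.ctr; }
};
```

Because state is just a counter, each thread or GPU lane can own a disjoint counter range with no locking and perfect reproducibility.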


2021 ◽  
Vol 28 ◽  
pp. 703-707
Author(s):  
Hang Lv ◽  
Daniel Povey ◽  
Mahsa Yarmohammadi ◽  
Ke Li ◽  
Yiming Wang ◽  
...  
Keyword(s):  

2019 ◽  
Author(s):  
Debajyoti Ray ◽  
Daniel Golovin ◽  
Andreas Krause ◽  
Colin Camerer

Economic surveys and experiments usually present fixed questions to respondents. Rapid computation now allows adaptively optimized questions, based on previous responses, to maximize expected information. We describe a novel method of this type introduced in computer science and apply it experimentally to six theories of risky choice. The EC2 method creates equivalence classes, each consisting of a true theory and its noisy-response perturbations, and chooses questions with the goal of distinguishing between equivalence classes by cutting the edges connecting them. The edge-cutting information measure is adaptively submodular, which enables a provable performance bound and “lazy” evaluation, which saves computation. The experimental data show that most subjects, making only 30 choices, can be reliably classified as choosing according to EV or one of two variants of prospect theory. We also show that the method is difficult for subjects to manipulate by misreporting preferences, and we find no evidence of manipulation.
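The “lazy” evaluation that adaptive submodularity licenses is the classic accelerated-greedy trick: marginal information gains can only shrink as more questions are selected, so stale gains cached in a max-heap remain valid upper bounds, and only the top candidate ever needs re-evaluation. A generic sketch, with an illustrative gain-function signature rather than the EC2 implementation:

```cpp
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Greedily pick k questions. `gain(q, n_selected)` must be nonincreasing
// in n_selected (the submodularity assumption).
std::vector<int> lazy_greedy(int n_questions, int k,
                             const std::function<double(int, int)>& gain) {
    using Entry = std::pair<double, int>;   // (cached gain, question id)
    std::priority_queue<Entry> heap;
    for (int q = 0; q < n_questions; ++q)
        heap.push({gain(q, 0), q});         // gains with nothing selected

    std::vector<int> chosen;
    while ((int)chosen.size() < k && !heap.empty()) {
        int q = heap.top().second;
        heap.pop();
        double fresh = gain(q, (int)chosen.size());  // re-evaluate top only
        if (heap.empty() || fresh >= heap.top().first)
            chosen.push_back(q);            // still beats every upper bound
        else
            heap.push({fresh, q});          // demote with the fresh gain
    }
    return chosen;
}
```

Most candidates are never re-evaluated at all, which is where the computational saving the abstract mentions comes from.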

