bounded memory
Recently Published Documents

TOTAL DOCUMENTS: 87 (FIVE YEARS: 13)
H-INDEX: 13 (FIVE YEARS: 2)

Author(s): Sheng Li

In this paper, we consider the robust investment and reinsurance problem with bounded memory and risk co-shocks under a jump-diffusion risk model. The insurer is assumed to be ambiguity-averse and to make optimal decisions under the mean-variance criterion. The insurance market is described by two-dimensional dependent claims, while the risky asset follows a jump-diffusion model. By incorporating past performance, we obtain a wealth process governed by a stochastic delay differential equation (SDDE). Applying stochastic control theory within a game-theoretic framework, together with stochastic control theory with delay, we derive the robust equilibrium investment-reinsurance strategy and the corresponding robust equilibrium value function. Furthermore, numerical examples illustrate the effects of market parameters on the optimal investment and reinsurance strategy.
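For intuition about where the bounded memory enters, models of this kind typically let the wealth dynamics depend on an exponentially weighted average of wealth over a finite look-back window. The display below is an illustrative sketch of that structure under assumed notation (X for wealth, Y for averaged past performance, u for the control, h for the memory horizon); it is not taken from the paper:

```latex
% Illustrative SDDE structure; notation assumed, coefficients schematic.
\begin{aligned}
Y(t) &= \int_{-h}^{0} e^{\lambda s}\, X(t+s)\,\mathrm{d}s
  && \text{(only the window } [t-h,\,t] \text{ is remembered)} \\
\mathrm{d}X(t) &= \mu\bigl(X(t), Y(t), u(t)\bigr)\,\mathrm{d}t
  + \sigma\bigl(X(t), u(t)\bigr)\,\mathrm{d}W(t)
  - \mathrm{d}\!\sum_{i=1}^{N(t)} Z_i
  && \text{(diffusion and claim jumps)}
\end{aligned}
```

Because Y(t) depends only on the finite window [t-h, t], the optimization is a delay control problem rather than a fully path-dependent one, which is what stochastic control theory with delay handles.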


2021 ◽ Vol 5 (OOPSLA) ◽ pp. 1-28
Author(s): Eric Atkinson ◽ Guillaume Baudart ◽ Louis Mandel ◽ Charles Yuan ◽ Michael Carbin

Probabilistic programming languages aid developers performing Bayesian inference. These languages provide programming constructs and tools for probabilistic modeling and automated inference. Prior work introduced a probabilistic programming language, ProbZelus, to extend probabilistic programming functionality to unbounded streams of data, and demonstrated that the delayed sampling inference algorithm could be extended to work in a streaming context. However, while delayed sampling can be effectively deployed on some programs, depending on the probabilistic model under consideration it is not guaranteed to use a bounded amount of memory over the course of a program's execution. In this paper, we present conditions on a probabilistic program's execution under which delayed sampling will execute in bounded memory. The two conditions are dataflow properties of the core operations of delayed sampling: the m-consumed property and the unseparated paths property. A program executes in bounded memory under delayed sampling if, and only if, it satisfies the m-consumed and unseparated paths properties. We propose a static analysis that abstracts over these properties to soundly ensure that any program that passes the analysis satisfies these properties, and thus executes in bounded memory under delayed sampling.
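As intuition for why consuming every latent variable keeps inference memory bounded, here is a minimal Python sketch (not ProbZelus code; the model and names are our own assumptions) of a streaming Gaussian random walk in which each step's latent state is immediately observed and marginalized out, so the inference state stays O(1) however long the stream runs:

```python
# Minimal sketch: streaming conjugate-Gaussian inference in O(1) memory.
# Not ProbZelus; illustrates why "consuming" each latent bounds memory.

def stream_filter(observations, q=1.0, r=0.5):
    """Random walk x_t ~ N(x_{t-1}, q), observation y_t ~ N(x_t, r).

    Because every latent x_t is observed (consumed) before the next step,
    it can be marginalized immediately; only (mean, var) is retained,
    so the inference state never grows with the length of the stream.
    """
    mean, var = 0.0, 1.0          # prior over x_0
    for y in observations:        # unbounded stream
        var += q                  # predict: x_t given y_{1:t-1}
        k = var / (var + r)       # gain from the conjugate update
        mean += k * (y - mean)    # condition on y_t, then discard it
        var *= (1.0 - k)
        yield mean, var           # constant-size posterior summary

# Usage: memory stays constant however long the stream is.
for m, v in stream_filter(iter([0.9, 1.2, 0.7, 1.1])):
    print(f"posterior mean={m:.3f}, var={v:.3f}")
```

If instead some latent were introduced but never observed (violating the m-consumed property), the filter would have to retain a growing joint distribution over all unconsumed variables.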


2021 ◽ Vol 346 ◽ pp. 52-66
Author(s): Dhananjay Raju ◽ Rüdiger Ehlers ◽ Ufuk Topcu

Author(s): Yanfei Bai ◽ Zhongbao Zhou ◽ Helu Xiao ◽ Rui Gao ◽ Feimin Zhong

Author(s): Olivier Beaumont ◽ Julien Herrmann ◽ Guillaume Pallez (Aupy) ◽ Alena Shilova

Deep learning training memory needs can prevent the user from considering large models and large batch sizes. In this work, we propose to use techniques from memory-aware scheduling and automatic differentiation (AD) to execute a backpropagation graph with a bounded memory requirement, at the cost of extra recomputations. The case of a single homogeneous chain, i.e. a network whose stages are all identical and form a chain, is well understood, and optimal solutions have been proposed in the AD literature. The networks encountered in practice in the context of deep learning are much more diverse, both in shape and in heterogeneity. In this work, we define the class of backpropagation graphs and extend the class of graphs for which a solution minimizing the total number of recomputations can be computed in polynomial time. In particular, we consider join graphs, which correspond to models such as siamese or cross-modal networks. This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.
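As a concrete illustration of trading recomputations for memory on the simple homogeneous-chain case the abstract calls well understood, here is a Python sketch of sqrt(n) checkpointing. It is our own illustrative sketch, not the paper's algorithm, and the function names and interfaces are assumptions:

```python
# Sketch: checkpointed backpropagation on a homogeneous chain of n stages.
# Keeps ~sqrt(n) activations at the cost of one extra forward pass per
# non-checkpoint activation.

import math

def checkpointed_backprop(x0, layers, vjps, grad_out):
    """layers[i]: forward function of stage i (activation -> activation).
    vjps[i](x, g): vector-Jacobian product of stage i, given its input
    activation x and the gradient g w.r.t. its output."""
    n = len(layers)
    k = max(1, math.isqrt(n))                # checkpoint spacing ~ sqrt(n)

    # Forward: keep only every k-th activation (the checkpoints).
    checkpoints = {0: x0}
    x = x0
    for i, layer in enumerate(layers):
        x = layer(x)
        if (i + 1) % k == 0 and (i + 1) < n:
            checkpoints[i + 1] = x

    # Backward: recompute each segment once from its checkpoint,
    # then backpropagate through the recomputed activations.
    g = grad_out
    for seg_start in range(((n - 1) // k) * k, -1, -k):
        seg_end = min(seg_start + k, n)
        xs = [checkpoints[seg_start]]
        for i in range(seg_start, seg_end - 1):
            xs.append(layers[i](xs[-1]))     # the extra recomputations
        for i in range(seg_end - 1, seg_start - 1, -1):
            g = vjps[i](xs[i - seg_start], g)
    return g                                  # gradient w.r.t. x0

# Usage on a toy chain of doubling layers: d/dx (2^8 * x) = 256.
layers = [lambda x: 2 * x] * 8
vjps = [lambda x, g: 2 * g] * 8
print(checkpointed_backprop(1.0, layers, vjps, 1.0))  # -> 256
```

The paper's contribution concerns the harder heterogeneous cases, such as join graphs, where the optimal placement of checkpoints is no longer uniform.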


2020
Author(s): Shubham Toshniwal ◽ Sam Wiseman ◽ Allyson Ettinger ◽ Karen Livescu ◽ Kevin Gimpel

2020 ◽ Vol 245 ◽ pp. 01041
Author(s): Alexander Adler ◽ Udo Kebschull

Monitoring is an indispensable tool for the operation of any large installation of grid or cluster computing, be it in high energy physics or elsewhere. Usually, monitoring is configured to collect a small amount of data, just enough to enable detection of abnormal conditions. Once detected, an abnormal condition is handled by gathering all information from the affected components; this data is then processed by querying it much like a database. This contribution shows how the metaphor of a debugger (for software applications) can be transferred to a compute cluster. The debugging concepts of variables, assertions, and breakpoints can be applied to monitoring by defining variables as the quantities recorded by monitoring and breakpoints as invariants formulated over these variables. It is found that embedding fragments of a data extraction and reporting tool such as the UNIX tool awk facilitates concise notations for commonly used variables, since tools like awk are designed to process large event streams (in textual representations) with bounded memory. A functional notation, similar to both the pipe notation of the UNIX shell and the point-free style of functional programming, simplifies the combination of variables that commonly occurs when formulating breakpoints.
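As a rough illustration of the variables-and-breakpoints idea, monitoring "variables" can be implemented as bounded-memory folds over the event stream, with a "breakpoint" firing when an invariant over them is violated. The sketch below uses our own names and structure, not the authors' tool:

```python
# Sketch: monitoring "variables" as bounded-memory folds over an event
# stream, and "breakpoints" as invariants over those variables.

class Running:
    """Constant-memory running statistics of a numeric stream."""
    def __init__(self):
        self.n, self.total, self.peak = 0, 0.0, float("-inf")

    def feed(self, x):
        self.n += 1
        self.total += x
        self.peak = max(self.peak, x)

    @property
    def mean(self):
        return self.total / self.n if self.n else 0.0

def monitor(events, breakpoint_pred):
    """Fold events into the variables; report when the invariant breaks."""
    load = Running()
    for e in events:                  # arbitrarily long stream, O(1) state
        load.feed(e["load"])
        if breakpoint_pred(load):     # breakpoint: invariant violated
            yield f"breakpoint hit at event {load.n}: mean={load.mean:.2f}"

# Usage: break when the mean load drifts above a threshold.
events = [{"load": x} for x in (0.3, 0.5, 2.9, 3.8, 4.1)]
for report in monitor(events, lambda v: v.mean > 1.0):
    print(report)
```

Like awk over a text stream, the fold keeps only a fixed-size summary of what has passed, which is what makes this viable for long-running clusters.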


2019
Author(s): Heeseung Lee ◽ Hyang-Jung Lee ◽ Kyoung Whan Choe ◽ Sang-Hun Lee

Classification, one of the key ingredients of human cognition, entails establishing a criterion that splits a given feature space into mutually exclusive subspaces. In classification tasks performed in daily life, however, a criterion is often not provided explicitly but instead must be guessed from past samples of a feature space. For example, we judge today's temperature to be "cold" or "warm" by implicitly comparing it against a "typical" seasonal temperature. In such situations, establishing an optimal criterion is challenging for cognitive agents with bounded memory because it requires retrieving an entire set of past episodes with precision. As a computational account of how humans carry out this challenging operation, we developed a normative Bayesian model of classification (NBMC), in which Bayesian agents, whose working-memory precision decays as episodes elapse, continuously update their criterion as they perform a binary perceptual classification task on sequentially presented stimuli. We drew a set of specific implications regarding key properties of classification from the NBMC, and demonstrated the correspondence between the NBMC and human observers in classification behavior for each of those implications. Furthermore, in the functional magnetic resonance imaging responses acquired concurrently with the behavioral data, we identified an ensemble of brain activities that coherently represent the latent variables, including the inferred criterion, of the NBMC. Given these results, we believe the NBMC is worth considering as a useful computational model to guide behavioral and neural studies of perceptual classification, especially for agents with a bounded memory representation of past sensory events.

Significance Statement: Although classification (assigning events into mutually exclusive classes) requires a criterion, people often have to perform various classification tasks without explicit criteria. In such situations, forming a criterion based on past experience is quite challenging because people's memory of past events deteriorates quickly over time. Here, we provide a computational model of how a memory-bounded yet normative agent infers the criterion from past episodes to maximize performance in a binary perceptual classification task. This model successfully captured several key properties of human classification behavior, and the neural signals representing its latent variables were identified in the brains of classifying humans. By offering a rational account of memory-bounded agents' classification, our model can guide future behavioral and neural studies of perceptual classification.
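To make the memory-bounded criterion update concrete, here is a small Python sketch, our own illustrative simplification rather than the published NBMC: the criterion is a precision-weighted average of remembered stimuli, with precision decaying exponentially as episodes age (all parameter names and values are assumptions):

```python
# Sketch: a classification criterion inferred from imperfectly
# remembered past episodes; memory precision decays with age.

def criterion_from_past(past, prior_mean=10.0, prior_precision=1.0,
                        obs_precision=4.0, decay=0.7):
    """Precision-weighted mean of past stimuli with exponentially
    decaying precision; the most recent episode has age 0."""
    num, den = prior_precision * prior_mean, prior_precision
    for age, s in enumerate(reversed(past)):
        w = obs_precision * (decay ** age)   # memory fades with age
        num += w * s
        den += w
    return num / den

# Usage: each stimulus is classified against a criterion inferred
# only from the (imperfectly remembered) episodes before it.
temps = [12.0, 15.0, 9.0, 14.0, 20.0]
for t, temp in enumerate(temps):
    c = criterion_from_past(temps[:t])
    print(f"stimulus={temp:.1f}, criterion={c:.2f} ->",
          "warm" if temp > c else "cold")
```

The decay factor is what makes the agent memory-bounded: recent episodes dominate the criterion, so it drifts with the stimulus history rather than requiring precise recall of every past sample.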


2019 ◽ Vol 115 ◽ pp. 131-145
Author(s): Gilad Bavly ◽ Ron Peretz
