Adapting to the Behavior of Environments with Bounded Memory

2021 · Vol 346 · pp. 52-66
Author(s): Dhananjay Raju, Rüdiger Ehlers, Ufuk Topcu
2002
Author(s): Tom Oates, Brent Heeringa

2009 · Vol 13 (5) · pp. 625-655
Author(s): Christophre Georges, John C. Wallace

In this paper, we explore the consequences of learning to forecast in a very simple environment. Agents have bounded memory and incorrectly believe that there is nonlinear structure underlying the aggregate time-series dynamics. Under social learning with finite memory, agents may be unable to learn the true structure of the economy and may instead chase spurious trends, destabilizing the actual aggregate dynamics. We explore the degree to which agents' forecasts are drawn toward a minimal state variable learning equilibrium as well as a weaker long-run consistency condition.
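A minimal simulation sketch (in Python, not the authors' code) of the mechanism the abstract describes: agents with a bounded memory window fit a trend to recent aggregate outcomes and extrapolate, while the realized aggregate depends on that shared forecast. The window length, feedback coefficient, and noise scale are all assumed values.

```python
# Minimal sketch (not the authors' code): bounded-memory social learning
# in a self-referential economy. Functional forms are assumptions.
import numpy as np

rng = np.random.default_rng(0)

MEMORY = 10        # bounded memory: agents see only the last MEMORY observations
N_STEPS = 500
history = [0.0, 0.0]

def forecast(window):
    """Fit a linear trend to the remembered window and extrapolate one step.
    With short windows this chases noise, i.e. 'spurious trends'."""
    t = np.arange(len(window))
    slope, intercept = np.polyfit(t, window, 1)
    return intercept + slope * len(window)

for _ in range(N_STEPS):
    window = history[-MEMORY:]
    e_x = forecast(window)                    # agents' shared forecast
    # Actual law of motion: x_t depends on the forecast (self-referential),
    # plus noise. The feedback coefficient 0.9 is an assumed value.
    x = 0.9 * e_x + rng.normal(scale=0.1)
    history.append(x)

print("std of last 100 realised values:", np.std(history[-100:]))
```

Shrinking MEMORY makes the extrapolated slope noisier, which is one way spurious trends can feed back into, and destabilize, the realized dynamics.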


Author(s): Jeremiah Blocki, Nicolas Christin, Anupam Datta, Arunesh Sinha

2014 · Vol 238 · pp. 233-261
Author(s): Max Kanovich, Tajana Ban Kirigin, Vivek Nigam, Andre Scedrov

2019 · Vol 115 · pp. 131-145
Author(s): Gilad Bavly, Ron Peretz

2012 · Vol 78 (5) · pp. 1623-1636
Author(s): Lorenzo Carlucci, Sanjay Jain, Frank Stephan

2014 · Vol 51 (3) · pp. 837-857
Author(s): K. Borovkov, G. Decrouez, M. Gilson

The paper deals with nonlinear Poisson neuron network models with bounded memory dynamics, which can include both Hebbian learning mechanisms and refractory periods. The state of the network is described by the times elapsed since its neurons fired within the post-synaptic transfer kernel memory span, and the current strengths of synaptic connections, the state spaces of our models being hierarchies of finite-dimensional components. We prove the ergodicity of the stochastic processes describing the behaviour of the networks, establish the existence of continuously differentiable stationary distribution densities (with respect to the Lebesgue measures of corresponding dimensionality) on the components of the state space, and find upper bounds for them. For the density components, we derive a system of differential equations that can be solved explicitly only in the simplest cases. Approaches to approximate computation of the stationary density are discussed. One approach is to reduce the dimensionality of the problem by modifying the network so that each neuron cannot fire if the number of spikes it emitted within the post-synaptic transfer kernel memory span reaches a given threshold. We show that the stationary distribution of this ‘truncated’ network converges to that of the unrestricted network as the threshold increases, and that the convergence is at a superexponential rate. A complementary approach uses discrete Markov chain approximations to the network process.
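The following discrete-time sketch (an assumption-based illustration, not the authors' model) shows the two ingredients the abstract names: firing intensities that depend nonlinearly on spikes within a bounded post-synaptic memory span, and a Hebbian-style update of synaptic strengths. The kernel shape, rate function, and learning rate are assumptions; the time discretization corresponds loosely to the Markov chain approximation mentioned at the end of the abstract.

```python
# Minimal discrete-time sketch (assumed forms, not the authors' model):
# a nonlinear Poisson network with a bounded memory span.
import numpy as np

rng = np.random.default_rng(1)

N = 5             # neurons
DT = 0.001        # time step of the discrete Markov chain approximation
SPAN = 0.05       # post-synaptic transfer kernel memory span (seconds)
STEPS = int(SPAN / DT)

W = rng.uniform(0.0, 1.0, size=(N, N))     # synaptic strengths
np.fill_diagonal(W, 0.0)
spikes = np.zeros((STEPS, N), dtype=bool)  # only the last SPAN of history kept

def kernel(ages):
    """Assumed post-synaptic kernel: exponential decay within the span."""
    return np.exp(-ages / 0.02)

for t in range(10_000):
    ages = (np.arange(STEPS)[::-1] + 1) * DT    # age of each history slot
    drive = kernel(ages) @ spikes @ W           # summed filtered input per neuron
    rate = 5.0 + 20.0 * np.tanh(drive)          # nonlinear firing intensity
    fired = rng.random(N) < rate * DT           # Poisson thinning approximation
    # Hebbian-style update (assumed form): strengthen co-active connections.
    W += 0.001 * np.outer(fired, fired)
    np.fill_diagonal(W, 0.0)
    spikes = np.vstack([spikes[1:], fired])     # bounded memory window

print("mean firing rate (Hz):", spikes.mean() / DT)
```

Capping how many spikes each neuron may emit within the span, as in the paper's 'truncated' network, would bound the dimensionality of the state the simulation has to track.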


2010 · Vol 36 (1) · pp. 1-30
Author(s): William Schuler, Samir AbdelRahman, Tim Miller, Lane Schwartz

Human syntactic processing shows many signs of taking place within a general-purpose short-term memory. But this kind of memory is known to have a severely constrained storage capacity—possibly limited to as few as three or four distinct elements. This article describes a model of syntactic processing that operates successfully within these severe constraints, by recognizing constituents in a right-corner transformed representation (a variant of left-corner parsing) and mapping this representation to random variables in a Hierarchic Hidden Markov Model, a factored time-series model which probabilistically models the contents of a bounded memory store over time. Evaluations of the coverage of this model on a large syntactically annotated corpus of English sentences, and the accuracy of a bounded-memory parsing strategy based on this model, suggest that this model may be cognitively plausible.
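As a toy illustration of the memory bound (a hypothetical simplification, not the authors' HHMM), the sketch below models the parser's working memory as a store holding at most four incomplete constituents; the right-corner transform matters because it keeps ordinary sentences within this bound, while deeply centre-embedded ones overflow it. The category labels are hypothetical.

```python
# Minimal sketch (assumed simplification, not the authors' HHMM): a parser
# whose working memory holds at most DEPTH incomplete constituents, as in
# right-corner (transformed left-corner) parsing.
DEPTH = 4  # the 'three or four distinct elements' bound from the abstract

class BoundedStore:
    """Stack of incomplete constituents with a hard capacity bound."""
    def __init__(self, depth=DEPTH):
        self.depth = depth
        self.items = []

    def push(self, constituent):
        if len(self.items) >= self.depth:
            raise MemoryError("bounded memory store exceeded")
        self.items.append(constituent)

    def pop(self):
        return self.items.pop()

# Usage: a right-corner transformed grammar keeps most derivations shallow,
# so typical sentences fit within the bound; pushing a fifth incomplete
# constituent (as centre embedding would require) raises MemoryError.
store = BoundedStore()
for incomplete in ["S/VP", "VP/NP", "NP/NN"]:  # hypothetical categories
    store.push(incomplete)
print(store.items)
```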

