Temporal Difference Methods
Recently Published Documents

Total documents: 20 (last five years: 2)
H-index: 9 (last five years: 0)

2020 · Vol 34 (04) · pp. 3701-3708
Author(s): Gal Dalal, Balazs Szorenyi, Gugan Thoppe

Policy evaluation in reinforcement learning is often conducted using two-timescale stochastic approximation, which results in various gradient temporal difference methods such as GTD(0), GTD2, and TDC. Here, we provide convergence rate bounds for this suite of algorithms. Algorithms such as these have two iterates, θ_n and w_n, which are updated using two distinct stepsize sequences, α_n and β_n, respectively. Assuming α_n = n^(−α) and β_n = n^(−β) with 1 > α > β > 0, we show that, with high probability, the two iterates converge to their respective solutions θ* and w* at rates ‖θ_n − θ*‖ = Õ(n^(−α/2)) and ‖w_n − w*‖ = Õ(n^(−β/2)), where Õ hides logarithmic terms. Via comparable lower bounds, we show that these bounds are, in fact, tight. To the best of our knowledge, ours is the first finite-time analysis that achieves these rates. While it was known that the two timescale components decouple asymptotically, our results depict this phenomenon more explicitly by showing that it in fact happens from some finite time onwards. Lastly, compared to existing works, our result applies to a broader family of stepsizes, including non-square-summable ones.
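
For intuition, here is a minimal sketch of a TDC-style two-timescale update with polynomial stepsizes of the form discussed above. The linear feature representation, discount factor, and the exponents alpha_exp and beta_exp are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def tdc_policy_evaluation(transitions, d, gamma=0.99, alpha_exp=0.6, beta_exp=0.4):
    """transitions: iterable of (phi, reward, phi_next) with features phi in R^d.

    Illustrative sketch only: feature map, gamma, and exponents are assumptions.
    """
    theta = np.zeros(d)  # main iterate, stepsize alpha_n = n^(-alpha_exp)
    w = np.zeros(d)      # auxiliary iterate, stepsize beta_n = n^(-beta_exp)
    for n, (phi, r, phi_next) in enumerate(transitions, start=1):
        alpha_n = n ** (-alpha_exp)
        beta_n = n ** (-beta_exp)  # decays more slowly, so w is the faster timescale
        delta = r + gamma * phi_next @ theta - phi @ theta  # TD error
        # TDC updates: theta uses the gradient-correction term, w tracks phi^T w ~ delta
        theta = theta + alpha_n * (delta * phi - gamma * phi_next * (phi @ w))
        w = w + beta_n * (delta - phi @ w) * phi
    return theta, w
```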


2020 · Vol 34 (04) · pp. 3741-3748
Author(s): Kristopher De Asis, Alan Chan, Silviu Pitis, Richard Sutton, Daniel Graves

We explore fixed-horizon temporal difference (TD) methods, reinforcement learning algorithms for a new kind of value function that predicts the sum of rewards over a fixed number of future time steps. To learn the value function for horizon h, these algorithms bootstrap from the value function for horizon h−1, or some shorter horizon. Because no value function bootstraps from itself, fixed-horizon methods are immune to the stability problems that plague other off-policy TD methods using function approximation (also known as “the deadly triad”). Although fixed-horizon methods require the storage of additional value functions, this gives the agent additional predictive power, while the added complexity can be substantially reduced via parallel updates, shared weights, and n-step bootstrapping. We show how to use fixed-horizon value functions to solve reinforcement learning problems competitively with methods such as Q-learning that learn conventional value functions. We also prove convergence of fixed-horizon temporal difference methods with linear and general function approximation. Taken together, our results establish fixed-horizon TD methods as a viable new way of avoiding the stability problems of the deadly triad.
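
As a concrete illustration, the following sketch shows tabular fixed-horizon TD(0), where the horizon-h estimate bootstraps only from the horizon-(h−1) estimate. The state indexing, stepsize, and discount factor are assumptions for the example, not the paper's code.

```python
import numpy as np

def fixed_horizon_td(episodes, num_states, H, alpha=0.1, gamma=1.0):
    """episodes: iterable of trajectories [(s, r, s_next), ...]; returns V with V[h][s].

    Illustrative sketch only: num_states, alpha, and gamma are assumptions.
    """
    V = np.zeros((H + 1, num_states))  # V[0] is identically zero by definition
    for trajectory in episodes:
        for s, r, s_next in trajectory:
            for h in range(1, H + 1):
                target = r + gamma * V[h - 1, s_next]  # bootstrap one horizon down
                V[h, s] += alpha * (target - V[h, s])  # no estimate bootstraps from itself
    return V
```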


2013 · Vol 43 (11) · pp. 2327-2338
Author(s): D. W. Joyce, B. B. Averbeck, C. D. Frith, S. S. Shergill

Background: People with psychoses often report fixed, delusional beliefs that are sustained even in the presence of unequivocal contrary evidence. Such delusional beliefs result from inappropriately integrating new and old evidence when forming a cognitive model. We propose and test a cognitive model of belief formation using experimental data from an interactive ‘Rock Paper Scissors’ (RPS) game.

Method: Participants (33 controls and 27 people with schizophrenia) played a competitive, time-pressured interactive two-player game (RPS). Participants' behavior was modeled by a generative computational model using leaky-integrator and temporal difference methods. This model describes how new and old evidence is integrated to form a playing strategy to beat the opponent and to provide a mechanism for reporting confidence in one's playing strategy to win against the opponent.

Results: People with schizophrenia fail to appropriately model their opponent's play despite consistent (rather than random) patterns that can be exploited in the simulated opponent's play. This is manifest as a failure to weigh existing evidence appropriately against new evidence. Furthermore, participants with schizophrenia show a ‘jumping to conclusions’ (JTC) bias, reporting successful discovery of a winning strategy with insufficient evidence.

Conclusions: The model presented suggests two tentative mechanisms in delusional belief formation: (i) one for modeling patterns in others' behavior, where people with schizophrenia fail to use old evidence appropriately, and (ii) a metacognitive mechanism for ‘confidence’ in such beliefs, where people with schizophrenia overweight recent reward history in deciding on the value of beliefs about the opponent.
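
For intuition only, the sketch below combines the two ingredients named in the abstract: a leaky integrator over the opponent's observed moves and a TD-style update of confidence driven by recent rewards. The leak rate, learning rate, and move encoding are assumptions, not the authors' fitted model.

```python
import numpy as np

MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # value beats key

def play_rps(opponent_moves, leak=0.9, lr=0.2):
    """Illustrative sketch only: leak, lr, and the strategy are assumptions."""
    counts = np.ones(3)   # leaky counts of the opponent's moves
    confidence = 0.0      # value-like estimate of how well the current strategy pays
    for opp_move in opponent_moves:
        # play the move that beats the opponent's most frequent (leakily counted) move
        predicted = MOVES[int(np.argmax(counts))]
        my_move = BEATS[predicted]
        reward = 1.0 if BEATS[opp_move] == my_move else (0.0 if my_move == opp_move else -1.0)
        # leaky integration: old evidence decays, new evidence is added
        counts *= leak
        counts[MOVES.index(opp_move)] += 1.0
        # TD-style confidence update driven by the prediction error
        confidence += lr * (reward - confidence)
    return confidence
```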

