Deep Reinforcement Learning as Foundation for Artificial General Intelligence

Author(s):  
Itamar Arel


2020 ◽  
Vol 11 (1) ◽  
pp. 70-85
Author(s):  
Samuel Allen Alexander

Abstract: After generalizing the Archimedean property of the real numbers so that it applies to non-numeric structures, we demonstrate that the real numbers cannot accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, traditional reinforcement learning will probably not lead to AGI. We indicate two ways traditional reinforcement learning could be altered to remove this roadblock.
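A minimal sketch of the kind of non-Archimedean reward the abstract alludes to (the representation below is our illustration, not the paper's construction): lexicographically ordered reward tuples, where the first component dominates the second no matter how large the second grows.

```python
# Illustration (our own, not from the paper): lexicographic rewards are
# tuples compared position by position. The first component dominates the
# second absolutely -- no finite amount of secondary reward compensates
# for a deficit in the primary one. This domination is the hallmark of a
# non-Archimedean ordering, which a single real-valued reward signal
# cannot in general reproduce.

def lex_less(a, b):
    """Compare two lexicographic rewards (tuples of reals)."""
    return a < b  # Python tuples already compare lexicographically

# (1, 0) beats (0, n) for every finite n, however large:
print(lex_less((0, 10**9), (1, 0)))   # primary reward wins
print(lex_less((1, 0), (0, 10**9)))   # the reverse never holds
```

An RL agent whose task genuinely ranks outcomes this way (e.g. "safety first, then speed, at any ratio") cannot have its preferences faithfully encoded as a single real number per step, which is the roadblock the abstract describes.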


2021 ◽  
Author(s):  
Pamul Yadav ◽  
Taewoo Kim ◽  
Ho Suk ◽  
Junyong Lee ◽  
Hyeonseong Jeong ◽  
...  

<p>Faster adaptability to open-world novelties by intelligent agents is a necessary factor in achieving the goal of creating Artificial General Intelligence (AGI). The current RL framework does not consider unseen changes (novelties) in the environment. In this paper, we therefore propose OODA-RL, a reinforcement-learning-based framework that can be used to develop robust RL algorithms capable of handling both known environments and adaptation to unseen ones. OODA-RL expands the definition of the agent's internal composition relative to the abstract definition in the classical RL framework, allowing RL researchers to incorporate novelty-adaptation techniques as an add-on feature to existing SoTA as well as yet-to-be-developed RL algorithms.</p>
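The abstract does not publish an API, but the idea of exposing the agent's internal loop so novelty adaptation can be slotted in as an add-on might be sketched as follows (all class and method names here are our assumptions, loosely following the OODA cycle of Observe, Orient, Decide, Act):

```python
# Hypothetical sketch (names are our assumptions, not the paper's API):
# the agent's internal composition is made explicit as an OODA loop, and a
# novelty detector can be plugged into the Orient stage as an add-on,
# leaving the wrapped policy (any existing RL algorithm) untouched.

class OODAAgent:
    def __init__(self, policy, novelty_detector=None):
        self.policy = policy                      # any existing RL policy
        self.novelty_detector = novelty_detector  # optional add-on hook

    def observe(self, raw_obs):
        return raw_obs  # sensor ingestion / preprocessing

    def orient(self, obs):
        # Add-on hook: flag unseen changes (novelties) in the environment.
        if self.novelty_detector and self.novelty_detector(obs):
            self.adapt(obs)
        return obs

    def adapt(self, obs):
        pass  # e.g. trigger re-exploration or a model update

    def decide(self, obs):
        return self.policy(obs)  # delegate to the wrapped algorithm

    def act(self, env, action):
        return env.step(action)

    def step(self, env, raw_obs):
        obs = self.orient(self.observe(raw_obs))
        return self.act(env, self.decide(obs))
```

The design point is that the classical RL loop treats the agent as a black box from observation to action; making the intermediate stages explicit gives novelty-adaptation code a well-defined seam to attach to.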



2021 ◽  
pp. 1-6
Author(s):  
Scott McLean ◽  
Gemma J. M. Read ◽  
Jason Thompson ◽  
P. A. Hancock ◽  
Paul M. Salmon
