Artificial General Intelligence
Recently Published Documents


TOTAL DOCUMENTS

182
(FIVE YEARS 99)

H-INDEX

13
(FIVE YEARS 3)

2021 ◽  
Author(s):  
Pamul Yadav ◽  
Taewoo Kim ◽  
Ho Suk ◽  
Junyong Lee ◽  
Hyeonseong Jeong ◽  
...  

Faster adaptability to open-world novelties by intelligent agents is a necessary factor in achieving the goal of creating Artificial General Intelligence (AGI). The current RL framework does not consider unseen changes (novelties) in the environment. In this paper, we therefore propose OODA-RL, a Reinforcement Learning-based framework that can be used to develop robust RL algorithms capable of handling both known environments and adaptation to unseen environments. OODA-RL expands the definition of the agent's internal composition relative to the abstract definition in the classical RL framework, allowing RL researchers to incorporate novelty-adaptation techniques as an add-on feature to existing SoTA as well as yet-to-be-developed RL algorithms.
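
As a rough illustration of the structure the abstract describes, the sketch below wraps an ordinary RL policy in an Observe-Orient-Decide-Act loop, with a novelty-detection hook in the Orient step where an adaptation technique could be plugged in. All class and method names (OODAAgent, NoveltyDetector, adapt) are hypothetical assumptions for illustration, not the paper's actual API.

```python
# Minimal sketch of an OODA-style agent loop wrapping an existing RL policy.
# Names and interfaces are illustrative assumptions, not the OODA-RL API.

class NoveltyDetector:
    """Flags observations that fall outside the known (training) range."""
    def __init__(self, known_range=(0.0, 1.0)):
        self.low, self.high = known_range

    def is_novel(self, observation: float) -> bool:
        return not (self.low <= observation <= self.high)

class OODAAgent:
    """Observe -> Orient -> Decide -> Act, with novelty adaptation in Orient."""
    def __init__(self, policy, detector: NoveltyDetector):
        self.policy = policy        # any existing RL policy: observation -> action
        self.detector = detector

    def observe(self, env_state):
        return env_state            # sensing step; identity in this sketch

    def orient(self, observation):
        # The add-on hook: detect unseen changes and adapt before deciding.
        if self.detector.is_novel(observation):
            self.adapt(observation)
        return observation

    def decide(self, observation):
        return self.policy(observation)

    def act(self, action):
        return action               # would be env.step(action) in practice

    def adapt(self, observation):
        # Placeholder for any novelty-adaptation technique (e.g. fine-tuning).
        pass

    def step(self, env_state):
        return self.act(self.decide(self.orient(self.observe(env_state))))

# Usage: wrap a trivial policy and run one loop iteration.
agent = OODAAgent(policy=lambda obs: 0 if obs < 0.5 else 1,
                  detector=NoveltyDetector())
print(agent.step(0.3))  # -> 0
```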


2021 ◽  
Author(s):  
Valeria Seidita ◽  
Francesco Lanza ◽  
Patrick Hammer ◽  
Antonio Chella ◽  
Pei Wang

This work explores the possibility of combining the Jason reasoning cycle with a Non-Axiomatic Reasoning System (NARS) to develop multi-agent systems that are able to reason, deliberate, and plan when information about the plans to be executed and the goals to be pursued is missing or incomplete. The contribution of this work is a method for BDI agents to create high-level plans using an AGI (Artificial General Intelligence) system based on non-axiomatic logic.
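
A minimal sketch of the idea, under assumed interfaces: a BDI-style deliberation cycle that consults the plan library first (as Jason does) and falls back to a non-axiomatic reasoner to derive a high-level plan when none applies. The ToyNARS class and its derive_plan method are illustrative stand-ins, not Jason's or OpenNARS's real API.

```python
from typing import Dict, List, Optional

PlanLibrary = Dict[str, List[str]]  # goal -> sequence of actions

class ToyNARS:
    """Stand-in for a Non-Axiomatic Reasoning System used as a plan oracle."""
    def derive_plan(self, goal: str, beliefs: List[str]) -> Optional[List[str]]:
        # A real NARS would derive this from experience under uncertainty;
        # here we just fabricate a high-level plan for illustration.
        return [f"explore({goal})", f"achieve({goal})"]

def deliberate(goal: str, beliefs: List[str],
               plans: PlanLibrary, nars: ToyNARS) -> List[str]:
    """BDI-style cycle: use a library plan if one applies, else ask NARS."""
    if goal in plans:
        return plans[goal]                     # classical Jason behaviour
    derived = nars.derive_plan(goal, beliefs)  # AGI fallback for missing plans
    if derived is not None:
        plans[goal] = derived                  # cache the new high-level plan
        return derived
    raise RuntimeError(f"no plan found for goal {goal!r}")

# Usage: the plan library is empty, so the plan comes from the reasoner.
print(deliberate("reach_station", beliefs=["at(base)"],
                 plans={}, nars=ToyNARS()))
```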


Philosophies ◽  
2021 ◽  
Vol 6 (4) ◽  
pp. 83
Author(s):  
Kristen Carlson

Methods are currently lacking to prove artificial general intelligence (AGI) safety. An AGI ‘hard takeoff’ is possible, in which a first-generation AGI₁ rapidly triggers a succession of more powerful AGIₙ that differ dramatically in their computational capabilities (AGIₙ << AGIₙ₊₁). No proof exists that AGI will benefit humans, nor of a sound value-alignment method. Numerous paths toward human extinction or subjugation have been identified. We suggest that probabilistic proof methods are the fundamental paradigm for proving safety and value-alignment between disparately powerful autonomous agents. Interactive proof systems (IPS) describe mathematical communication protocols wherein a Verifier queries a computationally more powerful Prover and reduces the probability of the Prover deceiving the Verifier to any specified low probability (e.g., 2⁻¹⁰⁰). IPS procedures can test AGI behavior-control systems that incorporate hard-coded ethics or value-learning methods. Mapping the axioms and transformation rules of a behavior-control system to a finite set of prime numbers allows validation of ‘safe’ behavior via IPS number-theoretic methods. Many other representations are needed for proving various AGI properties. Multi-prover IPS, program-checking IPS, and probabilistically checkable proofs further extend the paradigm. In toto, IPS provides a way to reduce AGIₙ ↔ AGIₙ₊₁ interaction hazards to an acceptably low level.
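
The amplification logic behind that 2⁻¹⁰⁰ bound can be made concrete with a toy simulation. The sketch below is illustrative only: it assumes a simple challenge-response protocol in which a cheating Prover survives each round with probability 1/2, so k independent rounds drive its acceptance probability to 2⁻ᵏ. None of the names or protocol details come from the paper.

```python
import random

# Toy interactive proof protocol: the Verifier issues k independent random
# challenges. An honest Prover can answer every challenge; a cheating Prover
# can only guess, succeeding on each round with probability 1/2. The chance
# of a cheater surviving all k rounds is therefore 2**-k; with k = 100 this
# matches the 2^-100 bound cited in the abstract.

def run_protocol(prover_is_honest: bool, rounds: int = 100) -> bool:
    """Return True if the Prover passes every challenge round."""
    for _ in range(rounds):
        challenge = random.randint(0, 1)      # Verifier's private coin flip
        if prover_is_honest:
            response = challenge              # honest Prover always answers correctly
        else:
            response = random.randint(0, 1)   # a cheater can only guess
        if response != challenge:
            return False                      # Verifier rejects at the first failure
    return True

print(run_protocol(prover_is_honest=True))               # always True
print(run_protocol(prover_is_honest=False, rounds=20))   # almost surely False
print(f"cheater acceptance bound at 100 rounds: {2**-100:.3e}")
```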


2021 ◽  
Author(s):  
Andy E Williams

This paper explores how Human-Centric Functional Modeling might provide a method of systems thinking that, in combination with models of Artificial General Intelligence and General Collective Intelligence developed using the approach, creates the opportunity to exponentially increase the impact of collective activities on targeted outcomes. These activities include research in a wide variety of disciplines as well as efforts to address the various existential challenges facing mankind. Whether the aim is exponentially faster progress in disciplines such as physics or medicine, or exponentially greater capacity to address challenges such as poverty or climate change, the paper explores why reliably solving such challenges might require this exponential increase in general problem-solving ability, why that increase might be reliably achievable through this approach, and why solving our most existential challenges might be reliably unachievable otherwise.

