Journal of Artificial General Intelligence
Latest Publications

Total documents: 58 (last five years: 8)
H-index: 9 (last five years: 2)
Published by De Gruyter Open Sp. z o.o.
ISSN: 1946-0163

2021 ◽  
Vol 12 (1) ◽  
pp. 87-110
Author(s):  
Wladimir Stalski

Abstract On the basis of the author’s earlier works, the article proposes a new approach to creating an artificial intellect system in a model of a human being, presented as the unification of an intellectual agent and a humanoid robot (ARb). Under the proposed approach, an artificial intellect is developed by teaching a natural language to an ARb, which then uses the language for communication with other ARbs and with humans, as well as for reflection. A method is proposed for implementing the approach. Within this method, a human model is “brought up” like a child, in a collective of automatons and children, in the course of which an ARb must master a natural language and reflection, and acquire self-awareness. Agent robots (ARbs) propagate and their population evolves; that is, ARbs develop cognitively from generation to generation. ARbs must perform the tasks they are given, such as computing, after which they are assigned time for a “private life”: improving their education and searching for partners for propagation. After receiving an education, every agent robot may be viewed as a “person” capable of activities that contain elements of creativity. ARbs thus develop through the evolution of their population, through education, and through personal “life” experience, including “work” experience, acquired in a collective of humans and automatons.


2021 ◽  
Vol 12 (1) ◽  
pp. 1-25
Author(s):  
Samuel Alexander ◽  
Bill Hibbard

Abstract In 2011, Hibbard suggested an intelligence measure for agents who compete in an adversarial sequence prediction game. We argue that Hibbard’s idea should actually be considered as two separate ideas: first, that the intelligence of such agents can be measured based on the growth rates of the runtimes of the competitors that they defeat; and second, one specific (somewhat arbitrary) method for measuring said growth rates. Whereas Hibbard’s intelligence measure is based on the latter growth-rate-measuring method, we survey other methods for measuring function growth rates, and exhibit the resulting Hibbard-like intelligence measures and taxonomies. Of particular interest, we obtain intelligence taxonomies based on Big-O and Big-Theta notation, taxonomies that are novel in that they challenge conventional notions of what an intelligence measure should look like. We discuss how intelligence measurement of sequence predictors can indirectly serve as intelligence measurement for agents with artificial general intelligence (AGI).
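The Big-O-based taxonomy idea can be made concrete with a toy sketch. The polynomial-degree restriction, the function names, and the "supremum of defeated degrees" rule below are illustrative assumptions, not the paper's actual construction: each defeated competitor's runtime growth is summarized by its polynomial degree, and an agent's taxonomy level is the supremum of the degrees it defeats.

```python
from fractions import Fraction

# Toy Big-O taxonomy (illustrative assumptions, not the paper's exact
# construction): n^a is O(n^b) exactly when a <= b, and an agent's
# intelligence class is the supremum of the growth-rate degrees of the
# competitors it defeats.

def big_o_contained(deg_f, deg_g):
    """n^deg_f is O(n^deg_g) iff deg_f <= deg_g."""
    return deg_f <= deg_g

def intelligence_class(defeated_degrees):
    """The agent's taxonomy level: sup of defeated growth-rate degrees."""
    return max(defeated_degrees, default=Fraction(0))

# Agent B defeats every predictor A does, plus a cubic-time one,
# so B sits at least as high in the taxonomy as A.
a = intelligence_class([Fraction(1), Fraction(2)])
b = intelligence_class([Fraction(1), Fraction(2), Fraction(3)])
assert big_o_contained(a, b)
```

A Big-Theta variant would additionally demand the reverse containment fail, splitting each level of this ordering into an equivalence class.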


2021 ◽  
Vol 12 (1) ◽  
pp. 71-86
Author(s):  
Marcus Hutter

Abstract The Feature Markov Decision Processes (ΦMDPs) model developed in Part I (Hutter, 2009b) is well-suited for learning agents in general environments. Nevertheless, unstructured ΦMDPs are limited to relatively simple environments. Structured MDPs like Dynamic Bayesian Networks (DBNs) are used for large-scale real-world problems. In this article I extend ΦMDP to ΦDBN. The primary contribution is to derive a cost criterion that makes it possible to automatically extract the most relevant features from the environment, leading to the “best” DBN representation. I discuss all building blocks required for a complete general learning algorithm, and compare the novel ΦDBN model to the prevalent POMDP approach.
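The flavor of such a cost criterion can be sketched in MDL style. The feature maps and both cost terms below are illustrative assumptions, not Hutter's actual Cost() criterion: a candidate map phi is scored by the code length of the history's transitions under the induced state model plus a model-complexity penalty, and the cheapest map wins.

```python
import math
from collections import Counter

# MDL-flavored toy of a feature-extraction cost criterion (illustrative
# assumptions, not Hutter's exact Cost()): score a feature map phi by the
# log-loss of predicting each next observation from the current phi-state,
# plus a complexity penalty per transition type.

def cost(history, phi):
    trans, ctx = Counter(), Counter()
    for cur, nxt in zip(history, history[1:]):
        trans[(phi(cur), nxt)] += 1   # phi-state -> next observation
        ctx[phi(cur)] += 1
    fit = -sum(c * math.log2(c / ctx[s]) for (s, _), c in trans.items())
    complexity = len(trans) * math.log2(len(history))
    return fit + complexity

history = [0, 1, 0, 1, 0, 1, 0, 1]

def phi_identity(obs):   # keeps the alternation structure
    return obs

def phi_collapse(obs):   # throws the state information away
    return 0

# Keeping the feature pays for itself: under phi_identity the history is
# perfectly predictable, so it gets the lower cost.
best = min([phi_identity, phi_collapse], key=lambda p: cost(history, p))
```

The article's contribution is, roughly, the analogue of this trade-off for structured DBN representations rather than flat state spaces.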


2021 ◽  
Vol 12 (1) ◽  
pp. 26-70
Author(s):  
H. Georg Schulze

Abstract Thinking machines must be able to use language effectively in communication with humans. This requires the ability to generate meaning and to transfer this meaning to a communicating partner. Machines must also be able to decode meaning communicated via language. This work is about meaning in the context of building an artificial general intelligent system. It starts with an analysis of the Turing test and some of the main approaches to explaining meaning. It then considers the generation of meaning in the human mind and argues that meaning has a dual nature. The quantum component reflects the relationships between objects, and the orthogonal quale component reflects the value of these relationships to the self. Both components must be present simultaneously for meaning to exist. This parallel existence permits the formulation of “meaning coordinates” as ordered pairs of quantum and quale strengths. Meaning coordinates represent the contents of meaningful mental states. Spurred by a currently salient meaningful mental state in the speaker, language is used to induce a meaningful mental state in the hearer. Therefore, thinking machines must be able to produce and respond to meaningful mental states in ways similar to how these function in humans. It is explained how quanta and qualia arise, how they generate meaningful mental states, how these states propagate to produce thought, how they are communicated and interpreted, and how they can be simulated to create thinking machines.
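As a data structure, a "meaning coordinate" is just an ordered pair. The field names and the use of nonzero strength as the presence test below are illustrative assumptions layered on the abstract's description, not the article's formalism:

```python
from dataclasses import dataclass

# Minimal sketch of a "meaning coordinate" as an ordered pair of strengths
# (field names and the nonzero-strength test are illustrative assumptions).

@dataclass(frozen=True)
class MeaningCoordinate:
    quantum: float  # strength of the relationships between objects
    quale: float    # strength of their value to the self

    def is_meaningful(self):
        # Both components must be present simultaneously for meaning to exist.
        return self.quantum > 0 and self.quale > 0

# A relationship that matters to the self is meaningful; one with no
# quale component (or no quantum component) is not.
assert MeaningCoordinate(0.8, 0.3).is_meaningful()
assert not MeaningCoordinate(0.8, 0.0).is_meaningful()
```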


2020 ◽  
Vol 11 (2) ◽  
pp. 1-100
Author(s):  
Dagmar Monett ◽  
Colin W. P. Lewis ◽  
Kristinn R. Thórisson ◽  
Joscha Bach ◽  
Gianluca Baldassarre ◽  
...  

2020 ◽  
Vol 11 (1) ◽  
pp. 70-85
Author(s):  
Samuel Allen Alexander

Abstract After generalizing the Archimedean property of real numbers in such a way as to make it adaptable to non-numeric structures, we demonstrate that the real numbers cannot be used to accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, therefore traditional reinforcement learning probably will not lead to AGI. We indicate two possible ways traditional reinforcement learning could be altered to remove this roadblock.
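A minimal example of a non-Archimedean reward structure of the kind the argument needs is a lexicographic order on pairs. The (safety, treat) interpretation is an illustrative assumption; the point is only that no finite number of copies of the small reward ever reaches the large one, which is exactly the failure of the Archimedean property:

```python
# Lexicographically ordered reward pairs (the safety/treat reading is an
# illustrative assumption).  Python compares tuples lexicographically, so
# the built-in `<` is the intended order.

def times(n, reward):
    """n copies of a pair-valued reward, added componentwise."""
    return (n * reward[0], n * reward[1])

safety = (1, 0)  # one unit of the dominant component
treat = (0, 1)   # one unit of the subordinate component

# However many treats the agent piles up, they never outweigh one unit
# of safety -- the Archimedean property fails for this structure.
assert all(times(n, treat) < safety for n in (1, 10, 10**9))
```

No order-embedding of such pairs into the reals can preserve this behavior under real-number addition, which is the obstruction the abstract describes for real-valued RL rewards.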


2020 ◽  
Vol 11 (1) ◽  
pp. 1-37
Author(s):  
Claes Strannegård ◽  
Wen Xu ◽  
Niklas Engsner ◽  
John A. Endler

Abstract Although animals such as spiders, fish, and birds have very different anatomies, the basic mechanisms that govern their perception, decision-making, learning, reproduction, and death have striking similarities. These mechanisms have apparently allowed the development of general intelligence in nature. This led us to the idea of approaching artificial general intelligence (AGI) by constructing a generic artificial animal (animat) with a configurable body and fixed mechanisms of perception, decision-making, learning, reproduction, and death. One instance of this generic animat could be an artificial spider, another an artificial fish, and a third an artificial bird. The goal of all decision-making in this model is to maintain homeostasis. Thus actions are selected that might promote survival and reproduction to varying degrees. All decision-making is based on knowledge that is stored in network structures. Each animat has two such network structures: a genotype and a phenotype. The genotype models the initial nervous system that is encoded in the genome (“the brain at birth”), while the phenotype represents the nervous system in its present form (“the brain at present”). Initially the phenotype and the genotype coincide, but then the phenotype keeps developing as a result of learning, while the genotype essentially remains unchanged. The model is extended to ecosystems populated by animats that develop continuously according to fixed mechanisms for sexual or asexual reproduction, and death. Several examples of simple ecosystems are given. We show that our generic animat model possesses general intelligence in a primitive form. In fact, it can learn simple forms of locomotion, navigation, foraging, language, and arithmetic.
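The genotype/phenotype split described above can be sketched in a few lines. The class, the reduction of a nervous system to a dict of connection weights, and the additive learning rule are all illustrative assumptions, not the paper's model:

```python
import copy

# Toy of the genotype/phenotype split: at birth the phenotype is a copy of
# the genotype; learning then modifies only the phenotype.  The weight-dict
# nervous system and the update rule are illustrative assumptions.

class Animat:
    def __init__(self, genotype):
        self.genotype = genotype                  # "the brain at birth"
        self.phenotype = copy.deepcopy(genotype)  # "the brain at present"

    def learn(self, connection, delta):
        # Learning changes only the phenotype; the genotype stays unchanged.
        self.phenotype[connection] = self.phenotype.get(connection, 0.0) + delta

spider = Animat({("eye", "leg"): 0.5})
spider.learn(("eye", "leg"), 0.25)
assert spider.genotype[("eye", "leg")] == 0.5    # unchanged by experience
assert spider.phenotype[("eye", "leg")] == 0.75  # developed through learning
```

Reproduction in such a model would pass on (a possibly mutated copy of) the genotype, not the learned phenotype, so each offspring starts from "the brain at birth."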


2020 ◽  
Vol 11 (1) ◽  
pp. 38-69
Author(s):  
Ryan J. McCall ◽  
Stan Franklin ◽  
Usef Faghihi ◽  
Javier Snaider ◽  
Sean Kugele

Abstract Natural selection has imbued biological agents with motivations moving them to act for survival and reproduction, as well as to learn so as to support both. Artificial agents also require motivations to act in a goal-directed manner and to learn appropriately into their various memories. Here we present a biologically inspired motivation system, based on feelings (including emotions), integrated within the LIDA cognitive architecture at a fundamental level. This motivational system, operating within LIDA’s cognitive cycle, provides a repertoire of motivational capacities operating over a range of time scales and of increasing complexity. These include alarms, appraisal mechanisms, appetence and aversion, and deliberation and planning.


2019 ◽  
Vol 10 (1) ◽  
pp. 24-45
Author(s):  
Samuel Allen Alexander

Abstract Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? We propose an elegant answer based on the following insight: we can view Legg-Hutter agents as candidates in an election, whose voters are environments, letting each environment vote (via its rewards) which agent (if either) is more intelligent. This leads to an abstract family of comparators simple enough that we can prove some structural theorems about them. It is an open question whether these structural theorems apply to more practical intelligence measures.
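The election idea can be sketched directly: each environment casts a vote, via the total rewards it paid out, for whichever agent it finds more intelligent, abstaining on a tie. The environment names, reward tables, and simple-majority rule below are illustrative assumptions; the paper studies an abstract family of such comparators:

```python
# Toy election comparator for two Legg-Hutter-style agents (environments,
# rewards, and the simple-majority rule are illustrative assumptions).

def compare(rewards_x, rewards_y):
    """Return +1 if agent X wins the election, -1 if Y wins, 0 on a tie.
    Each argument maps environment -> total reward earned there."""
    votes = 0
    for env in rewards_x:
        if rewards_x[env] > rewards_y[env]:
            votes += 1          # this environment votes for X
        elif rewards_x[env] < rewards_y[env]:
            votes -= 1          # this environment votes for Y
    return (votes > 0) - (votes < 0)

# (X reward, Y reward) per environment:
envs = {"maze": (3, 1), "bandit": (0, 5), "chess": (2, 1)}
x = {e: r[0] for e, r in envs.items()}
y = {e: r[1] for e, r in envs.items()}
assert compare(x, y) == 1  # two environments vote for X, one for Y
```

Note that such a comparator yields only a relative ordering, sidestepping the aggregation weights a numeric Legg-Hutter-style measure must commit to.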


2019 ◽  
Vol 10 (2) ◽  
pp. 1-37 ◽  
Author(s):  
Pei Wang

Abstract This article systematically analyzes the problem of defining “artificial intelligence.” It starts by pointing out that a definition influences the path of the research, then establishes four criteria of a good working definition of a notion: being similar to its common usage, drawing a sharp boundary, leading to fruitful research, and being as simple as possible. According to these criteria, the representative definitions in the field are analyzed. A new definition is proposed, according to which intelligence means “adaptation with insufficient knowledge and resources.” The implications of this definition are discussed, and it is compared with the other definitions. It is claimed that this definition sheds light on the solution of many existing problems and sets a sound foundation for the field.

