Defining Functional Models of Artificial Intelligence Solutions to Create a Library that an Artificial General Intelligence can use to Increase General Problem Solving Ability

2020 ◽  
Author(s):  
Andy E Williams

The AI industry continues to enjoy robust growth. With the growing number of AI algorithms, the question becomes how to leverage all of these models intelligently in a way that reliably converges on AGI. One approach is to gather all of these models into a single library that a system of artificial intelligence might use to increase its general problem-solving ability. This paper explores the requirements for building such a library, the requirements for that library to be searchable for AI algorithms that might significantly increase impact on any given problem, and the requirements for the use of that library to reliably converge on AGI. This paper also explores the importance to such an effort of defining a common set of semantic functional building blocks in terms of which AI models can be represented. In particular, it examines how that functional decomposition might be used to organize large-scale cooperation to create such an AI library, where that cooperation has not yet proved possible otherwise, and how such collaboration, as well as such a library, might significantly increase the impact of each AI and AGI researcher’s work.
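As a purely illustrative sketch (not from the paper, which specifies no API), a minimal library of AI models indexed by the semantic functional building blocks they implement might be searched like this; all names here are hypothetical:

```python
# Hypothetical sketch: a registry of AI models indexed by the semantic
# functional building blocks each model implements. Names are illustrative.

class ModelLibrary:
    def __init__(self):
        # model name -> set of functional building-block tags
        self._models = {}

    def register(self, name, functions):
        """Index a model under the functional building blocks it implements."""
        self._models[name] = set(functions)

    def search(self, required_functions):
        """Return names of models implementing every required function."""
        required = set(required_functions)
        return [name for name, funcs in self._models.items()
                if required <= funcs]

library = ModelLibrary()
library.register("cnn-classifier", ["pattern-recognition"])
library.register("planner", ["pattern-recognition", "sequence-planning"])

print(library.search(["pattern-recognition", "sequence-planning"]))  # ['planner']
```

The point of the functional decomposition is that the search key is *what a model does* in shared semantic terms, not how it is implemented, so independently contributed models become discoverable for any given problem.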

2021 ◽  
Author(s):  
Andy E Williams

This paper explores how the technique of Human-Centric Functional Modeling might be used to represent a broad subset of proposed implementations of biocomputing, with anywhere from narrow to general problem-solving ability within a given domain or across multiple domains, and how such functional models might be implemented by libraries of biological computing mechanisms. This paper also explores the insights to be gained from modeling biocomputers this way, and how Human-Centric Functional Modeling might significantly accelerate biocomputing research and increase its impact by substantially increasing the capacity for reuse of both biocomputing hardware and software.


2021 ◽  
Author(s):  
Andy E Williams

Considering both current narrow AI, and any Artificial General Intelligence (AGI) that might be implemented in the future, there are two categories of ways such systems might be made safe for the human beings that interact with them. One category consists of mechanisms that are internal to the system, and the other category consists of mechanisms that are external to the system. In either case, the complexity of the behaviours that such systems might be capable of can rise to the point at which such measures cannot be reliably implemented. However, General Collective Intelligence or GCI can exponentially increase the general problem-solving ability of groups, and therefore their ability to manage complexity. This paper explores the specific cases in which AI or AGI safety cannot be reliably assured without GCI.


2020 ◽  
Author(s):  
Andy E Williams

General Collective Intelligence (GCI) has been predicted to create the potential for an exponential increase in the problem-solving capacity of a group, as compared to the problem-solving capacity of any individual in the group. A functional model of cognition, proposed to represent the complete set of human cognitive functions and therefore to have the capacity for human-like general problem-solving ability, has recently been developed. This functional model suggests a methodical path by which implementing a working Artificial General Intelligence (AGI) or a working General Collective Intelligence might reliably be achievable. This paper explores the claim that there are no other reliable paths to AGI currently known, why this one known path might require an exponential increase in the general problem-solving ability of any group of individuals to be reliably implementable, and why, therefore, AGI might require GCI to be reliably achievable.


2021 ◽  
Author(s):  
Andy E Williams

Natural systems have demonstrated the ability to solve a wide range of adaptive problems, as well as the ability to self-assemble in a self-sustaining way that enables them to exponentially increase impact on outcomes related to those problems. In the case of photosynthesis, nature solved the problem of harnessing the energy in sunlight and then leveraged self-assembling and self-sustaining processes so that exponentially increasing impact on that problem is reliably achievable. Rather than having to budget a given amount of resources to create a mature tree, where those resources might not be reliably available, tree seedlings self-assemble in a self-sustaining way from very few resources, growing from the photosynthetic capacity of a single leaf to that of what might be millions of leaves. If the patterns underlying this adaptive problem-solving could be abstracted so that they are generally applicable, they might be applied to social and other problems occurring at scales that currently are not reliably solvable. One such problem is the Sustainable Development Goals (SDGs) funding gap. The funding believed to be required to address the SDGs is difficult to estimate and may be anywhere between $2 trillion and $6 trillion USD per year. However, bridging the gap between the funding required to meet these goals and the funding available to do so is universally acknowledged to be a difficult and unsolved problem. This paper explores how abstracting the pattern for general problem-solving ability that nature has used to solve the problem of exponentially increasing impact on collective problems, and that nature has proven effective for billions of years, might be reused to solve “wicked problems” ranging from implementing an Artificial General Intelligence (AGI) to funding sustainable development at the scale required to transform Africa and the world.


Author(s):  
Carlos Montemayor

Contemporary debates on Artificial General Intelligence (AGI) center on what philosophers classify as descriptive issues. These issues concern the architecture and style of information processing required for multiple kinds of optimal problem-solving. This paper focuses on two topics that are central to developing AGI and that concern normative, rather than descriptive, requirements for an AGI’s epistemic agency and responsibility. The first is that a collective kind of epistemic agency may be the best way to model AGI. This collective approach is possible only if solipsistic considerations concerning phenomenal consciousness are ignored, thereby focusing on the cognitive foundation that attention and access consciousness provide for collective rationality and intelligence. The second is that joint attention and motivation are essential for AGI in the context of linguistic artificial intelligence. Focusing on GPT-3, this paper argues that without a satisfactory solution to this second normative issue regarding joint attention and motivation, there cannot be genuine AGI, particularly in conversational settings.


2021 ◽  
Author(s):  
Andy E Williams

This paper explores how Human-Centric Functional Modeling might provide a method of systems thinking that, in combination with models of Artificial General Intelligence and General Collective Intelligence developed using the approach, creates the opportunity to exponentially increase impact on the targeted outcomes of collective activities, including research in a wide variety of disciplines as well as activities involved in addressing the various existential challenges facing mankind. Whether the aim is to exponentially increase the speed and scale of progress in research disciplines such as physics or medicine, or to exponentially increase the capacity to solve existential challenges such as poverty or climate change, this paper explores why reliably solving such challenges might require this exponential increase in general problem-solving ability, why that increase might be reliably achievable through this approach, and why solving our most existential challenges might otherwise be reliably unachievable.


2021 ◽  
Author(s):  
Andy E Williams

Human-Centric Functional Modeling (HCFM) has recently been used to define a model of Artificial General Intelligence (AGI) believed to have the capacity for human-like general problem-solving ability (intelligence), as well as a model of General Collective Intelligence (GCI) with the potential to combine individuals into a single collective intelligence that might have exponentially greater general problem-solving ability than any individual in the group. Functional modeling decouples the components of complex systems like cognition through well-defined interfaces so that they can be implemented separately, thereby breaking down the complex problem of implementing such a system into a number of much simpler problems. This paper explores how a rudimentary AGI and a rudimentary GCI might be implemented through approximating the functions of each, in order to create systems that provide sufficient value to incentivize more sophisticated implementations to be developed over time.
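The decoupling-through-interfaces idea in this abstract can be sketched in conventional software terms; the following is a hypothetical illustration only, with names invented for the example, not an implementation from the paper:

```python
# Hypothetical sketch of functional decoupling via a well-defined interface:
# a cognitive function is specified abstractly, so a rudimentary
# approximation can later be swapped for a more sophisticated one
# without changing any caller.
from abc import ABC, abstractmethod

class MemoryFunction(ABC):
    """Interface for a 'memory' cognitive function (illustrative name)."""

    @abstractmethod
    def store(self, key, value):
        ...

    @abstractmethod
    def recall(self, key):
        ...

class DictMemory(MemoryFunction):
    """A rudimentary approximation; a richer implementation can replace it
    later because both satisfy the same interface."""

    def __init__(self):
        self._data = {}

    def store(self, key, value):
        self._data[key] = value

    def recall(self, key):
        return self._data.get(key)

memory: MemoryFunction = DictMemory()
memory.store("sky", "blue")
print(memory.recall("sky"))  # blue
```

Because callers depend only on `MemoryFunction`, each component becomes a much simpler, separately implementable problem, which is the incremental path the abstract describes.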


2019 ◽  
Author(s):  
A. M. Khalili

The dream of building machines that have human-level intelligence has inspired scientists for decades. Remarkable advances have been made recently; however, we are still far from achieving this goal. In this paper, I propose an alternative perspective on how these machines might be built focusing on the scientific discovery process which represents one of our highest abilities that requires a high level of reasoning and remarkable problem-solving ability. By trying to replicate the procedures followed by many scientists, the basic idea of the proposed approach is to use a set of principles to solve problems and discover new knowledge. These principles are extracted from different historical examples of scientific discoveries. Building machines that fully incorporate these principles in an automated way might open the doors for many advancements.


AI Magazine ◽  
2013 ◽  
Vol 34 (2) ◽  
pp. 107 ◽  
Author(s):  
Michael Genesereth ◽  
Yngvi Björnsson

Games have played a prominent role as a test-bed for advancements in the field of Artificial Intelligence ever since its foundation over half a century ago, resulting in highly specialized world-class game-playing systems being developed for various games. The establishment of the International General Game Playing Competition in 2005, however, resulted in a renewed interest in more general problem solving approaches to game playing. In general game playing (GGP) the goal is to create game-playing systems that autonomously learn how to skillfully play a wide variety of games, given only the descriptions of the game rules. In this paper we review the history of the competition, discuss progress made so far, and list outstanding research challenges.


Information ◽  
2018 ◽  
Vol 9 (12) ◽  
pp. 332 ◽  
Author(s):  
Paul Walton

Artificial intelligence (AI) and machine learning promise to make major changes to the relationship of people and organizations with technology and information. However, as with any form of information processing, they are subject to the limitations of information linked to the way in which information evolves in information ecosystems. These limitations are caused by the combinatorial challenges associated with information processing, and by the tradeoffs driven by selection pressures. Analysis of the limitations explains some current difficulties with AI and machine learning and identifies the principles required to resolve the limitations when implementing AI and machine learning in organizations. Applying the same type of analysis to artificial general intelligence (AGI) highlights some key theoretical difficulties and gives some indications about the challenges of resolving them.

