Global Solutions vs. Local Solutions for the AI Safety Problem

2019 ◽  
Vol 3 (1) ◽  
pp. 16 ◽  
Author(s):  
Alexey Turchin ◽  
David Denkenberger ◽  
Brian Green

There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI, but do not explain how to prevent the creation of dangerous AI elsewhere. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided into four groups: 1. No AI: AGI technology is banned or its use is otherwise prevented; 2. One AI: the first superintelligent AI is used to prevent the creation of any others; 3. Net of AIs as AI police: a balance is created between many AIs, so they evolve as a net and can prevent any rogue AI from taking over the world; 4. Humans inside AI: humans are augmented or part of AI. We explore many ideas, both old and new, regarding global solutions for AI safety. They include changing the number of AI teams, different forms of “AI Nanny” (a non-self-improving global control AI system able to prevent the creation of dangerous AIs), selling AI safety solutions, and sending messages to future AI. Not every local solution scales into a global solution, and not every one that does scale does so ethically and safely. The choice of the best local solution should therefore include an understanding of how it would be scaled up. Human-AI teams, or a superintelligent AI Service as suggested by Drexler, may be examples of such ethically scalable local solutions, but the final choice depends on unknown variables such as the speed of AI progress.

2017 ◽  
Author(s):  
Gopal P. Sarma ◽  
Nick J. Hay ◽  
Adam Safron

We propose the creation of a systematic effort to identify and replicate key findings in neuroscience and allied fields related to understanding human values. Our aim is to ensure that research underpinning the value alignment problem of artificial intelligence has been sufficiently validated to play a role in the design of AI systems.


2002 ◽  
Vol 01 (03) ◽  
pp. A04 ◽  
Author(s):  
Silvana Barbacci

This work analyses how the theme of man's creation of thinking machines, particularly through artificial intelligence, is dealt with on stage, with reference to three plays addressing different topics and characterised by different types of performance. The analysis reveals how effective plays on scientific topics can be when the relationship between theatre and science yields reflections that transcend the plays' subject matter to address man and his essence, giving voice to the ancient question of the meaning of the world.


Author(s):  
Konstantin Kolin

The capabilities of machine translation are closely tied to improvements in modeling the understanding and generation of natural-language texts, a task traditionally classed among artificial intelligence problems. The article attempts to analyze the main approaches to the creation of machine translation technologies. It concludes that these approaches do not yet provide for the formation and use of dynamic models of the world, but move mainly in the direction of a grammatically consistent translation of word sequences.


2020 ◽  
Vol 26 (8) ◽  
pp. 69-76
Author(s):  
V. Blanutsa

The state policy of artificial intelligence development in Russia is based on the national strategy approved in 2019 and valid until 2030. To understand the specifics of Russian policy, the national strategy was chosen as the object of research, with its declared and latent strategic goals as the subject. The study aims to assess how closely the strategic goals of state policy correspond to modern concepts of artificial intelligence development. Content analysis was used for the automatic analysis of the texts of the national strategy, similar foreign documents, and the global array of publications. A search of the eight largest bibliographic databases identified a large set of original scientific articles on artificial intelligence. Content analysis of this array made it possible to identify six approaches (algorithmic, test, cognitive, landscape, explanatory, and heuristic) to constructing a concept of artificial intelligence development. The heuristic approach is the most encompassing, allowing the other approaches to be generalized. Further analysis was carried out on the basis of the heuristic approach, within which the concepts of narrow, general, and super intelligence are distinguished. The text of the national strategy was analyzed for compliance with these three concepts. The goals announced in the national strategy were found to refer to the concept of artificial narrow intelligence. Analysis of the frequency of occurrence of terms in the strategy revealed latent goals (access to big data and software) that belong to the same concept. Examining the context of the few mentions of artificial general intelligence in the strategy only confirmed the overall focus on developing artificial narrow intelligence. The leading countries in this area, by contrast, are characterized by a strategic focus on developing artificial general intelligence technologies and on scientific research into artificial superintelligence.
The approximate lag of the Russian strategy behind the creation of artificial general intelligence was estimated. To overcome this lag and allow Russia to occupy a leading position in the world, the article proposes developing a new national strategy for the creation of artificial superintelligence technologies in the period up to 2050.
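The term-frequency analysis described in the abstract above can be sketched in a few lines. This is a minimal illustration of counting target-term occurrences in a document, not the authors' actual pipeline; the sample text and term list are invented placeholders, not the strategy corpus.

```python
from collections import Counter
import re

def term_frequencies(text, terms):
    """Count occurrences of each target term (case-insensitive) in a text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    return {term: counts[term] for term in terms}

# Illustrative stand-in for a policy document; not the actual strategy text.
sample = ("The strategy prioritizes access to big data and software. "
          "Big data infrastructure and software platforms receive funding.")
print(term_frequencies(sample, ["data", "software", "superintelligence"]))
# → {'data': 2, 'software': 2, 'superintelligence': 0}
```

A real content analysis would also handle multi-word terms, lemmatization, and per-document normalization, but relative frequencies of this kind are the basis for inferring latent emphases such as the "big data and software" goals reported above.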


2019 ◽  
Vol 10 (2) ◽  
Author(s):  
Zarina Khisamova ◽  
Ildar Begishev

In today's digital space, the use of artificial intelligence (hereinafter AI) and the development of intelligent technologies are extremely important and relevant. Over the past few years, there have been attempts to regulate AI, both in Russia and in other countries of the world. Among the currently existing approaches, the optimal one is the creation of a separate legal regulation mechanism that clearly delineates areas of responsibility between the developers and users of AI systems and the technology itself. A separate direction should be the introduction of ethical principles for AI systems that are uniform for all developers and users. The optimal approach in this respect is the one implemented in the framework of the Asilomar principles. In these circumstances, addressing the problem of the legal regulation of AI becomes more relevant than ever. This article presents the results of a detailed analysis of existing approaches to the legal regulation of AI.


Journalism ◽  
2020 ◽  
pp. 146488492094753
Author(s):  
J Scott Brennen ◽  
Philip N Howard ◽  
Rasmus K Nielsen

Drawing on scholarship in journalism studies and the sociology of expectations, this article demonstrates how news media shape, mediate, and amplify expectations surrounding artificial intelligence in ways that influence their potential to intervene in the world. Through a critical discourse analysis of news content, this article describes and interrogates the persistent expectation concerning the widescale social integration of AI-related approaches and technologies. In doing so, it identifies two techniques through which news outlets mediate future-oriented expectations surrounding AI: choosing sources and offering comparisons. Finally, it demonstrates how in employing these techniques, outlets construct the expectation of a pseudo-artificial general intelligence: a collective of technologies capable of solving nearly any problem.


In the modern digital age, the use of artificial intelligence and the development of intelligent technologies are extremely important and relevant. Over the past few years, there have been attempts at state regulation of artificial intelligence, both in Russia and in other countries of the world. Artificial intelligence poses new challenges to various areas of law: from patent to criminal law, from privacy to antitrust law. Among the current approaches, the optimal one is the creation of a separate legal regulation mechanism that clearly distinguishes the areas of responsibility of the developers and users of artificial intelligence systems from the technology itself. Today, the development of the legal framework for artificial intelligence can be conditionally divided into two approaches: creating a legal framework for the introduction of applied artificial intelligence systems and stimulating their development; and regulating the sphere of creating artificial “superintelligence”, in particular the compliance of the developed technologies with generally recognized standards in the field of ethics and law. A separate area should be the introduction of ethical principles for artificial intelligence systems that are uniform for all developers and users. The optimal approach in this respect is the one implemented within the framework of the Asilomar principles. In these circumstances, addressing the problem of the legal regulation of artificial intelligence is becoming more relevant than ever. This paper presents the results of a detailed analysis of existing approaches to the legal regulation of artificial intelligence.


AI Magazine ◽  
2018 ◽  
Vol 39 (3) ◽  
pp. 27-39
Author(s):  
Sean McGregor ◽  
Amir Banifatemi

The IBM Watson AI XPRIZE is a four-year competition in which teams work to improve the world with artificial intelligence. The competition began in 2017 with 148 problem domains in sustainability, artificial general intelligence, education, and a variety of other grand challenge areas. Fifty-nine teams advanced to the second year of the competition, and ten teams earned special recognition as “milestone nominees.” The properties of the advancing problem domains highlight opportunities and challenges for the “AI for Good” movement. We detail the judging process and highlight preliminary results from narrowing the field of competing teams.


Discourse ◽  
2020 ◽  
Vol 6 (3) ◽  
pp. 109-117
Author(s):  
O. M. Polyakov

Introduction. The article continues the series of publications on the linguistics of relations (hereinafter R-linguistics) and introduces the logic of natural language in relation to the approach considered in the series. The problem of natural language logic remains relevant, since this logic differs significantly from traditional mathematical logic; moreover, with the appearance of artificial intelligence systems, the importance of this problem only increases. The article analyzes the logical problems that prevent the application of classical logic methods to natural languages. This analysis is possible because R-linguistics forms the semantics of a language as world-model structures in which language sentences are interpreted.
Methodology and sources. The results obtained in the previous parts of the series are used as research tools. To develop the necessary mathematical representations in the field of logic and semantics, the previously formulated concept of the interpretation operator is used.
Results and discussion. The problems that arise when studying the logic of natural language within R-linguistics are analyzed in three aspects: the logical aspect itself; the linguistic aspect; and the aspect of correlation with reality. A very general approach to language semantics is considered, and semantic axioms of the language are formulated. The problems of the language and its logic related to this most general view of semantics are shown.
Conclusion. It is shown that the application of mathematical logic, regardless of its type, to the study of natural language logic faces significant problems. This is a consequence of the inconsistency of existing approaches with the world model; it is precisely coherence with the world model that allows a new logical approach to be built, and matching the model means a semantic approach to logic. Even the most general view of semantics allows important results to be formulated about the properties of languages that lack meaning. The simplest examples of semantic interpretation of traditional logic demonstrate its semantic problems (primarily related to negation).


Author(s):  
Roberto D. Hernández

This article addresses the meaning and significance of the “world revolution of 1968,” as well as the historiography of 1968. I critically interrogate how the production of a narrative about 1968 and the creation of ethnic studies, despite its world-historic significance, has tended to perpetuate a limiting, essentialized and static notion of “the student” as the primary actor and an inherent agent of change. Although students did play an enormous role in the events leading up to, through, and after 1968 in various parts of the world—and I in no way wish to diminish this fact—this article nonetheless argues that the now hegemonic narrative of a student-led revolt has also had a number of negative consequences, two of which will be the focus here. One problem is that the generation-driven models that situate 1968 as a revolt of the young students versus a presumably older generation, embodied by both their parents and the dominant institutions of the time, are in effect a sociosymbolic reproduction of modernity/coloniality’s logic or driving impulse and obsession with newness. Hence an a priori valuation is assigned to the new, embodied in this case by the student, at the expense of the presumably outmoded old. Secondly, this apparent essentializing of “the student” has entrapped ethnic studies scholars, and many of the period’s activists (some of whom had been students themselves), into said logic, thereby risking the foreclosure of a politics beyond (re)enchantment or even obsession with newness yet again.

