A Prospective on the Technological Singularity and Artificial Intelligence by a Retrospective on the Canons of Dort for Proposing the Reformed Canons of AI

2018 ◽  
Vol 59 ◽  
pp. 85-132
Author(s):  
Jun, Daekyung
2020 ◽  
Vol 57 (2) ◽  
pp. 192-207
Author(s):  
Vladimir S. Smolin ◽  

Max Tegmark's book draws attention to the dangers and benefits awaiting humanity as artificial intelligence (AI) technologies develop. Cosmologist and astrophysicist Tegmark, acknowledging that the course of AI development cannot be predicted, offers compelling scenarios for how civilization might unfold over tens, thousands, millions, and billions of years. The analysis of these contrasting scenarios is meant to convey that the consequences of creating a general AI surpassing the human level will be more significant than those of all other achievements of civilization. Tegmark, one of the founders and leaders of the “Beneficial AGI” movement, presents the results of discussing the issues he raises with leading experts in the field of AI. He concludes his book with a call to optimism: “My book urges you to think about what future you would like, rather than what future scares you; this way we can find goals worth working for.”


2021 ◽  
Author(s):  
Deep Bhattacharjee ◽  
Sanjeevan Singha Roy

If highly intelligent machines come to control the world in the future, what would be the advantages and disadvantages? Will those artificial-intelligence-powered superintelligent machines become an anathema for humanity, or will they ease human work by guiding humans through complicated tasks, extending a helping hand and making that work comfortable? Recent studies in theoretical computer science, especially artificial intelligence, predict something called the ‘technological singularity’ or ‘intelligence explosion’; if this happens, a further stage may follow in which machine intelligence and human intelligence fuse, and machines, being immensely powerful and possessing a cognitive capacity greater than that of humans for solving immensely complicated tasks, could overtake humans and in turn be overtaken by still more intelligent machines of superhuman intelligence. It is therefore troubling to consider what would happen if the machines turned against humans in pursuit of dominance over this planet. Do humans have any chance of avoiding this by bypassing the seemingly inevitable ‘hard singularity’ through a series of ‘soft singularities’? This paper discusses these questions in detail, along with calculations showing humanity how to avoid the hard singularity when the progress of intelligence is inevitable.
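
The abstract does not reproduce the paper's calculations, but the contrast it draws between a ‘hard’ and a ‘soft’ singularity can be illustrated with a toy growth model that is not taken from the paper: super-linear self-improvement diverges in finite time (a hard singularity), while a resource-capped, logistic variant saturates (a soft singularity). Every constant and name in the sketch below (k, I0, I_max, the cutoff, the step size) is an illustrative assumption.

```python
# Toy illustration (not from the paper): contrasting "hard" and "soft" singularity dynamics.
# Hard: dI/dt = k * I**2 diverges at the finite time t* = 1 / (k * I0).
# Soft: dI/dt = k * I * (1 - I / I_max) is logistic and saturates at I_max.
# All constants (k, i0, i_max, dt, t_end, cutoff) are illustrative assumptions.

def simulate(k=0.5, i0=1.0, i_max=100.0, dt=0.01, t_end=20.0, cutoff=1e6):
    hard, soft, t = i0, i0, 0.0
    hard_blowup_time = None
    while t < t_end:
        if hard_blowup_time is None:
            hard += dt * k * hard * hard              # super-linear self-improvement
            if hard >= cutoff:
                hard_blowup_time = t                  # record when the hard model explodes
        soft += dt * k * soft * (1.0 - soft / i_max)  # resource-capped self-improvement
        t += dt
    return hard_blowup_time, soft

if __name__ == "__main__":
    blowup, soft_final = simulate()
    print(f"hard model exceeded 1e6 near t = {blowup:.2f} (analytic blow-up at t* = 2.0)")
    print(f"soft model after t = 20.0: {soft_final:.1f} (saturating toward its cap of 100)")
```

The sketch only makes the terminology concrete: the same feedback term, once bounded by a cap, turns a finite-time blow-up into gradual saturation.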


2019 ◽  
Vol 16 (4) ◽  
pp. 340-348
Author(s):  
Ilya N. Volnov

The paper approaches the techno-humanitarian balance of physical (accelerating) and humanitarian (controlling) technologies. It demonstrates that the absence of the human in this balance makes the idea of ensuring the socio-system’s sustainable development through the establishment of techno-humanitarian balance erroneous. The required adequate proportion of “powers” between the technologies in the techno-humanitarian balance compels civilization to attempt to “harness” not only the technological singularity but the humanitarian singularity as well. It is shown that the techno-humanitarian balance in the singularity mode destroys the human’s physical and mental nature. The human is introduced into the binary technological balance by transforming it into a triple balance and adding the semantic technologies inherent to human beings. The triple balance is characterized by the oppositions between intelligence and thinking, and between information and meanings. The paper explores the triple balance and its edges in the context of the ultimate singularity. It is shown that human beings, through thinking and meanings, can correlate themselves with the semantic singularity (the infinity of the semantic field), thus becoming Homo Singularity. These conclusions are substantiated through V.V. Nalimov’s probabilistic model of consciousness, which also mathematically formalizes the process of semantic decapsulation of the personality and the personal interaction with the semantic vacuum (infinity). The paper introduces the concept of the finite dilatation of the cultural semantic field and the big semantic transition as the era of the formation of Homo Singularity and the beginning of their practical work with semantic infinity. The paper provides examples of such practical work in the fields of art and science. Homo Singularity not only protects their physical and mental nature from destruction but also keeps powerful artificial intelligence under control by countering big data with their ability to integrate the multiple into the single (whole) and to move from the discrete level of information to a continuous (infinite) level of meanings.


2019 ◽  
Vol 2 (1) ◽  
pp. 68 ◽  
Author(s):  
Alexios Brailas

“What will happen when an artificial intelligence entity has access to all the information stored about me online, with the ability to process my information efficiently and flawlessly? Will such an entity not be, in fact, my ideal therapist?” Would there ever come a point at which you would put your trust in an omniscient, apperceptive, and ultra-intelligent robotic therapist? There is a horizon beyond which we can neither see nor even imagine; this is the technological singularity moment for psychotherapy. If human intelligence is capable of creating an artificial intelligence that surpasses its creators, then this intelligence would, in turn, be able to create an even superior next-generation intelligence. An inevitable positive feedback loop would lead to an exponential intelligence growth rate. In the present paper, we introduce the term Therapist Panoptes as a working hypothesis to investigate the implications for psychotherapy of an artificial therapeutic agent: one that is able to access all available data for a potential client and process these with an inconceivably superior intelligence. Although this opens a new perspective on the future of psychotherapy, the sensitive dependence of complex techno-social systems on their initial conditions renders any prediction impossible. Artificial intelligence and humans form a bio-techno-social system, and the evolution of the participating actors in this complex super-organism depends upon their individual action, as well as upon each actor being a coevolving part of a self-organized whole.
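
The positive-feedback argument invoked in this abstract can be made concrete with a toy recursion that is not drawn from the paper: if each generation of intelligence builds a successor improved in proportion to a fixed gain, capability grows exponentially in the number of generations. The gain factor and starting level below are illustrative assumptions.

```python
# Toy sketch (not from the paper) of the recursive self-improvement loop:
# each generation designs a successor that is better by a constant fraction,
# so capability compounds exponentially with the generation count.
# The gain factor and starting level are illustrative assumptions.

def recursive_self_improvement(generations=10, start=1.0, gain=0.5):
    capability = start
    trajectory = [capability]
    for _ in range(generations):
        capability *= 1.0 + gain      # the successor improves on its designer
        trajectory.append(capability)
    return trajectory

if __name__ == "__main__":
    for n, c in enumerate(recursive_self_improvement()):
        print(f"generation {n:2d}: capability = {c:8.2f}")  # 1.0 grows to ~57.7 after 10 steps
```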


Author(s):  
Jose Luis Cordeiro

Technological convergence is accelerating and allowing humanity to move from slow and erratic biological evolution to fast and precise technological evolution. The expression “emerging technologies” is used to cover new and potentially powerful fields such as biotechnology, artificial intelligence, and nanotechnology. Although the expression might be somewhat ambiguous, several clusters of different technologies are advancing exponentially and will be critical to humanity's future. NBIC is a common abbreviation that stands for nanotechnology, biotechnology, information technology, and cognitive science. Other technologies like robotics, quantum computing, and space technologies can be added towards an accelerating “technological convergence” that might lead to a “technological singularity” as proposed by US engineer and futurist Ray Kurzweil. According to Kurzweil, we will reach a “technological singularity” by 2045, when we will be able to transcend many of our current limitations and move from biological humans to technological transhumans, both on planet Earth and beyond.
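
The 2045 claim rests on extrapolating exponential trends. As a purely illustrative piece of arithmetic (the two-year doubling time and the 2025 baseline are assumptions of this sketch, not figures from the chapter), a capability that doubles every two years multiplies roughly a thousandfold over twenty years:

```python
# Illustrative arithmetic only (not from the chapter): a Kurzweil-style
# extrapolation assumes a capability that doubles on a fixed schedule.
# The 2-year doubling time and 2025 baseline are assumptions for this sketch.

BASE_YEAR = 2025
TARGET_YEAR = 2045
DOUBLING_TIME_YEARS = 2.0

doublings = (TARGET_YEAR - BASE_YEAR) / DOUBLING_TIME_YEARS
growth_factor = 2 ** doublings
print(f"{doublings:.0f} doublings by {TARGET_YEAR} -> capability x{growth_factor:,.0f}")
# 10 doublings -> a 1,024-fold increase over the assumed 2025 baseline
```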


AI Magazine ◽  
2017 ◽  
Vol 38 (3) ◽  
pp. 58-62 ◽  
Author(s):  
Toby Walsh

There is both much optimism and much pessimism around artificial intelligence (AI) today. The optimists are investing millions, and in some cases billions, of dollars in AI. The pessimists, on the other hand, predict that AI will end many things: jobs, warfare, and even the human race. Both the optimists and the pessimists often appeal to the idea of a technological singularity, a point in time where machine intelligence starts to run away, and a new, more intelligent “species” starts to inhabit the earth. If the optimists are right, this will be a moment that fundamentally changes our economy and our society. If the pessimists are right, this will be a moment that also fundamentally changes our economy and our society. It is therefore very worthwhile spending some time deciding whether either of them might be right.


2020 ◽  
Vol 11 (4) ◽  
pp. 334-346
Author(s):  
Aura Elena Schussler ◽  

The present study focuses on a situation in which mind-reading machines are connected first to weak AI and later to strong AI, at which point their role will no longer be the merely medical one it is at present but one of surveillance and monitoring of individuals, an aspect that is leading us towards a future techno-panoptic singularity. Thus, the general objective of this paper is to raise the problem of the ontological stability of human nature which, within the limits of the technological singularity of mind-reading machines, leads to a loss of autonomy and a reduction of freedom where human thoughts are concerned. In this paradigm, the hypothesized future era of technological singularity is prefigured as a cumulation of factors in which artificial intelligence holds a dominant position in relation to the human agent, in a techno-panoptic system of human supervision, in the form of a new world order of the manifestation/imposition of power: that of a “singleton.” The theoretical objective analyzes the phenomenon of “deterritorialization” (Deleuze & Guattari, 2000, 2005) of the Foucauldian panoptic mechanism (Foucault, 1995, 2003, 2006, 2008), which is based on the “biopolitical” system of “biopower,” and its “reterritorialization” in the “territory” of the techno-panoptic singularity, where the scenario of a strong AI “singleton” (Bostrom, 2004, 2006) represents the alienation of the Being into a hard technological determinism.

