Deceitful Media

Author(s):  
Simone Natale

Artificial intelligence (AI) is often discussed as something extraordinary, a dream—or a nightmare—that awakens metaphysical questions about human life. Yet far from being a distant technology of the future, the true power of AI lies in its subtle revolution of ordinary life. From voice assistants like Siri to natural language processors, AI technologies draw on cultural biases and modern psychology to fit specific characteristics of how users perceive and navigate the external world, thereby projecting the illusion of intelligence. Integrating media studies, science and technology studies, and social psychology, Deceitful Media examines the rise of artificial intelligence throughout history and exposes the very human fallacies behind this technology. Focusing specifically on communicative AIs, Natale argues that what we call “AI” is not a form of intelligence but rather a reflection of the human user. Using the term “banal deception,” he reveals that deception forms the basis of all human-computer interactions rooted in AI technologies, as technologies like voice assistants exploit the dynamics of projection and stereotyping to align with our existing habits and social conventions. By exploiting the human instinct to connect, AI reveals our collective vulnerabilities to deception, showing that what machines are primarily changing is not other technologies but ourselves as humans.
Deceitful Media illustrates how AI has continued a tradition of technologies that mobilize our liability to deception and shows that only by better understanding our vulnerabilities to deception can we become more sophisticated consumers of interactive media.

2020, Vol 17 (6), pp. 76-91
Author(s):  
E. D. Solozhentsev

The scientific problem in economics of “managing the quality of human life” is formulated on the basis of artificial intelligence, the algebra of logic, and logical-probabilistic calculus. Managing the quality of a person's life is represented as managing the processes of their treatment, training, and decision making. Events in these processes, and the corresponding logical variables, relate to the behavior of the person, of other people, and of infrastructure. The processes shaping the quality of human life are modeled, analyzed, and managed with the participation of the person concerned. Scenarios and structural, logical, and probabilistic models for managing the quality of human life are given, and special software for quality management is described. The relationship between the quality of human life and the digital economy is examined. The role of public opinion in management “from below” is considered, drawing on a synthesis of many studies on the management of the economy and the state; management from below also serves as feedback to management from above.
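As a minimal illustrative sketch of the logical-probabilistic idea behind such models (assuming independent elementary events; the event names and probability values below are hypothetical and are not taken from the article), the probability of a compound outcome can be computed directly from its logical formula:

```python
# Illustrative sketch of a logical-probabilistic (LP) calculation: the
# probability of a compound "success" event defined as a logical function of
# independent elementary events. The event names and probabilities below are
# hypothetical and are not taken from the article.

def p_or(*probs: float) -> float:
    """Probability that at least one of several independent events occurs."""
    q = 1.0
    for p in probs:
        q *= 1.0 - p
    return 1.0 - q

def p_and(*probs: float) -> float:
    """Probability that all of several independent events occur."""
    result = 1.0
    for p in probs:
        result *= p
    return result

# Assumed probabilities of elementary events:
p_person = 0.9          # the person follows the prescribed regimen
p_physician = 0.8       # the physician selects an adequate treatment
p_infrastructure = 0.7  # supporting infrastructure is available

# Logical model: success = person AND (physician OR infrastructure).
p_success = p_and(p_person, p_or(p_physician, p_infrastructure))
print(f"P(success) = {p_success:.3f}")  # 0.9 * (1 - 0.2 * 0.3) = 0.846
```

In this scheme, AND multiplies event probabilities, while OR is computed through the complement of the joint non-occurrence, which is what allows a structural scenario model to be turned into a single probability estimate.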


2020, Vol 22 (10), pp. 25-28
Author(s):  
Abakumova I.V.
Grishina A.V.
Godunov M.V.

Modern psychology considers meaning regulation an integral mechanism of personal development. A system of personal meanings develops in the processes of understanding reality. Because of their polymodality, personal meanings cannot be good or bad, but neither are they all the same. When a person confronts unknown situations, the unevenness of the emerging personal meanings can lead to a match or a mismatch with the existing system of meanings. Coincidence, as agreement with a new fact, means meaning consonance. Mismatch between new and existing information means meaning dissonance, a kind of cognitive dissonance. An analysis of the modern psychological literature shows that meaning dissonance acts on two main planes: the dissonance of individual meanings in real interactions, and the dissonance of shared meanings in the transmission of interpersonal meaning formations. It is proposed that meaning acquires a personal coloring through both consonant and dissonant positioning of meaning constructs in the subject's meaning sphere. The revealed dichotomy of meaning-formation processes shows the possible bipolarity of meanings, which emerges in the transition from the internal to the external world and in collisions with other meaning systems. It can then be assumed that the effect of meaning dissonance manifests itself in two ways: first, in real interactions, as a discord of individual meanings; and second, in the transmission of interpersonal meaning constructs, as a dissonance of shared meanings. In the course of such external formation, meaning becomes a personal meaning in the consciousness of a particular person.


2021, Vol 6 (22), pp. 36-44
Author(s):  
Nor ‘Adha Ab Hamid
Azizah Mat Rashid
Mohd Farok Mat Nor

The development of science and technology always runs ahead and seems to have no end point or limit. Although human beings are the agents who set this development in motion, they eventually face a bitter situation that can sacrifice human morals, rights, and the interests of our future. Shariah criminal offenses nowadays do not only occur, or get witnessed, when a person meets the perpetrator physically. As a result of technological developments, such behavior can occur and be witnessed by much larger groups. Although conduct contrary to sharia law, and the issues of moral crisis happening around us, are rampant on social media, no enforcement is carried out against perpetrators who use the medium of social media. According to sharia principles, wrongdoing should be prevented, and this is the responsibility of every Muslim individual. Yet today some shariah criminal behavior, especially in relation to ethics, can occur easily through facilities driven by technological ingenuity. If the application of existing legal provisions is limited and faces obstacles to enforcement, then this problem needs to be overcome, since the law should develop in line with current circumstances. The study aims to identify segments and cases of the moral crisis on social media and online involving artificial intelligence (AI) applications, and to identify the need for shariah-based prevention. The study uses a qualitative approach, adopts library-based research, and applies a literature review through content analysis of documents. The findings show that the use of social media and AI technology has had an impact on various issues such as moral crisis, security, misuse and intrusion of personal data, and the construction of AI beyond human control. Thus, the involvement and cooperation of various parties are needed to regulate and address the issues that arise from the use of social media and AI technology in human life.


2021
Author(s):  
Norman Wirzba

In a time of climate change, environmental degradation, and social injustice, the question of the value and purpose of human life has become urgent. What are the grounds for hope in a wounded world? This Sacred Life gives a deep philosophical and religious articulation of humanity's identity and vocation by rooting people in a symbiotic, meshwork world that is saturated with sacred gifts. The benefits of artificial intelligence and genetic enhancement notwithstanding, Norman Wirzba shows how an account of humans as interdependent and vulnerable creatures orients people to be a creative, healing presence in a world punctuated by wounds. He argues that the commodification of places and creatures needs to be resisted so that all life can be cherished and celebrated. Humanity's fundamental vocation is to bear witness to God's love for creaturely life, and to commit to the construction of a hospitable and beautiful world.


Author(s):  
Joel O. Afolayan
Roseline O. Ogundokun
Abiola G. Afolabi
Adekanmi A. Adegun

Artificial intelligence (AI) is a broad and complex area of study, which can be difficult for non-specialists to understand. Yet its ultimate promise is to create computer systems that manifest human intelligence. This chapter coins “Machinzation” for the application of the literary machine (the computer) to human operations. This clearly has major implications for the library and information science profession. In principle and in practice, AI has penetrated virtually all walks of human life, and many authors have previously provided in-depth overviews of AI technologies. Service is the focal point of librarianship, particularly in an era in which information is the fifth and most important factor of production. Cloud computing stems from the principles of AI and, when applied to the operations and routines of libraries and information centers, yields a brand-new concept, “CloudLibrarianship,” which is developed in this work. The emergence of this concept also opens up entrepreneurial opportunities in the information sector of the economy: inforpreneurship. This chapter therefore examines certain key aspects of AI that determine its potential utility as a tool for enhancing and supporting library operations.


Author(s):  
Anand Parey
Amandeep Singh Ahuja

Gearboxes are employed in a wide variety of applications, ranging from small domestic appliances to gigantic power plants and marine propulsion systems. Gearbox failure may not only result in significant financial losses from machinery downtime but may also place human life at risk. Gearbox failure in the transmission systems of warships and single-engine aircraft, besides other military applications, is unacceptable. The criticality of the gearbox in rotary machines has prompted enormous effort on the part of researchers to develop new and efficient methods of diagnosing gearbox faults so that timely rectification can be undertaken before catastrophic failure occurs. Artificial intelligence (AI) has been a significant milestone in automated gearbox fault diagnosis (GFD). This chapter reviews over a decade of research on fault diagnosis of gearboxes with AI techniques. Some areas of AI in GFD that still merit attention are identified and discussed at the end of the chapter.
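As a purely illustrative sketch of the kind of pipeline such AI-based GFD work builds on (not a method from the chapter itself), the snippet below extracts two common vibration features, RMS and kurtosis, and separates healthy from faulty gears with a nearest-centroid rule; the synthetic signals, feature choice, and classifier are all assumptions made for this example:

```python
# Illustrative sketch only: a toy vibration-based gearbox fault classifier.
# The synthetic signals, the two features (RMS and kurtosis), and the
# nearest-centroid rule are assumptions made for this example; they are not
# the methods reviewed in the chapter.

import numpy as np

rng = np.random.default_rng(0)

def simulate_signal(faulty: bool, n: int = 2048) -> np.ndarray:
    """Synthetic vibration record: gear-mesh tone plus noise; a fault adds periodic impulses."""
    t = np.arange(n) / n
    x = np.sin(2 * np.pi * 300 * t) + 0.3 * rng.standard_normal(n)
    if faulty:
        impacts = np.zeros(n)
        impacts[::256] = 3.0  # impacts from a damaged tooth, once per revolution
        x += impacts
    return x

def features(x: np.ndarray) -> np.ndarray:
    """Two features widely used in vibration diagnostics: RMS and kurtosis."""
    rms = np.sqrt(np.mean(x ** 2))
    kurt = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2
    return np.array([rms, kurt])

# Small labeled training set: even indices healthy (0), odd indices faulty (1).
X = np.array([features(simulate_signal(faulty=bool(k % 2))) for k in range(40)])
y = np.array([k % 2 for k in range(40)])
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

# Diagnose a new measurement by its distance to the two class centroids.
test = features(simulate_signal(faulty=True))
label = int(np.argmin(np.linalg.norm(centroids - test, axis=1)))
print("diagnosis:", "faulty" if label == 1 else "healthy")
```

AI techniques applied to GFD typically replace this nearest-centroid rule with richer learners, such as artificial neural networks, support vector machines, or fuzzy inference systems, trained on larger feature sets.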


2021, pp. 127-132
Author(s):  
Simone Natale

The historical trajectory examined in this book demonstrates that humans’ reactions to machines programmed to simulate intelligent behaviors represent a constitutive element of what is commonly called AI. Artificial intelligence technologies are not just designed to interact with human users: they are designed to fit specific characteristics of the ways users perceive and navigate the external world. Communicative AI becomes more effective not only by evolving from a technical standpoint but also by profiting, through the dynamics of banal deception, from the social meanings humans project onto situations and things. In this conclusion, the risks and problems related to AI’s banal deception are explored in relation to other AI-based technologies such as robotics and social media bots. A call is made to initiate a more serious debate about the role of deception in interface design and computer science. The book concludes with a reflection on the need to develop a critical and skeptical stance in interactions with computing technologies and AI. In order not to be found unprepared for the challenges posed by AI, computer scientists, software developers, and designers, as well as users, have to consider and critically interrogate the potential outcomes of banal deception.


2022, pp. 130-144

In this chapter, the author introduces the reader to the importance of virtual reality in human life, to avatars, and to communication with digital characters, and demonstrates the pervasiveness of technology's penetration into our lives, not only physically, cognitively, and emotionally, but also environmentally. As created interpreters and representatives of scientific work, and as the substantive subject of scientific history, avatars participate, along with robots, cyborgs, and artificial intelligence, in the desubjectivization, biological denaturalization, and despiritualization of man and the death of biological life. The ‘cyborgization’ of humans in virtual space extends the landscape of the discussion on cyborgoethics.


2020, Vol 1 (3), pp. 357-370
Author(s):  
Jens Schröter

In the call for the special issue of the EAEPE Journal, we find the word “scenario.” The question is whether authors can imagine scenarios in which “potential strategies for the appropriation of existing capitalist infrastructures […] in order to provoke the emergence of post-capitalist infrastructures” can be described. Obviously, the call verges on the border of science fiction—and this is not a bad thing. Diverse strands of media studies and science and technology studies have shown (e.g., Schröter 2004; Kirby 2010; Jasanoff and Kim 2015; McNeil et al. 2017) not only that the development of science and (media) technology is deeply interwoven with social imaginaries about possible outcomes and their implicated futures, but also that there is a whole theoretical tradition in which societies as such are understood to be fundamentally constituted by imaginary relations (Castoriadis 1975/2005). In all these discussions, however, one notion very seldom appears: that of an “imaginary economy,” meaning a collectively held system of more or less vague or detailed ideas about what an economy is, how it works, and how it should be, especially in the future (but see the somewhat different usage recently in Fabbri 2018). The aim of the paper is to outline a notion of the “imaginary economy” and its necessary functions in the stabilization of a given economy, and even more so in the transformation to another economy—how should a transformation take place if there is not at least a vague image of where to go? Of course, we could also imagine a blind evolutionary process without any imaginary component, but that does not seem to be the way in which human societies—and economies—work. Obviously a gigantic research field opens up, so in the proposed paper only one type of “imaginary economy” can be analyzed: the field that has formed recently around the proposed usages and functions of 3D printing. In publications as diverse as Eversmann (2014) and Rifkin (2014), the 3D printer operates as a technology that seems to open up a post-capitalist future—and it is thereby directly connected to the highly imaginary “replicator” from Star Trek. In these scenarios, a localized, omnipotent production—a post-scarcity scenario (see Panayotakis 2011)—overcomes capitalism by itself. Symptomatically enough, however, questions of work, the environment, and planetary computation are (mostly) absent from these scenarios. Who owns the templates for producing goods with 3D printers? What about the energy supply? In a critical and symptomatic reading, this imaginary economy, very present in a plethora of discourses today, is deconstructed and possible implications for a post-capitalist construction are discussed.


2016, Vol 42 (4), pp. 703-740
Author(s):  
Steve G. Hoffman

Many research-intensive universities have moved into the business of promoting technology development that promises revenue, impact, and legitimacy. While the scholarship on academic capitalism has documented the general dynamics of this institutional shift, we know less about the ground-level challenges of research priority and scientific problem choice. This paper unites the practice tradition in science and technology studies with an organizational analysis of decision-making to compare how two university artificial intelligence labs manage ambiguities at the edge of scientific knowledge. One lab focuses on garnering funding through commercialization schemes, while the other is oriented to federal science agencies. The ethnographic comparison identifies the mechanisms through which an industry-oriented lab can be highly adventurous yet produce a research program that is thin and erratic due to a priority placed on commercialization. However, the comparison does not yield an implicit nostalgia for federalized science; it reveals the mechanisms through which agency-oriented labs can pursue a thick and consistent research portfolio but in a strikingly myopic fashion.

