If A.I. Only Had a Heart: Why Artificial Intelligence Research Needs to Take Emotions More Seriously

Author(s):  
Garret Merriam

Artificial Emotional Intelligence research has focused on emotions in a limited "black box" sense, concerned only with emotions as inputs and outputs for the system, disregarding the processes and structures that constitute the emotion itself. We are teaching machines to act as if they can feel emotions without the capacity to actually feel emotions. Serious moral and social problems will arise if we stick with the black box approach. As A.I.s become more integrated with our lives, humans will require more than mere emulation of emotion; we will need them to have 'the real thing.' Moral psychology suggests emotions are necessary for moral reasoning and moral behavior. Socially, the role of 'affective computing' foreshadows the intimate ways humans will expect emotional reciprocity from their machines. Three objections are considered and responded to: that giving machines genuine emotions is (1) not possible, (2) not necessary, and (3) too dangerous.

ICGA Journal, 1991, Vol 14 (3), pp. 153-161
Author(s):  
Robert Levinson ◽  
Feng-hsiung Hsu ◽  
Jonathan Schaeffer ◽  
T. Anthony Marsland ◽  
David E. Wilkins

Future Internet, 2020, Vol 12 (12), pp. 226
Author(s):  
Laith T. Khrais

The advent and incorporation of technology in businesses have reformed operations across industries. Notably, major technical shifts in e-commerce aim to influence customer behavior in favor of some products and brands. Artificial intelligence (AI) comes on board as an essential innovative tool for personalizing and customizing products to meet specific demands. This research finds that, despite the contribution of AI systems to e-commerce, their ethical soundness is a contentious issue, especially regarding the concept of explainability. The study adopted word cloud analysis, Voyant analysis, and concordance analysis to gain a detailed understanding of how the idea of explainability has been utilized by researchers in the context of AI. Motivated by a corpus analysis, this research lays the groundwork for a uniform front, thus contributing to a scientific breakthrough that seeks to formulate Explainable Artificial Intelligence (XAI) models. XAI is a machine learning field that inspects and tries to understand the models and steps involved in how the black box decisions of AI systems are made; it provides insights into the decision points, variables, and data used to make a recommendation. This study suggests that, to deploy XAI systems, ML models should be improved to make them interpretable and comprehensible.


Big Data & Society, 2020, Vol 7 (2), pp. 205395172093670
Author(s):  
Nicole Dewandre

In The Black Box Society, Frank Pasquale develops a critique of asymmetrical power: corporations' secrecy is highly valued by legal orders, while persons' privacy is continually invaded by those same corporations. This response proceeds in three stages. I first highlight important contributions of The Black Box Society to our understanding of political and legal relationships between persons and corporations. I then critique a key metaphor in the book (the one-way mirror, Pasquale's image of asymmetrical surveillance) and the role of transparency and 'watchdogging' in its primary policy prescriptions. Finally, I propose 'relational selfhood' as an important new way of theorizing interdependence in an era of artificial intelligence and Big Data, and of promoting optimal policies in these spheres.


2020, Vol 28
Author(s):  
Katrina Ingram

Artificial Intelligence (AI) is playing an increasingly prevalent role in our lives. Whether it's landing a job interview, getting a bank loan, or accessing a government program, organizations are using automated systems informed by AI-enabled technologies in ways that have significant consequences for people. At the same time, there is a lack of transparency around how AI technologies work and whether they are ethical, fair, or accurate. This paper examines a body of literature related to the ethical considerations surrounding the use of artificial intelligence and the role of ethical codes. It identifies and explores core issues, including bias, fairness, and transparency, and looks at who is setting the agenda for AI ethics in Canada and globally. Lastly, it offers some suggestions for next steps towards a more inclusive discussion.


Author(s):  
Tim Smithers ◽  
Wade Troxell

A methodology for studying and understanding the process of design, and ultimately for developing a computational theory of design, is presented. In particular, the role of formalization in such an investigation is set out. This is done by first presenting the background to and development of computational search as a widely adopted problem-solving paradigm in artificial intelligence research. It is then suggested why computational search provides an inadequate characterization of the design process, and an alternative view, that design is an exploration process, is proposed. By developing certain ideas first put forward by Simon, the authors seek to explain why this view is taken and how it forms a central part of their Artificial Intelligence in Design research programme, which aims eventually to develop a computational theory of design. The radically incomplete nature of this work necessarily prevents the authors from answering the question posed by the title of the paper, but the title does provide a good focus for their efforts.


AI & Society, 2020, Vol 35 (4), pp. 917-926
Author(s):  
Karl de Fine Licht ◽  
Jenny de Fine Licht

The increasing use of Artificial Intelligence (AI) for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from hopes of fully informed and objectively taken decisions to fears of the destruction of mankind. To prevent negative outcomes and to achieve accountable systems, many have argued that we need to open up the "black box" of AI decision-making and make it more transparent. Whereas this debate has primarily focused on how transparency can secure high-quality, fair, and reliable decisions, far less attention has been devoted to the role of transparency in how the general public comes to perceive AI decision-making as legitimate and worthy of acceptance. Since relying on coercion is not only normatively problematic but also costly and highly inefficient, perceived legitimacy is fundamental to the democratic system. This paper discusses how transparency in and about AI decision-making can affect the public's perception of the legitimacy of decisions and decision-makers, and develops a framework for analyzing these questions. We argue that a limited form of transparency that focuses on providing justifications for decisions has the potential to provide sufficient ground for perceived legitimacy without producing the harms full transparency would bring.


2019, Vol 2 (1), pp. 190-205
Author(s):  
Istvan S. N. Berkeley

Connectionist research first emerged in the 1940s. The first phase of connectionism attracted a certain amount of media attention but scant philosophical interest. This phase came to an abrupt halt due to the efforts of Minsky and Papert (1969), who argued for the intrinsic limitations of the approach. In the mid-1980s connectionism saw a resurgence, marking the beginning of the second phase of connectionist research. This phase did attract considerable philosophical attention, as it offered a way of counteracting the conceptual ties to the philosophical traditions of atomism, rationalism, logic, nativism, rule realism, and a concern with the role symbols play in human cognitive functioning, which were prevalent as a consequence of artificial intelligence research. The surge in philosophical interest waned, possibly in part due to the efforts of some traditionalists and the so-called black box problem. Most recently, what may be thought of as a third phase of connectionist research, based on so-called deep learning methods, is beginning to show signs of again exciting philosophical interest.

