Artificial Agents in Natural Moral Communities: A Brief Clarification

2021 ◽  
Vol 30 (3) ◽  
pp. 455-458
Author(s):  
Daniel W. Tigard

Abstract
What exactly is it that makes one morally responsible? Is it a set of facts which can be objectively discerned, or is it something more subjective, a reaction to the agent or context-sensitive interaction? This debate gets raised anew when we encounter newfound examples of potentially marginal agency. Accordingly, the emergence of artificial intelligence (AI) and the idea of “novel beings” represent exciting opportunities to revisit inquiries into the nature of moral responsibility. This paper expands upon my article “Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible” and clarifies my reliance upon two competing views of responsibility. Although AI and novel beings are not close enough to us in kind to be considered candidates for the same sorts of responsibility we ascribe to our fellow human beings, contemporary theories show us the priority and adaptability of our moral attitudes and practices. This allows us to take seriously the social ontology of relationships that tie us together. In other words, moral responsibility is to be found primarily in the natural moral community, even if we admit that those communities now contain artificial agents.

2021 ◽  
pp. 161-164
Author(s):  
Eric A. Posner

Many people are worried about the fragmentation of labor markets, as firms replace employees with independent contractors. Another common worry is that low-skill work, and ultimately nearly all forms of work, will be replaced by robots as artificial intelligence advances. Labor market fragmentation is not a new phenomenon and can be addressed with stronger classification laws supplemented by antitrust enforcement. In fact, the gig economy has many attractive elements, and there is no reason to fear it as long as existing laws are enforced. Over the long run, artificial intelligence may replace much of the work currently performed by human beings. If it does, the appropriate response is not antitrust or employment regulation but policy that ensures the social surplus is fairly divided.


2020 ◽  
Vol 63 (2) ◽  
pp. 83-103
Author(s):  
Elena G. Grebenshchikova ◽  
Pavel D. Tishchenko

The article discusses the challenges, benefits, and risks that, from a bioethical perspective, arise because of the development of eHealth projects. The conceptual framework of the research is based on H. Jonas’ principles of the ethics of responsibility and B.G. Yudin’s anthropological ideas on human beings as agents who constantly change their own boundaries in the “zone of phase transitions.” The article focuses on the events taking place in the zone of phase transitions between humans and machines in eHealth. It is shown that innovative practices related to digitalization and datafication in medicine require a rethinking of the central bioethical concepts of personal autonomy and informed consent. In particular, the concept of broad or open informed consent is discussed, which allows the idea of moral responsibility in the field of biomedical technologies to be extended to events of an uncertain future. The authors draw attention to the problems associated with the emergence of new autonomous subjects/agents (machines with artificial intelligence) in the relationship between doctors and patients. The humanization of machines occurring in eHealth is accompanied by a counter-trend – the formation of conceptions and practices of the quantified self. Practices of self-care and bio-power (M. Foucault) emerge, caused by the datafication and digitization of personality. The authors conclude that bioethics should proactively develop norms for the evolving interaction between doctors and patients.


2018 ◽  
Vol 14 (3) ◽  
pp. 519-530 ◽  
Author(s):  
Vlad Petre Glăveanu

In this editorial I introduce the possible as an emerging field of inquiry in psychology and related disciplines. Over the past decades, significant advances have been made in connected areas – counterfactual thinking, anticipation, prospection, imagination and creativity, etc. – and several calls have been formulated in the social sciences to study human beings and societies as systems that are open to possibility and to the future. However, engaging with the possible, in the sense of both becoming aware of it and actively exploring it, represents a subject in need of further theoretical elaboration. In this paper, I review several existing approaches to the possible before briefly outlining a new, sociocultural account. While the former are focused on cognitive processes and uphold the old dichotomy between the possible and the actual or real, the latter grows out of a social ontology grounded in notions of difference, positions, perspectives, reflexivity, and dialogue. In the end, I argue that a better understanding of the possible can help us cultivate it in both mind and society.


2017 ◽  
Vol 73 (3) ◽  
Author(s):  
Abraham K. Akih ◽  
Yolanda Dreyer

Penal reform is a challenge across the world. In Africa, those who are incarcerated are especially vulnerable and often deprived of basic human rights. Prison conditions are generally dire, resources are limited, and at times undue force is used to control inmates. The public attitude towards offenders is also not encouraging. Reform efforts include finding alternative ways of sentencing such as community service, making use of halfway houses and reducing sentences. These efforts have not yet yielded the desired results. The four principles of retribution, deterrence, incapacitation and rehabilitation guide penal practice in Africa. Retribution and rehabilitation stand in tension. Deterrence and incapacitation aim at forcing inmates to conform to the social order. The article argues that prison chaplaincy can make a valuable contribution to restoring the dignity and humanity of those who are incarcerated. Chaplaincy can contribute to improving attitudes and practices in the penal system and society. In addition to the social objective of rehabilitation, prison ministry can, on a spiritual level, also facilitate repentance, forgiveness and reconciliation. The aim is the holistic restoration of human beings.


2021 ◽  
pp. 11-25
Author(s):  
Daniel W. Tigard

Abstract
Technological innovations in healthcare, perhaps now more than ever, are posing decisive opportunities for improvements in diagnostics, treatment, and overall quality of life. The use of artificial intelligence and big data processing, in particular, stands to revolutionize healthcare systems as we once knew them. But what effect do these technologies have on human agency and moral responsibility in healthcare? How can patients, practitioners, and the general public best respond to potential obscurities in responsibility? In this paper, I investigate the social and ethical challenges arising with newfound medical technologies, specifically the ways in which artificially intelligent systems may be threatening moral responsibility in the delivery of healthcare. I argue that if our ability to locate responsibility becomes threatened, we are left with a difficult choice of trade-offs. In short, it might seem that we should exercise extreme caution or even restraint in our use of state-of-the-art systems, but thereby lose out on such benefits as improved quality of care. Alternatively, we could embrace novel healthcare technologies but in doing so we might need to loosen our commitment to locating moral responsibility when patients come to harm; for even if harms are fewer – say, as a result of data-driven diagnostics – it may be unclear who or what is responsible when things go wrong. What is clear, at least, is that the shift toward artificial intelligence and big data calls for significant revisions in expectations on how, if at all, we might locate notions of responsibility in emerging models of healthcare.


2020 ◽  
Vol 42 (7-8) ◽  
pp. 1410-1426 ◽  
Author(s):  
Andreas Hepp

The aim of this article is to outline ‘communicative robots’ as an increasingly relevant field of media and communication research. Communicative robots are defined as autonomously operating systems designed for the purpose of quasi-communication with human beings to enable further algorithmic-based functionalities – often but not always on the basis of artificial intelligence. Examples of these communicative robots can be seen in the now familiar artificial companions such as Apple’s Siri or Amazon’s Alexa, the social bots present on social media platforms, or work bots that automatically generate journalistic content. In all, the article proceeds in three steps. Initially, it takes a closer look at the three examples of artificial companions, social bots, and work bots in order to accurately describe the phenomenon and their recent insinuation into everyday life. This will then allow me to grasp the challenges posed by the increasing need to deal with communicative robots in media and communication research. It is from this juncture that I draw on the discussion about the automation of communication and outline how communicative robots, more than physical artefacts, are likely to be experienced at the interface of automated communication and communicative automation.


2021 ◽  
pp. 69-83
Author(s):  
Salvatore Parente

In order to tax the facts emerging from the computer economy, it is necessary not only to verify whether artificial intelligence is endowed with an autonomous tax subjectivity, but also to ascertain its compatibility with the principle of ability to pay, the basis and limit of taxation. This twofold requirement applies in particular to machines with cognitive skills similar to those of human beings, capable of taking decisions independently and increasing their knowledge. In any event, it must be the case that the subjective suitability of the machine for assuming the tax obligation can be inferred. Within these limits, the provision of a robot tax that does not alter the structure of the tax system could privilege the compensatory view of the social damage caused by technological innovation, in order to take into account the negative externalities related to the automation of production processes in terms of employment and financing of public expenditure. The taxation of robotics would thus affect the production of technological companies, due to the negative externalities resulting from the adoption of automated processes, since these are activities that pursue economic growth objectives.


AI and Ethics ◽  
2021 ◽  
Author(s):  
Anna Strasser

Abstract
Artificial agents have become increasingly prevalent in human social life. In light of the diversity of new human–machine interactions, we face renewed questions about the distribution of moral responsibility. Besides positions denying the mere possibility of attributing moral responsibility to artificial systems, recent approaches discuss the circumstances under which artificial agents may qualify as moral agents. This paper revisits the discussion of how responsibility might be distributed between artificial agents and human interaction partners (including producers of artificial agents) and raises the question of whether attributions of responsibility should remain entirely on the human side. While acknowledging a crucial difference between living human beings and artificial systems culminating in an asymmetric feature of human–machine interactions, this paper investigates the extent to which artificial agents may reasonably be attributed a share of moral responsibility. To elaborate on criteria that can justify a distribution of responsibility in certain human–machine interactions, the role of two types of criteria is examined: interaction-related criteria and criteria that can be derived from socially constructed responsibility relationships. The focus lies on the evaluation of potential criteria referring to the fact that artificial agents surpass the capacities of humans in some respects. This is contrasted with socially constructed responsibility relationships that do not take these criteria into account. In summary, situations are examined in which it seems plausible that moral responsibility can be distributed between artificial and human agents.


2020 ◽  
Vol 10 (1-2) ◽  
pp. 59-68
Author(s):  
Peter Takáč

Abstract
Lookism is a term used to describe discrimination based on the physical appearance of a person. We suppose that the social impact of lookism is a philosophical issue because, from this perspective, attractive people have an advantage over others. The first line of our argumentation involves the issue of lookism as a global ethical and aesthetical phenomenon. A person’s attractiveness has a significant impact on the social and public status of this individual. The common view in society is that it is good to be more attractive and healthier. This concept generates several ethical questions about human aesthetical identity, health, authenticity, and integrity in society. It seems that this unequal treatment causes discrimination, diminishes self-confidence, and lowers the chances of employment or social advancement for many human beings. Currently, aesthetic improvements are being made through plastic surgery. There is no place on the human body that we cannot improve with plastic surgery or aesthetic medicine. We should not forget that it may result in the problem of elitism, in dividing people into primary and secondary categories. The second line of our argumentation involves a particular case of lookism: Melanie Gaydos, a woman who is considered a model with a unique look.
