Machine Ethics and the Idea of a More-Than-Human Moral World

2011 ◽ pp. 115-137 ◽ Author(s): Steve Torrance

2011 ◽ Author(s): James Riley Strange

Author(s): Xavier Tubau

This chapter sets Erasmus’s ideas on morality and the responsibility of rulers with regard to war in their historical context, showing their coherence and consistency with the rest of his philosophy. It first analyses Erasmus’s criticisms of the moral and legal justifications of war current at the time, which rested on the just war theory elaborated by canon lawyers. It then examines his ideas about the moral order in which the ruler should be educated and political power exercised, with arbitration as the means of resolving conflicts between rulers. In developing these two closely related questions, the chapter shows that the moral formation of rulers, grounded in Christ’s message and the virtue politics of fifteenth-century Italian humanism, is the keystone of the moral world order that Erasmus proposed for his contemporaries.


2011 ◽ pp. 476-492 ◽ Author(s): Susan Leigh Anderson ◽ Michael Anderson

2021 ◽ Author(s): Brendan Vize

Consider Lt. Commander Data from Star Trek: The Next Generation, the droid C-3PO from Star Wars, or the Replicants of Blade Runner: they can use language (or many languages), they are rational, they form relationships, and they use language that suggests they have a concept of self, even that they have “feelings” or emotional experience. In the films and TV shows in which they appear, they are depicted in frequent social interaction with human beings; but would we have any moral obligations to such beings if they really existed? What would we be permitted to do, or not to do, to them? On the one hand, a robot like Data has many of the attributes we currently associate with a person. On the other hand, he has many of the attributes of the machines we currently use as tools. He (and other science-fiction machines like him) closely resembles one of the things we value most (a person) and, at the same time, one of the things we value least (an artefact), leading to an apparent ethical paradox. What is its solution?

