robot ethics
Recently Published Documents


TOTAL DOCUMENTS: 73 (five years: 36)

H-INDEX: 8 (five years: 1)

2022 · Vol 12 (1) · pp. 1-40
Author(s): Michelle Karim, Christina Swart-Opperman, Geoff Bick

Learning outcomes
The learning outcomes are as follows: critically assess the impact of disruptive technologies, such as automation, on the organisation, its processes and employees; evaluate the structural changes required within the organisation to prepare for digital transformation; apply change models to the unique challenges associated with disruptive technologies; and recommend solutions for the organisation to proceed with the implementation of disruptive technologies, while keeping employees central to the change.

Case overview/synopsis
The Dimension Data automation case provides students and executives with a glimpse of the future that organisations and employees must prepare for. The case opens with the protagonist and product owner of digital at Dimension Data, Andrew Harmse, reflecting on his three-year automation journey within the Automation Centre of Excellence. The world of automation is growing exponentially, and Andrew’s team will have to support the organisation as it scales up its automation journey and navigates the uncertain future of an increasingly blended human-robot workforce. Individual employee reactions, positive and negative, will have to be balanced against the opportunities that ever-changing technology enables. The case focusses on the themes of digital transformation, digital disruption, change management and the practical factors to consider when making decisions about automation in a constantly changing world. The COVID-19 pandemic has forced organisations to re-examine processes and increase investment in technologies that enable digital client engagement and servicing in light of social distancing requirements. Automation at Dimension Data has been largely internally focussed, but there is a drive to increase delivery for clients. Andrew’s team will have to guide organisations through the journey and continuum of changes and uncertainties, such as large-scale unemployment and robot ethics.

Complexity academic level
The target audience for this teaching case is postgraduate and Master’s-level students, specifically Master of Business Administration (MBA) students, as well as Executive Education courses. Students who are responsible for making strategic decisions that impact the future of their organisations, as well as students with an interest in the role of technology in the future, will benefit from the case.

Supplementary materials
Teaching notes are available for educators only.

Subject code
CSS 6: Human Resource Management.


2021 · Vol 14 (2) · pp. 430-443
Author(s): Małgorzata Suchacka, Rafał Muster, Mariusz Wojewoda

The paper is a review, and the analysis presented is based on theoretical considerations drawing on the work of other authors. The aim of the paper is to draw attention to the importance of human creativity in the context of technology development, with special emphasis on artificial intelligence. For the purpose of this exploration, the study applies philosophical methods, especially those typical of ethical reflection, supported by an analysis of existing data from the social sciences, especially contemporary sociology. The study is synthetic in nature and includes theoretical considerations concerning several issues. The positive and creative possibilities of using artificial intelligence in social and economic life are presented. Potential threats associated with the inappropriate use of artificial intelligence, robots and information systems are also identified, as are threats resulting from people placing too much trust in algorithms. Attention is focused on the social and ethical aspects of the human-machine relationship, with special emphasis on pragmatism, trust and fascination with new technologies, as well as the principles of robot ethics. A significant part of the discussion also concerns the effects of automation processes on the functioning of the labour market, human creative abilities and the relevant competences. The third part of the study identifies research fields related to artificial intelligence that remain underdeveloped. The analysis may indicate directions for further sociological and philosophical research that takes into account the specific ways artificial intelligence functions and draws support from interdisciplinary research teams.


2021 · Vol 11 (21) · pp. 10136
Author(s): Anouk van Maris, Nancy Zook, Sanja Dogramadzi, Matthew Studley, Alan Winfield, et al.

This work explored the use of human–robot interaction research to investigate robot ethics. A longitudinal human–robot interaction study was conducted with self-reported healthy older adults to determine whether the expression of artificial emotions by a social robot could result in emotional deception and emotional attachment. The findings from this study highlight that there currently appear to be no adequate tools, or means, to determine the ethical impact and concerns arising from long-term interactions between social robots and older adults. This raises the questions of whether we should continue the fundamental development of social robots if we cannot determine their potential negative impact, and whether we should shift our focus to the development of human–robot interaction assessment tools that provide more objective measures of ethical impact.


2021 · Vol 2021 · pp. 1-11
Author(s): Priya Persaud, Aparna S. Varde, Weitian Wang

An autonomous household robot passed a self-awareness test in 2015, proving that the cognitive capabilities of robots are heading towards those of humans. While this is a milestone in AI, it raises questions about legal implications. If robots are progressively developing cognition, it is important to discuss whether they are entitled to justice pursuant to conventional notions of human rights. This paper offers a comprehensive discussion of this complex question through cross-disciplinary scholarly sources from computer science, ethics, and law. The computer science perspective dissects the hardware and software of robots to examine whether human behavior can be efficiently replicated. The ethics perspective utilizes insights from robot ethics scholars to help decide whether robots can act morally enough to be endowed with human rights. The legal perspective provides an in-depth discussion of human rights with an emphasis on eligibility. The article concludes with recommendations, including open research issues.


Author(s): Tomomi Hashimoto, Xingyu Tao, Takuma Suzuki, Takafumi Kurose, Yoshio Nishikawa, et al.

With the recent developments in robotics, the ability of robots to recognize their environment has significantly improved. However, how robots should behave in a particular situation remains an unsolved problem. In this study, we propose a decision-making method for robots based on robot ethics. Specifically, we applied the two-level theory of utilitarianism, comprising SYSTEM 1 (intuitive level) for quick decisions and SYSTEM 2 (critical level) for slow but careful decisions. SYSTEM 1 represented a set of heuristically determined responses, and SYSTEM 2 represented a rule-based discriminator. The decision-making method was as follows. First, SYSTEM 1 selected the response to the input. Next, SYSTEM 2 selected the rule that the robot’s behavior should follow, depending on the amount of happiness and unhappiness of the human, robot, situation, and society. We assumed three choices for SYSTEM 2: “non-cooperation” for asocial comments, “cooperation” for cases where the expected happiness exceeded the status quo bias, and “withholding” for all other cases. When cooperation or non-cooperation was chosen, the behavior selected by SYSTEM 1 was modified accordingly. An impression evaluation experiment was conducted, and the effectiveness of the proposed method was demonstrated.
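To make the two-level flow described in the abstract concrete, the following is a minimal Python sketch of how such a decision pipeline might be organised. It is an illustrative assumption rather than the authors' implementation: the heuristic response table, the asocial-comment check, the happiness estimate and the STATUS_QUO_BIAS threshold are all hypothetical stand-ins for the paper's actual heuristics and rule-based discriminator.

```python
# Illustrative sketch of a two-level (SYSTEM 1 / SYSTEM 2) decision method,
# loosely following the abstract above. All names, thresholds and rules here
# are hypothetical; the paper's actual components differ.

from dataclasses import dataclass

STATUS_QUO_BIAS = 0.3  # assumed threshold the expected happiness gain must exceed


@dataclass
class Decision:
    rule: str       # "cooperation", "non-cooperation", or "withholding"
    response: str   # behaviour/utterance the robot finally produces


def system1_response(user_input: str) -> str:
    """SYSTEM 1 (intuitive level): quick, heuristically determined response."""
    heuristics = {
        "greeting": "Hello! How can I help you?",
        "request": "Sure, I will do that.",
    }
    kind = "greeting" if "hello" in user_input.lower() else "request"
    return heuristics[kind]


def is_asocial(user_input: str) -> bool:
    """Crude stand-in for detecting asocial comments."""
    return any(word in user_input.lower() for word in ("stupid", "shut up"))


def estimated_happiness_gain(user_input: str) -> float:
    """Stand-in for the estimated happiness/unhappiness balance across the
    human, robot, situation, and society (a fixed toy value here)."""
    return 0.5 if "please" in user_input.lower() else 0.1


def system2_rule(user_input: str) -> str:
    """SYSTEM 2 (critical level): rule-based discriminator over three choices."""
    if is_asocial(user_input):
        return "non-cooperation"
    if estimated_happiness_gain(user_input) > STATUS_QUO_BIAS:
        return "cooperation"
    return "withholding"


def decide(user_input: str) -> Decision:
    response = system1_response(user_input)  # step 1: intuitive response
    rule = system2_rule(user_input)          # step 2: select the governing rule
    if rule == "non-cooperation":
        response = "I would rather not respond to that."  # override SYSTEM 1
    elif rule == "cooperation":
        response += " Happy to help further."             # modify SYSTEM 1 output
    # "withholding": keep SYSTEM 1's response unchanged
    return Decision(rule=rule, response=response)


if __name__ == "__main__":
    for text in ("Hello there", "Please fetch my book", "You are stupid"):
        print(text, "->", decide(text))
```

In this sketch the fast heuristic layer always produces a candidate behaviour, and the slower rule-based layer only intervenes when it selects cooperation or non-cooperation, mirroring the modification step described in the abstract.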


Author(s): Mark Coeckelbergh

Does cruel behavior towards robots lead to vice, whereas kind behavior does not lead to virtue? This paper presents a critical response to Sparrow’s argument that there is an asymmetry in the way we (should) think about virtue and robots. It discusses how much we should praise virtue as opposed to vice, how virtue relates to practical knowledge and wisdom, how much illusion is needed for it to be a barrier to virtue, the relation between virtue and consequences, the moral relevance of the reality requirement and the different ways one can deal with it, the risk of anthropocentric bias in this discussion, and the underlying epistemological assumptions and political questions. This response is not only relevant to Sparrow’s argument or to robot ethics but also touches upon central issues in virtue ethics.


Author(s): Jon-Arild Johannessen

2021 · Vol 29
Author(s): Coetzee Bester, Rachel Fischer

This article rethinks the position of Information Ethics (IE) vis-à-vis the growing discipline of the ethics of AI. While IE has a long and respected academic history, the discipline of the ethics of AI is much younger. The scope of the latter has exploded in the last decade in sync with the explosion of data-driven AI. Currently, the ethics of AI as a discipline can be said to have subdivided into at least machine ethics, robot ethics, data ethics, and neuroethics. The argument presented here is that the ethics of AI can, from one perspective, be viewed as a sub-discipline of IE. IE is at the heart of ethical concerns about the potential de-humanising impact of AI technologies, as it addresses issues relating to communication, the status of knowledge claims, and the quality of media-generated information, among many others. Perhaps the single most pressing ethical concern in the context of data-driven AI technology is the rise of new social narratives that threaten humans’ special sense of agency, and this is first and foremost an IE concern. The article thus argues for the independent position of IE as well as for its position as the core, over-arching discipline of the ethics of AI.

