IT-ethical issues in sci-fi film within the timeline of the Ethicomp conference series

2015 ◽  
Vol 13 (3/4) ◽  
pp. 314-325
Author(s):  
Anne Gerdes

Purpose – This paper aims to explore human–technology relations through the lens of sci-fi movies within the life cycle of the ETHICOMP conference series. Different perspectives on artificial intelligent agents, primarily in the shape of robots but also including other kinds of intelligent systems, are explored. Hence, IT-ethical issues related to human interactions with social robots and artificial intelligent agents are illustrated with reference to Alex Proyas' I, Robot, James Cameron's Terminator and the Wachowski brothers' Matrix. All three movies present robots cast in the roles of moral agents capable of doing good or evil. Steven Spielberg's Artificial Intelligence, A.I. gives rise to a discussion of the robot seen as a moral patient and furthermore reflects on possibilities for care and trust relations between robots and humans. Andrew Stanton's Wall-E shapes a discussion of robots as altruistic machines in the role of facilitators of a flourishing society. Steven Spielberg's Minority Report allows for a discussion of knowledge-discovering technology and the possibility of balancing data utility and data privacy.

Design/methodology/approach – Observations of themes in sci-fi movies within the life span of the ETHICOMP conference series are discussed with the purpose of illustrating ways in which science fiction reflects (science) faction. In that sense, science fiction does not express our worries about a distant future, but rather casts light on questions that are of concern in the present.

Findings – Human–technology interactions are addressed, and it is shown how sci-fi films highlight philosophical questions that puzzle us today, such as what kinds of relationships can and ought to be formed with robots, and whether the roles they play as social actors demand that we assign moral standing to them. The paper does not present firm answers but instead pays attention to the selection and framing of questions that deserve attention.

Originality/value – Relating sci-fi movies to topics raised during the past 20 years of the ETHICOMP conference series seemed an appropriate way of celebrating the 20-year anniversary of the conference series.

2015 ◽  
Vol 13 (3/4) ◽  
pp. 166-175 ◽  
Author(s):  
Bernd Carsten Stahl ◽  
Charles M Ess

Purpose – The purpose of this paper is to introduce the special issue by providing background on the ETHICOMP conference series and discussing its role in the academic debate on ethics and computing. It describes the context in which the conference series was launched, highlights its unique features and, finally, provides an overview of the papers in the special issue.

Design/methodology/approach – The paper combines a historical account of ETHICOMP with a review of the papers in the issue.

Findings – ETHICOMP is one of the well-established conference series (alongside IACAP and CEPE) focused on ethical issues of information and computing. Its special features include: multidisciplinarity and diversity of contributors and contributions; explicit outreach to professionals whose work is to design, build, deploy and maintain specific computing applications in the world at large; creation of knowledge that is accessible and relevant across fields and disciplines; the intention of making a practical difference to the development, use and policy of computing principles and artefacts; and the creation of an inclusive, supportive and nurturing community across traditional knowledge silos.

Originality/value – The paper is the first to explicitly define the nature of ETHICOMP, which is an important building block in the future development of the conference series and will contribute to the further self-definition of the ETHICOMP community.


Author(s):  
Laura Pana

We discuss the thesis that implementing a moral code in the behaviour of artificial intelligent systems requires a specific form of human and artificial intelligence, not just an abstract intelligence. We present intelligence as a system with an internal structure, the structural levels of the moral system, and certain characteristics of artificial intelligent agents, which can and must be treated as: (1) individual entities (with complex, specialized, autonomous or self-determined, even unpredictable conduct); (2) entities endowed with diverse or even multiple forms of intelligence, such as moral intelligence; (3) open and even free-conduct performing systems (with specific, flexible and heuristic mechanisms and procedures of decision); (4) systems open to education, not just to instruction; (5) entities with a "lifegraphy", not just a "stategraphy"; (6) entities equipped not just with automatisms but with beliefs (cognitive and affective complexes); (7) entities capable even of reflection ("moral life" is a form of spiritual, not just conscious, activity); (8) elements or members of some real (corporal or virtual) community; and (9) cultural beings, since free conduct gives cultural value to the action of a "natural" or artificial being. Implementing such characteristics does not necessarily require designing, constructing and educating machines like human beings. The human moral code is irremediably imperfect: it is a morality of preference, of accountability (not of responsibility) and of non-liberty, which cannot be remedied by the invention of ethical systems, by the circulation of ideal values or by ethical (even computing) education. But such an imperfect morality needs perfect instruments for its implementation: applications of special fields of logic; efficient psychological (theoretical and technical) attainments to endow the machine not just with intelligence, but with conscience and even spirit; and comprehensive technical means for supplementing the objective decision with a subjective one. Machine ethics can and will be of the highest quality because it will be derived from the sciences, modelled by techniques and accomplished by technologies. If our theoretical hypothesis about a specific moral intelligence, necessary for the implementation of an artificial moral conduct, is correct, then certain theoretical and technical issues arise, but the following working hypotheses are possible: structural, functional and behavioural. The future of human and/or artificial morality remains to be anticipated.


2014 ◽  
Vol 12 (4) ◽  
pp. 298-313 ◽  
Author(s):  
Valerie Steeves ◽  
Priscilla Regan

Purpose – The purpose of this paper is to develop a conceptual framework to contextualize young people’s lived experiences of privacy and invasion online. Social negotiations in the construction of privacy boundaries are theorized to be dependent on individual preferences, abilities and context-dependent social meanings.

Design/methodology/approach – Empirical findings of three related Ottawa-based studies dealing with young people’s online privacy are used to examine the benefits of online publicity, what online privacy means to young people and the social importance of privacy. Earlier philosophical discussions of privacy and identity, as well as current scholarship, are drawn on to suggest that privacy is an inherently social practice that enables social actors to navigate the boundary between self/other and between being closed/open to social interaction.

Findings – Four understandings of privacy’s value are developed in concordance with recent privacy literature and our own empirical data: privacy as contextual, relational, performative and dialectical.

Social implications – A more holistic approach is necessary to understand young people’s privacy negotiations. Adopting such an approach can help re-establish an ability to address the ways in which privacy boundaries are negotiated and to challenge surveillance schemes and their social consequences.

Originality/value – Findings imply that privacy policy should focus on creating conditions that support negotiations that are transparent and equitable. Additionally, policy-makers must begin to critically evaluate the ways in which surveillance interferes with the developmental need of young people to build relationships of trust with each other and also with adults.


2019 ◽  
Vol 10 (2) ◽  
pp. 208-229
Author(s):  
Sanjay Kumar Behera ◽  
Dayal R. Parhi ◽  
Harish C. Das

Purpose – With the development of research toward damage detection in structural elements, artificial intelligent methods for crack detection play a vital role in solving crack-related problems. The purpose of this paper is to establish a methodology that can detect and analyze crack development in a beam structure subjected to transverse free vibration.

Design/methodology/approach – Hybrid intelligent systems have acquired their own distinction as a potential problem-solving methodology adopted by researchers and scientists, and can be applied in many areas such as science, technology, business and commerce. Researchers have recently made efforts to combine individual artificial intelligence (AI) techniques in parallel to generate optimal solutions to problems. This paper therefore develops a computationally strong hybrid intelligent system based on different combinations of available AI techniques.

Findings – In the present research, an integration of different AI techniques has been tested for accuracy. Theoretical, numerical and experimental investigations have been carried out using a fix-hinge aluminum beam of specified dimensions in the presence and absence of cracks. The paper also compares the relative crack locations and crack depths obtained from numerical and experimental results with those of the hybrid intelligent model and finds them to be in good agreement.

Originality/value – The paper verifies the accuracy of hybrid controllers for a fix-hinge beam, which is very rare in the available literature. To overcome the limitations of standalone AI techniques, a hybrid methodology has been adopted. The output results for crack location and crack depth have been compared with experimental results, and the deviation is found to be within a satisfactory limit.
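The abstract does not specify which AI techniques are combined or which vibration features drive the estimates, so the following is only a hedged sketch of the general idea: two standalone learners (here a small neural network and a nearest-neighbour regressor) are trained on synthetic relative-frequency data and their estimates of relative crack location and depth are averaged. The feature definitions, learner choices and training data below are illustrative assumptions, not the authors' implementation.

```python
# A hedged sketch of a hybrid crack estimator: average two learners' predictions
# of (relative crack location, relative crack depth) from relative natural
# frequencies. All data here are synthetic and illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
# Synthetic targets: relative location in [0.1, 0.9], relative depth in [0.1, 0.5]
y = rng.uniform([0.1, 0.1], [0.9, 0.5], size=(300, 2))
# Assumed forward model: frequency drop grows with depth and mode-shape curvature
X = np.column_stack([
    1 - 0.10 * y[:, 1] * np.sin(np.pi * y[:, 0]) ** 2,      # 1st mode frequency ratio
    1 - 0.10 * y[:, 1] * np.sin(2 * np.pi * y[:, 0]) ** 2,   # 2nd mode frequency ratio
    1 - 0.10 * y[:, 1] * np.sin(3 * np.pi * y[:, 0]) ** 2,   # 3rd mode frequency ratio
]) + rng.normal(0, 1e-3, (300, 3))

nn = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000,
                  random_state=0).fit(X, y)
knn = KNeighborsRegressor(n_neighbors=5).fit(X, y)

def hybrid_predict(features):
    """Average the two learners' estimates of (relative location, relative depth)."""
    f = np.atleast_2d(features)
    return 0.5 * (nn.predict(f) + knn.predict(f))

# Usage: measured frequency ratios of a cracked beam relative to the intact beam
print(hybrid_predict([0.96, 0.93, 0.97]))
```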


E-Management ◽  
2021 ◽  
Vol 4 (1) ◽  
pp. 20-28
Author(s):  
A. S. Lobacheva ◽  
O. V. Sobol

The article reveals the main ethical problems and contradictions associated with the use of artificial intelligence and clarifies the concept of “artificial intelligence”. The authors analyse two areas of ethical problems of artificial intelligence: fundamental ideas about the ethics of artificial intelligent systems and the creation of ethical norms. The paper investigates the work of world organizations on developing ethical standards for the use of artificial intelligence, namely the Institute of Electrical and Electronics Engineers and UNESCO. The study analyses the main difficulties in implementing artificial intelligent systems: the attitude of employees to the use of robots in production activities and to the automation of processes that affect their work functions and work organization; ethical issues related to retraining and re-certification of employees in connection with the introduction of new software products and robots; ethical issues in reducing staff as a result of the introduction of artificial intelligence and the automation of production and business processes; ethical problems of processing employees’ personal data, including assessments of their psychological and physical condition, personal qualities and character traits, values and beliefs by specialized programs based on artificial intelligence, as well as tracking the work of employees; and ethical contradictions arising from the use of special devices and tracking technologies in robotic technology and modern software products, which also extend to the employees interacting with them.


Author(s):  
Adrian David Cheok ◽  
Kasun Karunanayaka ◽  
Emma Yann Zhang

Intimate relationships, such as love and sex, between humans and machines, especially robots, have long been a theme of science fiction. However, the topic was barely treated in academia until recently; it was first raised and discussed by David Levy in his book Love and Sex with Robots (2007). Since then, researchers have produced many implementations of robot companions, such as sex robots, emotional robots, humanoid robots, and artificial intelligent systems that can simulate human emotions. This chapter presents a summary of significant recent activity in this field, predicts how the field is likely to develop, and discusses ethical and legal issues. We also discuss our research on physical devices for human–robot love and sex communication.


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Colin Williams ◽  
Gamze Oz-Yalaman

Purpose – The dominant theorisation of the informal economy views participants as rational economic actors who operate in the informal economy when the expected benefits exceed the perceived costs of being caught and punished. Recently, an alternative theory has emerged which views participants as social actors who operate in the informal economy owing to their lack of vertical trust (in governments) and horizontal trust (in others). The aim of this paper is to evaluate these competing theorisations.

Design/methodology/approach – To do so, data are reported from special Eurobarometer surveys conducted in 2007, 2013 and 2019 in eight West European countries (Austria, Belgium, France, Germany, Ireland, Luxembourg, the Netherlands and the United Kingdom).

Findings – Using probit regression analysis, the finding is that increasing the expected likelihood of being caught and the level of punishment had a weak but significant impact on the likelihood of participating in the informal economy in 2007 and no significant impact in 2013 and 2019. However, greater vertical and horizontal trust is significantly associated with a lower level of participation in the informal economy in all three time periods.

Practical implications – The outcome is a call for policy to shift away from increasing the expected level of punishment and likelihood of being caught, and towards improving vertical and horizontal trust. How this can be achieved is explored.

Originality/value – Evidence is provided in a Western European context to support a shift away from a rational economic actor approach and towards a social actor approach when explaining and tackling the informal economy.
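As an illustration of the analysis described above, the sketch below fits a probit model of informal-economy participation on deterrence and trust variables. The variable names, coding and synthetic data are assumptions for demonstration only; they are not the Eurobarometer variables used in the paper.

```python
# A minimal probit sketch (not the authors' code): participation in the informal
# economy regressed on detection risk, expected punishment, and vertical/horizontal
# trust. All variables and data below are synthetic, illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "detection_risk":   rng.integers(1, 5, n),   # perceived likelihood of being caught
    "punishment":       rng.integers(1, 4, n),   # expected sanction if caught
    "vertical_trust":   rng.integers(0, 11, n),  # trust in government institutions
    "horizontal_trust": rng.integers(0, 11, n),  # trust in other people
})
# Synthetic outcome constructed so that trust matters more than deterrence
latent = (0.5 - 0.05 * df["detection_risk"] - 0.20 * df["vertical_trust"]
          - 0.10 * df["horizontal_trust"] + rng.normal(0, 1, n))
df["informal_work"] = (latent > 0).astype(int)

X = sm.add_constant(df[["detection_risk", "punishment",
                        "vertical_trust", "horizontal_trust"]])
model = sm.Probit(df["informal_work"], X).fit(disp=False)
print(model.summary())
print(model.get_margeff().summary())  # average marginal effects, as usually reported
```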


Author(s):  
Kathryn Strong Hansen

Greater emphasis on ethical issues is needed in science, technology, engineering, and mathematics (STEM) education. The fiction for specific purposes (FSP) approach, using optimistic science fiction texts, offers a way to focus on ethical reflection that capitalizes on role models rather than negative examples. This article discusses the benefits of using FSP in STEM education more broadly and then explains how using optimistic fictions in particular encourages students to think in ethically constructive ways. Drawing on examples of science fiction texts with hopeful perspectives, sample discussion questions are given to model how to keep students focused on the ethical issues in a text. Sample writing prompts to elicit ethical reflection are also provided as models of how to guide students to contemplate and analyze ethical issues that are important in their field of study. The article concludes that the use of optimistic fictions, framed through the lens of professional ethics guidelines and reinforced through ethical reflection, can give students beneficial ethical models.


2015 ◽  
Vol 5 (2) ◽  
pp. 194-205 ◽  
Author(s):  
Scarlat Emil ◽  
Virginia Mărăcine

Purpose – The purpose of this paper is to discuss how tacit and explicit knowledge determine grey knowledge and how these are stimulated through interactions within networks, forming grey hybrid intelligent systems (HISs). The feedback processes and mechanisms between internal and external knowledge determine the emergence of grey knowledge in an intelligent system (IS). The extension of ISs is driven by the ubiquity of the internet but, in our framework, it is the flows of grey knowledge that assure the viability and effectiveness of these systems.

Design/methodology/approach – Some characteristics of hybrid intelligent knowledge systems are put forward, along with a series of models of hybrid computational intelligence architectures. Moreover, relevant examples of hybrid system architectures from the literature are presented, underlining their main advantages and disadvantages.

Findings – Owing to the lack of a common framework, it often remains difficult to compare the various HISs conceptually and to evaluate their performance comparatively. Different applications in different areas are needed to establish the best combinations of models designed using grey, fuzzy, neural network, genetic, evolutionary and other methods. All these systems, however, are knowledge dependent: knowledge is the main flow used in every part of every kind of system. Grey knowledge is an important part of real systems, and the study of its properties using the methods and techniques of grey systems theory remains an important research direction.

Originality/value – The paper discusses the differences among the three types of knowledge and how they, together with grey systems theory, can be used in different hybrid architectures.
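Since the paper treats grey hybrid architectures conceptually, the following is only a minimal sketch of one representative grey-modelling building block, the GM(1,1) prediction model, a standard component of grey systems theory rather than the authors' own system. The example series and forecast horizon are illustrative assumptions.

```python
# A minimal GM(1,1) grey prediction sketch: fit the model to a short positive
# series and forecast a few steps ahead. Illustrative only, not the paper's code.
import numpy as np

def gm11(x0: np.ndarray, horizon: int = 3) -> np.ndarray:
    """Fit GM(1,1) to a short positive series x0 and forecast `horizon` steps."""
    n = len(x0)
    x1 = np.cumsum(x0)                           # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background (mean) sequence
    B = np.column_stack((-z1, np.ones(n - 1)))   # design matrix of the grey equation
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # develop coefficient a, grey input b
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # solution of the whitened equation
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])  # inverse AGO back to the original scale
    x0_hat[0] = x0[0]
    return x0_hat[n:]                            # out-of-sample forecasts only

# Usage: forecasting a small, partially known ("grey") series three steps ahead
series = np.array([112.0, 119.5, 128.7, 137.2, 147.1])
print(gm11(series, horizon=3))
```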

