Ethical Management of Artificial Intelligence

2021 ◽  
Vol 13 (4) ◽  
pp. 1974
Author(s):  
Alfred Benedikt Brendel ◽  
Milad Mirbabaie ◽  
Tim-Benjamin Lembcke ◽  
Lennart Hofeditz

With artificial intelligence (AI) becoming increasingly capable of handling highly complex tasks, many AI-enabled products and services are granted greater autonomy in decision-making, potentially exercising diverse influences on individuals and societies. While organizations and researchers have repeatedly shown the blessings of AI for humanity, serious AI-related abuses and incidents have raised pressing ethical concerns. Consequently, researchers from different disciplines widely acknowledge the need for an ethical discourse on AI. However, managers, eager to spark ethical considerations throughout their organizations, receive limited support on how they may establish and manage AI ethics. Although research is concerned with technology-related ethics in organizations, research on the ethical management of AI is limited. Against this background, the goals of this article are to provide a starting point for research on AI-related ethical concerns and to highlight future research opportunities. We propose an ethical management of AI (EMMA) framework, focusing on three perspectives: managerial decision making, ethical considerations, and macro- as well as micro-environmental dimensions. With the EMMA framework, we provide researchers with a starting point to address the management of the ethical aspects of AI.

AI & Society ◽  
2021 ◽  
Author(s):  
Milad Mirbabaie ◽  
Lennart Hofeditz ◽  
Nicholas R. J. Frick ◽  
Stefan Stieglitz

Abstract: The application of artificial intelligence (AI) in hospitals yields many advantages but also confronts healthcare with ethical questions and challenges. While various disciplines have conducted specific research on the ethical considerations of AI in hospitals, the literature still lacks a holistic overview. Through a systematic discourse approach, complemented by expert interviews with healthcare specialists, we identified the status quo of interdisciplinary academic research on the ethical considerations and dimensions of AI in hospitals. By constructing a citation network for the ethical discourse, we found 15 fundamental manuscripts and extracted actionable principles and their relationships. We provide an agenda to guide academia, framed under the principles of biomedical ethics. We provide an understanding of the current ethical discourse on AI in clinical environments, identify where further research is pressingly needed, and discuss additional research questions that should be addressed. We also guide practitioners to acknowledge AI-related benefits in hospitals and to understand the related ethical concerns.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Sascha Raithel ◽  
Alexander Mafael ◽  
Stefan J. Hock

Purpose: There is limited insight concerning a firm’s remedy choice after a product recall. This study proposes that failure severity and brand equity are key antecedents of remedy choice and provides empirical evidence for a non-linear relationship between pre-recall brand equity and the firm’s remedy offer that is moderated by severity.
Design/methodology/approach: This study uses field data on 159 product recalls from 60 brands between January 2008 and February 2020 to estimate a probit model of the effects of failure severity and pre-recall brand equity on remedy choice.
Findings: Firms with higher and lower pre-recall brand equity are less likely to offer a full (vs partial) remedy than firms with medium pre-recall brand equity. Failure severity moderates this relationship positively, i.e. firms with low and high brand equity are more sensitive to failure severity and then select a full instead of a partial remedy.
Research limitations/implications: This research reconciles contradictory arguments and research results about failure severity as an antecedent of remedy choice by introducing brand equity as another key variable. Future research could examine the psychological process of managerial decision-making through experiments.
Practical implications: This study raises awareness of the importance of remedy choice during product-harm crises and can help firms and regulators better understand managerial decision-making mechanisms (and fallacies) during a product-harm crisis.
Originality/value: This study theoretically and empirically advances the limited literature on managerial decision-making in response to product recalls.
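The probit design described above can be sketched on synthetic data. This is a minimal, stdlib-only illustration, assuming invented variable names, a made-up data-generating process, and a quadratic brand-equity term to capture the non-linear effect; it is not the authors' estimation code or data.

```python
import math
import random

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

random.seed(7)
n = 159  # number of recalls in the study
rows, y = [], []
for _ in range(n):
    brand_equity = random.gauss(0.0, 1.0)  # standardized pre-recall brand equity
    severity = random.random()             # failure severity in [0, 1]
    # U-shaped latent propensity: low/high-equity firms offer full remedy less often
    latent = -0.8 * brand_equity ** 2 + 1.5 * severity + random.gauss(0.0, 1.0)
    rows.append([1.0, brand_equity, brand_equity ** 2, severity])
    y.append(1 if latent > 0 else 0)       # 1 = full remedy, 0 = partial

def nll(beta):
    """Negative log-likelihood of the probit model."""
    total = 0.0
    for x, yi in zip(rows, y):
        p = min(max(norm_cdf(sum(b * v for b, v in zip(beta, x))), 1e-9), 1 - 1e-9)
        total -= math.log(p) if yi else math.log(1.0 - p)
    return total

# Fit by gradient ascent on the (concave) probit log-likelihood.
beta, lr = [0.0, 0.0, 0.0, 0.0], 0.1
for _ in range(500):
    grad = [0.0] * 4
    for x, yi in zip(rows, y):
        z = sum(b * v for b, v in zip(beta, x))
        p = min(max(norm_cdf(z), 1e-9), 1 - 1e-9)
        w = norm_pdf(z) * (yi - p) / (p * (1.0 - p))  # probit score weight
        for j in range(4):
            grad[j] += w * x[j]
    beta = [b + lr * g / n for b, g in zip(beta, grad)]

print("coefficients [const, equity, equity^2, severity]:", beta)
```

A negative fitted coefficient on the squared equity term is what the paper's inverted-U pattern would look like in this parameterization.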


2020 ◽  
pp. 277-288
Author(s):  
Abílio Azevedo ◽  
Patricia Anjos Azevedo

The use and possibilities of artificial intelligence (AI) have assumed great importance in recent years. This has drawn greater attention to the topic in various fields, especially in health and law, both in its potential for daily application and in learning methods. The aim of this article was to present a brief perspective on the challenges and effects of AI use in teaching and application in the health and law domains. To better define the theme, a qualitative bibliographic review was performed. Applications of artificial intelligence have great potential in clinical and legal use, facilitating the tasks of those involved by helping to reduce workload, avoid errors, and support decision-making. However, despite these benefits and new opportunities, there are still obstacles regarding regulation and ethical concerns, as well as some reluctance from professionals in their adoption and formal application. In addition, there is also the need to properly implement these technologies in learning to keep up with the changes and the new challenges currently posed, so there is a path that still needs to be followed.


Author(s):  
Peter Gál ◽  
Miloš Mrva ◽  
Matej Meško

The aim of the paper is to demonstrate the impact of heuristics, biases and psychological traps on decision making. Heuristics are unconscious routines people use to cope with the complexity inherent in most decision situations. They serve as mental shortcuts that help people simplify and structure the information encountered in the world. These heuristics can be quite useful in some situations, while in others they can lead to severe and systematic errors, based on significant deviations from the fundamental principles of statistics, probability and sound judgment. This paper focuses on illustrating the existence of the anchoring, availability, and representativeness heuristics, originally described by Tversky & Kahneman in the early 1970s. The anchoring heuristic is a tendency to use the initial information, estimate or perception (even a random or irrelevant number) as a starting point; people tend to give disproportionate weight to the initial information they receive. The availability heuristic explains why highly imaginable or vivid information has a disproportionate effect on people’s decisions. The representativeness heuristic causes people to rely on highly specific scenarios, ignore base rates, draw conclusions from small samples and neglect scope. These phenomena are illustrated and supported by evidence from a statistical analysis of questionnaire results.


Author(s):  
Viktor Elliot ◽  
Mari Paananen ◽  
Miroslaw Staron

We propose an exercise with the purpose of providing a basic understanding of key concepts within AI and extending the understanding of AI beyond mathematics. The exercise allows participants to carry out analysis of accounting data using visualization tools, as well as to develop their own machine learning algorithms that can mimic their decisions. Finally, we also problematize the use of AI in decision-making, addressing aspects such as biases in data and ethical concerns.
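The "mimic their decisions" step can be sketched as follows. This is a minimal illustration under invented assumptions: a hypothetical human approval rule over two accounting ratios and a one-split decision stump that tries to reproduce it; the authors' actual exercise materials may differ.

```python
import random

random.seed(1)
data = []
for _ in range(200):
    current_ratio = random.uniform(0.5, 3.0)  # liquidity
    debt_ratio = random.uniform(0.0, 1.0)     # leverage
    # The "human" credit decision the algorithm should learn to mimic:
    approve = 1 if (current_ratio > 1.5 and debt_ratio < 0.6) else 0
    data.append(([current_ratio, debt_ratio], approve))

def fit_stump(data):
    """Exhaustively search one feature, threshold, and side-labeling
    that minimize misclassifications of the human decisions."""
    best = None
    for j in range(2):                      # which ratio to split on
        for x, _ in data:                   # candidate thresholds from the data
            t = x[j]
            for left_label in (0, 1):       # label for the x[j] <= t side
                errors = sum(
                    1 for xi, yi in data
                    if (left_label if xi[j] <= t else 1 - left_label) != yi
                )
                if best is None or errors < best[0]:
                    best = (errors, j, t, left_label)
    return best

errors, feature, threshold, left_label = fit_stump(data)
accuracy = 1 - errors / len(data)
print(f"split on feature {feature} at {threshold:.2f}, accuracy {accuracy:.2f}")
```

A depth-one stump cannot capture the conjunction of both ratios, so it only partially reproduces the rule; that gap is exactly the kind of limitation the exercise can use to open the discussion of model bias.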


Author(s):  
Simona Hašková ◽  
Jakub Horák
Qualitative and quantitative approaches to multicriteria evaluation and managerial decision-making often ignore the specifics of the role of the human factor. This article summarizes management methods that reflect not only numerical inputs but also data of a qualitative nature, while considering their applicability in the tourism sector. Some of them can be classified as artificial intelligence methods. The focus is on the fuzzy approach at the theoretical and application levels. The fuzzy approach is used to evaluate the degree of travel and tourism competitiveness of selected European and Asian countries based on subjective rankings from the viewpoint of travelling persons. The results indicate that among the countries under review, China ranks as a highly competitive country in travel & tourism. Conditionally competitive countries in terms of travel & tourism are the Czech Republic, Pakistan, Russia, and Turkey.
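The fuzzy evaluation idea can be sketched as follows: travellers give linguistic ratings, each rating maps to a triangular fuzzy number, and the ratings are aggregated and defuzzified into a crisp competitiveness score. The terms, scale, and thresholds below are illustrative assumptions, not the authors' actual model.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c];
    a == b or b == c gives a shoulder-shaped set."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Linguistic terms as triangular fuzzy numbers on a 0..10 scale (assumed).
TERMS = {"low": (0, 0, 5), "medium": (2.5, 5, 7.5), "high": (5, 10, 10)}

def centroid(term):
    """Defuzzify one fuzzy set by its centroid, approximated on a grid."""
    a, b, c = term
    xs = [i * 0.1 for i in range(101)]
    num = sum(x * tri(x, a, b, c) for x in xs)
    den = sum(tri(x, a, b, c) for x in xs)
    return num / den

def competitiveness(ratings):
    """Aggregate several travellers' linguistic ratings into a crisp score."""
    return sum(centroid(TERMS[r]) for r in ratings) / len(ratings)

score = competitiveness(["high", "high", "high", "medium"])
label = ("highly competitive" if score >= 6
         else "conditionally competitive" if score >= 4
         else "not competitive")
print(round(score, 2), label)
```

With mostly "high" ratings the crisp score lands well above the assumed cut-off of 6, mirroring how a country such as China could come out as highly competitive while mixed ratings would yield a conditional classification.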


2021 ◽  
Vol 59 (2) ◽  
pp. 123-140
Author(s):  
Milena Galetin ◽  
Anica Milovanović

Considering the possibility of using artificial intelligence in resolving legal disputes is becoming increasingly popular. The authors examine whether software analysis can be applied to resolve a specific issue in investment disputes, namely determining the law applicable to the substance of the dispute, and highlight the application of artificial intelligence in the area of law, especially in predicting the outcome of a dispute. The starting point is a sample of 50 arbitral awards and the results of previously conducted research. It has been confirmed that software analysis can be useful in decision-making processes, but not to the extent that arbitrators could rely on it exclusively. On the other hand, developing an algorithm that would predict the applicable law for different legal issues would require a much larger sample. We also believe that the existence of different legal and factual circumstances in each case, as well as the personality of the arbitrator and arbitral/judicial discretion, are limitations on the application of artificial intelligence in this area.


2021 ◽  
Vol 2 (1) ◽  
pp. 106-113
Author(s):  
Ádám Auer

Abstract (translated from Hungarian). The study's opening axiom is the safe application of artificial intelligence. One aspect of safe application is legal certainty: a legal environment in which a framework is available for settling the legal questions that arise. The study examines those civil law problems of the artificial intelligence application developed in the Semmelweis University project that may arise in everyday use. The study concludes that the artificial intelligence examined qualifies as a copyrighted work and that several forms of protection are applicable. De lege ferenda, the legal regulation needs to be supplemented because the copyrighted work changes continuously. A reference point must be fixed to serve as the starting point for liability. Summary. The starting point of the study is the safe use of artificial intelligence. Legal certainty is one aspect of safe usage: the legal environment in which a framework is available that can be used to resolve legal issues. The paper examines the civil law issues that may arise in the everyday use of the artificial intelligence application developed within the Semmelweis University project. The study first focuses on the legal protection of the Semmelweis AI, including whether this protection is currently international, regional (European Union) or national, and which of these is the optimal choice. The study also reflects on the European Union's preparatory legislative work in this regard. Our hypothesis is that the majority of civil law areas concerning AI can be regulated within a contractual framework. The AI software developed by the project is a forward-looking medical and practical solution. To use a legal analogy, we can imagine its operation as if we had a solution that could analyse all the national court decisions in each legal field and provide an answer to the legal problem at hand, while simultaneously learning and applying the latest court decisions every day.
For this AI solution, the diagnostic process must be carefully examined in order to identify the legal problems. I believe that the optimal solution is to classify this AI application as ‘software’, because this allows property rights to be acquired in their entirety and opens the door to clarifying the associated usage rights and copyright by contract. An important civil law question arises in relation to parallel copyright protection, when the individual personal contributions (creative development work) to the software cannot be separated. It is therefore important to record the process and to separate the individual contributions protected by copyright. The role of the AI in the diagnostic process is also open to question. If the software itself cannot make a decision, but only provides a framework and platform, then it will not be entitled to co-ownership of the rights relating to the diagnostic images (just as a camera does not own the rights to the pictures taken with it). However, if the algorithm is part of the decision-making (e.g. the selection of negative diagnoses), it could possibly be a co-owner of the rights, because it was involved in the development of the classification. All this should be clearly stated in the licence agreement, based on full knowledge of the decision-making process. De lege ferenda, however, the legal regime needs to be supplemented in view of the constant changes to the copyrighted work and the changing authors. A specific point needs to be established in the legislation to serve as a reference point for liability and legal protection. The issues under consideration are matters of legal certainty, since without precise legal protection both the creator of the artificial intelligence and the persons who may be held liable in the event of a malfunction of such systems may face uncertainty.


2019 ◽  
Vol 162 (1) ◽  
pp. 38-39
Author(s):  
Alexandra M. Arambula ◽  
Andrés M. Bur

Artificial intelligence (AI) is quickly expanding within the sphere of health care, offering the potential to enhance the efficiency of care delivery, diminish costs, and reduce diagnostic and therapeutic errors. As the field of otolaryngology also explores use of AI technology in patient care, a number of ethical questions warrant attention prior to widespread implementation of AI. This commentary poses many of these ethical questions for consideration by the otolaryngologist specifically, using the 4 pillars of medical ethics—autonomy, beneficence, nonmaleficence, and justice—as a framework and advocating both for the assistive role of AI in health care and for the shared decision-making, empathic approach to patient care.

