AI and Ethics
Latest Publications


TOTAL DOCUMENTS

117
(FIVE YEARS 117)

H-INDEX

1
(FIVE YEARS 1)

Published by Springer Science and Business Media LLC

2730-5953, 2730-5961

AI and Ethics ◽  
2022 ◽  
Author(s):  
Edmund Ofosu Benefo ◽  
Aubrey Tingler ◽  
Madeline White ◽  
Joel Cover ◽  
Liana Torres ◽  
...  

AI and Ethics ◽  
2022 ◽  
Author(s):  
Ekaterina Svetlova

Abstract

The paper suggests that AI ethics should pay attention to morally relevant systemic effects of AI use. It draws the attention of ethicists and practitioners to systemic risks that have been neglected so far in professional AI-related codes of conduct, industrial standards and ethical discussions more generally. The paper uses the financial industry as an example to ask: how can AI-enhanced systemic risks be ethically accounted for? Which specific issues does AI use raise for ethics that takes systemic effects into account? The paper (1) relates the literature about AI ethics to the ethics of systemic risks to clarify the moral relevance of AI use with respect to the imposition of systemic risks, (2) proposes a theoretical framework based on the ethics of complexity and (3) applies this framework to discuss implications for AI ethics concerned with AI-enhanced systemic risks.


AI and Ethics ◽  
2021 ◽  
Author(s):  
Erik Persson ◽  
Maria Hedlund

Abstract

Artificial intelligence (AI) is becoming increasingly influential in most people’s lives. This raises many philosophical questions. One is what responsibility we have as individuals to guide the development of AI in a desirable direction. More specifically, how should this responsibility be distributed among individuals and between individuals and other actors? We investigate this question from the perspectives of five principles of distribution that dominate the discussion about responsibility in connection with climate change: effectiveness, equality, desert, need, and ability. Since much is already written about these distributions in that context, we believe much can be gained if we can make use of this discussion also in connection with AI. Our most important findings are: (1) Different principles give different answers depending on how they are interpreted but, in many cases, different interpretations and different principles agree and even strengthen each other. If, for instance, ‘equality-based distribution’ is interpreted in a consequentialist sense, effectiveness, and through it, ability, will play important roles in the actual distributions, but so will an equal distribution as such, since we foresee that an increased responsibility of underrepresented groups will make the risks and benefits of AI more equally distributed. The corresponding reasoning is true for need-based distribution. (2) If we acknowledge that someone has a certain responsibility, we also have to acknowledge a corresponding degree of influence for that someone over the matter in question. (3) Independently of which distribution principle we prefer, ability cannot be dismissed. Ability is not fixed, however, and if one of the other distributions is morally required, we are also morally required to increase the ability of those less able to take on the required responsibility.


AI and Ethics ◽  
2021 ◽  
Author(s):  
Christoph Trattner ◽  
Dietmar Jannach ◽  
Enrico Motta ◽  
Irene Costera Meijer ◽  
Nicholas Diakopoulos ◽  
...  

Abstract

The last two decades have witnessed major disruptions to the traditional media industry as a result of technological breakthroughs. New opportunities and challenges continue to arise, most recently as a result of the rapid advance and adoption of artificial intelligence technologies. On the one hand, the broad adoption of these technologies may introduce new opportunities for diversifying media offerings, fighting disinformation, and advancing data-driven journalism. On the other hand, techniques such as algorithmic content selection and user personalization can introduce risks and societal threats. The challenge of balancing these opportunities and benefits against their potential for negative impacts underscores the need for more research in responsible media technology. In this paper, we first describe the major challenges—both for societies and the media industry—that come with modern media technology. We then outline various places in the media production and dissemination chain, where research gaps exist, where better technical approaches are needed, and where technology must be designed in a way that can effectively support responsible editorial processes and principles. We argue that a comprehensive approach to research in responsible media technology, leveraging an interdisciplinary approach and a close cooperation between the media industry and academic institutions, is urgently needed.


AI and Ethics ◽  
2021 ◽  
Author(s):  
Pamela Ugwudike

Abstract

Organisations, governments, institutions and others across several jurisdictions are using AI systems for a constellation of high-stakes decisions that pose implications for human rights and civil liberties. But a fast-growing multidisciplinary scholarship on AI bias is currently documenting problems such as the discriminatory labelling and surveillance of historically marginalised subgroups. One of the ways in which AI systems generate such downstream outcomes is through their inputs. This paper focuses on a specific input dynamic, which is the theoretical foundation that informs the design, operation, and outputs of such systems. The paper uses the set of technologies known as predictive policing algorithms as a case example to illustrate how theoretical assumptions can pose adverse social consequences and should therefore be systematically evaluated during audits if the objective is to detect unknown risks, avoid AI harms, and build ethical systems. In its analysis of these issues, the paper adds a new dimension to the literature on AI ethics and audits by investigating algorithmic impact in the context of underpinning theory. In doing so, the paper provides insights that can usefully inform auditing policy and practice instituted by relevant stakeholders including the developers, vendors, and procurers of AI systems as well as independent auditors.


AI and Ethics ◽  
2021 ◽  
Author(s):  
Thilo Hagendorff

Abstract

This paper critically discusses blind spots in AI ethics. AI ethics discourses typically stick to a certain set of topics concerning principles evolving mainly around explainability, fairness, and privacy. All these principles can be framed in a way that enables their operationalization by technical means. However, this requires stripping down the multidimensionality of very complex social constructs to something that is idealized, measurable, and calculable. Consequently, rather conservative, mainstream notions of the mentioned principles are conveyed, whereas critical research, alternative perspectives, and non-ideal approaches are largely neglected. Hence, one part of the paper considers specific blind spots regarding the very topics AI ethics focusses on. The other part, then, critically discusses blind spots regarding topics that hold significant ethical importance but are hardly or not discussed at all in AI ethics. Here, the paper focuses on negative externalities of AI systems, exemplarily discussing the casualization of clickwork, AI ethics’ strict anthropocentrism, and AI’s environmental impact. Ultimately, the paper is intended to be a critical commentary on the ongoing development of the field of AI ethics. It makes the case for a rediscovery of the strength of ethics in the AI field, namely its sensitivity to suffering and harms that are caused by and connected to AI technologies.


AI and Ethics ◽  
2021 ◽  
Author(s):  
Christian Herzog

Abstract

This Comment explores the implications of a lack of tools that facilitate an explicable utilization of epistemologically richer, but also more involved, white-box approaches in AI. In contrast, advances in explainable artificial intelligence for black-box approaches have led to the availability of semi-standardized and attractive toolchains that offer a seemingly competitive edge over inherently interpretable white-box models in terms of intelligibility towards users. Consequently, there is a need for research on efficient tools for rendering interpretable white-box approaches in AI explicable to facilitate responsible use.


AI and Ethics ◽  
2021 ◽  
Author(s):  
Inga Strümke ◽  
Marija Slavkovik ◽  
Vince Istvan Madai

Abstract

While the demand for ethical artificial intelligence (AI) systems increases, the number of unethical uses of AI accelerates, even though there is no shortage of ethical guidelines. We argue that a possible underlying cause for this is that AI developers face a social dilemma in AI development ethics, preventing the widespread adoption of ethical best practices. We define the social dilemma for AI development and describe why the current crisis in AI development ethics cannot be solved without relieving AI developers of their social dilemma. We argue that AI development must be professionalised to overcome the social dilemma, and discuss how medicine can be used as a template in this process.


AI and Ethics ◽  
2021 ◽  
Author(s):  
Henrik Skaug Sætra ◽  
Mark Coeckelbergh ◽  
John Danaher

Abstract

Assume that a researcher uncovers a major problem with how social media are currently used. What sort of challenges arise when they must subsequently decide whether or not to use social media to create awareness about this problem? This situation routinely occurs as ethicists navigate choices regarding how to effect change and potentially remedy the problems they uncover. In this article, challenges related to new technologies and what is often referred to as ‘Big Tech’ are emphasized. We present what we refer to as the AI ethicist’s dilemma, which emerges when an AI ethicist has to consider how their own success in communicating an identified problem is associated with a high risk of decreasing the chances of successfully remedying the problem. We examine how the ethicist can resolve the dilemma and arrive at ethically sound paths of action by combining three ethical theories: virtue ethics, deontological ethics and consequentialist ethics. The article concludes that attempting to change the world of Big Tech using only the technologies and tools it provides will at times prove to be counter-productive, and that political and other more disruptive avenues of action should also be seriously considered by ethicists who want to effect long-term change. Both strategies have advantages and disadvantages, and a combination might be desirable to achieve these advantages and mitigate some of the disadvantages discussed.

