The social dilemma in artificial intelligence development and why we have to solve it

AI and Ethics ◽  
2021 ◽  
Author(s):  
Inga Strümke ◽  
Marija Slavkovik ◽  
Vince Istvan Madai

Abstract While the demand for ethical artificial intelligence (AI) systems grows, the number of unethical uses of AI keeps rising, even though there is no shortage of ethical guidelines. We argue that a possible underlying cause for this is that AI developers face a social dilemma in AI development ethics, preventing the widespread adoption of ethical best practices. We define the social dilemma for AI development and describe why the current crisis in AI development ethics cannot be solved without relieving AI developers of their social dilemma. We argue that AI development must be professionalised to overcome the social dilemma, and discuss how medicine can be used as a template in this process.

2019 ◽  
Vol 5 (2) ◽  
pp. 62-68
Author(s):  
Monika Kaczmarek-Śliwińska

Abstract Organisational communication in the age of artificial intelligence (AI) development presents an opportunity but also a challenge. Thanks to the changing media space and the development of technology, it is possible to automate work and to increase the effectiveness, the power of influence, and the distribution of content. However, these capabilities also raise questions about risks, ranging from those associated with the social sphere (reducing the number of jobs) to the ethics of communication and the ethics of the public relations profession (is this still PR ethics, or AI ethics in PR?). The article outlines the opportunities and concerns resulting from the use of AI in organisational communication.


Author(s):  
Andrea Renda

This chapter assesses Europe’s efforts in developing a full-fledged strategy on the human and ethical implications of artificial intelligence (AI). The strong focus on ethics in the European Union’s AI strategy should be seen in the context of an overall strategy that aims at protecting citizens and civil society from abuses of digital technology but also as part of a competitiveness-oriented strategy aimed at raising the standards for access to Europe’s wealthy Single Market. In this context, one of the most peculiar steps in the European Union’s strategy was the creation of an independent High-Level Expert Group on AI (AI HLEG), accompanied by the launch of an AI Alliance, which quickly attracted several hundred participants. The AI HLEG, a multistakeholder group including fifty-two experts, was tasked with the definition of Ethics Guidelines as well as with the formulation of “Policy and Investment Recommendations.” With the advice of the AI HLEG, the European Commission put forward ethical guidelines for Trustworthy AI—which are now paving the way for a comprehensive, risk-based policy framework.


Author(s):  
Christian List

Abstract The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and a moral status? I will tentatively defend the (increasingly widely held) view that, under certain conditions, artificial intelligent systems, like corporate entities, might qualify as responsible moral agents and as holders of limited rights and legal personhood. I will further suggest that regulators should permit the use of autonomous artificial systems in high-stakes settings only if they are engineered to function as moral (not just intentional) agents and/or there is some liability-transfer arrangement in place. I will finally raise the possibility that if artificial systems ever became phenomenally conscious, there might be a case for extending a stronger moral status to them, but argue that, as of now, this remains very hypothetical.


Vaccines ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 173
Author(s):  
Davide Gori ◽  
Chiara Reno ◽  
Daniel Remondini ◽  
Francesco Durazzi ◽  
Maria Pia Fantini

While the SARS-CoV-2 pandemic continues to strike and collect its death toll throughout the globe, there were, as of 31 January 2021, 292 vaccine candidates worldwide, of which 70 were in clinical testing. Several vaccines have been approved worldwide, and three have so far been authorized for use in the EU. Vaccination can, in fact, be an efficient way to mitigate the devastating effects of the pandemic, offer protection to vulnerable strata of the population (e.g., the elderly), and reduce the social and economic burden of the current crisis. Nevertheless, a question remains open: once vaccines are available to the public, will vaccination campaigns be effective in reaching all strata and a sufficient number of people to guarantee herd immunity? In other words: after we have it, will we be able to use it? In line with the trends in vaccine hesitancy of recent years, distrust of COVID-19 vaccination is growing. In addition, the online context and the competition between pro- and anti-vaxxers show a trend in which anti-vaccination movements tend to capture the attention of those who are hesitant. Given this context and its possible causes, what interventions or strategies could be effective in reducing COVID-19 vaccine hesitancy? Will social media trend analysis help in solving this complex issue? Are there prospects for an efficient implementation of COVID-19 vaccination coverage, as well as for all the other vaccinations?


Author(s):  
Tripat Gill

Abstract The ethical dilemma (ED) of whether autonomous vehicles (AVs) should protect the passengers or pedestrians when harm is unavoidable has been widely researched and debated. Several behavioral scientists have sought public opinion on this issue, based on the premise that EDs are critical to resolve for AV adoption. However, many scholars and industry participants have downplayed the importance of these edge cases. Policy makers also advocate a focus on higher-level ethical principles rather than on a specific solution to EDs. But conspicuously absent from this debate is the view of the consumers or potential adopters, who will be instrumental to the success of AVs. The current research investigated this issue both from a theoretical standpoint and through empirical research. The literature on innovation adoption and risk perception suggests that EDs will be heavily weighted by potential adopters of AVs. Two studies conducted with a broad sample of consumers verified this assertion. The results from these studies showed that people associated EDs with the highest risk and considered EDs the most important issue to address as compared to the other technical, legal and ethical issues facing AVs. As such, EDs need to be addressed to ensure robustness in the design of AVs and to assure consumers of the safety of this promising technology. Some preliminary evidence is provided about interventions to resolve the social dilemma in EDs and about the ethical preferences of prospective early adopters of AVs.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ozan Isler ◽  
Simon Gächter ◽  
A. John Maule ◽  
Chris Starmer

Abstract Humans frequently cooperate for collective benefit, even in one-shot social dilemmas. This provides a challenge for theories of cooperation. Two views focus on intuitions but offer conflicting explanations. The Social Heuristics Hypothesis argues that people with selfish preferences rely on cooperative intuitions and predicts that deliberation reduces cooperation. The Self-Control Account emphasizes control over selfish intuitions and is consistent with strong reciprocity—a preference for conditional cooperation in one-shot dilemmas. Here, we reconcile these explanations with each other as well as with strong reciprocity. We study one-shot cooperation across two main dilemma contexts, provision and maintenance, and show that cooperation is higher in provision than maintenance. Using time-limit manipulations, we experimentally study the cognitive processes underlying this robust result. Supporting the Self-Control Account, people are intuitively selfish in maintenance, with deliberation increasing cooperation. In contrast, consistent with the Social Heuristics Hypothesis, deliberation tends to increase the likelihood of free-riding in provision. Contextual differences between maintenance and provision are observed across additional measures: reaction-time patterns of cooperation; social dilemma understanding; perceptions of social appropriateness; beliefs about others’ cooperation; and cooperation preferences. Despite these dilemma-specific asymmetries, we show that preferences, coupled with beliefs, successfully predict the high levels of cooperation in both maintenance and provision dilemmas. While the effects of intuitions are context-dependent and small, the widespread preference for strong reciprocity is the primary driver of one-shot cooperation. We advance the Contextualised Strong Reciprocity account as a unifying framework and consider its implications for research and policy.
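For readers unfamiliar with the provision/maintenance distinction, a minimal payoff sketch may help; the notation below is illustrative and not drawn from the paper itself. Assume n players, an endowment e, and an efficiency factor m with 1 < m < n. In a provision dilemma, player i holds the endowment privately and chooses a contribution c_i to the public good:

\pi_i^{\mathrm{prov}} = e - c_i + \frac{m}{n} \sum_{j=1}^{n} c_j

In a maintenance dilemma, the endowment starts in the common pool and player i chooses a withdrawal w_i:

\pi_i^{\mathrm{maint}} = w_i + \frac{m}{n} \sum_{j=1}^{n} (e - w_j)

Substituting c_i = e - w_i shows the two games are payoff-equivalent under these assumptions; only the default framing (giving versus taking) differs, which is why the finding that cooperation is higher in provision than in maintenance speaks to framing and intuition rather than to incentives.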


2020 ◽  
Vol 23 (3) ◽  
pp. 230-236
Author(s):  
Kristof Van Assche ◽  
Martijn Duineveld ◽  
S. Jeff Birchall ◽  
Leith Deacon ◽  
Raoul Beunen ◽  
...  

Quarantine measures and the crises triggering them are never neutral, in the sense that a return to the past is impossible. These measures also signal other things, such as systemic risks and weaknesses. A period of quarantine is, moreover, a phenomenon in and of itself. What happens after quarantine is thus shaped both by the state of the social-ecological system preceding quarantine and by what happened during quarantine. The selectivities introduced during quarantine span discursive, institutional and material realms. Old discourses can return with a new meaning. Social and economic relations can reappear seemingly unchanged, be more visibly altered, or be dismantled. Ideologies, however, to be understood here as master discourses, read problems and solutions in their own way and do not necessarily come closer to each other or disappear. All this offers food for thought regarding the possibilities and limits of resilience and transition. We argue that the current COVID-19 pandemic casts doubt on the generic applicability of theories of resilience and transition, yet also sheds new light on the value of both. We propose the concept of reinvention to describe what is happening and what could happen in a more coordinated fashion. We argue that the current crisis reveals mechanisms in systems dynamics that point at the existence of multiple pathways after dramatic system shocks. Some shocks and their system-specific responses (such as a particular kind of quarantine) are more amenable to resilience strategies afterwards, while others require a path of radical transition. Both might also be needed: a rather stark transition now might ensure future resilience. While the outline of the system after transition is not clear, some desirable features are clear, as are the risks and damages of the current system. Also clear is the argument for transitional governance: a temporary governance system (beyond quarantine) which can enable the construction of new long-term perspectives in governance and new governance tools meant to reduce the chances of a crisis like this one recurring.

