Coarse ethics: how to ethically assess explainable artificial intelligence

AI and Ethics ◽  
2021 ◽  
Author(s):  
Takashi Izumo ◽  
Yueh-Hsuan Weng

Abstract The integration of artificial intelligence (AI) into human society mandates that its decision-making processes be explicable to users, as exemplified in Asimov’s Three Laws of Robotics. Such human interpretability calls for explainable AI (XAI), of which this paper cites various models. However, the relationship between computable accuracy and human interpretability can be a trade-off, raising questions about the conditions under which, and the degree to which, AI prediction accuracy may be sacrificed to enable user interpretability. The extant research has focussed on technical issues, but it is also desirable to apply a branch of ethics to deal with the trade-off problem. This scholarly domain is labelled coarse ethics in this study, which discusses two issues vis-à-vis AI prediction as a type of evaluation. First, which formal conditions would allow trade-offs? The study posits two minimal requisites: adequately high coverage and order-preservation. The second issue concerns conditions that could justify the trade-off between computable accuracy and human interpretability, to which the study suggests two justification methods: impracticability and adjustment of perspective from machine-computable to human-interpretable. This study contributes to the future regulation of autonomous systems by connecting ethics to the formal assessment of the adequacy of AI rationales.
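The two formal requisites named in the abstract can be made concrete. The following is a minimal sketch in Python, assuming a hypothetical 0–100 fine-grained score scale and hypothetical grade thresholds (none of these values appear in the paper): a coarse evaluation satisfies the requisites if it can grade (nearly) all fine scores and never reverses their order.

```python
# Illustrative sketch of the two formal conditions named above:
# "adequately high coverage" and "order-preservation" of a coarse
# evaluation. Names, scale, and thresholds are hypothetical.

def coarsen(score, bins):
    """Map a fine-grained score to a coarse grade index via bins,
    e.g. bins = [60, 80] maps scores to grades 0 ('fail'),
    1 ('pass'), 2 ('distinction'). Returns None if out of range."""
    if not (0 <= score <= 100):
        return None  # the coarse scheme cannot grade this score
    grade = 0
    for b in bins:
        if score >= b:
            grade += 1
    return grade

def coverage(scores, bins):
    """Fraction of fine-grained scores the coarse scheme can grade."""
    graded = [s for s in scores if coarsen(s, bins) is not None]
    return len(graded) / len(scores)

def order_preserving(scores, bins):
    """True if a higher fine score never receives a lower coarse grade."""
    pairs = sorted((s, coarsen(s, bins)) for s in scores
                   if coarsen(s, bins) is not None)
    grades = [g for _, g in pairs]
    return all(a <= b for a, b in zip(grades, grades[1:]))

scores = [45, 72, 88, 95, 61]
bins = [60, 80]
print(coverage(scores, bins))          # 1.0  -> adequately high coverage
print(order_preserving(scores, bins))  # True -> trade-off formally allowed
```

On this reading, order-preservation is what keeps a coarse explanation faithful: a user who sees one grade rank above another can trust that the underlying fine-grained evaluations point the same way.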

2020 ◽  
pp. 1-19
Author(s):  
SAM DESIERE ◽  
LUDO STRUYVEN

Abstract Artificial intelligence (AI) is increasingly popular in the public sector to improve the cost-efficiency of service delivery. One example is AI-based profiling models in public employment services (PES), which predict a jobseeker’s probability of finding work and are used to segment jobseekers into groups. Profiling models hold the potential to improve identification of jobseekers at risk of becoming long-term unemployed, but may also induce discrimination. Using a recently developed AI-based profiling model of the Flemish PES, we assess to what extent AI-based profiling ‘discriminates’ against jobseekers of foreign origin compared to traditional rule-based profiling approaches. At a maximum level of accuracy, jobseekers of foreign origin who ultimately find a job are 2.6 times more likely to be misclassified as ‘high-risk’ jobseekers. We argue that it is critical that policymakers and caseworkers understand the inherent trade-offs of profiling models and consider their limitations when integrating these models in daily operations. We develop a graphical tool to visualize the accuracy-equity trade-off to facilitate policy discussions.
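The disparity reported above is, in effect, a ratio of group-conditional misclassification rates: among jobseekers who ultimately found work, how often is each group wrongly flagged ‘high-risk’? A minimal sketch of that calculation follows, with hypothetical column names, data, and cut-off; the Flemish PES model itself is not reproduced here.

```python
# Sketch of the misclassification ratio behind the "2.6 times" figure:
# among jobseekers who ultimately found a job, compare how often each
# origin group is wrongly flagged 'high-risk'. All data are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "risk_score": [0.9, 0.2, 0.7, 0.8, 0.3, 0.6, 0.85, 0.1, 0.95, 0.9],
    "found_job":  [1,   1,   1,   1,   1,   1,   1,    1,   0,    0],
    "foreign":    [1,   0,   1,   1,   0,   0,   1,    0,   1,    0],
})
threshold = 0.5  # hypothetical cut-off for the 'high-risk' label

employed = df[df["found_job"] == 1]        # flags here are false positives
flagged = employed["risk_score"] >= threshold
fpr = flagged.groupby(employed["foreign"]).mean()
print(fpr)               # false-positive rate per origin group
print(fpr[1] / fpr[0])   # disparity ratio analogous to the reported 2.6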


2020 ◽  
Vol 9 (11) ◽  
pp. 207
Author(s):  
João Reis ◽  
Paula Santo ◽  
Nuno Melão

In the last six decades, many advances have been made in the field of artificial intelligence (AI). Bearing in mind that AI technologies influence societies and political systems differently, it can be useful to understand what the common issues are among similar states in the European Union and how these political systems can collaborate with each other to seek synergies, find opportunities, and save costs. We therefore carried out exploratory research among states of the European Union that are similar in terms of scientific research in areas of AI technologies, namely Portugal, Greece, Austria, Belgium, and Sweden. A key finding of this research is that intelligent decision support systems (IDSS) are essential for the political decision-making process, since politics normally deals with complex and multifaceted decisions that involve trade-offs between different stakeholders. As public health becomes increasingly relevant in the European Union, IDSSs can provide relevant contributions, as they may allow critical information to be shared and assist in the political decision-making process, especially in response to crisis situations.


2012 ◽  
Vol 11 (3) ◽  
pp. 118-126 ◽  
Author(s):  
Olive Emil Wetter ◽  
Jürgen Wegge ◽  
Klaus Jonas ◽  
Klaus-Helmut Schmidt

In most work contexts, several performance goals coexist, and conflicts and trade-offs between them can occur. Our paper is the first to contrast a dual goal for speed and accuracy with a single goal for speed on the same task. The Sternberg paradigm (Experiment 1, n = 57) and the d2 test (Experiment 2, n = 19) were used as performance tasks. In both experiments, speed and error measures revealed that dual as well as single goals increase performance by enhancing memory scanning. However, the single speed goal triggered a speed-accuracy trade-off, favoring speed over accuracy, whereas this was not the case with the dual goal. In difficult trials, dual goals slowed down scanning processes again so that errors could be prevented. This new finding is particularly relevant for security domains, where both aspects have to be managed simultaneously.


2019 ◽  
Author(s):  
Anna Katharina Spälti ◽  
Mark John Brandt ◽  
Marcel Zeelenberg

People often have to make trade-offs. We study three types of trade-offs: 1) "secular trade-offs", where no moral or sacred values are at stake; 2) "taboo trade-offs", where sacred values are pitted against financial gain; and 3) "tragic trade-offs", where sacred values are pitted against other sacred values. Previous research (Critcher et al., 2011; Tetlock et al., 2000) demonstrated that tragic and taboo trade-offs are evaluated not only by their outcomes but also by the time it took to make the choice. We investigate two outstanding questions: 1) whether the effect of decision time differs for evaluations of decisions compared to decision makers, and 2) whether moral contexts are unique in their ability to influence character evaluations through decision-process information. In two experiments (total N = 1434), we find that decision time affects character evaluations, but not evaluations of the decision itself. There were no significant differences between tragic trade-offs and secular trade-offs, suggesting that the structure of the decision may be more important in evaluations than the moral context. Additionally, the magnitude of the effect suggests that decision time may be of less practical use than expected. We thus urge a closer examination of the processes underlying decision time and its perception.


2019 ◽  
Author(s):  
Kasper Van Mens ◽  
Joran Lokkerbol ◽  
Richard Janssen ◽  
Robert de Lange ◽  
Bea Tiemens

BACKGROUND It remains a challenge to predict which treatment will work for which patient in mental healthcare. OBJECTIVE In this study we compare machine learning algorithms that predict, during treatment, which patients will not benefit from brief mental health treatment, and we present trade-offs that must be considered before an algorithm can be used in clinical practice. METHODS Using an anonymized dataset containing routine outcome monitoring data from a mental healthcare organization in the Netherlands (n = 2,655), we applied three machine learning algorithms to predict treatment outcome. The algorithms were internally validated with cross-validation on a training sample (n = 1,860) and externally validated on an unseen test sample (n = 795). RESULTS The performance of the three algorithms did not differ significantly on the test set. With a default classification cut-off at 0.5 predicted probability, the extreme gradient boosting algorithm showed the highest positive predictive value (ppv) of 0.71 (0.61–0.77), with a sensitivity of 0.35 (0.29–0.41) and an area under the curve of 0.78. A trade-off can be made between ppv and sensitivity by choosing different cut-off probabilities. With a cut-off at 0.63, the ppv increased to 0.87 and the sensitivity dropped to 0.17. With a cut-off at 0.38, the ppv decreased to 0.61 and the sensitivity increased to 0.57. CONCLUSIONS Machine learning can be used to predict treatment outcomes based on routine monitoring data. This allows practitioners to choose their own trade-off between being selective and more certain versus inclusive and less certain.
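The cut-off trade-off described in the results can be reproduced mechanically: sweeping the classification threshold over predicted probabilities trades positive predictive value against sensitivity. A minimal sketch with synthetic scores follows; it uses neither the study's data nor its models.

```python
# Sketch of the ppv-sensitivity trade-off across cut-off probabilities.
# Labels and predicted probabilities are synthetic, not the study's.
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                         # 1 = poor outcome
y_prob = np.clip(y_true * 0.3 + rng.random(1000) * 0.7, 0, 1)  # noisy scores

for cutoff in (0.38, 0.50, 0.63):
    y_pred = y_prob >= cutoff
    tp = np.sum(y_pred & (y_true == 1))
    fp = np.sum(y_pred & (y_true == 0))
    fn = np.sum(~y_pred & (y_true == 1))
    ppv = tp / (tp + fp)    # precision: how certain a flag is
    sens = tp / (tp + fn)   # sensitivity: how inclusive the flag is
    print(f"cut-off {cutoff:.2f}: ppv={ppv:.2f}, sensitivity={sens:.2f}")
# Raising the cut-off makes predictions more selective (higher ppv, lower
# sensitivity); lowering it does the opposite, as in the reported results.
```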


Author(s):  
Steven Bernstein

This commentary discusses three challenges for the promising and ambitious research agenda outlined in the volume. First, it interrogates the volume’s attempts to differentiate political communities of legitimation, which may vary widely in composition, power, and relevance across institutions and geographies, with important implications not only for who matters, but also for what gets legitimated, and with what consequences. Second, it examines avenues to overcome possible trade-offs from gains in empirical tractability achieved through the volume’s focus on actor beliefs and strategies. One such trade-off is less attention to evolving norms and cultural factors that may underpin actors’ expectations about what legitimacy requires. Third, it addresses the challenge of theory building that can link legitimacy sources, (de)legitimation practices, audiences, and consequences of legitimacy across different types of institutions.


2020 ◽  
Vol 31 (3) ◽  
pp. 347-363
Author(s):  
Peter Waring ◽  
Azad Bali ◽  
Chris Vas

The race to develop and implement autonomous systems and artificial intelligence has challenged the responsiveness of governments in many areas, and none more so than in the domain of labour market policy. This article draws upon a large survey of Singaporean employees and managers (N = 332) conducted in 2019 to examine the extent and ways in which artificial intelligence and autonomous technologies have begun impacting workplaces in Singapore. Our conclusions reiterate the need for government intervention to facilitate broad-based participation in the productivity benefits of fourth industrial revolution technologies, while also offering re-designed social safety nets and employment protections. JEL Codes: J88, K31, O38, M53


2019 ◽  
Vol 10 (1) ◽  
Author(s):  
J. Raymond Geis ◽  
Adrian Brady ◽  
Carol C. Wu ◽  
Jack Spencer ◽  
Erik Ranschaert ◽  
...  

Abstract This is a condensed summary of an international multisociety statement on the ethics of artificial intelligence (AI) in radiology produced by the ACR, European Society of Radiology, RSNA, Society for Imaging Informatics in Medicine, European Society of Medical Imaging Informatics, Canadian Association of Radiologists, and American Association of Physicists in Medicine. AI has great potential to increase efficiency and accuracy throughout radiology, but it also carries inherent pitfalls and biases. Widespread use of AI-based intelligent and autonomous systems in radiology can increase the risk of systemic errors with high consequence, and it highlights complex ethical and societal issues. Currently, there is little experience using AI for patient care in diverse clinical settings. Extensive research is needed to understand how best to deploy AI in clinical practice. This statement highlights our consensus that ethical use of AI in radiology should promote well-being, minimize harm, and ensure that the benefits and harms are distributed among stakeholders in a just manner. We believe AI should respect human rights and freedoms, including dignity and privacy. It should be designed for maximum transparency and dependability. Ultimate responsibility and accountability for AI remain with its human designers and operators for the foreseeable future. The radiology community should start now to develop codes of ethics and practice for AI that promote any use that helps patients and the common good, and that block the use of radiology data and algorithms for financial gain without those two attributes.

