Artificial Intelligence as a positive and negative factor in global risk

Author(s):  
Eliezer Yudkowsky

By far the greatest danger of Artificial Intelligence (AI) is that people conclude too early that they understand it. Of course, this problem is not limited to the field of AI. Jacques Monod wrote: ‘A curious aspect of the theory of evolution is that everybody thinks he understands it’ (Monod, 1974). The problem seems to be unusually acute in Artificial Intelligence. The field of AI has a reputation for making huge promises and then failing to deliver on them. Most observers conclude that AI is hard, as indeed it is. But the embarrassment does not stem from the difficulty. It is difficult to build a star from hydrogen, but the field of stellar astronomy does not have a terrible reputation for promising to build stars and then failing. The critical inference is not that AI is hard, but that, for some reason, it is very easy for people to think they know far more about AI than they actually do.

It may be tempting to ignore Artificial Intelligence because, of all the global risks discussed in this book, AI is probably hardest to discuss. We cannot consult actuarial statistics to assign small annual probabilities of catastrophe, as with asteroid strikes. We cannot use calculations from a precise, precisely confirmed model to rule out events or place infinitesimal upper bounds on their probability, as with proposed physics disasters. But this makes AI catastrophes more worrisome, not less. The effect of many cognitive biases has been found to increase with time pressure, cognitive busyness, or sparse information. Which is to say that the more difficult the analytic challenge, the more important it is to avoid or reduce bias. Therefore I strongly recommend reading my other chapter (Chapter 5) in this book before continuing with this chapter.

When something is universal enough in our everyday lives, we take it for granted to the point of forgetting it exists. Imagine a complex biological adaptation with ten necessary parts. If each of the ten genes is independently at 50% frequency in the gene pool – each gene possessed by only half the organisms in that species – then, on average, only 1 in 1024 organisms will possess the full, functioning adaptation.
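The arithmetic behind that closing example is simple independence: each gene is present with probability 1/2, so all ten co-occur with probability (1/2)^10 = 1/1024. A one-line check in Python:

```python
p = 0.5 ** 10      # ten independent genes, each at 50% frequency
print(p, 1 / p)    # 0.0009765625 1024.0 -> about 1 in 1024 organisms
```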

Author(s):  
Barbara Jane Holland

Today, companies can no longer assume that the past will be a good predictor of the future; those that fail to prepare for radically new possibilities may face sudden irrelevance. Strategic Foresight, also known as futures thinking, provides a structured approach that enables people and organizations to overcome cognitive biases and think more realistically about change. It helps uncover blind spots, imagine radically different futures, and improve decision-making. Climate disruption, artificial intelligence, and automation are rapidly transforming the landscape for business and sustainability. This chapter reviews the Strategic Foresight tools used to embed long-term strategic thinking and planning in policy and strategy.


2022, Vol. 8
Author(s):
Antonio Oliva, Simone Grassi, Giuseppe Vetrugno, Riccardo Rossi, Gabriele Della Morte, ...

Artificial intelligence needs big data to develop reliable predictions. Storing and processing health data is therefore essential for the new diagnostic and decisional technologies but, at the same time, poses a risk to privacy protection. This scoping review aims to outline the medico-legal and ethical implications of the main artificial intelligence applications in healthcare, with particular attention to the issues raised by the COVID-19 era. Starting from a summary of the United States (US) and European Union (EU) regulatory frameworks, the current medico-legal and ethical challenges are discussed in general terms before turning to the specific issues of informed consent, medical malpractice and cognitive biases, automation and interconnectedness of medical devices, diagnostic algorithms, and telemedicine. We underline that educating physicians in the management of this new kind of clinical risk can improve compliance with regulations and reduce legal risks for healthcare professionals and institutions.


2021
Author(s):
Nicolas Scharowski, Florian Brühlmann

In explainable artificial intelligence (XAI) research, explainability is widely regarded as crucial for user trust in artificial intelligence (AI). However, empirical investigations of this assumption are still lacking. There are several proposals for how explainability might be achieved, and there is ongoing debate about what effects explanations actually have on humans. In our work in progress, we explored two post-hoc explanation approaches presented in natural language as a means of explainable AI. We examined the effects of human-centered explanations on trust behavior in a financial decision-making experiment (N = 387), captured by weight of advice (WOA). Results showed that AI explanations led to higher trust behavior when participants were advised to decrease an initial price estimate; however, explanations had no effect when the AI recommended increasing the initial price estimate. We argue that these differences in trust behavior may be caused by cognitive biases and heuristics that people carry into decision-making processes involving AI. So far, XAI has focused primarily on biased data and prejudice arising from incorrect assumptions in the machine learning process. The potential biases and heuristics that humans exhibit when presented with an explanation by an AI have received little attention in the current XAI debate. Both researchers and practitioners need to be aware of such human biases and heuristics in order to develop truly human-centered AI.
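For readers unfamiliar with the measure: weight of advice is conventionally computed as the fraction of the distance from the judge's initial estimate to the advisor's recommendation that the final estimate actually covers. A minimal sketch of that standard formula (the study's exact operationalization may differ):

```python
def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """WOA = (final - initial) / (advice - initial).
    0 means the advice was ignored; 1 means it was fully adopted."""
    if advice == initial:
        raise ValueError("WOA is undefined when advice equals the initial estimate")
    return (final - initial) / (advice - initial)

# Hypothetical trial: initial estimate 100, AI advises 80, participant revises to 90.
print(weight_of_advice(initial=100, advice=80, final=90))  # 0.5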


2019, Vol. 2(1), pp. 94-107
Author(s):
John C. Simon, Stella Y.E. Pattipeilohy

Abstract: The evolutionary worldview has consolidated its position since the discovery of various ancient human sites, and it continues to develop with advances in genetic engineering, protein research, and artificial intelligence (AI) technology. Initially, religious communities felt most under attack from the theory of evolution, because it seemed to strip the Bible of its truth about the creation of the world and of humanity. Later, appreciative statements by the Catholic Church about the theory of evolution and the big bang theory, including Pierre Teilhard de Chardin's attempt to explain the evolution of human consciousness toward the cosmic Christ, showed religion moving toward acceptance of a diversity of worldviews: religion, culture, and science. This evolutionary development of the world raises ethical questions about what religion can contribute. One contribution is awareness of the shadow. This awareness derives from religion, which teaches that human beings, created by God and unique though they are, remain mortal and finite creatures. The shadow is liberating, freeing humanity from being shackled to matter: the fragile human being longs to evolve toward Christ as the image of the perfect human.


2016, Vol. 113(16), pp. 4530-4535
Author(s):
Bill Thompson, Simon Kirby, Kenny Smith

A central debate in cognitive science concerns the nativist hypothesis, the proposal that universal features of behavior reflect a biologically determined cognitive substrate: For example, linguistic nativism proposes a domain-specific faculty of language that strongly constrains which languages can be learned. An evolutionary stance appears to provide support for linguistic nativism, because coordinated constraints on variation may facilitate communication and therefore be adaptive. However, language, like many other human behaviors, is underpinned by social learning and cultural transmission alongside biological evolution. We set out two models of these interactions, which show how culture can facilitate rapid biological adaptation yet rule out strong nativization. The amplifying effects of culture can allow weak cognitive biases to have significant population-level consequences, radically increasing the evolvability of weak, defeasible inductive biases; however, the emergence of a strong cultural universal does not imply, nor lead to, nor require, strong innate constraints. From this we must conclude, on evolutionary grounds, that the strong nativist hypothesis for language is false. More generally, because such reciprocal interactions between cultural and biological evolution are not limited to language, nativist explanations for many behaviors should be reconsidered: Evolutionary reasoning shows how we can have cognitively driven behavioral universals and yet extreme plasticity at the level of the individual—if, and only if, we account for the human capacity to transmit knowledge culturally. Wherever culture is involved, weak cognitive biases rather than strong innate constraints should be the default assumption.
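The amplification dynamic can be illustrated with a toy simulation (a sketch of the general idea, not the authors' actual models): two competing linguistic variants, learners who adopt the maximum a posteriori (MAP) variant under a weak 51% prior for variant A, and noisy cultural transmission between generations. All parameter values below are illustrative assumptions.

```python
import random

def iterated_learning(prior_a=0.51, noise=0.3, n_data=10,
                      generations=50, chains=2000):
    """Fraction of transmission chains using variant A after `generations`
    rounds of teacher -> learner cultural transmission."""
    ends_with_a = 0
    for _ in range(chains):
        # Found each chain by sampling directly from the weak prior.
        lang = 'A' if random.random() < prior_a else 'B'
        for _ in range(generations):
            # Noisy production: each utterance flips variant with prob. `noise`.
            produced_a = sum(
                (lang == 'A') != (random.random() < noise)
                for _ in range(n_data)
            )
            # Likelihood of the observed utterances under each variant.
            like_a = (1 - noise) ** produced_a * noise ** (n_data - produced_a)
            like_b = noise ** produced_a * (1 - noise) ** (n_data - produced_a)
            # MAP learner: the weak prior decides when the data are ambiguous.
            lang = 'A' if prior_a * like_a > (1 - prior_a) * like_b else 'B'
        ends_with_a += (lang == 'A')
    return ends_with_a / chains

print(iterated_learning())  # ~0.75: a 51% bias amplified at the population level
```

The amplification comes from the MAP rule letting the weak prior break otherwise ambiguous data; learners who instead sampled from the posterior would leave the population frequency tracking the 51% prior.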


2009, Vol. 32(6), pp. 538-539
Author(s):
Yorick Wilks

Abstract: Nothing in McKay & Dennett's (M&D's) target article deals with the issue of how the adaptivity, or some other aspect, of beliefs might become a biological adaptation; which is to say, how the functions discussed might be coded in the brain in such a way that their development was also coded in gametes or other cells of sexual transmission.


2020, Vol. 36(1), pp. 44-55
Author(s):
Kyoungwon Seo, Hokyoung Ryu, Jieun Kim

Abstract: The limitations of self-report questionnaires and interview methods for assessing individual differences in human cognitive biases have become increasingly apparent. These limitations have led to renewed interest in alternative modes of assessment covering both implicit and explicit aspects of human behavior (i.e., dual-process theory). Acknowledging this, the present study developed and validated a serious game, “Don Quixote,” for measuring two specific cognitive biases: the bandwagon effect and optimism bias. We hypothesized that the implicit and explicit game data would mirror the results of an interview and a questionnaire, respectively. To examine this hypothesis, participants (n = 135) played the serious game and completed a questionnaire and an interview in random order for cross-validation. The results demonstrated that the implicit game data (e.g., response time) were highly correlated with the interview data, whereas the explicit game data (e.g., game score) were comparable to the questionnaire results. These findings suggest that the serious game, and the intrinsic nature of its game mechanics (i.e., evoking instant responses under time pressure), matter for the further development of cognitive bias measures in both academia and practice.


2014, Vol. 120(1), pp. 160-171
Author(s):
Rebecca D. Minehart, Jenny Rudolph, May C. M. Pian-Smith, Daniel B. Raemer

Abstract: Background: Although feedback conversations are an essential component of learning, three challenges make them difficult: the fear that direct task feedback will harm the relationship with the learner; faculty cognitive biases that interfere with eliciting the frames driving trainees’ performances; and time pressure. Decades of research on developmental conversations suggest solutions to these challenges: hold generous inferences about learners, subject one’s own thinking to test by making it public, and inquire directly about learners’ cognitive frames. Methods: The authors conducted a randomized, controlled trial to determine whether a 1-h educational intervention for anesthesia faculty improved feedback quality in a simulated case. The primary outcome was an analysis of the feedback conversation between faculty and a simulated resident (an actor), using averages of six elements of a Behaviorally Anchored Rating Scale and an objective structured assessment of feedback. Seventy-one Harvard faculty anesthesiologists from five academic hospitals participated. Results: The intervention group scored higher when all ratings were averaged. Scores for individual elements showed that the intervention group performed better at maintaining a psychologically safe environment (4.3 ± 1.21 vs. 3.8 ± 1.16; P = 0.001) and at identifying and exploring performance gaps (4.1 ± 1.38 vs. 3.7 ± 1.34; P = 0.048), and more frequently emphasized the professionalism error of failing to call for help over the clinical topic of anaphylaxis (66 vs. 41%; P = 0.008). Conclusions: The quality of faculty feedback to a simulated resident improved in the intervention group in a number of areas after a 1-h educational intervention, and this short intervention allowed a group of faculty to overcome enough discomfort in addressing a professionalism lapse to discuss it directly.

