“Mind full or mindful” – can mere cognitive busyness lead to compliance similar to an emotional seesaw?

2019 ◽  
Vol 14 (3-4) ◽  
pp. 117-132
Author(s):  
Magdalena C. Kaczmarek ◽  
Melanie C. Steffens
2004 ◽  
Author(s):  
Natalie Hall ◽  
Richard Crisp ◽  
Ifat Rauf ◽  
Terry Eskenazi-Behar ◽  
Russell Hutter ◽  
...  

Author(s):  
Eliezer Yudkowsky

By far the greatest danger of Artificial Intelligence (AI) is that people conclude too early that they understand it. Of course, this problem is not limited to the field of AI. Jacques Monod wrote: ‘A curious aspect of the theory of evolution is that everybody thinks he understands it’ (Monod, 1974). The problem seems to be unusually acute in Artificial Intelligence. The field of AI has a reputation for making huge promises and then failing to deliver on them. Most observers conclude that AI is hard, as indeed it is. But the embarrassment does not stem from the difficulty. It is difficult to build a star from hydrogen, but the field of stellar astronomy does not have a terrible reputation for promising to build stars and then failing. The critical inference is not that AI is hard, but that, for some reason, it is very easy for people to think they know far more about AI than they actually do.

It may be tempting to ignore Artificial Intelligence because, of all the global risks discussed in this book, AI is probably hardest to discuss. We cannot consult actuarial statistics to assign small annual probabilities of catastrophe, as with asteroid strikes. We cannot use calculations from a precise, precisely confirmed model to rule out events or place infinitesimal upper bounds on their probability, as with proposed physics disasters. But this makes AI catastrophes more worrisome, not less.

The effect of many cognitive biases has been found to increase with time pressure, cognitive busyness, or sparse information. Which is to say that the more difficult the analytic challenge, the more important it is to avoid or reduce bias. Therefore I strongly recommend reading my other chapter (Chapter 5) in this book before continuing with this chapter.

When something is universal enough in our everyday lives, we take it for granted to the point of forgetting it exists. Imagine a complex biological adaptation with ten necessary parts. If each of the ten genes is independently at 50% frequency in the gene pool – each gene possessed by only half the organisms in that species – then, on average, only 1 in 1024 organisms will possess the full, functioning adaptation.
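A quick check of the closing arithmetic (a minimal sketch, not part of the original chapter): ten independent genes, each present in half the population, co-occur in roughly one organism in 1024, since 0.5 raised to the tenth power is 1/1024.

    # Sanity check of the 1-in-1024 figure: ten independent genes,
    # each at 50% frequency, all present in the same organism.
    n_genes = 10
    p_each = 0.5
    p_full_adaptation = p_each ** n_genes
    print(p_full_adaptation)                           # 0.0009765625
    print(f"about 1 in {1 / p_full_adaptation:.0f}")   # about 1 in 1024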


2016 ◽  
Vol 17 (2) ◽  
pp. 155-179
Author(s):  
Nicholas A. Palomares ◽  
Katherine Grasso ◽  
Siyue Li ◽  
Na Li

Abstract: An experiment examined goal understanding and how perceivers’ suspiciousness was associated with the accuracy, valence, and certainty of their inferences about a pursuer’s goal. In initial interactions, one dyad member was randomly assigned as the pursuer, and the other was the perceiver. The congruency of the perceiver’s and the pursuer’s conversation goals (i.e., discordant, identical, or concordant) and the perceiver’s cognitive busyness were manipulated. Results confirmed that accuracy decreased as perceivers’ suspiciousness increased only for not-busy perceivers in the goal-discord condition because perceivers’ inferences were negatively valenced. Results also supported the hypotheses that certainty decreased as perceivers’ suspiciousness increased only for not-busy perceivers in the goal-discord condition and that certainty increased as perceivers’ suspiciousness increased both for not-busy perceivers in the identical-goal condition and for busy perceivers in the goal-discord condition.
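The pattern reported above is a moderation effect: the suspiciousness–outcome slope differs across the busyness and goal-congruency conditions. The sketch below shows one generic way such a three-way interaction could be tested; it is not the authors’ analysis code, and the column names, synthetic data, and OLS model are all assumptions for illustration.

    # Hypothetical sketch: a three-way interaction test
    # (suspiciousness x busyness x goal congruency) on inference accuracy.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 300
    df = pd.DataFrame({
        "suspiciousness": rng.normal(size=n),                    # continuous predictor
        "busy": rng.choice(["busy", "not_busy"], size=n),        # manipulated busyness
        "congruency": rng.choice(
            ["discordant", "identical", "concordant"], size=n),  # manipulated congruency
    })
    df["accuracy"] = rng.normal(size=n)                          # placeholder outcome

    # Does the suspiciousness-accuracy slope depend on busyness and congruency?
    model = smf.ols("accuracy ~ suspiciousness * C(busy) * C(congruency)", data=df).fit()
    print(model.summary())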


2006 ◽  
Vol 146 (2) ◽  
pp. 253-256 ◽  
Author(s):  
Russell R. C. Hutter ◽  
Richard J. Crisp

2004 ◽  
Vol 144 (5) ◽  
pp. 541-544 ◽  
Author(s):  
Richard J. Crisp ◽  
Natasha Perks ◽  
Catriona H. Stone ◽  
Matthew J. Farr
