The importance of motivation and emotion for explaining human cognition

2017 ◽  
Vol 40 ◽  
Author(s):  
C. Dominik Güss ◽  
Dietrich Dörner

Lake et al. discuss building blocks of human intelligence that are quite different from those of artificial intelligence. We argue that a theory of human intelligence has to incorporate human motivations and emotions. The interaction of motivation, emotion, and cognition is the real strength of human intelligence and distinguishes it from artificial intelligence.

Author(s):  
Shane T. Mueller

Modern artificial intelligence (AI) image classifiers have made impressive advances in recent years, but their performance often appears strange or violates expectations of users. This suggests that humans engage in cognitive anthropomorphism: expecting AI to have the same nature as human intelligence. This mismatch presents an obstacle to appropriate human-AI interaction. To delineate this mismatch, I examine known properties of human classification, in comparison with image classifier systems. Based on this examination, I offer three strategies for system design that can address the mismatch between human and AI classification: explainable AI, novel methods for training users, and new algorithms that match human cognition.


2021 ◽  
Vol 4 ◽  
Author(s):  
J. E. (Hans). Korteling ◽  
G. C. van de Boer-Visschedijk ◽  
R. A. M. Blankendaal ◽  
R. C. Boonekamp ◽  
A. R. Eikelboom

AI is one of the most debated subjects of today, and there seems to be little common understanding concerning the differences and similarities between human intelligence and artificial intelligence. Discussions on many relevant topics, such as trustworthiness, explainability, and ethics, are characterized by implicit anthropocentric and anthropomorphic conceptions and, for instance, by the pursuit of human-like intelligence as the gold standard for artificial intelligence. In order to provide more agreement and to substantiate possible future research objectives, this paper presents three notions on the similarities and differences between human and artificial intelligence: 1) the fundamental constraints of human (and artificial) intelligence, 2) human intelligence as one of many possible forms of general intelligence, and 3) the high potential impact of multiple (integrated) forms of narrow-hybrid AI applications. For the time being, AI systems will have fundamentally different cognitive qualities and abilities than biological systems. For this reason, a most prominent issue is how we can use (and “collaborate” with) these systems as effectively as possible. For what tasks and under what conditions is it safe to leave decisions to AI, and when is human judgment required? How can we capitalize on the specific strengths of human and artificial intelligence? How can we deploy AI systems effectively to complement and compensate for the inherent constraints of human cognition (and vice versa)? Should we pursue the development of AI “partners” with human(-level) intelligence, or should we focus more on supplementing human limitations? In order to answer these questions, humans working with AI systems in the workplace or in policy making have to develop an adequate mental model of the underlying ‘psychological’ mechanisms of AI. So, in order to obtain well-functioning human-AI systems, Intelligence Awareness in humans should be addressed more vigorously. For this purpose, a first framework for educational content is proposed.


2020 ◽  
pp. 48-69
Author(s):  
Daeyeol Lee

Compared to the human brain, current artificial intelligence technology is limited in that its goals are determined by human developers and users. Similarly, despite their superficial similarities, modern-day computers and human brains have many differences. Building blocks of the human brain that are functionally equivalent to transistors, the functional units of digital computers, have not been identified, and we do not know whether hardware and software are separable in the human brain. This chapter uses Mars rovers as a case study to illustrate the autonomy of intelligent robots, because machines dependent on human intelligence are not genuinely intelligent.


2021 ◽  
pp. 209660832110526
Author(s):  
Zhenhua Zhou

Current theories of artificial intelligence (AI) generally exclude human emotions. The idea at the core of such theories could be described as ‘cognition is computing’; that is, that human psychological and symbolic representations, and the operations involved in structuring such representations in human thinking and intelligence, can be converted by AI into a series of cognitive symbolic representations and calculations in a manner that simulates human intelligence. However, after decades of development, the cognitive computing doctrine has encountered many difficulties, both in theory and in practice; in particular, it is far from approaching real human intelligence. Emotion runs through the whole process of real human intelligence: the core and motivation of rational thinking are derived from the emotions, and intelligence without emotion neither exists nor is meaningful. For example, the idea of ‘hot thinking’, proposed by the philosopher of cognitive science Paul Thagard, addresses the mechanism of the emotions in human cognition and the thinking process. Through an analysis from the perspectives of cognitive neurology, cognitive psychology and social anthropology, this article notes that there may be a type of thinking that could be called ‘emotional thinking’, which includes complex emotional factors in the cognitive process. The term refers to the capacity to process information and to use emotions to integrate information in order to arrive at the right decisions and reactions. This type of thinking can be divided into two kinds according to the role it plays in cognition, positive and negative emotional thinking, reflecting opposite forces in the cognitive process. In the future, ‘emotional computing’ will significantly accelerate the development of AI consciousness, whose foundation is emotional computing based on the simulation of emotional thinking.


2021 ◽  
pp. jim-2021-001870
Author(s):  
Therese L Canares ◽  
Weiyao Wang ◽  
Mathias Unberath ◽  
James H Clark

AI relates broadly to the science of developing computer systems to imitate human intelligence, thus allowing for the automation of tasks that would otherwise necessitate human cognition. Such technology has increasingly demonstrated the capacity to outperform humans at functions relating to image recognition. Given the current lack of cost-effective confirmatory testing, accurate diagnosis of ear disease and its subsequent management depend on visual detection of characteristic findings during otoscopic examination. The aim of this manuscript is to perform a comprehensive literature review and evaluate the potential application of artificial intelligence for the diagnosis of ear disease from otoscopic image analysis.


2019 ◽  
Vol 24 (2) ◽  
pp. 241-258
Author(s):  
Paul Dumouchel

The idea of artificial intelligence implies the existence of a form of intelligence that is “natural,” or at least not artificial. The problem is that intelligence, whether “natural” or “artificial,” is not well defined: it is hard to say what, exactly, is or constitutes intelligence. This difficulty makes it impossible to measure human intelligence against artificial intelligence on a unique scale. It does not, however, prevent us from comparing them; rather, it changes the sense and meaning of such comparisons. Comparing artificial intelligence with human intelligence could allow us to understand both forms better. This paper thus aims to compare and distinguish these two forms of intelligence, focusing on three issues: forms of embodiment, autonomy and judgment. Doing so, I argue, should enable us to have a better view of the promises and limitations of present-day artificial intelligence, along with its benefits and dangers and the place we should make for it in our culture and society.


Author(s):  
J Ph Guillet ◽  
E Pilon ◽  
Y Shimizu ◽  
M S Zidi

This article is the first of a series of three presenting an alternative method of computing the one-loop scalar integrals. This novel method enjoys a couple of interesting features as compared with the method closely following ’t Hooft and Veltman adopted previously. It directly proceeds in terms of the quantities driving algebraic reduction methods. It applies to the three-point functions and, in a similar way, to the four-point functions. It also extends to complex masses without much complication. Lastly, it extends to kinematics more general than that of physical processes, e.g., collider processes, relevant at one loop. This last feature may be useful when considering the application of this method beyond one loop, using generalized one-loop integrals as building blocks.

