Emotional thinking as the foundation of consciousness in artificial intelligence

2021 ◽  
pp. 209660832110526
Author(s):  
Zhenhua Zhou

Current theories of artificial intelligence (AI) generally exclude human emotions. The idea at the core of such theories could be described as ‘cognition is computing’: that the psychological and symbolic representations involved in human thinking and intelligence, and the operations that structure them, can be converted by AI into a series of cognitive symbolic representations and calculations in a manner that simulates human intelligence. However, after decades of development, the cognitive computing doctrine has encountered many difficulties, both in theory and in practice; in particular, it is far from approaching real human intelligence. Emotion runs through the whole process of real human intelligence; the core and motivation of rational thinking derive from the emotions. Intelligence without emotion neither exists nor is meaningful. For example, the idea of ‘hot thinking’ proposed by Paul Thagard, a philosopher of cognitive science, addresses the mechanisms by which the emotions operate in human cognition and the thinking process. Through an analysis from the perspectives of cognitive neurology, cognitive psychology and social anthropology, this article notes that there may be a type of thinking that could be called ‘emotional thinking’, which incorporates complex emotional factors into cognitive processes. The term refers to the capacity to process information and to use emotions to integrate information in order to arrive at the right decisions and reactions. Such thinking can be divided into two kinds according to its role in cognition, positive and negative emotional thinking, which exert opposite forces in the cognitive process. In the future, ‘emotional computing’ will bring an important acceleration in the development of AI consciousness. The foundation of AI consciousness is emotional computing based on the simulation of emotional thinking.

In this chapter, the author presents a brief history of artificial intelligence (AI) and cognitive computing (CC). To many people outside the technology industry, the two terms are interchangeable: both imply that computers now perform job functions that a human used to perform. The two topics are closely aligned and not mutually exclusive, but each has distinctive purposes and applications, shaped by its practical, industrial, and commercial appeal as well as by its particular challenges within the academic, engineering, and research communities. To summarise, AI empowers computer systems to be smart (and perhaps smarter than humans), whereas CC comprises individual technologies that perform specific tasks to facilitate and augment human intelligence. When the benefits of both AI and CC are combined within a single system, operating from the same sets of data and the same real-time variables, they have the potential to enrich humans, society, and our world.


Author(s):  
Shane T. Mueller

Modern artificial intelligence (AI) image classifiers have made impressive advances in recent years, but their performance often appears strange or violates the expectations of users. This suggests that humans engage in cognitive anthropomorphism: expecting AI to have the same nature as human intelligence. This mismatch presents an obstacle to appropriate human-AI interaction. To delineate this mismatch, I examine known properties of human classification in comparison with image classifier systems. Based on this examination, I offer three strategies for system design that can address the mismatch between human and AI classification: explainable AI, novel methods for training users, and new algorithms that match human cognition.


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Yanyan Dong ◽  
Jie Hou ◽  
Ning Zhang ◽  
Maocong Zhang

Artificial intelligence (AI) is essentially the simulation of human intelligence. Today’s AI can only simulate, replace, extend, or expand part of human intelligence. In the future, the research and development of cutting-edge technologies such as brain-computer interfaces (BCI), together with the development of the human brain, will eventually usher in a strong AI era, in which AI can simulate and replace human imagination, emotion, intuition, potential, tacit knowledge, and other kinds of personalized intelligence. Breakthroughs in algorithms, represented by cognitive computing, promote the continuous penetration of AI into fields such as education, commerce, and medical treatment, building up an AI service space. As to the human concern of who controls whom between humankind and intelligent machines, the answer is that AI can only become a service provider for human beings, demonstrating the value rationality of adhering to ethics.


1997 ◽  
Vol 20 (4) ◽  
pp. 758-763
Author(s):  
Dana H. Ballard ◽  
Mary M. Hayhoe ◽  
Polly K. Pook ◽  
Rajesh P. N. Rao

The majority of commentators agree that the time to focus on embodiment has arrived and that the disembodied approach that was taken from the birth of artificial intelligence is unlikely to provide a satisfactory account of the special features of human intelligence. In our Response, we begin by addressing the general comments and criticisms directed at the emerging enterprise of deictic and embodied cognition. In subsequent sections we examine the topics that constitute the core of the commentaries: embodiment mechanisms, dorsal and ventral visual processing, eye movements, and learning.


2021 ◽  

This publication contains studies conducted by authors from several European countries that have cooperated with each other for many years in the field of human rights. The fruits of this cooperation are numerous conferences and publications in various languages. What is most important, however, is the exchange of experiences and opinions on the understanding and application of individual human rights from the perspective of societies living in the European cultural circle while functioning in different historical and geographical conditions. This publication is an attempt to look at human rights from the perspective of the dynamic progress connected with the development of ICT tools. It is not only about the digitization or automation of human work, but above all about creating a virtual society in which artificial intelligence plays an important role. A significant part of human activity, especially interpersonal communication, takes place with the use of social media. Moreover, individual contacts with public authorities are gradually being replaced by intelligent computer programs. In the United States, there is already an IT system which adjudicates in minor misdemeanor cases. Modern research in the IT sector aims to build programs that support human thinking through recommendation algorithms or automatically learned solutions, and even aims at autonomous decision-making. This last level of shifting responsibility for decisions to artificial intelligence is assessed extremely positively by many people, but it also raises many fears. A virtual society built with the use of artificial intelligence changes the perception of many human rights, such as the right to a good name, the right to freely express one’s opinion, the right to property, and the right to state or national identity. Hence this publication contains various opinions on artificial intelligence, its role in the functioning of society, and its importance for the life of the individual. The added value of this publication is that it contains balanced views and assessments by authors from various European countries and academic communities conducting research on digital reality. This publication will certainly allow readers to form their own opinion on human rights in the context of artificial intelligence.


2016 ◽  
Vol 699 ◽  
pp. 104-109
Author(s):  
Simion Haragâş ◽  
Iuliu Negrean ◽  
Dumitru Pop ◽  
Ovidiu Buiga ◽  
Florina Rusu

At the final step of the injection process (of a plastic product with cavities), when the part is about to be ejected, an adhesion phenomenon occurs between the core and the plastic part. This mold adhesion effect has a significant influence on the design of the mold’s ejection system and on the whole process. This paper presents a computation methodology for the demolding moment for two cases of injected plastic parts with an internal trapezoidal thread. Knowing the value of this moment makes it possible to adopt the right design solution for the ejector system and for the entire mold. As further work, the authors will try to validate this method through a set of practical experiments.


Author(s):  
Leonardo Ranaldi ◽  
Fabio Massimo Zanzotto

Documenting cultural heritage by using artificial intelligence (AI) is crucial for preserving the memory of the past and a key point for future knowledge. However, modern AI technologies make use of statistical learners that lead to a self-empiricist logic and, unlike human minds, use learned non-symbolic representations. Nevertheless, this does not seem to be the right way to progress in AI. If we want to rely on AI for these tasks, it is essential to understand what lies behind these models. Among the ways to discover AI are the senses and the intellect. We could consider AI as an intelligence. Intelligence has an essence, but we do not know whether it can be considered “something” or “someone”. Important issues in the analysis of AI concern the structure of symbols (the operations with which the intellectual solution is carried out) and the search for strategic reference points, aspiring to create models with human-like intelligence. For many years, humans, seeing language as innate, developed symbolic theories. Everything seems to have changed with the advent of Machine Learning. In this paper, after a long analysis of the history, the rule-based vision and the learning-based vision, we propose KERMIT as a unit of investigation for a possible meeting point between the different learning theories. Finally, we propose a new vision of knowledge in AI models based on a combination of rules, learning and human knowledge.


2019 ◽  
Vol 10 (4) ◽  
Author(s):  
Vasily Nekrasov

The paper investigates the essence, concept and distinctive features of artificial intelligence. The author notes that there is currently no unified position, not only on the essence of this phenomenon but also on its capabilities. The analysis of the given definitions of the phenomenon allows a number of conclusions to be drawn. First, artificial intelligence should not be confused with so-called "natural intelligence", that is, human intelligence. Secondly, understandings of the essence of artificial intelligence include various components of human thinking: in some cases creative activity, in others intellectual activity, and so on. At the same time, this is important for understanding the essence of the concept under consideration. Thirdly, it is necessary to identify types of artificial intelligence; it is customary to distinguish between strong and weak artificial intelligence systems. The article substantiates that when the domestic legislator constructs norms on crimes related to artificial intelligence, they should be based on a rebuttable presumption of the public danger of artificial intelligence.


2021 ◽  
Vol 4 ◽  
Author(s):  
J. E. (Hans). Korteling ◽  
G. C. van de Boer-Visschedijk ◽  
R. A. M. Blankendaal ◽  
R. C. Boonekamp ◽  
A. R. Eikelboom

AI is one of the most debated subjects of today, and there seems to be little common understanding of the differences and similarities between human intelligence and artificial intelligence. Discussions of many relevant topics, such as trustworthiness, explainability, and ethics, are characterized by implicit anthropocentric and anthropomorphistic conceptions and, for instance, by the pursuit of human-like intelligence as the gold standard for Artificial Intelligence. In order to provide more agreement and to substantiate possible future research objectives, this paper presents three notions on the similarities and differences between human and artificial intelligence: 1) the fundamental constraints of human (and artificial) intelligence, 2) human intelligence as one of many possible forms of general intelligence, and 3) the high potential impact of multiple (integrated) forms of narrow-hybrid AI applications. For the time being, AI systems will have fundamentally different cognitive qualities and abilities than biological systems. For this reason, a most prominent issue is how we can use (and “collaborate” with) these systems as effectively as possible. For what tasks and under what conditions are decisions safe to leave to AI, and when is human judgment required? How can we capitalize on the specific strengths of human and artificial intelligence? How should we deploy AI systems to complement and compensate for the inherent constraints of human cognition (and vice versa)? Should we pursue the development of AI “partners” with human(-level) intelligence, or should we focus more on supplementing human limitations? In order to answer these questions, humans working with AI systems in the workplace or in policy making have to develop an adequate mental model of the underlying ‘psychological’ mechanisms of AI. So, in order to obtain well-functioning human-AI systems, Intelligence Awareness in humans should be addressed more vigorously. For this purpose, a first framework for educational content is proposed.


Author(s):  
Ashok K. Goel

Research on design and analysis of complex systems has led to many functional representations with several meanings of function. This work on conceptual design uses a family of representations called structure–behavior–function (SBF) models. The SBF family ranges from behavior–function models of abstract design patterns to drawing–shape–SBF models that couple SBF models with visuospatial knowledge of technological systems. Development of SBF modeling is an instance of cognitively oriented artificial intelligence research that seeks to understand human cognition and build intelligent agents for addressing complex tasks such as design. This paper first traces the development of SBF modeling as our perspective on design evolved from that of problem solving to that of memory and learning. Next, the development of SBF modeling as a case study is used to abstract some of the core principles of an artificial intelligence methodology for functional modeling. Finally, some implications of the artificial intelligence methodology for different meanings of function are examined.

