Effect of emotionally toned Malay language sounds on the brain: a NIRS analysis

2019 ◽  
Vol 5 (1) ◽  
pp. 1-7
Author(s):  
Muhammad Nur Adilin Mohd Anuardi ◽  
Atsuko K. Yamazaki

Speech recognition features such as emotion have always been involved in human communication. With recent developments in communication methods, researchers have investigated artificial and emotional intelligence to improve communication. This has led to the emergence of affective computing, which deals with processing information pertaining to human emotions. This study aims to determine the positive influence of language sounds containing emotion on brain function for improved communication. Twenty-seven college-age Japanese subjects with no prior exposure to the Malay language listened to emotionally toned and emotionally neutral sounds in the Malay language. Their brain activities were measured using near-infrared spectroscopy (NIRS) as they listened to the sounds. A comparison between the NIRS signals revealed that emotionally toned language sounds had a greater impact on brain areas associated with attention and emotion. In contrast, emotionally neutral Malay sounds affected brain areas involved in working memory and language processing. These results suggest that emotionally charged sounds engage listeners’ attention and emotion recognition even when the listeners do not understand the language. The ability to interpret emotions presents challenges for computer systems and robotics; therefore, we hope that our results can be used for the development of computational models of emotion for autonomous robot research in the field of communication.
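The kind of between-condition comparison the abstract describes can be sketched as a paired contrast of per-subject responses. This is a minimal illustration only, not the study's pipeline: the subject values below are invented, and the channel choice is hypothetical.

```python
# Illustrative sketch: paired comparison of mean oxy-Hb NIRS responses
# between two listening conditions. All values are hypothetical.
from statistics import mean, stdev
from math import sqrt

# Hypothetical per-subject mean oxy-Hb change in one frontal channel.
toned = [0.12, 0.18, 0.09, 0.15, 0.21, 0.11]    # emotionally toned Malay
neutral = [0.05, 0.10, 0.07, 0.06, 0.12, 0.08]  # emotionally neutral Malay

# Within-subject differences, since each subject heard both conditions.
diffs = [t - n for t, n in zip(toned, neutral)]
d_mean = mean(diffs)

# Paired t statistic: mean difference divided by its standard error.
t_stat = d_mean / (stdev(diffs) / sqrt(len(diffs)))
print(f"mean difference = {d_mean:.3f}, paired t = {t_stat:.2f}")
```

A large positive t here would indicate a consistently stronger response to the emotionally toned condition across subjects, the pattern the study reports for attention- and emotion-related areas.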

2020 ◽  
Author(s):  
Beata Grzyb ◽  
Gabriella Vigliocco

Language has predominantly been studied as a unimodal phenomenon, as speech or text, without much consideration of its physical and social context; this is true both in cognitive psychology/psycholinguistics and in artificial intelligence. However, in everyday life, language is most often used in face-to-face communication, and in addition to structured speech it comprises a dynamic system of multiplex components such as gestures, eye gaze, mouth movements and prosodic modulation. Recently, cognitive scientists have started to realise the potential importance of multimodality for the understanding of human communication and its neural underpinnings, while AI scientists have begun to address how to integrate multimodality in order to improve communication between humans and artificial embodied agents. We review here the existing literature on multimodal language learning and processing in humans and the literature on the perception of artificial agents and their comprehension and production of multimodal cues, and we discuss their main limitations. We conclude by arguing that, by joining forces, AI scientists can improve the effectiveness of human-machine interaction and increase the human-likeness and acceptance of embodied agents in society. In turn, computational models that generate language in artificial embodied agents constitute a unique research tool for investigating the underlying mechanisms that govern language processing and learning in humans.


2010 ◽  
Vol 1 (1) ◽  
pp. 1-17 ◽  
Author(s):  
Joost Broekens

Affective computing has proven to be a viable field of research comprising a large number of multidisciplinary researchers, resulting in work that is widely published. The majority of this work consists of emotion recognition technology, computational modeling of the causal factors of emotion, and emotion expression in virtual characters and robots. A smaller part is concerned with modeling the effects of emotion on cognition and behavior, formal modeling of cognitive appraisal theory, and models of emergent emotions. Part of the motivation for affective computing as a field is to better understand emotion through computational modeling. In psychology, a critical and neglected aspect of having emotions is the experience of emotion: what does the content of an emotional episode look like, how does this content change over time, and when do we call the episode emotional? Few modeling efforts in affective computing have these topics as a primary focus. The launch of a journal on synthetic emotions should motivate research initiatives in this direction, and this research should have a measurable impact on emotion research in psychology. In this article, I show that a good way to do so is to investigate the psychological core of what an emotion is: an experience. I present ideas on how computational modeling of emotion can help to better understand the experience of emotion, and provide evidence that several computational models of emotion already address the issue.



2021 ◽  
pp. 274-294
Author(s):  
Beata Grzyb ◽  
Gabriella Vigliocco

Recently, cognitive scientists have started to realise the potential importance of multimodality for the understanding of human communication and its neural underpinnings, while AI scientists have begun to address how to integrate multimodality in order to improve communication between humans and embodied agents. We review here the existing literature on multimodal language learning and processing in humans and the literature on the perception of embodied agents and their comprehension and production of multimodal cues, and we discuss their main limitations. We conclude by arguing that, by joining forces, AI scientists can improve the effectiveness of human-machine interaction and increase the human-likeness and acceptance of embodied agents in society. In turn, computational models that generate language in artificial embodied agents constitute a unique research tool for investigating the underlying mechanisms that govern language processing and learning in humans.


2015 ◽  
Vol 2 (1) ◽  
pp. 129-149
Author(s):  
Mauricio Iza ◽  
Jesús Ezquerro

Research on the interaction between emotion, cognition and language in the field of Artificial Intelligence has become particularly active over the last few years. Many computational models of emotion have been developed. There are accounts stressing the role of canonical and mirror neurons as underlying the use of nouns and verbs. At the same time, neuropsychology is developing new approaches for modeling language, emotion and cognition inspired by the insights gained from robotics. The current landscape is thus a promising collaboration between several approaches: Social Psychology, Neuropsychology, Artificial Intelligence (mainly embodied), and even Philosophy, so that each field provides useful cues for the common goal of understanding social interactions (including interactions with machines). The aim of this paper is to analyze and assess the current trends in psychology and neuroscience for studying the mechanisms of the neurocomputational cognitive-affective architecture related to the conceptualization and use of language.


2020 ◽  
Author(s):  
Kun Sun

Expectations or predictions about upcoming content play an important role in language comprehension and processing. One important aspect of recent studies of language comprehension and processing concerns the estimation of upcoming words in a sentence or discourse. Many studies have used eye-tracking data to explore computational and cognitive models of contextual word prediction and word processing. Eye-tracking data has previously been widely explored with a view to investigating the factors that influence word prediction. However, these studies are problematic on several levels, including the stimuli, corpora, and statistical tools they applied. Although various computational models have been proposed for simulating contextual word predictions, past studies usually preferred a single computational model, which often cannot give an adequate account of cognitive processing in language comprehension. To avoid these problems, this study draws upon a large, natural and coherent discourse as stimuli in collecting reading-time data. This study trains two state-of-the-art computational models (surprisal, and semantic (dis)similarity from word vectors obtained by linear discriminative learning (LDL)), measuring knowledge of both the syntagmatic and paradigmatic structure of language. We develop a `dynamic approach' to compute semantic (dis)similarity; this is the first time these two computational models have been merged. The models are evaluated using advanced statistical methods. In addition, to test the efficiency of our approach, a recently developed cosine method of computing semantic (dis)similarity from word vectors is compared with our `dynamic' approach. The two computational models and fixed-effect statistical models can be used to cross-verify the findings, thus ensuring that the results are reliable. All results support the view that surprisal and semantic similarity are opposed in predicting the reading time of words, although both make good predictions. Additionally, our `dynamic' approach performs better than the popular cosine method. The findings of this study are therefore significant for acquiring a better understanding of how humans process words in a real-world context and how they make predictions in language cognition and processing.
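The two measures the study combines can be sketched in a few lines. This is a minimal illustration, not the authors' LDL pipeline: surprisal is the negative log probability of a word given its context, and the cosine method computes semantic (dis)similarity between word vectors; all probabilities and vectors below are hypothetical.

```python
# Minimal sketch of surprisal and cosine (dis)similarity.
from math import log2, sqrt

def surprisal(p_word_given_context: float) -> float:
    """Surprisal in bits: -log2 P(word | context)."""
    return -log2(p_word_given_context)

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# A highly predictable word carries little surprisal; a rare one carries more.
print(surprisal(0.5))   # 1 bit
print(surprisal(0.01))  # ~6.64 bits

# Hypothetical 3-d vectors for a target word and a context representation.
target, context = [0.2, 0.7, 0.1], [0.25, 0.6, 0.2]
sim = cosine_similarity(target, context)
dissim = 1 - sim  # semantic dissimilarity
print(round(sim, 3))
```

Surprisal captures the syntagmatic dimension (how expected a word is in sequence), while vector (dis)similarity captures the paradigmatic dimension (how semantically close a word is to its context), which is why the study treats them as complementary predictors of reading time.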


2021 ◽  
Vol 11 ◽  
Author(s):  
Masaki Kaibori ◽  
Hisashi Kosaka ◽  
Kosuke Matsui ◽  
Morihiko Ishizaki ◽  
Hideyuki Matsushima ◽  
...  

Surgery with fluorescence equipment has improved the treatment of malignant viscera, including hepatobiliary and pancreatic neoplasms. In both open and minimally invasive surgeries, optical imaging using near-infrared (NIR) fluorescence is used to assess anatomy and function in real time. Here, we review a variety of publications related to clinical applications of NIR fluorescence imaging in liver surgery. We have developed a novel nanoparticle (indocyanine green lactosome) that is biocompatible and can be used for imaging cancer tissues and also as a drug delivery system. The particles are stable in blood, with a half-life of ~10–20 h. Particles labeled with an NIR fluorescent agent have been delivered to cancer tissues via the enhanced permeability and retention effect in animals. Furthermore, this article reviews recent developments in photodynamic therapy with NIR fluorescence imaging, which may contribute to and accelerate innovative treatments for liver tumors.


Author(s):  
Sourajit Roy ◽  
Pankaj Pathak ◽  
S. Nithya

The early 21st century has brought major technical breakthroughs and developments. Natural Language Processing (NLP) is one of the most promising disciplines to emerge from them, and it has become increasingly dynamic through groundbreaking findings across computer networks. Because of the digital revolution, the amount of data generated by M2M communication across devices and platforms such as Amazon Alexa, Apple Siri, and Microsoft Cortana has increased significantly. This produces a great deal of unstructured data that does not fit standard computational models. In addition, the growing problems of language complexity, data variability and voice ambiguity make implementing models increasingly difficult. The current study provides an overview of the potential and breadth of the NLP market and its adoption across industries, in particular after Covid-19. It also gives a macroscopic picture of progress in natural language processing research, development and implementation.


2021 ◽  
Vol 12 ◽  
Author(s):  
James Crum

Neuroimaging and neuropsychological methods have contributed much toward an understanding of the information processing systems of the human brain in the last few decades, but to what extent do cognitive neuroscientific findings represent and generalize to the inter- and intra-brain dynamics engaged in adapting to naturalistic situations? If generalizability is limited and experimental designs lack ecological validity, this stands to impact the practical applications of a paradigm. In no other domain is this more important to acknowledge than in human clinical neuroimaging research, wherein reduced ecological validity could mean a loss of clinical utility. One way to improve the generalizability and representativeness of findings is to adopt a more “real-world” approach to the development and selection of experimental designs and neuroimaging techniques to investigate the clinically relevant phenomena of interest. For example, some relatively recent developments in neuroimaging techniques such as functional near-infrared spectroscopy (fNIRS) make it possible to create experimental designs using naturalistic tasks that would otherwise not be possible within the confines of a conventional laboratory. Mental health, cognitive interventions, and the present challenges to investigating the brain during treatment are discussed, as well as how the ecological use of fNIRS might help bridge the explanatory gaps in understanding the cultivation of mental health.


Author(s):  
Nik Thompson ◽  
Tanya Jane McGill

This chapter discusses the domain of affective computing and reviews the area of affective tutoring systems: e-learning applications that possess the ability to detect and appropriately respond to the affective state of the learner. A significant proportion of human communication is non-verbal or implicit, and the communication of affective state provides valuable context and insights. Computers are for all intents and purposes blind to this form of communication, creating what has been described as an “affective gap.” Affective computing aims to eliminate this gap and to foster the development of a new generation of computer interfaces that emulate a more natural human-human interaction paradigm. The domain of learning is considered to be of particular note due to the complex interplay between emotions and learning. This is discussed in this chapter along with the need for new theories of learning that incorporate affect. Next, the more commonly applicable means for inferring affective state are identified and discussed. These can be broadly categorized into methods that involve the user’s input and methods that acquire the information independent of any user input. This latter category is of interest as these approaches have the potential for more natural and unobtrusive implementation, and it includes techniques such as analysis of vocal patterns, facial expressions, and physiological state. The chapter concludes with a review of prominent affective tutoring systems in current research and promotes future directions for e-learning that capitalize on the strengths of affective computing.
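The "independent of user input" category the chapter highlights can be illustrated with a deliberately simplified sketch: a rule-based mapping from two physiological readings to a coarse arousal label. The feature names and thresholds below are hypothetical, chosen only to show the shape of such an inference step, not taken from the chapter.

```python
# Illustrative sketch of unobtrusive affect inference from physiological
# signals. Thresholds are hypothetical placeholders, not validated values.
def infer_affect(heart_rate_bpm: float, skin_conductance_us: float) -> str:
    """Map two physiological readings to a coarse arousal label."""
    high_hr = heart_rate_bpm > 90         # hypothetical elevated heart rate
    high_sc = skin_conductance_us > 5.0   # hypothetical elevated conductance
    if high_hr and high_sc:
        return "high arousal"      # e.g. excitement or anxiety
    if high_hr or high_sc:
        return "moderate arousal"
    return "low arousal"           # e.g. calm or disengaged

print(infer_affect(72, 2.1))   # low arousal
print(infer_affect(105, 7.3))  # high arousal
```

A real affective tutoring system would replace these hand-set thresholds with a trained classifier over many more signals (vocal patterns, facial expressions), but the pipeline shape is the same: unobtrusive measurement in, affective state out, adapted tutoring response as the consequence.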

