Towards a user-oriented adaptive system based on sentiment analysis from text

2021, Vol. 297, pp. 01010
Author(s): Adil Baqach, Amal Battou

Sentiment analysis has attracted considerable interest in recent years due to the expansion of available data. It has applications in many fields, such as marketing, psychology, human-computer interaction, and eLearning. Sentiment analysis takes several forms, namely the analysis of facial expressions, speech, and text. This article focuses on sentiment analysis from text, a relatively new area that still requires further effort and research. Text-based sentiment analysis matters to many fields; in eLearning, it can be critical for determining the emotional state of students and, therefore, for putting in place the interactions needed to motivate them to engage with and complete their courses. In this article, we present the methods of sentiment analysis from text that exist in the literature, from the selection of features (text representation) to the training of the prediction model with supervised or unsupervised learning algorithms. Although much work has been done in this domain, there is still room to improve performance; to do so, we first review the recent methods and approaches in this field and then discuss improvements to certain approaches, or even propose new ones.
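As a minimal, hypothetical illustration of the pipeline the abstract describes (a text representation followed by a supervised classifier), the sketch below pairs a TF-IDF representation with logistic regression in scikit-learn; the toy texts, labels, and parameter choices are assumptions for demonstration only, not the article's method.

# Minimal sketch of a typical text-sentiment pipeline: a TF-IDF text
# representation followed by a supervised classifier. The texts and
# labels are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

texts = ["I loved this course", "The lecture was confusing and boring"]
labels = [1, 0]  # 1 = positive, 0 = negative (toy data)

model = Pipeline([
    ("features", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("classifier", LogisticRegression(max_iter=1000)),
])
model.fit(texts, labels)

print(model.predict(["I really enjoyed the exercises"]))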

Author(s): Thorsten O. Zander, Laurens R. Krol

Brain-computer interfaces can provide an input channel from humans to computers that depends only on brain activity, bypassing traditional means of communication and interaction. This input channel can be used to send explicit commands, but also to provide implicit input to the computer. As such, the computer can obtain information about its user that not only bypasses, but also goes beyond what can be communicated using traditional means. In this form, implicit input can potentially provide significant improvements to human-computer interaction. This paper describes a selection of work done by Team PhyPA (Physiological Parameters for Adaptation) at the Technische Universität Berlin to use brain-computer interfacing to enrich human-computer interaction.


Author(s): Leandro Yukio Mano, Luana Okino Sawada, Jó Ueyama

Current studies in the field of Human-Computer Interaction highlight the relevance of emotional aspects of interacting with computer systems. It is believed that by allowing intelligent agents to identify a user's emotions, it becomes possible to induce and arouse emotions in order to stimulate users in their activities. One of the great challenges for researchers is to provide systems capable of recognizing, interpreting, and reacting intelligently and sensitively to the emotions of the user. In this sense, we propose an affective interaction between a music player and the user, based on emotion recognition through the interpretation of facial expressions. This project therefore aims to develop and evaluate a system that can identify the user's emotional state and provide a persuasive mechanism to change it, with a case study in a music player. It also explores a flexible approach to persuasion, in which the persuasive mechanism may vary between a music player, games, and/or videos. Throughout the study, the model based on the Classification Committee proved effective at identifying the basic emotions, and users expressed satisfaction with the music player application.
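The abstract does not detail the Classification Committee, but the general idea of a committee of classifiers voting on an emotion label can be sketched as below; the base learners, toy feature vectors, and emotion labels are illustrative assumptions, not the authors' configuration.

# Hedged sketch of a "classification committee": several base classifiers
# vote on the emotion label predicted from precomputed facial-expression
# features. Feature vectors, labels, and base learners are toy assumptions.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 16))        # toy facial-feature vectors
y = rng.integers(0, 3, size=60)      # toy labels: 0=happy, 1=sad, 2=neutral

committee = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("svm", SVC(probability=True)),
        ("tree", DecisionTreeClassifier(max_depth=5)),
    ],
    voting="soft",  # average predicted probabilities across members
)
committee.fit(X, y)
print(committee.predict(X[:3]))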


2021, Vol. 11 (4), pp. 1428
Author(s): Haopeng Wu, Zhiying Lu, Jianfeng Zhang, Xin Li, Mingyue Zhao, ...

This paper addresses the problem of Facial Expression Recognition (FER), focusing on subtle facial movements. Traditional methods often suffer from overfitting or incomplete information due to insufficient data and manual feature selection. Instead, our proposed network, called the Multi-features Cooperative Deep Convolutional Network (MC-DCN), attends both to the overall features of the face and to the trend of key parts. The first stage is the processing of the video data: the ensemble of regression trees (ERT) method is used to obtain the overall contour of the face, and an attention model is then used to pick out the parts of the face that are most susceptible to expressions. The combination of these two steps yields an image that can be regarded as a local feature map. The video data are then sent to the MC-DCN, which contains parallel sub-networks: the overall spatiotemporal characteristics of facial expressions are obtained from the image sequence, while the selected key parts better capture the changes in expression brought about by subtle facial movements. By combining local and global features, the proposed method acquires more information, leading to better performance. Experimental results show that MC-DCN achieves recognition rates of 95%, 78.6%, and 78.3% on the SAVEE, MMI, and edited GEMEP datasets, respectively.
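A hedged sketch of the two-branch idea (a global whole-face sub-network and a local key-part sub-network whose features are concatenated before classification) is given below in PyTorch; the layer sizes, the single-image simplification of the video input, and the handling of the crops are assumptions, not the MC-DCN architecture itself.

# Hedged sketch of a two-branch FER network: one branch sees the whole
# face, the other sees a cropped "local feature map" of expression-
# sensitive regions; their features are concatenated for classification.
import torch
import torch.nn as nn

class TwoBranchFER(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.global_branch = branch()   # whole-face input
        self.local_branch = branch()    # cropped key-part input
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, face, local_patch):
        g = self.global_branch(face)
        l = self.local_branch(local_patch)
        return self.classifier(torch.cat([g, l], dim=1))

model = TwoBranchFER()
face = torch.randn(4, 1, 64, 64)    # toy grayscale face crops
patch = torch.randn(4, 1, 32, 32)   # toy key-part crops
print(model(face, patch).shape)     # -> torch.Size([4, 7])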


Author(s): Yasin Görmez, Yunus E. Işık, Mustafa Temiz, Zafer Aydın

Sentiment analysis is the process of determining the attitude or the emotional state of a text automatically. Many algorithms are proposed for this task including ensemble methods, which have the potential to decrease error rates of the individual base learners considerably. In many machine learning tasks and especially in sentiment analysis, extracting informative features is as important as developing sophisticated classifiers. In this study, a stacked ensemble method is proposed for sentiment analysis, which systematically combines six feature extraction methods and three classifiers. The proposed method obtains cross-validation accuracies of 89.6%, 90.7% and 67.2% on large movie, Turkish movie and SemEval-2017 datasets, respectively, outperforming the other classifiers. The accuracy improvements are shown to be statistically significant at the 99% confidence level by performing a Z-test.
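A minimal sketch of such a stacked ensemble, using scikit-learn's StackingClassifier with a few feature-extractor/classifier pipelines as base learners and a logistic-regression meta-learner, is shown below; the specific extractors, classifiers, and toy data are assumptions rather than the six feature extraction methods and three classifiers combined in the paper.

# Hedged sketch of a stacked ensemble for text sentiment: base learners
# are feature-extractor/classifier pipelines; a logistic-regression
# meta-learner combines their out-of-fold predictions.
from sklearn.ensemble import StackingClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["great movie", "terrible plot", "wonderful acting", "boring and slow"]
labels = [1, 0, 1, 0]  # toy sentiment labels

base_learners = [
    ("tfidf_svm", make_pipeline(TfidfVectorizer(), LinearSVC())),
    ("bow_nb", make_pipeline(CountVectorizer(), MultinomialNB())),
    ("char_lr", make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
                              LogisticRegression(max_iter=1000))),
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(),
                           cv=2)
stack.fit(texts, labels)
print(stack.predict(["surprisingly good"]))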


2021
Author(s): Herdiantri Sufriyana, Yu Wei Wu, Emily Chia-Yu Su

We aimed to provide a resampling protocol for dimensionality reduction that yields a small number of latent variables. The protocol is intended primarily, but not exclusively, for developing machine learning prediction models in which the sample size is small relative to the number of candidate predictors. With this feature representation technique, one can improve generalization by preventing the latent variables from overfitting the data used to perform the dimensionality reduction. However, the technique may require more computational capacity and time. The key stages are the derivation of latent variables from multiple resampling subsets, the estimation of the latent-variable parameters in the population, and the selection of latent variables transformed by the estimated parameters.
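One hypothetical way to realize these stages is sketched below: PCA loadings are derived from many bootstrap resamples, aggregated as an estimate of the population loadings, and then used to transform the data into latent variables; the choice of PCA, the sign alignment, and the simple averaging are illustrative assumptions, not the authors' protocol.

# Hedged sketch: derive latent-variable loadings (here, PCA components)
# from bootstrap resamples, aggregate them as a population estimate, then
# project the data with the aggregated loadings.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 30))      # toy data: 200 samples, 30 predictors
n_latent, n_boot = 5, 100

reference = PCA(n_components=n_latent).fit(X).components_
loadings = []
for _ in range(n_boot):
    idx = rng.integers(0, len(X), size=len(X))   # bootstrap resample
    comp = PCA(n_components=n_latent).fit(X[idx]).components_
    # align component signs with the reference before aggregating
    signs = np.sign(np.sum(comp * reference, axis=1, keepdims=True))
    signs[signs == 0] = 1
    loadings.append(comp * signs)

population_loadings = np.mean(loadings, axis=0)  # estimated parameters
latent_variables = (X - X.mean(axis=0)) @ population_loadings.T
print(latent_variables.shape)                    # -> (200, 5)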

