Emotion Classification
Recently Published Documents


TOTAL DOCUMENTS: 807 (FIVE YEARS: 406)

H-INDEX: 28 (FIVE YEARS: 9)

2022 ◽  
Author(s):  
Aadam ◽  
Abdallah Tubaishat ◽  
Feras Al-Obeidat ◽  
Zahid Halim ◽  
Muhammad Waqas ◽  
...  

2022 ◽  
Vol 2022 ◽  
pp. 1-12
Author(s):  
Ran Li ◽  
Yuanfei Zhang ◽  
Lihua Yin ◽  
Zhe Sun ◽  
Zheng Lin ◽  
...  

An emotion lexicon is an important auxiliary resource for text emotion analysis. Previous work has focused mainly on positive/negative classification and less on fine-grained emotion classification. Lexicon-based studies have found that patients with depression express more negative emotions on social media. Emotional characteristics are an effective feature for detecting depression, but traditional emotion lexicons have limitations for this task and omit many depression-related words. We therefore build an emotion lexicon for depression to further study the differences between healthy users and patients with depression. The experimental results show that the depression lexicon constructed in this paper is effective and classifies users with depression more accurately.
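As a rough illustration of how such a lexicon can be used, the sketch below counts hits against a hypothetical fine-grained lexicon and normalizes them into per-user emotion features; the lexicon entries and posts are placeholders, not the paper's actual resource.

```python
# Minimal sketch of lexicon-based emotion features for depression detection.
# The lexicon entries and posts below are hypothetical placeholders; the paper's
# actual depression lexicon is built from social-media data and is much larger.
from collections import Counter
from typing import Dict, List

# Hypothetical fine-grained lexicon: word -> emotion category
DEPRESSION_LEXICON: Dict[str, str] = {
    "hopeless": "despair",
    "exhausted": "fatigue",
    "worthless": "self-blame",
    "happy": "joy",
}

def emotion_features(posts: List[str]) -> Dict[str, float]:
    """Return normalized counts of each lexicon emotion category in a user's posts."""
    counts: Counter = Counter()
    total_tokens = 0
    for post in posts:
        tokens = post.lower().split()
        total_tokens += len(tokens)
        for tok in tokens:
            if tok in DEPRESSION_LEXICON:
                counts[DEPRESSION_LEXICON[tok]] += 1
    return {emo: c / max(total_tokens, 1) for emo, c in counts.items()}

if __name__ == "__main__":
    user_posts = ["I feel hopeless and exhausted every day", "nothing makes me happy"]
    print(emotion_features(user_posts))
```

These per-category frequencies could then feed any downstream classifier that separates users with depression from healthy users.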


2022 ◽  
Vol 12 ◽  
Author(s):  
Jianlan Wen ◽  
Yuming Piao

African literature has long played a major role in changing and shaping perceptions about African people and their way of life. Unlike Western cultures, which are associated with advanced forms of writing, African literature is largely oral in nature, meaning it has to be recited and even performed. Although Africa has an old tribal culture, African philosophy remains a relatively new and unfamiliar idea to many. The problem of the "universality" of African philosophy actually refers to the question of whether Africa has philosophy in the Western sense; even so, the philosophy bred by Africa's native culture must be acknowledged. Therefore, a human–computer interaction-oriented (HCI-oriented) method is proposed for appreciating African literature and African philosophy. First, a physical tablet-aid object is designed, and a depth camera tracks the user's hand and the tablet-aid and maps them into the virtual scene. Then, a tactile redirection method is proposed to meet the user's requirement for tactile consistency in a head-mounted-display virtual reality environment. Finally, electroencephalogram (EEG) emotion recognition based on a convolutional neural network with multiscale convolution kernels is proposed to appreciate how African philosophy is reflected in African literature. The experimental results show that the proposed method offers strong immersion and a good interactive experience in navigation, selection, and manipulation. The proposed HCI method is not only easy to use but also improves interaction efficiency and accuracy during appreciation. In addition, the EEG emotion recognition experiments show that classification accuracy with 33 channels is 90.63%, close to the accuracy obtained with all channels, and the proposed algorithm outperforms three baselines in classification accuracy.
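For the EEG stage, the following is a minimal sketch of a CNN with multiscale convolution kernels of the kind the abstract describes; the channel count (33), the kernel sizes, and the number of emotion classes are assumptions for illustration only, not details from the paper.

```python
# Hedged sketch of a multiscale-convolution-kernel CNN for EEG emotion
# classification. Channel count (33), kernel sizes (3/7/15), and the number of
# emotion classes (4) are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class MultiScaleEEGNet(nn.Module):
    def __init__(self, n_channels: int = 33, n_classes: int = 4):
        super().__init__()
        # Parallel temporal convolutions with different kernel sizes
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv1d(n_channels, 16, k, padding=k // 2), nn.ReLU())
            for k in (3, 7, 15)
        ])
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Linear(16 * 3, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        feats = [self.pool(branch(x)).squeeze(-1) for branch in self.branches]
        return self.fc(torch.cat(feats, dim=1))

# Example: batch of 8 trials, 33 channels, 256 time samples
logits = MultiScaleEEGNet()(torch.randn(8, 33, 256))
print(logits.shape)  # torch.Size([8, 4])
```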


2022 ◽  
Vol 2 (1) ◽  
Author(s):  
Xin Xiao ◽  
Chaoyang Fang ◽  
Hui Lin ◽  
Li Liu ◽  
Ya Tian ◽  
...  

In the Internet age, emotions exist in both cyberspace and geographic space, and social media maps geographic space into cyberspace. However, most previous studies pay little attention to the multidimensional and spatiotemporal characteristics of emotion. We obtained 211,526 Sina Weibo posts with geographic locations and trained an emotion classification model that combines the Bidirectional Encoder Representations from Transformers (BERT) model with a convolutional neural network to calculate the emotional tendency of each post. Then, topics at the hot spots in Nanchang City were detected through a word shift graph, and the temporal and spatial change characteristics of Weibo emotions were analyzed at the grid scale. The results show that the overall emotional tendency on Weibo is mainly positive. The spatial distribution of urban emotions is highly uneven, and the hot spots of individual emotions are mainly distributed around the city. In general, the intensity of temporal and spatial changes in emotion across the city is relatively high. Specifically, from day to night, the city exhibits a pattern that is high in the east and low in the west; from weekdays to weekends, the pattern is low in the city center and high on the periphery. These results reveal the temporal and spatial distribution characteristics of Weibo emotions in the city and provide support for analyzing residents' happiness and guiding urban management and planning.
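A minimal sketch of a BERT-plus-CNN emotion classifier of the kind described above follows; the checkpoint name (bert-base-chinese), the convolution configuration, and the number of emotion labels are assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch: BERT token embeddings feed a 1-D convolution whose max-pooled
# output is classified into emotion categories. Checkpoint name, filter sizes,
# and label count are assumptions.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertCNNClassifier(nn.Module):
    def __init__(self, n_classes: int = 6, n_filters: int = 128, kernel_size: int = 3):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        self.conv = nn.Conv1d(self.bert.config.hidden_size, n_filters, kernel_size)
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, input_ids, attention_mask):
        # (batch, seq, hidden) -> (batch, hidden, seq) for Conv1d
        hidden = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        conv_out = torch.relu(self.conv(hidden.transpose(1, 2)))
        pooled = conv_out.max(dim=2).values  # global max pooling over time
        return self.fc(pooled)

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
batch = tokenizer(["今天心情很好"], return_tensors="pt", padding=True)
model = BertCNNClassifier()
print(model(batch["input_ids"], batch["attention_mask"]).shape)  # torch.Size([1, 6])
```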


2022 ◽  
Vol 70 (3) ◽  
pp. 6365-6380
Author(s):  
Chinu Singla ◽  
Fahd N. Al-Wesabi ◽  
Yash Singh Pathania ◽  
Badria Sulaiman Alfurhood ◽  
Anwer Mustafa Hilal ◽  
...  

2022 ◽  
pp. 1146-1156
Author(s):  
Revathi A. ◽  
Sasikaladevi N.

This chapter on multi-speaker-independent emotion recognition uses perceptual features with filters spaced on the equivalent rectangular bandwidth (ERB) and Bark scales, a vector quantization (VQ) classifier for classifying emotion groups, and an artificial neural network trained with the back-propagation algorithm for classifying emotions within a group. Performance can be improved by using a larger amount of data for each emotion to train the system adequately. Even with a limited dataset, the proposed system consistently achieves better accuracy for perceptual features with critical-band analysis on the ERB scale.
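The sketch below illustrates only the VQ group-classification step under stated assumptions: one k-means codebook per emotion and assignment by minimum average distortion. The ERB/Bark perceptual feature extraction is assumed to have been done already, and the feature arrays are random placeholders.

```python
# Minimal sketch of a VQ classifier: fit one k-means codebook per emotion group,
# then assign a test utterance to the group whose codebook yields the lowest
# average quantization distortion. Feature vectors here are random placeholders
# standing in for ERB/Bark-scale perceptual features.
import numpy as np
from sklearn.cluster import KMeans

def train_codebooks(features_by_emotion: dict, codebook_size: int = 16) -> dict:
    """Fit one VQ codebook (k-means centroids) per emotion from its feature frames."""
    return {
        emotion: KMeans(n_clusters=codebook_size, n_init=10).fit(frames).cluster_centers_
        for emotion, frames in features_by_emotion.items()
    }

def classify(frames: np.ndarray, codebooks: dict) -> str:
    """Pick the emotion whose codebook minimizes average quantization distortion."""
    def distortion(codebook: np.ndarray) -> float:
        dists = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
        return dists.min(axis=1).mean()
    return min(codebooks, key=lambda emo: distortion(codebooks[emo]))

rng = np.random.default_rng(0)
train = {"anger": rng.normal(1, 1, (200, 20)), "sadness": rng.normal(-1, 1, (200, 20))}
books = train_codebooks(train)
print(classify(rng.normal(1, 1, (50, 20)), books))  # expected: "anger"
```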


2022 ◽  
Vol 71 ◽  
pp. 103235
Author(s):  
MaoSong Yan ◽  
Zhen Deng ◽  
BingWei He ◽  
ChengSheng Zou ◽  
Jie Wu ◽  
...  

IEEE Access ◽  
2022 ◽  
pp. 1-1
Author(s):  
Iqra Ameer ◽  
Grigori Sidorov ◽  
Helena Gomez-Adorno ◽  
Rao Muhammad Adeel Nawab

2021 ◽  
Vol 7 ◽  
pp. e804
Author(s):  
Marcos Fernández Carbonell ◽  
Magnus Boman ◽  
Petri Laukka

We investigated emotion classification from brief video recordings from the GEMEP database, in which actors portrayed 18 emotions. Vocal features consisted of acoustic parameters related to frequency, intensity, spectral distribution, and duration. Facial features consisted of facial action units. We first performed a series of person-independent supervised classification experiments. The best performance (AUC = 0.88) was obtained by merging the output of the best unimodal vocal (Elastic Net, AUC = 0.82) and facial (Random Forest, AUC = 0.80) classifiers using a late-fusion approach with the product rule. All 18 emotions were recognized with above-chance recall, although recognition rates varied widely across emotions (e.g., high for amusement, anger, and disgust; low for shame). Multimodal feature patterns for each emotion are described in terms of the vocal and facial features that contributed most to classifier performance. Next, a series of exploratory unsupervised classification experiments was performed to gain more insight into how emotion expressions are organized. Solutions from traditional clustering techniques were interpreted using decision trees to explore which features underlie clustering. Another approach paired various dimensionality-reduction techniques with inspection of data visualizations. Unsupervised methods did not cluster stimuli by emotion category, but several explanatory patterns were observed. Some could be interpreted in terms of valence and arousal, but actor- and gender-specific aspects also contributed to clustering. Identifying such explanatory patterns holds great potential as a meta-heuristic when unsupervised methods are used in complex classification tasks.
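The late-fusion step can be illustrated with a short sketch of the product rule on per-class posterior probabilities; the probability values are placeholders, and the unimodal classifiers themselves (Elastic Net, Random Forest) are not reproduced here.

```python
# Hedged sketch of product-rule late fusion: per-class posteriors from the vocal
# and facial classifiers are multiplied element-wise and renormalized.
# The probability vectors below are illustrative placeholders.
import numpy as np

def product_rule_fusion(prob_vocal: np.ndarray, prob_facial: np.ndarray) -> np.ndarray:
    """Combine two classifiers' class-probability vectors by element-wise product."""
    fused = prob_vocal * prob_facial
    return fused / fused.sum()

p_vocal = np.array([0.6, 0.3, 0.1])   # e.g., amusement, anger, shame
p_facial = np.array([0.5, 0.2, 0.3])
print(product_rule_fusion(p_vocal, p_facial))  # fused posterior; argmax = class 0
```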

