semantic features: Recently Published Documents

TOTAL DOCUMENTS: 1850 (five years: 1060)
H-INDEX: 35 (five years: 6)

Author(s):  
Warih Maharani ◽  
Veronikha Effendy

The popularity of social media has drawn the attention of researchers who have conducted cross-disciplinary studies examining the relationship between personality traits and behavior on social media. Most current work focuses on personality prediction from English texts, while Indonesian has received scant attention. Therefore, this research aims to predict users' personalities based on Indonesian text from social media using machine learning techniques. This paper evaluates several machine learning techniques, including naive Bayes (NB), K-nearest neighbors (KNN), and support vector machine (SVM), based on semantic features including emotion, sentiment, and publicly available Twitter profile information. We predict personality based on the Big Five personality model, the most appropriate model for predicting user personality on social media. We examine the relationships between the semantic features and the Big Five personality dimensions. The experimental results indicate that the Big Five personality traits exhibit distinct emotional, sentimental, and social characteristics and that SVM outperformed NB and KNN for Indonesian. In addition, we observe several Indonesian terms that specifically refer to each personality type, each of which has distinct emotional, sentimental, and social features.
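As a rough illustration of the classifier comparison described above (not the authors' code), a minimal scikit-learn sketch might look like the following; the feature matrix and trait labels are synthetic placeholders standing in for emotion, sentiment, and profile features.

```python
# Minimal sketch: compare NB, KNN, and SVM on semantic feature vectors.
# Features and labels are random placeholders, not the paper's dataset.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))    # e.g., emotion/sentiment/profile features per user
y = rng.integers(0, 5, size=200)  # one of the Big Five trait labels per user

models = {
    "NB": GaussianNB(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="f1_macro")
    print(f"{name}: macro-F1 = {scores.mean():.3f}")
```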


Author(s):  
Fan Xu ◽  
Yangjie Dan ◽  
Keyu Yan ◽  
Yong Ma ◽  
Mingwen Wang

Chinese dialects discrimination is a challenging natural language processing task due to scarce annotation resources. In this article, we develop a novel Chinese dialects discrimination framework with transfer learning and data augmentation (CDDTLDA) in order to overcome the shortage of resources. To be more specific, we first use a relatively larger Chinese dialects corpus to train a source-side automatic speech recognition (ASR) model. Then, we adopt a simple but effective data augmentation method (i.e., speed, pitch, and noise disturbance) to augment the target-side low-resource Chinese dialects, and fine-tune another target ASR model based on the previous source-side ASR model. Meanwhile, the potential common semantic features between the source-side and target-side ASR models can be captured by using a self-attention mechanism. Finally, we extract the hidden semantic representation in the target ASR model to conduct Chinese dialects discrimination. Our extensive experimental results demonstrate that our model significantly outperforms state-of-the-art methods on two benchmark Chinese dialects corpora.
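The speed, pitch, and noise disturbances mentioned above can be sketched with a generic audio-augmentation recipe using librosa and NumPy; this is an illustration under assumed parameters, not the authors' implementation, and the file path is a placeholder.

```python
# Sketch of speed, pitch, and noise perturbation for low-resource speech data.
# "dialect_clip.wav" is a placeholder path; perturbation strengths are illustrative.
import numpy as np
import librosa

y, sr = librosa.load("dialect_clip.wav", sr=16000)

# Speed perturbation: stretch/compress the waveform in time.
y_speed = librosa.effects.time_stretch(y, rate=1.1)

# Pitch perturbation: shift the pitch by two semitones.
y_pitch = librosa.effects.pitch_shift(y, sr=sr, n_steps=2)

# Noise perturbation: add low-amplitude Gaussian noise.
noise = np.random.default_rng(0).normal(scale=0.005, size=y.shape)
y_noise = (y + noise).astype(y.dtype)

augmented = [y_speed, y_pitch, y_noise]  # candidates for target-side ASR fine-tuning
```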


2022 ◽  
Vol 16 (2) ◽  
pp. 1-26
Author(s):  
Riccardo Cantini ◽  
Fabrizio Marozzo ◽  
Giovanni Bruno ◽  
Paolo Trunfio

The growing use of microblogging platforms is generating a huge amount of posts that need effective methods to be classified and searched. On Twitter and other social media platforms, hashtags are exploited by users to facilitate the search, categorization, and spread of posts. Choosing the appropriate hashtags for a post is not always easy for users, and therefore posts are often published without hashtags or with poorly chosen ones. To deal with this issue, we propose a new model, called HASHET (HAshtag recommendation using Sentence-to-Hashtag Embedding Translation), aimed at suggesting a relevant set of hashtags for a given post. HASHET is based on two independent latent spaces for embedding the text of a post and the hashtags it contains. A mapping process based on a multi-layer perceptron is then used to learn a translation from the semantic features of the text to the latent representation of its hashtags. We evaluated the effectiveness of two language representation models for sentence embedding and tested different search strategies for semantic expansion, finding that the combined use of BERT (Bidirectional Encoder Representations from Transformers) and a global expansion strategy leads to the best recommendation results. HASHET has been evaluated on two real-world case studies related to the 2016 United States presidential election and the COVID-19 pandemic. The results reveal the effectiveness of HASHET in predicting one or more correct hashtags, with an average F-score of up to 0.82 and a recommendation hit-rate of up to 0.92. Our approach has been compared to the most relevant techniques used in the literature (generative models, unsupervised models, and attention-based supervised models), achieving up to a 15% improvement in F-score for the hashtag recommendation task and 9% for the topic discovery task.
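The core idea (a learned translation from a sentence embedding space to a hashtag embedding space, followed by nearest-neighbor lookup) can be sketched as follows; the embedding matrices here are random placeholders rather than trained BERT or hashtag spaces, so this is only an assumed setup, not HASHET itself.

```python
# Sketch: learn a mapping from post embeddings to a hashtag embedding space,
# then recommend hashtags by cosine nearest neighbors in that space.
# All embeddings are random placeholders standing in for trained spaces.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
n_posts, d_text, d_tag, n_tags = 500, 64, 32, 100

post_emb = rng.normal(size=(n_posts, d_text))       # sentence embeddings of posts
hashtag_vocab = rng.normal(size=(n_tags, d_tag))    # hashtag embedding space
target_tag = rng.integers(0, n_tags, size=n_posts)  # one gold hashtag per post
target_emb = hashtag_vocab[target_tag]

# A multi-layer perceptron learns the sentence-to-hashtag translation.
mlp = MLPRegressor(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
mlp.fit(post_emb, target_emb)

# Recommend top-k hashtags for a new post via nearest neighbors in tag space.
query = rng.normal(size=(1, d_text))
predicted = mlp.predict(query)
scores = cosine_similarity(predicted, hashtag_vocab)[0]
top_k = np.argsort(scores)[::-1][:5]
print("recommended hashtag indices:", top_k)
```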


2022 ◽  
Vol 40 (1) ◽  
pp. 1-29
Author(s):  
Siqing Li ◽  
Yaliang Li ◽  
Wayne Xin Zhao ◽  
Bolin Ding ◽  
Ji-Rong Wen

Citation count prediction is an important task for estimating the future impact of research papers. Most existing works utilize information extracted from the paper itself. In this article, we focus on how to utilize another kind of useful data signal (i.e., peer review text) to improve both the performance and interpretability of prediction models. Specifically, we propose a novel aspect-aware capsule network for citation count prediction based on review text. It contains two major capsule layers, namely the feature capsule layer and the aspect capsule layer, with two different routing approaches, respectively. Feature capsules encode the local semantics of review sentences as the input of the aspect capsule layer, whereas aspect capsules aim to capture high-level semantic features that serve as the final representations for prediction. Besides predictive capacity, we also enhance model interpretability with two strategies. First, we use the topic distribution of the review text to guide the learning of aspect capsules so that each aspect capsule can represent a specific aspect of the review. Then, we use the learned aspect capsules to generate readable text for explaining the predicted citation count. Extensive experiments on two real-world datasets demonstrate the effectiveness of the proposed model in both performance and interpretability.
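For readers unfamiliar with capsule routing, the following PyTorch sketch shows generic dynamic routing-by-agreement between two capsule layers; it is an illustrative baseline routine, not the paper's two aspect-aware routing approaches, and the shapes are arbitrary.

```python
# Generic dynamic routing-by-agreement between two capsule layers (PyTorch).
# Illustrative only; not the paper's aspect-aware routing.
import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    """Squash vectors: keep orientation, shrink length into (0, 1)."""
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)

def dynamic_routing(u_hat, num_iters=3):
    """u_hat: prediction vectors of shape (batch, in_caps, out_caps, out_dim)."""
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)  # routing logits
    for _ in range(num_iters):
        c = F.softmax(b, dim=2).unsqueeze(-1)               # coupling coefficients
        s = (c * u_hat).sum(dim=1)                          # weighted sum over input capsules
        v = squash(s)                                       # output capsules (batch, out_caps, out_dim)
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)        # agreement update
    return v

# Example: 32 feature capsules routed to 8 aspect-like capsules of dimension 16.
u_hat = torch.randn(4, 32, 8, 16)
aspect_caps = dynamic_routing(u_hat)
print(aspect_caps.shape)  # torch.Size([4, 8, 16])
```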


Author(s):  
Mohammadreza Samadi ◽  
Maryam Mousavian ◽  
Saeedeh Momtazi

Nowadays, broadcasting news on social media and websites has grown at a rapid pace, which has had negative impacts on both the general public and governments; hence, this has urged us to build a fake news detection system. Contextualized word embeddings have achieved great success in recent years due to their power to embed both syntactic and semantic features of textual contents. In this article, we aim to address the lack of Persian fake news datasets by introducing a new dataset crawled from different news agencies, and we propose two deep models based on the Bidirectional Encoder Representations from Transformers (BERT) model, a deep contextualized pre-trained model for extracting valuable features. In our proposed models, we benefit from two different settings of BERT, namely pool-based representation, which provides a representation for the whole document, and sequence representation, which provides a representation for each token of the document. In the former, we connect a Single Layer Perceptron (SLP) to BERT to use the embedding directly for detecting fake news. The latter uses a Convolutional Neural Network (CNN) after BERT's embedding layer to extract extra features based on the collocation of words in a corpus. Furthermore, we present the TAJ dataset, a new Persian fake news dataset crawled from news agencies' websites. We evaluate our proposed models on the newly provided TAJ dataset as well as on two Persian rumor datasets used as baselines. The results indicate the effectiveness of using deep contextualized embedding approaches for the fake news detection task. We also show that both the BERT-SLP and BERT-CNN models achieve superior performance to the previous baselines and traditional machine learning models, with 15.58% and 17.1% improvements over the results reported by Zamani et al. [30], and 11.29% and 11.18% improvements over the results reported by Jahanbakhsh-Nagadeh et al. [9].
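The two BERT settings described above (a pooled document representation fed to a single linear layer versus token-level representations fed to a 1D CNN) can be sketched roughly with Hugging Face transformers and PyTorch; the model name, layer sizes, and input text below are placeholders, not the paper's exact configuration.

```python
# Sketch of the two settings: pooled embedding + single linear layer (SLP-style),
# vs. token-level embeddings + a small 1D CNN. Names and sizes are placeholders.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-multilingual-cased"  # placeholder; a Persian BERT would be used in practice
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
bert = AutoModel.from_pretrained(MODEL_NAME)

slp_head = nn.Linear(bert.config.hidden_size, 2)   # pool-based setting (fake / real)
cnn_head = nn.Sequential(                           # sequence-based setting
    nn.Conv1d(bert.config.hidden_size, 128, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveMaxPool1d(1),
    nn.Flatten(),
    nn.Linear(128, 2),
)

batch = tokenizer(["a news headline to classify"], return_tensors="pt",
                  truncation=True, padding=True)
with torch.no_grad():
    out = bert(**batch)

logits_slp = slp_head(out.pooler_output)                      # (1, 2) from the pooled vector
logits_cnn = cnn_head(out.last_hidden_state.transpose(1, 2))  # (1, 2) from token embeddings
print(logits_slp.shape, logits_cnn.shape)
```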


2022 ◽  
Vol 8 (1) ◽  
pp. 308-313
Author(s):  
T. Kabylov ◽  
A. Usmanova

The cognitive features of imperative utterances in the modern Kyrgyz language are investigated. The imperative carries two distinct meanings: one expresses urgency, necessity, and importance, i.e., something extremely important or necessary, while the other expresses an attempt to influence the actions of other people, i.e., an order. The analyzed imperative statements are considered within conceptual grammar, which makes it possible to identify syntactic concepts, or mental images, that stand behind linguistic signs and are reflected in the analyzed syntactic structures. The purpose of the article is to identify and describe imperative utterances in the Kyrgyz language, to reveal the mechanisms by which the prototypical meaning of the verb lexicon changes within the imperative structure, and to examine the types and functions of imperatives, drawing on the theory of functional styles, discourse theory, speech act theory, and construction grammar. The relevance of the article stems from the need to study the semantic features of imperative statements in the modern Kyrgyz language. The data were analyzed using comparative analysis to identify the differences and similarities between Kyrgyz imperatives and those of other languages. The study is qualitative, conducted using the method of contrastive analysis across languages. The object of the research is imperative data drawn from the sources required for the study.


2022 ◽  
Vol 2022 ◽  
pp. 1-8
Author(s):  
Lijun Qiao

In practical terms, teachers are encouraged to use more straightforward teaching methods, such as creating real-life contextual problems, to help students develop deep learning skills. In this paper, using Bayesian theory and Bayesian classifier research methods, a machine learning model was constructed in Python to establish the correspondence between online civics teaching and high-level semantic features, enabling the computer to identify high-frequency knowledge points from text and teaching-design evaluation data. The model maps the inter-relationships between knowledge points, reaches an accuracy of 90%, and continuous knowledge updates help to further improve its accuracy.
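The Bayesian text-classification step described above can be sketched with a standard naive Bayes pipeline in Python; the example sentences and knowledge-point labels are invented placeholders, not the course materials analyzed in the paper.

```python
# Sketch: a naive Bayes classifier tagging teaching-design text with
# knowledge-point categories. Texts and labels are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "lesson introduces core values through a real-life scenario",
    "students debate a case study on civic responsibility",
    "quiz reviews key terminology from the previous unit",
    "group project applies concepts to a community problem",
]
labels = ["values", "civics", "review", "application"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["a scenario-based activity on civic values"]))
```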


Semantic Web ◽  
2022 ◽  
pp. 1-34
Author(s):  
Sebastian Monka ◽  
Lavdim Halilaj ◽  
Achim Rettinger

The information perceived via visual observations of real-world phenomena is unstructured and complex. Computer vision (CV) is the field of research that attempts to make use of that information. Recent CV approaches utilize deep learning (DL) methods, as they perform quite well if the training and testing domains follow the same underlying data distribution. However, it has been shown that minor variations in the images that occur when these methods are used in the real world can lead to unpredictable and catastrophic errors. Transfer learning is the area of machine learning that tries to prevent these errors. In particular, approaches that augment image data using auxiliary knowledge encoded in language embeddings or knowledge graphs (KGs) have achieved promising results in recent years. This survey focuses on visual transfer learning approaches using KGs, as we believe that KGs are well suited to store and represent any kind of auxiliary knowledge. KGs can represent auxiliary knowledge either in an underlying graph-structured schema or in a vector-based knowledge graph embedding. To enable the reader to solve visual transfer learning problems with the help of specific KG-DL configurations, we start with a description of relevant KG modeling structures and their various expressions, such as directed labeled graphs, hypergraphs, and hyper-relational graphs. We explain the notion of a feature extractor, referring specifically to visual and semantic features. We provide a broad overview of knowledge graph embedding methods and describe several joint training objectives suitable for combining them with high-dimensional visual embeddings. The main section introduces four categories of how a KG can be combined with a DL pipeline: 1) Knowledge Graph as a Reviewer; 2) Knowledge Graph as a Trainee; 3) Knowledge Graph as a Trainer; and 4) Knowledge Graph as a Peer. To help researchers find meaningful evaluation benchmarks, we provide an overview of generic KGs and a set of image processing datasets and benchmarks that include various types of auxiliary knowledge. Finally, we summarize related surveys and give an outlook on challenges and open issues for future research.
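As one concrete flavor of the KG-DL combinations surveyed here, a zero-shot-style setup projects visual features into a knowledge graph embedding space and classifies an image by its nearest class-node embedding; the sketch below uses random placeholder embeddings in place of trained KG and visual encoders and is only an assumed, simplified configuration.

```python
# Sketch: project visual features into a KG embedding space and classify an
# image by its nearest class-node embedding (a common zero-shot-style setup).
# KG embeddings and visual features are random placeholders here.
import torch
import torch.nn as nn
import torch.nn.functional as F

d_visual, d_kg, n_classes = 512, 100, 20
kg_class_emb = F.normalize(torch.randn(n_classes, d_kg), dim=-1)  # class-node embeddings

projector = nn.Linear(d_visual, d_kg)        # learned visual-to-KG projection

image_feat = torch.randn(1, d_visual)        # stand-in for a CNN feature extractor output
projected = F.normalize(projector(image_feat), dim=-1)

scores = projected @ kg_class_emb.T          # cosine similarity to each class node
print("predicted class index:", scores.argmax(dim=-1).item())
```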


2022 ◽  
Author(s):  
Yujia Peng ◽  
Joseph M Burling ◽  
Greta K Todorova ◽  
Catherine Neary ◽  
Frank E Pollick ◽  
...  

When viewing the actions of others, we not only see patterns of body movements, but we also "see" the intentions and social relations of people, enabling us to understand the surrounding social environment. Previous research has shown that experienced forensic examiners, Closed Circuit Television (CCTV) operators, show superior performance in identifying and predicting hostile intentions from surveillance footage compared to novices. However, it remains largely unknown what visual content CCTV operators actively attend to when viewing surveillance footage, and whether CCTV operators develop different strategies for active information seeking than novices do. In this study, we conducted computational analyses of the gaze-centered stimuli captured from the eye movements of experienced CCTV operators and novices as they viewed the same surveillance footage. These analyses examined how low-level visual features and object-level semantic features contribute to the attentive gaze patterns of the two groups of participants. Low-level image features were extracted by a visual saliency model, whereas object-level semantic features were extracted from gaze-centered regions by a deep convolutional neural network (DCNN), AlexNet. We found that the visual regions attended to by CCTV operators versus by novices can be reliably classified by patterns of saliency features and DCNN features. Additionally, CCTV operators showed greater inter-subject correlation in attending to saliency features and DCNN features than did novices. These results suggest that the looking behavior of CCTV operators differs from that of novices in actively attending to different patterns of saliency and semantic features in both low-level and high-level visual processing. Expertise in selectively attending to informative features at different levels of the visual hierarchy may play an important role in facilitating the efficient detection of social relationships between agents and the prediction of harmful intentions.
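Extracting object-level features from a gaze-centered region with a pretrained DCNN can be sketched as follows using torchvision's AlexNet; the input tensor is random data standing in for a real, normalized gaze-centered crop, and the choice of layers to read features from is an assumption for illustration.

```python
# Sketch: extract DCNN features for a gaze-centered image patch with a
# pretrained AlexNet. The input is a random placeholder patch.
import torch
from torchvision import models

# Loads ImageNet-pretrained weights (torchvision >= 0.13 weights API).
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.eval()

patch = torch.randn(1, 3, 224, 224)  # stand-in for a normalized gaze-centered crop

with torch.no_grad():
    conv_features = alexnet.features(patch)             # (1, 256, 6, 6) conv activations
    pooled = alexnet.avgpool(conv_features).flatten(1)  # (1, 9216)
    fc7 = alexnet.classifier[:5](pooled)                # features up to fc7, (1, 4096)

print(conv_features.shape, fc7.shape)
```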


2022 ◽  
Author(s):  
Yuehua Zhao ◽  
Ma Jie ◽  
Chong Nannan ◽  
Wen Junjie

Real-time large-scale point cloud segmentation is an important but challenging task for practical applications such as autonomous driving. Existing real-time methods have achieved acceptable performance by aggregating local information. However, most of them exploit only local spatial information or local semantic information independently, with few considering the complementarity of the two. In this paper, we propose a model named Spatial-Semantic Incorporation Network (SSI-Net) for real-time large-scale point cloud segmentation. A Spatial-Semantic Cross-correction (SSC) module is introduced in SSI-Net as a basic unit. High-quality contextual features can be learned through SSC by correcting and updating semantic features using spatial cues, and vice versa. Adopting the plug-and-play SSC module, we design SSI-Net as an encoder-decoder architecture. To ensure efficiency, it also adopts a random-sampling-based hierarchical network structure. Extensive experiments on several prevalent datasets demonstrate that our method achieves state-of-the-art performance.
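A cross-correction block of the kind described (each branch refined with cues from the other) might be sketched as below in PyTorch; this is a simplified, assumed design for illustration, not the exact SSC module from the paper, and the feature dimensions are arbitrary.

```python
# Simplified sketch of a spatial-semantic cross-correction block: each branch
# is refined by an MLP over the concatenation of both branches. Illustrative
# only; not the exact SSC design.
import torch
import torch.nn as nn

class CrossCorrection(nn.Module):
    def __init__(self, d_spatial, d_semantic):
        super().__init__()
        d_joint = d_spatial + d_semantic
        self.correct_semantic = nn.Sequential(nn.Linear(d_joint, d_semantic), nn.ReLU())
        self.correct_spatial = nn.Sequential(nn.Linear(d_joint, d_spatial), nn.ReLU())

    def forward(self, spatial, semantic):
        joint = torch.cat([spatial, semantic], dim=-1)
        # Each branch is updated with cues from the other, residual-style.
        semantic = semantic + self.correct_semantic(joint)
        spatial = spatial + self.correct_spatial(joint)
        return spatial, semantic

# Example: per-point spatial and semantic features for a batch of point clouds.
block = CrossCorrection(d_spatial=32, d_semantic=64)
spatial = torch.randn(8, 1024, 32)    # (batch, points, spatial feature dim)
semantic = torch.randn(8, 1024, 64)   # (batch, points, semantic feature dim)
spatial, semantic = block(spatial, semantic)
print(spatial.shape, semantic.shape)
```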

