natural language semantic
Recently Published Documents


TOTAL DOCUMENTS

23
(FIVE YEARS 9)

H-INDEX

4
(FIVE YEARS 1)

2022 ◽  
Vol 14 (2) ◽  
pp. 1-24
Author(s):  
Bin Wang ◽  
Pengfei Guo ◽  
Xing Wang ◽  
Yongzhong He ◽  
Wei Wang

Aspect-level sentiment analysis identifies fine-grained sentiment toward target words. Current models of aspect-level sentiment analysis suffer from three major issues. First, few models consider the natural language semantic characteristics of the texts. Second, many models consider the location characteristics of the target words but ignore the relationships among the target words and among the overall sentences. Third, many models lack transparency in data collection, data processing, and result generation. To resolve these issues, we propose an aspect-level sentiment analysis model that combines a bidirectional Long Short-Term Memory (Bi-LSTM) network and a Graph Convolutional Network (GCN) based on dependency syntax analysis (Bi-LSTM-DGCN). Our model integrates the dependency syntax analysis of the texts and thereby explicitly considers their natural language semantic characteristics. It further fuses the representations of the target words and the overall sentences. Extensive experiments are conducted on four benchmark datasets, i.e., Restaurant14, Laptop, Restaurant16, and Twitter. The experimental results demonstrate that our model outperforms models such as Target-Dependent LSTM (TD-LSTM), Attention-based LSTM with Aspect Embedding (ATAE-LSTM), LSTM+SynATT+TarRep, and Convolution over a Dependency Tree (CDT). Our model is further applied to aspect-level sentiment analysis on “government” and “lockdown” in 1,658,250 tweets about “#COVID-19” that we collected from March 1, 2020 to July 1, 2020. The results show that Twitter users’ positive and negative sentiments fluctuated over time. Through the transparency analysis in data collection, data processing, and result generation, we discuss the reasons for the evolution of users’ emotions over time based on the tweets and on our models.
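The core of the GCN component described above can be illustrated with a minimal sketch: token features (in the paper, Bi-LSTM hidden states) are propagated along the edges of the dependency parse by a graph convolution. The adjacency matrix, dimensions, and weights below are toy stand-ins, not the authors' actual configuration.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution step over a dependency graph.

    H: (n, d) node features, one row per token (e.g., Bi-LSTM hidden states)
    A: (n, n) adjacency matrix derived from the dependency parse
    W: (d, d') weight matrix (learned in a real model; random here)
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops so each token keeps its own features
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # normalize by node degree
    return np.maximum(D_inv @ A_hat @ H @ W, 0.0)  # aggregate neighbors, then ReLU

# toy 3-token sentence with dependency edges token0 <-> token1 <-> token2
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.random.default_rng(0).standard_normal((3, 4))
W = np.random.default_rng(1).standard_normal((4, 4))
out = gcn_layer(H, A, W)
print(out.shape)
```

Stacking such layers lets each token's representation absorb information from syntactically related words, which is how a dependency-based GCN can connect a target word to its opinion words even when they are far apart in the linear sentence order.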


2021 ◽  
pp. 200-207
Author(s):  
Zhu Ping

Natural language semantic engineering problems face the twin challenges of unknown input and intensive knowledge. To adapt to the features of natural language semantic engineering, the AI programming language needs to be extended mathematically: 1) using multiple mechanisms to improve the spatial distribution and coverage of instances; 2) keeping different abstract function versions running at the same time; 3) providing a large number of knowledge configuration files and supporting functions to deal with knowledge-intensive problems; 4) using most-possibility-priority calls to solve the problem of traversing multiple running branches. This paper introduces the unknown-oriented programming ideas, basic strategy formulation, language design, and simulated running examples. It provides a new method for the incremental research and development of large-scale natural language semantic engineering applications. Finally, the paper summarizes the full text and puts forward further research directions.
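One reading of points 2) and 4) above — keeping multiple function versions live and calling them in order of estimated likelihood — can be sketched as a simple priority dispatcher. The handler list, probability weights, and toy handlers here are illustrative assumptions, not the paper's actual language mechanism.

```python
def priority_dispatch(handlers, x):
    """Try candidate handlers in descending estimated likelihood and
    return the first successful (non-None) result: a sketch of a
    'most-possibility-priority call' over coexisting function versions."""
    for prob, handler in sorted(handlers, key=lambda h: -h[0]):
        result = handler(x)
        if result is not None:
            return result
    return None  # no branch could interpret the unknown input

# two coexisting "versions" with prior probabilities (toy values)
handlers = [
    (0.2, lambda s: s.upper() if s.isalpha() else None),
    (0.7, lambda s: int(s) if s.isdigit() else None),
]
print(priority_dispatch(handlers, "42"))
print(priority_dispatch(handlers, "abc"))
```

The point of the pattern is graceful degradation: when the most likely interpretation of an unknown input fails, lower-probability branches are still available rather than the program halting.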


2019 ◽  
Vol 43 (3) ◽  
pp. 499-532
Author(s):  
Patrick Duffley

Abstract This article argues that the logical paraphrases used to describe the meanings of must, need, may, and can obscure the natural-language semantic interaction between these verbs and negation. The purported non-negatability of must is argued to be an illusion created by the indicative-mood paraphrase ‘is necessary’, which treats the necessity as a reality rather than a non-reality. It is proposed that negation coalesces with the modality that must itself expresses to produce a negatively-charged version of must’s modality: the subject of mustn’t is represented as being in a state of constraint in which the only possibility open to the subject is oriented in the opposite direction to the realization of the infinitive’s event. The study also constitutes an argument against a lexicalization analysis: in the combination mustn’t, must and not each contribute their own meaning to the resultant sense, but according to their conceptual status as inherently irrealis notions.


2018 ◽  
Author(s):  
E. Darío Gutiérrez ◽  
Amit Dhurandhar ◽  
Andreas Keller ◽  
Pablo Meyer ◽  
Guillermo A. Cecchi

There has been recent progress in predicting whether common verbal descriptors such as “fishy”, “floral” or “fruity” apply to the smell of odorous molecules. However, the number of descriptors for which such a prediction is possible to date is very small compared to the large number of descriptors that have been suggested for the profiling of smells. We show here that the use of natural language semantic representations on a small set of general olfactory perceptual descriptors allows for the accurate inference of perceptual ratings for mono-molecular odorants over a large and potentially arbitrary set of descriptors. This is a noteworthy approach given that the prevailing view is that humans’ capacity to identify or characterize odors by name is poor [1, 2, 3, 4, 5]. Our methods, when combined with a molecule-to-ratings model using chemoinformatic features, also allow for the zero-shot inference [6, 7] of perceptual ratings for arbitrary molecules. We successfully applied our semantics-based approach to predict perceptual ratings with an accuracy higher than 0.5 for up to 70 olfactory perceptual descriptors in a well-known dataset, a ten-fold increase in the number of descriptors over previous attempts. Moreover, we accurately predict paradigm odors of four common families of molecules with an AUC of up to 0.75. Our approach removes the need for the time-consuming tasks of hand-crafting domain-specific sets of descriptors in olfaction and collecting ratings for large numbers of descriptors and odorants [8, 9, 10, 11], while establishing that the semantic distance between descriptors defines the equivalent of an odor wheel.
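The inference idea above — rating unseen descriptors through their semantic proximity to a small rated set — can be sketched as a similarity-weighted average in an embedding space. The two-dimensional vectors and ratings below are toy stand-ins for real word embeddings and collected perceptual ratings, not the authors' data or model.

```python
import numpy as np

def infer_rating(target_vec, seed_vecs, seed_ratings):
    """Rate an unseen descriptor as a cosine-similarity-weighted
    average over descriptors that do have collected ratings
    (a simplified sketch of semantics-based rating inference)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    # clip negative similarities so weights stay non-negative
    sims = np.array([max(cos(target_vec, v), 0.0) for v in seed_vecs])
    return float(sims @ seed_ratings / sims.sum())

# toy embedding space: "fruity" lies much closer to "sweet" than to "smoky"
sweet  = np.array([1.0, 0.1])
smoky  = np.array([0.1, 1.0])
fruity = np.array([0.9, 0.2])
rating = infer_rating(fruity, [sweet, smoky], np.array([0.8, 0.2]))
print(round(rating, 3))
```

Because the weights are non-negative and normalized, the inferred rating stays within the range of the seed ratings and is pulled toward the semantically closest rated descriptor, which is the intuition behind extending a small rated descriptor set to an arbitrary vocabulary.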

