Object-Attribute Approach for Semantic Analysis of Natural Language

2021 ◽  
Vol 27 (5) ◽  
pp. 267-274
Author(s):  
S. M. Salibekyan ◽  

The article describes a methodology for semantic analysis of natural language (NL) and semantic search over it, which includes: the general stages of NL analysis, the format of the semantic network for representing the meaning of a text, analysis of word polysemy, semantic and syntactic agreement of words, etc. The method is based on the object-attribute principle of organizing computations and data structures, which belongs to the dataflow class.
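
The abstract does not spell out the object-attribute network format, so the following is only a hypothetical sketch of the general idea: concepts as nodes carrying attribute dictionaries, with labeled relations between them. All names and fields are illustrative assumptions, not the paper's notation.

```python
# Hypothetical sketch only: the paper's actual object-attribute format
# is not specified in this abstract.
from dataclasses import dataclass, field

@dataclass
class Node:
    """A concept extracted from text, e.g., one resolved sense of a word."""
    label: str
    attributes: dict = field(default_factory=dict)  # semantic attributes

class SemanticNetwork:
    """Nodes plus labeled relations, representing the meaning of a text."""
    def __init__(self):
        self.nodes = {}
        self.edges = []  # (source, relation, target) triples

    def add_node(self, label, **attributes):
        self.nodes[label] = Node(label, attributes)

    def relate(self, source, relation, target):
        self.edges.append((source, relation, target))

net = SemanticNetwork()
net.add_node("bank", sense="financial_institution")  # polysemy resolved to one sense
net.add_node("loan")
net.relate("bank", "issues", "loan")
print(net.edges)
```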

2020 ◽  
Vol 189 ◽  
pp. 03019
Author(s):  
Quan Yanan ◽  
Tan Fuqiang

At present, many movie reviews appear on mainstream websites, and evaluations of the same movie can differ widely. How is a customer to choose a favorite movie or television program? To address this problem, this study uses word-vector (Word2vec) semantic analysis from machine learning as a research tool to mine a large number of movie reviews. The research shows that most movie reviews have a certain thematic cohesion and that their semantic networks are highly connected. By combining social network analysis with Word2vec word-vector technology from natural language processing, it is possible to present a streamlined movie review based on review-network semantics and keyword extraction, thus helping viewers select their favorite movies.
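
As a rough illustration of the approach described (not the authors' code), the sketch below trains Word2vec vectors on a toy set of tokenized reviews with gensim and queries for thematically related terms. The corpus and all hyperparameters are placeholder assumptions.

```python
# A minimal sketch, assuming gensim is installed; the toy reviews stand in
# for a large scraped review corpus.
from gensim.models import Word2Vec

reviews = [
    ["great", "plot", "and", "strong", "acting"],
    ["weak", "plot", "but", "beautiful", "cinematography"],
    ["strong", "acting", "saves", "a", "weak", "script"],
]

# Train word vectors; vector_size/window/epochs are illustrative values.
model = Word2Vec(sentences=reviews, vector_size=50, window=3, min_count=1, epochs=50)

# Words whose vectors lie closest to "plot" cluster around the same theme,
# the kind of thematic cohesion the study reports across review networks.
print(model.wv.most_similar("plot", topn=3))
```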


Author(s):  
A.S. Li ◽  
A.J.C. Trappey ◽  
C.V. Trappey

A registered trademark distinctively identifies a company and its products or services. A trademark (TM) is a type of intellectual property (IP) protected by the laws of the country where the trademark is officially registered. TM owners may take legal action when their IP rights are infringed upon. TM legal cases have grown in pace with the increasing number of TMs registered globally. In this paper, an intelligent recommender system automatically identifies similar TM case precedents for any given target case to support IP legal research. This study constructs a semantic network representing the TM legal scope and terminology. A system is built to identify similar cases based on machine-readable, frame-based knowledge representations of the judgments/documents. In this research, 4,835 US TM legal cases litigated in the US district and federal courts are collected as the experimental dataset. A computer-assisted system is constructed to extract critical features based on the ontology schema. The recommender identifies similar prior cases according to the values of the features embedded in these legal documents, which include the case facts, issues under dispute, judgment holdings, and applicable rules and laws. Term frequency-inverse document frequency is used for text mining to discover the critical features of the litigated cases. A soft clustering algorithm, e.g., Latent Dirichlet Allocation, is applied to generate topics and assign cases to them. Thus, similar cases under each topic are identified for reference. Through analysis of the similarity between cases based on TM legal semantic analysis, the intelligent recommender provides precedents to support TM legal action and strategic planning.
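
The abstract names TF-IDF feature extraction and LDA soft clustering; a minimal sketch of that pipeline with scikit-learn follows. The toy case texts and parameter values are placeholders, not the authors' dataset or configuration.

```python
# A minimal sketch of the described pipeline (TF-IDF features + LDA topics).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

cases = [
    "trademark infringement likelihood of confusion consumer survey",
    "trade dress distinctiveness secondary meaning packaging",
    "likelihood of confusion similar marks related goods",
]

# TF-IDF surfaces the critical terms of each litigated case.
vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(cases)
top = np.argsort(tfidf.toarray()[0])[::-1][:3]
print([vec.get_feature_names_out()[i] for i in top])

# LDA works on raw term counts; each case receives a distribution over
# topics, so cases sharing a dominant topic are candidate precedents.
counts = CountVectorizer(stop_words="english").fit_transform(cases)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
print(lda.fit_transform(counts))  # rows: cases, columns: topic proportions
```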


2021 ◽  
Vol 47 (05) ◽  
Author(s):  
NGUYỄN CHÍ HIẾU

In recent years, knowledge graphs have been applied in many fields, such as search engines, semantic analysis, and question answering. However, there are many obstacles to building knowledge graphs, including methodologies, data, and tools. This paper introduces a novel methodology for building a knowledge graph from heterogeneous documents. We use natural language processing and deep learning methodologies to build the graph. The knowledge graph can be used in question answering systems and information retrieval, especially in the computing domain.
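
The paper's pipeline is not detailed in this abstract, so the sketch below shows one common way to seed a knowledge graph from text: subject-verb-object triples extracted with spaCy's dependency parse and stored in a networkx graph. The model name and example sentences are assumptions, not the authors' setup.

```python
# A minimal sketch, not the paper's method: SVO triples -> directed graph.
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")  # assumes this model is installed
graph = nx.DiGraph()

doc = nlp("Knowledge graphs support search engines. Deep learning builds knowledge graphs.")
for sent in doc.sents:
    for token in sent:
        if token.dep_ != "ROOT":
            continue
        subj = next((c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")), None)
        obj = next((c for c in token.children if c.dep_ in ("dobj", "obj")), None)
        if subj and obj:
            # Edge label is the verb linking the two noun heads.
            graph.add_edge(subj.text, obj.text, relation=token.lemma_)

print(list(graph.edges(data=True)))
```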


Author(s):  
Katrin Erk

Computational semantics performs automatic meaning analysis of natural language. Research in computational semantics designs meaning representations and develops mechanisms for automatically assigning those representations and reasoning over them. Computational semantics is not a single monolithic task but consists of many subtasks, including word sense disambiguation, multi-word expression analysis, semantic role labeling, the construction of sentence semantic structure, coreference resolution, and the automatic induction of semantic information from data. The development of manually constructed resources has been vastly important in driving the field forward. Examples include WordNet, PropBank, FrameNet, VerbNet, and TimeBank. These resources specify the linguistic structures to be targeted in automatic analysis, and they provide high-quality human-generated data that can be used to train machine learning systems. Supervised machine learning based on manually constructed resources is a widely used technique. A second core strand has been the induction of lexical knowledge from text data. For example, words can be represented through the contexts in which they appear (called distributional vectors or embeddings), such that semantically similar words have similar representations. Or semantic relations between words can be inferred from patterns of words that link them. Wide-coverage semantic analysis always needs more data, both lexical knowledge and world knowledge, and automatic induction at least alleviates the problem. Compositionality is a third core theme: the systematic construction of structural meaning representations of larger expressions from the meaning representations of their parts. The representations typically use logics of varying expressivity, which makes them well suited to performing automatic inferences with theorem provers. Manual specification and automatic acquisition of knowledge are closely intertwined. Manually created resources are automatically extended or merged. The automatic induction of semantic information is guided and constrained by manually specified information, which is much more reliable. And for restricted domains, the construction of logical representations is learned from data. It is at the intersection of manual specification and machine learning that some of the current larger questions of computational semantics are located. For instance, should we build general-purpose semantic representations, or is lexical knowledge simply too domain-specific, and would we be better off learning task-specific representations every time? When performing inference, is it more beneficial to have the solid ground of a human-generated ontology, or is it better to reason directly with text snippets for more fine-grained and gradual inference? Do we obtain a better and deeper semantic analysis as we use better and deeper manually specified linguistic knowledge, or is the future in powerful learning paradigms that learn to carry out an entire task from natural language input and output alone, without pre-specified linguistic knowledge?
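
To make the distributional idea above concrete, here is a toy sketch (not from the article): words are represented by sentence-level co-occurrence counts, and cosine similarity shows that words sharing contexts, such as "cat" and "dog", end up with similar vectors.

```python
# Toy distributional vectors: co-occurrence counts + cosine similarity.
import numpy as np

sentences = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the dog ate the bone",
]

vocab = sorted({w for s in sentences for w in s.split()})
index = {w: i for i, w in enumerate(vocab)}
vectors = np.zeros((len(vocab), len(vocab)))

# Count co-occurrences within each sentence (a crude context window).
for s in sentences:
    words = s.split()
    for w in words:
        for c in words:
            if w != c:
                vectors[index[w], index[c]] += 1

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# "cat" and "dog" share contexts ("chased", "the"), so their vectors align.
print(cosine(vectors[index["cat"]], vectors[index["dog"]]))
```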


2015 ◽  
Vol 39 (2) ◽  
pp. 197-213 ◽  
Author(s):  
Ahmet Uyar ◽  
Farouk Musa Aliyu

Purpose – The purpose of this paper is to better understand three main aspects of the semantic web search engines Google Knowledge Graph and Bing Satori. The authors investigated: coverage of entity types, the extent of their support for list search services and the capabilities of their natural language query interfaces.
Design/methodology/approach – The authors manually submitted selected queries to these two semantic web search engines and evaluated the returned results. To test the coverage of entity types, the authors selected the entity types from the Freebase database. To test the capabilities of the natural language query interfaces, the authors used a manually developed query data set about US geography.
Findings – The results indicate that both semantic search engines cover only the very common entity types. In addition, the list search service is provided for only a small percentage of entity types. Moreover, both search engines support queries of very limited complexity and with a limited set of recognised terms.
Research limitations/implications – Both companies are continually working to improve their semantic web search engines. Therefore, the findings show their capabilities at the time this research was conducted.
Practical implications – The results show that in the near future both semantic search engines can be expected to expand their entity databases and improve their natural language interfaces.
Originality/value – As far as the authors know, this is the first study evaluating any aspect of these newly developing semantic web search engines. It shows their current capabilities and limitations, and it provides directions to researchers by pointing out the main problems for semantic web search engines.


Radiographics ◽  
2010 ◽  
Vol 30 (7) ◽  
pp. 2039-2048 ◽  
Author(s):  
Bao H. Do ◽  
Andrew Wu ◽  
Sandip Biswal ◽  
Aya Kamaya ◽  
Daniel L. Rubin

Author(s):  
DAN CORBETT

It has never been demonstrated that a pure semantic analysis of an English sentence can be accomplished without any aid from a syntactic analyzer. It has therefore become interesting to demonstrate semantic systems that can be guided by fast and efficient syntactic methods. We show that non-probabilistic, abductive techniques can be used in a hybrid network to correctly interpret the meaning of an English sentence. We discuss the implementation of an abductive system that uses heuristics working together with a semantic network in an attempt to eliminate uncertainty and ambiguity in natural language text.
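
As a minimal, heavily simplified sketch of the general idea (not the authors' abductive system), the snippet below picks the sense of an ambiguous word whose semantic-network neighbours best overlap with the sentence context, a simple non-probabilistic heuristic; the network fragment is a hypothetical hand-built example.

```python
# Hypothetical semantic-network fragment; not the authors' implementation.
SEMANTIC_NET = {
    ("bank", "financial"): {"money", "loan", "deposit"},
    ("bank", "river"): {"water", "shore", "fishing"},
}

def disambiguate(word, context_words):
    """Pick the sense with the most context overlap; None if no evidence."""
    best_sense, best_score = None, 0
    for (w, sense), neighbours in SEMANTIC_NET.items():
        if w != word:
            continue
        score = len(neighbours & set(context_words))
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

print(disambiguate("bank", ["he", "made", "a", "deposit", "at", "the", "bank"]))
# -> financial
```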

