Educing knowledge from text: semantic information extraction of spatial concepts and places

2021 ◽  
Vol 2 ◽  
pp. 1-7
Author(s):  
Evangelos Papadias ◽  
Margarita Kokla ◽  
Eleni Tomai

Abstract. A growing body of geospatial research has shifted the focus from fully structured to semi-structured and unstructured content written in natural language. Natural language texts provide a wealth of knowledge about geospatial concepts, places, events, and activities that needs to be extracted and formalized to support semantic annotation, knowledge-based exploration, and semantic search. The paper presents a web-based prototype for the extraction of geospatial entities and concepts, and the subsequent semantic visualization and interactive exploration of the extraction results. A lightweight ontology anchored in natural language guides the interpretation of natural language texts and the extraction of relevant domain knowledge. The approach is applied to three heterogeneous sources which provide a wealth of spatial concepts and place names.
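The extraction step the abstract describes can be pictured with a minimal sketch: match text against a gazetteer of place names and a list of spatial concepts. The gazetteer and concept list below are hypothetical stand-ins for the paper's lightweight ontology, not its actual contents.

```python
# Minimal sketch of gazetteer-based extraction of place names and
# spatial concepts from free text; the vocabularies are illustrative.
import re

GAZETTEER = {"Athens", "Crete", "Aegean Sea"}          # known place names
SPATIAL_CONCEPTS = {"coast", "island", "mountain"}     # domain concepts

def extract(text):
    """Return (places, concepts) found in the text."""
    places = {p for p in GAZETTEER
              if re.search(r"\b" + re.escape(p) + r"\b", text)}
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    concepts = SPATIAL_CONCEPTS & tokens
    return places, concepts

places, concepts = extract("The island of Crete lies in the Aegean Sea, "
                           "its coast dotted with villages.")
```

A real system would replace the flat sets with ontology classes so that each match carries a concept URI rather than a bare string.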

Author(s):  
Naveen Ashish ◽  
Sharad Mehrotra

The authors present the XAR framework, which allows for free-text information extraction and semantic annotation. The language underpinning XAR, the authors argue, allows for the inclusion of probabilistic reasoning in the rule language, provides higher-level predicates capturing text features and relationships, and defines and supports advanced features such as token consumption and stratified negation in the rule language and semantics. The XAR framework also allows semantic information to be incorporated as integrity constraints in the extraction and annotation process. XAR aims to fill a gap, the authors claim, in Web-based information extraction systems: it provides an extraction and annotation framework that permits the integrated use of hand-crafted extraction rules, machine-learning-based extractors, and semantic information about the particular domain of interest. The XAR system has been deployed in an emergency-response scenario with civic agencies in North America and in a scenario with the IT department of a county-level community clinic.
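The combination of hand-crafted extraction rules with integrity constraints can be sketched as follows. This is not the XAR rule language itself, only an illustration of the idea: a regex rule extracts candidate facts, and a semantic constraint discards implausible ones.

```python
# Illustrative sketch (not the actual XAR language): a hand-crafted
# extraction rule paired with a semantic integrity constraint that
# filters implausible extractions.
import re

RULE = re.compile(r"(?P<name>[A-Z][a-z]+) is (?P<age>\d+) years old")

def constraint_ok(fact):
    # Integrity constraint: a person's age must be plausible.
    return 0 < fact["age"] < 130

def extract_ages(text):
    facts = [{"name": m["name"], "age": int(m["age"])}
             for m in RULE.finditer(text)]
    return [f for f in facts if constraint_ok(f)]

facts = extract_ages("Alice is 34 years old. Bob is 999 years old.")
```

In XAR's setting the constraints come from domain semantics rather than being hard-coded next to the rule, but the filtering effect is the same: the implausible extraction for Bob is dropped.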


2021 ◽  
Vol 2 (4) ◽  
Author(s):  
Tiago Mota ◽  
Mohan Sridharan ◽  
Aleš Leonardis

Abstract. A robot’s ability to provide explanatory descriptions of its decisions and beliefs promotes effective collaboration with humans. Providing the desired transparency in decision making is challenging in integrated robot systems that include knowledge-based reasoning methods and data-driven learning methods. As a step towards addressing this challenge, our architecture combines the complementary strengths of non-monotonic logical reasoning with incomplete commonsense domain knowledge, deep learning, and inductive learning. During reasoning and learning, the architecture enables a robot to provide on-demand explanations of its decisions, the evolution of associated beliefs, and the outcomes of hypothetical actions, in the form of relational descriptions of relevant domain objects, attributes, and actions. The architecture’s capabilities are illustrated and evaluated in the context of scene understanding tasks and planning tasks performed using simulated images and images from a physical robot manipulating tabletop objects. Experimental results indicate the ability to reliably acquire and merge new information about the domain in the form of constraints, preconditions, and effects of actions, and to provide accurate explanations in the presence of noisy sensing and actuation.


2021 ◽  
Vol 14 (3) ◽  
pp. 38-57
Author(s):  
Tuan-Dung Cao ◽  
Quang-Minh Nguyen

The heterogeneity and growing volume of news published on the web make accessing it challenging. In their previous studies, the authors introduced a semantic web-based sports news aggregation system called BKSport, which generates metadata for every news item. Providing an intuitive and expressive way to retrieve information, and exploiting the advantages of semantic search techniques, is within their consideration. In this paper, they propose a method to transform natural language questions into SPARQL queries, which can be applied to the existing semantic data. The method rests on the following tasks: constructing a semantic model representing a question, detecting ontology vocabulary and knowledge base elements in the question, and mapping them to generate a query. Experiments performed on a set of questions belonging to various categories show that the proposed method achieves high precision.
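The question-to-SPARQL pipeline can be sketched in miniature: detect an entity and a property in the question, map both to ontology terms, and fill a query template. The vocabulary and namespace prefixes below are hypothetical, not BKSport's actual ontology.

```python
# Toy sketch of natural-language-question to SPARQL translation.
# VOCAB and ENTITIES stand in for the ontology-vocabulary detection
# and knowledge-base lookup steps described in the abstract.
VOCAB = {"scored": "sport:goalsScoredBy", "team": "sport:Team"}
ENTITIES = {"Barcelona": "res:Barcelona"}

def question_to_sparql(question):
    # Entity detection: first known surface form found in the question.
    entity = next(uri for surface, uri in ENTITIES.items()
                  if surface in question)
    # Property detection: first question word with a vocabulary mapping.
    prop = next(VOCAB[w] for w in question.lower().split() if w in VOCAB)
    return f"SELECT ?x WHERE {{ ?x {prop} {entity} . }}"

query = question_to_sparql("Who scored for Barcelona?")
```

A real implementation would build a full semantic model of the question (as the paper does) instead of this first-match heuristic, and would handle questions with no detectable entity or property.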


2017 ◽  
Vol 29 (1) ◽  
pp. 57-72
Author(s):  
Marcelo SCHIESSL ◽  
Marisa BRÄSCHER

Abstract. The proposal presented in this study seeks to map natural language to ontologies and vice versa. To this end, the semi-automatic creation of a machine-readable lexical database in Brazilian Portuguese containing morphological, syntactic, and semantic information was proposed, allowing the link between structured and unstructured data and its integration into an information retrieval model to improve precision. The results obtained demonstrate that the methodology can be used in the risco financeiro (financial risk) domain in Portuguese for the construction of an ontology and the lexical-semantic database, and for the proposal of a semantic information retrieval model. To evaluate the performance of the proposed model, documents containing the main definitions of the financial risk domain were selected and indexed with and without semantic annotation. To enable comparison between the approaches, two databases were created: the first represents the traditional search, and the second contains an index built from the semantically annotated texts to represent the semantic search. The evaluation of the proposal was based on recall and precision. The queries submitted to the model showed that the semantic search outperforms the traditional search, validating the methodology used. Although more complex, the proposed procedure can be used in all kinds of domains.
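Recall and precision, the two measures the evaluation rests on, are simple to state in code. The document identifiers and result sets below are invented for illustration; only the metric definitions are standard.

```python
# Precision and recall over one query's result set.
def precision_recall(retrieved, relevant):
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical results for one financial-risk query:
traditional = ["d1", "d4", "d7", "d9"]   # keyword index
semantic    = ["d1", "d2", "d3"]         # semantically annotated index
relevant    = ["d1", "d2", "d3"]         # gold standard for the query

p_trad, r_trad = precision_recall(traditional, relevant)  # 0.25, ~0.33
p_sem,  r_sem  = precision_recall(semantic, relevant)     # 1.0, 1.0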


2012 ◽  
Vol 8 (3) ◽  
pp. 34-53 ◽  
Author(s):  
Hamed Fazlollahtabar ◽  
Amir Muhammadzadeh

The Internet, and the World Wide Web in particular, provide a unique platform to connect learners with educational resources. Educational material in hypermedia form in a Web-based educational system makes learning a task-driven process, motivating learners to explore alternative navigational paths through the domain knowledge and to draw on different resources around the globe. Many researchers have focused on developing e-learning systems with personalized learning mechanisms to assist on-line Web-based learning and to adaptively provide learning paths. Although most personalized systems consider learner preferences, interests, and browsing behaviors when providing personalized curriculum sequencing services, they usually neglect to consider whether learner ability and the difficulty level of the recommended curriculums match each other. Therefore, the authors' proposed approach is based on an integer program (IP) that optimizes a user's curriculum, combined with a fuzzy logic approach that analyzes the effective criteria through linguistic variables in a knowledge-based system. The effectiveness of the proposed framework is shown by numerical illustrations inferred from the designed user interface.
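The ability-difficulty matching idea behind the integer program can be illustrated with a toy model: choose a set of courses that fits a time budget while minimizing the mismatch between each course's difficulty and the learner's ability. The course data, scoring, and brute-force search below are illustrative assumptions, not the paper's actual IP formulation or a real solver.

```python
# Toy curriculum selection: brute-force stand-in for an integer program.
from itertools import combinations

COURSES = {            # name: (difficulty, hours) -- invented data
    "intro":    (2, 3),
    "core":     (5, 4),
    "advanced": (8, 5),
}
ABILITY, BUDGET = 5, 8   # learner ability level, available hours

def best_plan():
    best, best_score = (), float("inf")
    for r in range(1, len(COURSES) + 1):
        for plan in combinations(COURSES, r):
            hours = sum(COURSES[c][1] for c in plan)
            if hours > BUDGET:           # time-budget constraint
                continue
            # Objective: total ability/difficulty mismatch of the plan.
            score = sum(abs(COURSES[c][0] - ABILITY) for c in plan)
            if score < best_score:
                best, best_score = plan, score
    return best

plan = best_plan()
```

With these numbers the search selects the course whose difficulty equals the learner's ability; the paper's fuzzy-logic layer would supply the ability level and criteria weights instead of hard-coding them.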


2017 ◽  
pp. 030-050
Author(s):  
J.V. Rogushina

Problems associated with the improvement of information retrieval for open environments are considered, and the need for its semantization is grounded. The current state and development prospects of semantic search engines focused on processing Web information resources are analysed, and criteria for the classification of such systems are reviewed. In this analysis, significant attention is paid to the use in semantic search of ontologies that contain knowledge about the subject area and the search users. The sources of ontological knowledge and methods for processing them to improve search procedures are considered. Examples are given of semantic search systems that use structured query languages (e.g., SPARQL), lists of keywords, and queries in natural language. Criteria for classifying semantic search engines, such as architecture, coupling, transparency, user context, query modification, ontology structure, etc., are considered. Different ways of supporting semantic, ontology-based modification of user queries that improve the completeness and accuracy of search are analysed. Based on an analysis of the properties of existing semantic search engines in terms of these criteria, areas for further improvement of these systems are identified: the development of metasearch systems, semantic modification of user queries, determination of a user-acceptable level of transparency of the search procedures, flexibility of domain knowledge management tools, and increased productivity and scalability. In addition, the development of semantic Web search tools requires the use of an external knowledge base that contains knowledge about the domain of the user's information needs, and should provide users with the ability to independently select the knowledge used in the search process.
It is also necessary to take into account the history of user interaction with the retrieval system and the search context, in order to personalize query results and order them in accordance with the user's information needs. All these aspects were taken into account in the design and implementation of the semantic search engine "MAIPS", which is based on an ontological model of the cooperation of users and resources on the Web.
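The ontology-based query modification discussed above can be sketched with a toy ontology: expand each query term with its subclasses and synonyms up to a fixed depth. The ontology content and depth limit below are illustrative assumptions, not MAIPS's actual model.

```python
# Sketch of ontology-driven query expansion with a toy subclass/synonym map.
ONTOLOGY = {
    "vehicle": ["car", "truck", "bicycle"],   # subclasses
    "car": ["automobile"],                    # synonyms
}

def expand(term, depth=2):
    """Collect the term plus everything reachable in `depth` ontology hops."""
    terms = {term}
    frontier = {term}
    for _ in range(depth):
        frontier = {n for t in frontier for n in ONTOLOGY.get(t, [])}
        terms |= frontier
    return terms

expanded = expand("vehicle")
```

Expanding "vehicle" pulls in its subclasses and, one hop further, the synonym of "car" — the kind of query broadening that improves completeness (recall), at some cost to accuracy, as the abstract notes.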


1982 ◽  
Author(s):  
Ralph Grishman ◽  
Lynette Hirschman ◽  
Carol Friedman

2018 ◽  
Vol 23 (3) ◽  
pp. 175-191
Author(s):  
Anneke Annassia Putri Siswadi ◽  
Avinanta Tarigan

To fulfill prospective students' information needs about student admission, Gunadarma University already offers many kinds of services, all of them time-limited, such as a website, a book, a registration desk, the Media Information Center, and a question-answering website (UG-Pedia). It needs a service that can serve them anytime and anywhere. Therefore, this research develops UGLeo, a web-based QA intelligent chatbot application for Gunadarma University's student admission portal. UGLeo is developed in the MegaHAL style, which implements the Markov chain method. In this research, some modifications are made to the MegaHAL style, namely to the structure of the natural language processing and the structure of the database. The accuracy of UGLeo's replies is 65%. To increase this accuracy, however, several improvements could be applied to the UGLeo system, both in the natural language processing and in the MegaHAL style.
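The Markov chain method underlying MegaHAL-style chatbots can be shown in a few lines: learn word-bigram transitions from a corpus, then walk the chain from a keyword. The training sentence is a toy stand-in for UGLeo's admission data, and real MegaHAL models use higher-order chains in both directions.

```python
# Minimal first-order Markov-chain text generator (MegaHAL-style sketch).
import random
from collections import defaultdict

def train(corpus):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=5, seed=0):
    rng = random.Random(seed)      # seeded for reproducibility
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:          # dead end: no observed successor
            break
        out.append(rng.choice(followers))
    return " ".join(out)

chain = train("admission requires a form and admission requires a fee")
reply = generate(chain, "admission")
```

Duplicated transitions (here, "admission" → "requires" twice) make frequent continuations proportionally more likely, which is how the chain reflects the training corpus's statistics.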

