Workflows Científicos com Apoio de Bases de Conhecimento em Tempo Real (Scientific Workflows Supported by Real-Time Knowledge Bases)

2020 ◽  
Author(s):  
Victor S. Bursztyn ◽  
Jonas Dias ◽  
Marta Mattoso

One major challenge in large-scale experiments is the analytical capacity to contrast ongoing results with domain knowledge. We approach this challenge by constructing a domain-specific knowledge base, which is queried during workflow execution. We introduce K-Chiron, an integrated solution that combines a state-of-the-art automatic knowledge base construction (KBC) system with Chiron, a well-established workflow engine. In this work we experiment in the context of Political Science to show how KBC may be used to improve human-in-the-loop (HIL) support in scientific experiments. While traditional domain-expert supervision with HIL is done offline, in K-Chiron it is done online, i.e., at runtime. We achieve results in less laborious ways, to the point of enabling a class of experiments that would be unfeasible with traditional HIL. Finally, we show how provenance data could be leveraged with KBC to enable further experimentation in more dynamic settings.
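The core idea of online HIL can be sketched in a few lines: each workflow activity's intermediate result is checked against a domain knowledge base at runtime rather than in an offline review. A minimal sketch, assuming a hypothetical in-memory knowledge base and illustrative names (none of this is K-Chiron's actual API):

```python
# Illustrative sketch: a workflow activity consults a domain knowledge base
# at runtime, so inconsistent results are flagged online instead of waiting
# for offline expert review.

KNOWLEDGE_BASE = {
    # fact -> accepted value range, as a domain expert might have curated
    "approval_rating": (0.0, 1.0),
}

def validate_online(fact, value, kb=KNOWLEDGE_BASE):
    """Return True if the result is consistent with domain knowledge."""
    lo, hi = kb.get(fact, (float("-inf"), float("inf")))
    return lo <= value <= hi

def run_workflow(activities):
    """Execute activities; flag any result that contradicts the KB."""
    flagged = []
    for fact, compute in activities:
        value = compute()
        if not validate_online(fact, value):
            flagged.append((fact, value))  # candidate for steering at runtime
    return flagged

flagged = run_workflow([
    ("approval_rating", lambda: 0.62),   # consistent with the KB
    ("approval_rating", lambda: 1.35),   # out of range: caught online
])
```

The point of the sketch is the timing: the KB query happens inside the execution loop, which is what distinguishes online HIL from post-hoc supervision.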


2020 ◽  
Vol 34 (03) ◽  
pp. 2901-2908 ◽  
Author(s):  
Weijie Liu ◽  
Peng Zhou ◽  
Zhe Zhao ◽  
Zhiruo Wang ◽  
Qi Ju ◽  
...  

Pre-trained language representation models, such as BERT, capture a general language representation from large-scale corpora but lack domain-specific knowledge. When reading a domain text, experts make inferences with relevant knowledge. For machines to achieve this capability, we propose a knowledge-enabled language representation model (K-BERT) with knowledge graphs (KGs), in which triples are injected into the sentences as domain knowledge. However, incorporating too much knowledge may divert a sentence from its correct meaning, an issue called knowledge noise (KN). To overcome KN, K-BERT introduces soft-position embedding and a visible matrix to limit the impact of injected knowledge. Because K-BERT can load the model parameters of pre-trained BERT, it can inject domain knowledge simply by being equipped with a KG, without any pre-training of its own. Our investigation reveals promising results on twelve NLP tasks. Especially on domain-specific tasks (including finance, law, and medicine), K-BERT significantly outperforms BERT, which demonstrates that K-BERT is an excellent choice for knowledge-driven problems that require expert knowledge.
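The visible-matrix idea can be illustrated with a toy implementation: tokens injected from a KG triple form a "branch" hanging off one entity, and the matrix restricts attention so that a branch is visible only to its own entity. This is a simplified sketch under toy assumptions (real K-BERT also uses soft-position indices and works on subword tokens):

```python
# Illustrative sketch of K-BERT's visible matrix: tokens injected from a KG
# triple may attend only to the entity they extend, limiting knowledge noise.

def build_visible_matrix(sentence_tokens, injections):
    """injections: {entity_index: [injected tokens]} appended after the sentence."""
    tokens = list(sentence_tokens)
    branch_of = {}           # injected-token index -> entity index it hangs from
    for ent_idx, extra in injections.items():
        for tok in extra:
            branch_of[len(tokens)] = ent_idx
            tokens.append(tok)
    n = len(tokens)
    visible = [[True] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            bi, bj = branch_of.get(i), branch_of.get(j)
            # sentence tokens see each other freely; a branch token sees only
            # its own entity and its siblings, and vice versa
            if bi != bj and not (bi is None and bj is None):
                visible[i][j] = (bi == j) or (bj == i)
    return tokens, visible

# "Tim Cook visits Beijing" with triples (Cook, CEO, Apple) and
# (Beijing, capital, China) injected after entities at indices 1 and 3.
tokens, vis = build_visible_matrix(
    ["Tim", "Cook", "visits", "Beijing"],
    {1: ["CEO", "Apple"], 3: ["capital", "China"]},
)
```

With this matrix, "CEO" and "Apple" influence only the representation of "Cook"; the unrelated token "Tim" never sees them, which is exactly the KN mitigation the abstract describes.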


2017 ◽  
Author(s):  
Marilena Oita ◽  
Antoine Amarilli ◽  
Pierre Senellart

Deep Web databases, whose content is presented as dynamically-generated Web pages hidden behind forms, have mostly been left unindexed by search engine crawlers. In order to automatically explore this mass of information, many current techniques assume the existence of domain knowledge, which is costly to create and maintain. In this article, we present a new perspective on form understanding and deep Web data acquisition that does not require any domain-specific knowledge. Unlike previous approaches, we do not perform the various steps in the process (e.g., form understanding, record identification, attribute labeling) independently but integrate them to achieve a more complete understanding of deep Web sources. Through information extraction techniques and using the form itself for validation, we reconcile input and output schemas in a labeled graph which is further aligned with a generic ontology. The impact of this alignment is threefold: first, the resulting semantic infrastructure associated with the form can assist Web crawlers when probing the form for content indexing; second, attributes of response pages are labeled by matching known ontology instances, and relations between attributes are uncovered; and third, we enrich the generic ontology with facts from the deep Web.
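The reconciliation step described above can be pictured as mapping both the input schema (form field labels) and the output schema (response-page attribute labels) onto one generic ontology. A minimal sketch, where the tiny ontology and the word-overlap matcher are pure assumptions for the demo (the article's actual alignment is far richer):

```python
# Illustrative sketch of schema reconciliation: labels from the form (input
# schema) and from response pages (output schema) are aligned with the same
# generic ontology, so both sides end up annotated with shared concepts.

ONTOLOGY = {
    "Book.title": {"title", "book"},
    "Book.author": {"author", "writer"},
    "Book.price": {"price", "cost"},
}

def align(label, ontology=ONTOLOGY):
    """Map a free-text label to the ontology concept with most word overlap."""
    words = set(label.lower().split())
    best, best_score = None, 0
    for concept, keywords in ontology.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = concept, score
    return best

input_schema = ["Book title", "Author name"]   # labels scraped from the form
output_schema = ["Title", "Price"]             # labels from a response page
labeled_graph = {lbl: align(lbl) for lbl in input_schema + output_schema}
```

Once both schemas point at the same concepts, a crawler can probe the form with known instances of those concepts, which is the first of the three impacts the abstract lists.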


2005 ◽  
Vol 19 (2) ◽  
pp. 57-77 ◽  
Author(s):  
Gregory J. Gerard

Most database textbooks on conceptual modeling do not cover domain-specific patterns. The texts emphasize notation, apparently assuming that notation enables individuals to correctly model domain-specific knowledge acquired from experience. However, the domain knowledge acquired may not aid in the construction of conceptual models if it is not structured to support conceptual modeling. This study uses the Resources-Events-Agents (REA) pattern as an example of a domain-specific pattern that can be encoded as a knowledge structure for conceptual modeling of accounting information systems (AIS), and tests its effects on the accuracy of conceptual modeling in a familiar business setting. Fifty-three undergraduate and forty-six graduate students completed recall tasks designed to measure REA knowledge structure. The accuracy of participants' conceptual models was positively related to REA knowledge structure. The results suggest that knowing conceptual modeling notation alone is insufficient, because structured knowledge of domain-specific patterns reduces design errors.
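For readers unfamiliar with it, the REA pattern structures a business exchange as economic Resources, Events, and Agents, with a duality link pairing the "give" and "take" events. A sketch of the pattern as data classes, with all class and field names chosen for illustration rather than taken from the study's materials:

```python
# Illustrative sketch of the REA pattern: Resources, Events, and Agents,
# with give/take duality between paired economic events.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Resource:
    name: str            # e.g. inventory, cash

@dataclass
class Agent:
    name: str            # e.g. clerk, customer

@dataclass
class Event:
    name: str
    resource: Resource   # stockflow: which resource the event increases/decreases
    inside: Agent        # internal agent participating in the event
    outside: Agent       # external agent participating in the event
    dual: Optional["Event"] = None   # duality link to the paired event

# A sale: inventory is given, cash is received (event duality).
inventory, cash = Resource("inventory"), Resource("cash")
clerk, customer = Agent("clerk"), Agent("customer")
sale = Event("sale", inventory, clerk, customer)
receipt = Event("cash receipt", cash, clerk, customer)
sale.dual, receipt.dual = receipt, sale
```

The study's point is that modelers who carry this structure in memory, rather than only the diagramming notation, make fewer design errors when drawing AIS conceptual models.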


Author(s):  
Saira Gillani ◽  
Andrea Ko

Higher education and professional trainings often apply innovative e-learning systems, where ontologies are used for structuring domain knowledge. To provide up-to-date knowledge for the students, the ontology has to be maintained regularly. This is especially true for the IT audit and security domain, because the technology changes fast. However, manual ontology population and enrichment are complex tasks that require professional experience and considerable effort. The authors' paper deals with the challenges and possible solutions for semi-automatic ontology enrichment and population. ProMine has two main contributions: one is the semantic-based text mining approach for automatically identifying domain-specific knowledge elements; the other is the automatic categorization of these extracted knowledge elements by using Wiktionary. The ProMine ontology enrichment solution was applied in the IT audit domain of an e-learning system. After ten application cycles of ProMine, the number of automatically identified new concepts tripled, and ProMine categorized the new concepts with high precision and recall.
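The two ProMine contributions can be caricatured in a few lines: (1) mine candidate domain concepts from text, (2) assign each candidate to a category. ProMine uses semantic filtering and Wiktionary for these steps; in this sketch a toy frequency filter and a hand-made category lexicon stand in for both, purely as assumptions for the demo:

```python
# Illustrative sketch of semi-automatic ontology enrichment: mine frequent
# terms not yet in the ontology, then categorize them from a lexicon.

from collections import Counter

CATEGORY_LEXICON = {           # stand-in for Wiktionary-based categorization
    "firewall": "security control",
    "audit": "assurance activity",
    "encryption": "security control",
}

def mine_concepts(corpus, known_ontology, min_freq=2):
    """Return frequent terms not yet in the ontology as candidate concepts."""
    counts = Counter(w for doc in corpus for w in doc.lower().split())
    return [w for w, c in counts.items() if c >= min_freq and w not in known_ontology]

def categorize(concepts, lexicon=CATEGORY_LEXICON):
    """Assign each candidate concept to a category (or leave it for review)."""
    return {c: lexicon.get(c, "uncategorized") for c in concepts}

corpus = ["firewall rules and firewall logs", "audit the audit trail", "risk"]
new_concepts = mine_concepts(corpus, known_ontology={"risk"})
categories = categorize(new_concepts)
```

Terms the lexicon cannot categorize stay marked "uncategorized", which is where the semi-automatic part comes in: a human curator reviews only that remainder.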


2016 ◽  
Vol 34 (3) ◽  
pp. 435-456 ◽  
Author(s):  
Lixin Xia ◽  
Zhongyi Wang ◽  
Chen Chen ◽  
Shanshan Zhai

Purpose
Opinion mining (OM), also known as "sentiment classification", aims to discover common patterns of user opinions from their textual statements automatically or semi-automatically, and is useful not only for customers but also for manufacturers. However, because of the complexity of natural language, some problems remain, such as the domain dependence of sentiment words and the extraction of implicit features. The purpose of this paper is to propose an OM method based on topic maps to solve these problems.
Design/methodology/approach
Domain-specific knowledge is key to solving problems in feature-based OM. On the one hand, topic maps, as an ontology framework, are composed of topics, associations, occurrences and scopes, and can represent a class of knowledge representation schemes. On the other hand, compared with ontologies, topic maps have many advantages. Thus, it is better to integrate domain-specific knowledge into OM based on topic maps. This method can make full use of the semantic relationships among feature words and sentiment words.
Findings
In feature-level OM, most existing research associates product features and opinions by their explicit co-occurrence, or uses syntax parsing to judge the modification relationship between opinion words and product features within a review unit. These approaches are mostly based on the structure of language units, without considering domain knowledge. Only a few ontology-based methods incorporate domain knowledge into feature-based OM, and they use only the "is-a" relation between concepts. Therefore, this paper proposes feature-based OM using topic maps. The experimental results revealed that this method can improve the accuracy of OM. The findings of this study not only advance the state of OM research but also shed light on future research directions.
Research limitations/implications
To demonstrate "feature-based OM using topic maps" in practice, this work implements a prototype that helps users find a new washing machine.
Originality/value
This paper presents a new method of feature-based OM using topic maps, which can effectively integrate domain-specific knowledge into feature-based OM and greatly improve the accuracy of OM. The proposed method can be applied across application domains such as e-commerce and e-government.
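The implicit-feature problem the abstract mentions can be sketched concretely: a review may say "very quiet" without ever naming the feature "noise", and a topic-map association between the feature topic and the sentiment topic recovers it. The tiny topic map and polarity lexicon below are toy assumptions, not the paper's data:

```python
# Illustrative sketch of feature-based OM over a topic map: feature topics and
# sentiment topics are linked by typed associations (richer than a bare "is-a"
# ontology), so an implicit feature can be recovered from its sentiment word.

TOPIC_MAP = {
    # (association type, feature topic, sentiment topic)
    ("modified-by", "noise", "quiet"),
    ("modified-by", "noise", "loud"),
    ("modified-by", "capacity", "spacious"),
}

POLARITY = {"quiet": +1, "loud": -1, "spacious": +1}  # domain-tuned lexicon

def mine_opinion(sentence):
    """Map each sentiment word in the sentence to its feature via associations."""
    opinions = {}
    for word in sentence.lower().split():
        for _assoc_type, feature, sentiment in TOPIC_MAP:
            if word == sentiment:
                # implicit feature: "quiet" implies "noise" without naming it
                opinions[feature] = POLARITY[sentiment]
    return opinions

opinions = mine_opinion("This washing machine is quiet and spacious")
```

Because "quiet" is domain-tuned as positive for washing machines (it could be negative for, say, loudspeakers), the same mechanism also addresses the domain dependence of sentiment words.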


2021 ◽  
Vol 17 (8) ◽  
pp. e1009283
Author(s):  
Tomasz Konopka ◽  
Sandra Ng ◽  
Damian Smedley

Integrating reference datasets (e.g. from high-throughput experiments) with unstructured and manually-assembled information (e.g. notes or comments from individual researchers) has the potential to tailor bioinformatic analyses to specific needs and to lead to new insights. However, developing bespoke analysis pipelines from scratch is time-consuming, and general tools for exploring such heterogeneous data are not available. We argue that by treating all data as text, a knowledge-base can accommodate a range of bioinformatic data types and applications. We show that a database coupled to nearest-neighbor algorithms can address common tasks such as gene-set analysis as well as specific tasks such as ontology translation. We further show that a mathematical transformation motivated by diffusion can be effective for exploration across heterogeneous datasets. Diffusion enables the knowledge-base to begin with a sparse query, impute more features, and find matches that would otherwise remain hidden. This can be used, for example, to map multi-modal queries consisting of gene symbols and phenotypes to descriptions of diseases. Diffusion also enables user-driven learning: when the knowledge-base cannot provide satisfactory search results in the first instance, users can improve the results in real-time by adding domain-specific knowledge. User-driven learning has implications for data management, integration, and curation.
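The diffusion step can be sketched with toy data: a sparse query vector is expanded with features that co-occur with it in the knowledge-base, and only then is the nearest-neighbor ranking computed, so matches hidden from the raw query can surface. The records and the one-step diffusion below are illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch: expand a sparse query by feature co-occurrence
# ("diffusion"), then rank knowledge-base records against the expanded query.

RECORDS = {
    "disease_A": {"BRCA1", "breast"},
    "disease_B": {"TP53", "sarcoma"},
}

def diffuse(query, records, weight=0.5):
    """One diffusion step: add features that co-occur with query features."""
    scores = {f: 1.0 for f in query}
    for feats in records.values():
        if query & feats:                  # record overlaps the query
            for f in feats - query:        # impute its other features
                scores[f] = scores.get(f, 0.0) + weight
    return scores

def best_match(query, records):
    """Nearest neighbor under the diffused feature scores."""
    scores = diffuse(set(query), records)
    rank = {name: sum(scores.get(f, 0.0) for f in feats)
            for name, feats in records.items()}
    return max(rank, key=rank.get)

# A query of one gene symbol also pulls in the co-occurring phenotype term
# "breast", strengthening the match, as in the multi-modal queries described.
match = best_match(["BRCA1"], RECORDS)
```

User-driven learning fits the same shape: adding a curated record to `RECORDS` at query time immediately changes what diffusion imputes, with no retraining step.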


2021 ◽  
Vol 13 (4) ◽  
pp. 2276
Author(s):  
Taejin Kim ◽  
Yeoil Yun ◽  
Namgyu Kim

Many attempts have been made to construct new domain-specific knowledge graphs using the existing knowledge bases of various domains. However, traditional "dictionary-based" or "supervised" knowledge graph building methods rely on predefined human-annotated resources of entities and their relationships. The cost of creating human-annotated resources is high in terms of both time and effort. This means that relying on human-annotated resources does not allow rapid adaptability in describing new knowledge when domain-specific information is added or updated very frequently, as in the recent coronavirus disease-19 (COVID-19) pandemic. Therefore, in this study, we propose an Open Information Extraction (OpenIE) system based on unsupervised learning without a pre-built dataset. The proposed method obtains knowledge from a vast number of text documents about COVID-19 rather than from a general knowledge base, and adds it to the existing knowledge graph. First, we constructed a COVID-19 entity dictionary, and then we scraped a large text dataset related to COVID-19. Next, we constructed a COVID-19-perspective language model by fine-tuning the bidirectional encoder representations from transformers (BERT) pre-trained language model. Finally, we defined a new COVID-19-specific knowledge base by extracting connecting words between COVID-19 entities using the BERT self-attention weights from COVID-19 sentences. Experimental results demonstrated that the proposed Co-BERT model outperforms the original BERT in terms of mask prediction accuracy and the metric for evaluation of translation with explicit ordering (METEOR) score.
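The final extraction step can be illustrated with a toy attention matrix: given self-attention weights between tokens, the non-entity word that both entities jointly attend to most is taken as the connecting word of the triple. The tokens and weights below are fabricated for the demo, not real model output, and the joint-attention heuristic is a simplification of the paper's method:

```python
# Illustrative sketch: pick the connecting word of an (entity, relation,
# entity) triple from token-to-token self-attention weights.

tokens = ["remdesivir", "inhibits", "SARS-CoV-2", "replication"]
# attention[i][j]: how much token i attends to token j (each row sums to 1)
attention = [
    [0.10, 0.60, 0.20, 0.10],
    [0.25, 0.25, 0.25, 0.25],
    [0.10, 0.55, 0.10, 0.25],
    [0.30, 0.30, 0.20, 0.20],
]

def connecting_word(head, tail, tokens, attention):
    """Pick the non-entity token that head and tail jointly attend to most."""
    h, t = tokens.index(head), tokens.index(tail)
    candidates = [j for j in range(len(tokens)) if j not in (h, t)]
    best = max(candidates, key=lambda j: attention[h][j] + attention[t][j])
    return tokens[best]

relation = connecting_word("remdesivir", "SARS-CoV-2", tokens, attention)
triple = ("remdesivir", relation, "SARS-CoV-2")
```

Because the attention weights come from the fine-tuned COVID-19 language model rather than a hand-built relation dictionary, no human-annotated relation inventory is needed, which is the adaptability argument the abstract makes.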

