Go From the General to the Particular: Multi-Domain Translation with Domain Transformation Networks

2020 ◽  
Vol 34 (05) ◽  
pp. 9233-9241
Author(s):  
Yong Wang ◽  
Longyue Wang ◽  
Shuming Shi ◽  
Victor O.K. Li ◽  
Zhaopeng Tu

The key challenge of multi-domain translation lies in simultaneously encoding both the general knowledge shared across domains and the particular knowledge distinctive to each domain in a unified model. Previous work shows that the standard neural machine translation (NMT) model, trained on mixed-domain data, generally captures the general knowledge but misses the domain-specific knowledge. In response to this problem, we augment the NMT model with additional domain transformation networks that transform the general representations into domain-specific representations, which are subsequently fed to the NMT decoder. To guarantee the knowledge transformation, we also propose two complementary supervision signals that leverage the power of knowledge distillation and adversarial learning. Experimental results on several language pairs, covering both balanced and unbalanced multi-domain translation, demonstrate the effectiveness and universality of the proposed approach. Encouragingly, the proposed unified model achieves results comparable to the fine-tuning approach, which requires multiple models to preserve the particular knowledge. Further analyses reveal that the domain transformation networks successfully capture the domain-specific knowledge as expected.
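To make the architecture concrete, the following is a minimal PyTorch sketch of a domain transformation network sitting between the shared encoder and the decoder; the class name and the per-domain feed-forward parameterization are illustrative assumptions rather than the authors' exact design, and the distillation and adversarial supervision signals are omitted.

```python
import torch
import torch.nn as nn

class DomainTransformationNetwork(nn.Module):
    """Maps general (shared) encoder states to domain-specific states.

    Illustrative sketch only; the paper's exact parameterization may differ.
    """
    def __init__(self, d_model: int, num_domains: int):
        super().__init__()
        # One lightweight feed-forward transform per domain (an assumption).
        self.transforms = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                          nn.Linear(d_model, d_model))
            for _ in range(num_domains)
        ])
        self.norm = nn.LayerNorm(d_model)

    def forward(self, general_repr: torch.Tensor, domain_id: int) -> torch.Tensor:
        # general_repr: [batch, src_len, d_model] from the shared NMT encoder.
        specific = self.transforms[domain_id](general_repr)
        # A residual connection keeps the shared general knowledge intact.
        return self.norm(general_repr + specific)

# The transformed states would replace the raw encoder output as decoder input.
dtn = DomainTransformationNetwork(d_model=512, num_domains=3)
enc_out = torch.randn(8, 20, 512)              # dummy encoder states
decoder_states = dtn(enc_out, domain_id=1)     # domain-specific representations
```

The residual formulation is one natural choice here: it lets each domain's transform learn only the delta from the shared representation rather than re-encoding everything.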

2021 ◽  
Vol 13 (4) ◽  
pp. 2276
Author(s):  
Taejin Kim ◽  
Yeoil Yun ◽  
Namgyu Kim

Many attempts have been made to construct new domain-specific knowledge graphs from the existing knowledge bases of various domains. However, traditional "dictionary-based" or "supervised" knowledge graph building methods rely on predefined, human-annotated resources of entities and their relationships. The cost of creating human-annotated resources is high in terms of both time and effort, so relying on them prevents rapid adaptation when domain-specific information is added or updated very frequently, as in the recent coronavirus disease 2019 (COVID-19) pandemic. Therefore, in this study, we propose an Open Information Extraction (OpenIE) system based on unsupervised learning that requires no pre-built dataset. The proposed method obtains knowledge from a vast amount of text documents about COVID-19 rather than from a general knowledge base and adds it to the existing knowledge graph. First, we constructed a COVID-19 entity dictionary, and then we scraped a large text dataset related to COVID-19. Next, we constructed a COVID-19-perspective language model by fine-tuning the bidirectional encoder representations from transformers (BERT) pre-trained language model. Finally, we defined a new COVID-19-specific knowledge base by extracting connecting words between COVID-19 entities using the BERT self-attention weights from COVID-19 sentences. Experimental results demonstrated that the proposed Co-BERT model outperforms the original BERT in terms of mask prediction accuracy and the metric for evaluation of translation with explicit ordering (METEOR) score.
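The attention-based extraction step can be sketched as follows; the generic bert-base-uncased checkpoint, the example sentence, and the scoring heuristic are assumptions for illustration, standing in for the fine-tuned Co-BERT model and the paper's actual extraction rule.

```python
# Hedged sketch: scoring candidate connecting words between two entity
# tokens using BERT self-attention weights.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

# Toy sentence with single-wordpiece entities ("drug", "virus").
sentence = "The drug blocks the virus."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Average attention over all layers and heads -> [seq_len, seq_len].
attn = torch.stack(outputs.attentions).mean(dim=(0, 2)).squeeze(0)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
e1, e2 = tokens.index("drug"), tokens.index("virus")

# Score each token between the entities by the attention it receives from
# both entity positions (one simple heuristic among many possible rules).
for i in range(min(e1, e2) + 1, max(e1, e2)):
    score = (attn[e1, i] + attn[e2, i]).item()
    print(f"{tokens[i]:>10s}  {score:.4f}")
```

In this toy run, the connecting word "blocks" would be scored against the other tokens between the two entities; the paper's pipeline applies a similar idea with its COVID-19 entity dictionary over domain sentences.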


2014 ◽  
Vol 10 (3) ◽  
pp. 249-261 ◽  
Author(s):  
Tessa Sanderson ◽  
Jo Angouri

The active involvement of patients in decision-making and the focus on patient expertise in managing chronic illness constitute a priority in many healthcare systems, including the NHS in the UK. With easier access to health information, patients are almost expected to be (or to present themselves as) an 'expert patient' (Ziebland 2004). This paper draws on a meta-analysis of interview data collected to identify treatment outcomes important to patients with rheumatoid arthritis (RA). Taking a discourse approach to identity, the discussion focuses on the resources used in the negotiation and co-construction of expert identities, including domain-specific knowledge, access to institutional resources, and the ability to self-manage. The analysis shows that expertise is both projected (institutionally sanctioned) and claimed by the patient (self-defined). We close the paper by highlighting the limitations of our pilot study and suggesting avenues for further research.


1998 ◽  
Vol 10 (1) ◽  
pp. 1-34 ◽  
Author(s):  
Alfonso Caramazza ◽  
Jennifer R. Shelton

We claim that the animate and inanimate conceptual categories represent evolutionarily adapted domain-specific knowledge systems that are subserved by distinct neural mechanisms, thereby allowing for their selective impairment in conditions of brain damage. On this view, (some of) the category-specific deficits that have recently been reported in the cognitive neuropsychological literature—for example, the selective damage or sparing of knowledge about animals—are truly categorical effects. Here, we articulate and defend this thesis against the dominant, reductionist theory of category-specific deficits, which holds that the categorical nature of the deficits is the result of selective damage to noncategorically organized visual or functional semantic subsystems. On the latter view, the sensory/functional dimension provides the fundamental organizing principle of the semantic system. Since, according to the latter theory, sensory and functional properties are differentially important in determining the meaning of the members of different semantic categories, selective damage to the visual or the functional semantic subsystem will result in a category-like deficit. A review of the literature and the results of a new case of category-specific deficit will show that the domain-specific knowledge framework provides a better account of category-specific deficits than the sensory/functional dichotomy theory.


Author(s):  
Shaw C. Feng ◽  
William Z. Bernstein ◽  
Thomas Hedberg ◽  
Allison Barnard Feeney

The need to capture design, process planning, production, and inspection knowledge in digital form has increasingly become an issue in manufacturing industries as the variety and complexity of product lifecycle applications increase. Both knowledge and data need to be well managed for quality assurance, lifecycle impact assessment, and design improvement. Several technical barriers inhibit industry from fully utilizing design, planning, processing, and inspection knowledge; the primary barrier is the lack of a well-accepted mechanism that enables users to integrate data and knowledge. This paper prescribes knowledge management to address the lack of mechanisms for integrating, sharing, and updating domain-specific knowledge in smart manufacturing (SM). The knowledge constructs cover conceptual design, detailed design, process planning, material properties, production, and inspection. The main contribution of this paper is a methodology for determining what knowledge manufacturing organizations should access, update, and archive in the context of SM. The case study in this paper provides example knowledge objects that enable SM.
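As a rough illustration of how such knowledge objects might be modeled across lifecycle stages, the sketch below uses hypothetical class and field names rather than the paper's actual schema.

```python
# Hedged sketch of lifecycle knowledge objects; names are illustrative.
from dataclasses import dataclass, field

@dataclass
class KnowledgeObject:
    """A versioned unit of domain knowledge in a smart-manufacturing system."""
    object_id: str
    stage: str            # e.g. "conceptual_design", "process_planning", "inspection"
    content: dict         # stage-specific payload (tolerances, material data, ...)
    version: int = 1
    links: list = field(default_factory=list)  # ids of related knowledge objects

def update(obj: KnowledgeObject, new_content: dict) -> KnowledgeObject:
    # Updating bumps the version, supporting the access/update/archive
    # cycle described above while preserving prior versions for audit.
    return KnowledgeObject(obj.object_id, obj.stage,
                           {**obj.content, **new_content},
                           obj.version + 1, list(obj.links))

design = KnowledgeObject("KO-001", "detailed_design", {"material": "Al 6061"})
design = update(design, {"tolerance_mm": 0.05})
print(design)
```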


2017 ◽  
Author(s):  
Marilena Oita ◽  
Antoine Amarilli ◽  
Pierre Senellart

Deep Web databases, whose content is presented as dynamically-generated Web pages hidden behind forms, have mostly been left unindexed by search engine crawlers. In order to automatically explore this mass of information, many current techniques assume the existence of domain knowledge, which is costly to create and maintain. In this article, we present a new perspective on form understanding and deep Web data acquisition that does not require any domain-specific knowledge. Unlike previous approaches, we do not perform the various steps in the process (e.g., form understanding, record identification, attribute labeling) independently but integrate them to achieve a more complete understanding of deep Web sources. Through information extraction techniques and using the form itself for validation, we reconcile input and output schemas in a labeled graph which is further aligned with a generic ontology. The impact of this alignment is threefold: first, the resulting semantic infrastructure associated with the form can assist Web crawlers when probing the form for content indexing; second, attributes of response pages are labeled by matching known ontology instances, and relations between attributes are uncovered; and third, we enrich the generic ontology with facts from the deep Web.
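A toy sketch of reconciling a form's input schema with the response-page output schema in a labeled graph follows; the field names, probing-based links, and ontology mapping are invented for illustration, not taken from the authors' system.

```python
# Hedged sketch: input/output schema reconciliation as a labeled graph.
import networkx as nx

g = nx.DiGraph()

# Input schema: fields scraped from the search form.
for label in ["author", "title", "year"]:
    g.add_node(f"form:{label}", kind="input")

# Output schema: attributes extracted from response-page records.
for attr in ["author", "title", "year", "publisher"]:
    g.add_node(f"record:{attr}", kind="output")

# Validation by probing: a known value submitted through a form field and
# found echoed in a record attribute links the two schemas.
for label in ["author", "title", "year"]:
    g.add_edge(f"form:{label}", f"record:{label}", relation="same_as")

# Alignment with a generic ontology (here a toy mapping table).
ontology = {"author": "schema:creator", "title": "schema:name",
            "year": "schema:datePublished", "publisher": "schema:publisher"}
for node in g.nodes:
    g.nodes[node]["concept"] = ontology.get(node.split(":")[1])

print(list(g.nodes(data=True)))
```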


Author(s):  
M. Ben Ellefi ◽  
P. Drap ◽  
O. Papini ◽  
D. Merad ◽  
J. P. Royer ◽  
...  

A key challenge in cultural heritage (CH) site visualization is to provide models and tools that effectively integrate the content of CH data with domain-specific knowledge, so that users can query, interpret, and consume the visualized information. Moreover, it is important that intelligent visualization systems be interoperable in the semantic web environment and thus capable of establishing a methodology to acquire, integrate, analyze, generate, and share numeric content and associated knowledge on the human- and machine-readable Web. In this paper, we present a model, a methodology, and software Web tools that support the coupling of the 2D/3D Web representation with the knowledge graph database of the Xlendi shipwreck. The Web visualization tools and the knowledge-based techniques are combined in a photogrammetry-driven ontological model, while user-friendly Web tools for querying and semantic consumption of the shipwreck information are introduced.
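To illustrate the kind of semantic consumption such tools enable, here is a hedged sketch of a SPARQL query over a toy graph built with rdflib; the namespace, IRIs, and properties are invented for illustration and are not taken from the Xlendi dataset.

```python
# Hedged sketch: querying a toy shipwreck knowledge graph with SPARQL.
from rdflib import Graph, Literal, Namespace, RDF

CH = Namespace("http://example.org/ch#")   # hypothetical namespace
g = Graph()

amphora = CH["artefact/amphora_042"]
g.add((amphora, RDF.type, CH.Amphora))
g.add((amphora, CH.depthMeters, Literal(110.0)))
g.add((amphora, CH.documentedBy, CH["photogrammetry/survey_2016"]))

# Query: all amphorae with their recorded depths.
results = g.query("""
    PREFIX ch: <http://example.org/ch#>
    SELECT ?a ?d WHERE { ?a a ch:Amphora ; ch:depthMeters ?d . }
""")
for row in results:
    print(row.a, row.d)
```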

