Extracting Multilingual Natural-Language Patterns for RDF Predicates

Author(s):  
Daniel Gerber ◽  
Axel-Cyrille Ngonga Ngomo


Author(s):  
Alexandra Galatescu

The proposed translation of natural-language (NL) patterns to object and process modeling is presented as an alternative to symbolic notations, textual languages, and classical semantic networks, the main representation tools in use today. Its necessity is motivated by the universality, unifying ability, natural extensibility, logic, and reusability of NL. The translation relies on a formalized, stylized, and graphical representation of NL that bridges NL to an integrated view of object and process modeling. Only the morphological and syntactic knowledge in NL is subject to translation, but the proposed solution anticipates the semantic and logical interpretation of a model. A brief presentation and exemplification of the NL patterns under consideration precedes the translation.
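As a rough illustration of the kind of mapping the abstract describes, the sketch below translates a simple subject-verb-object NL pattern into object and process model elements. It is a minimal, hypothetical example: the `NLPattern` dataclass, the pattern grammar, and the mapping rules are assumptions made for illustration, not the paper's graphical formalism.

```python
from dataclasses import dataclass

# Hypothetical stylized NL pattern (subject-verb-object). The paper's actual
# representation is graphical; this dataclass is an assumption for illustration.
@dataclass
class NLPattern:
    subject: str   # noun phrase -> candidate object/class
    verb: str      # verb phrase -> candidate process/activity
    obj: str       # noun phrase -> candidate object/class

def to_object_process_model(pattern: NLPattern) -> dict:
    """Map syntactic roles onto object and process model elements.

    Nouns become objects; the verb becomes a process linking them. Semantic
    and logical interpretation is deliberately left out, mirroring the
    abstract's restriction to morphological and syntactic knowledge.
    """
    return {
        "objects": [pattern.subject, pattern.obj],
        "processes": [
            {"name": pattern.verb, "input": pattern.subject, "output": pattern.obj}
        ],
    }

# Example: "customer places order" -> objects {customer, order}, process {places}.
print(to_object_process_model(NLPattern("customer", "places", "order")))
```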


1999 ◽  
Vol 11 (3) ◽  
pp. 369-390 ◽  
Author(s):  
Shlomo Argamon-Engelson ◽  
Ido Dagan ◽  
Yuval Krymolowski

10.2196/14782 ◽  
2019 ◽  
Vol 7 (4) ◽  
pp. e14782 ◽  
Author(s):  
Honghan Wu ◽  
Karen Hodgson ◽  
Sue Dyson ◽  
Katherine I Morley ◽  
Zina M Ibrahim ◽  
...  

Background: Much effort has been put into using automated approaches, such as natural language processing (NLP), to mine or extract data from free-text medical records in order to construct comprehensive patient profiles for delivering better health care. Reusing NLP models in new settings, however, remains cumbersome: it requires iterative validation and retraining on new data to achieve convergent results.

Objective: The aim of this work is to minimize the effort involved in reusing NLP models on free-text medical records.

Methods: We formally define and analyze the model adaptation problem in phenotype-mention identification tasks. We identify two sources of inefficiency, "duplicate waste" and "imbalance waste," which collectively impede efficient model reuse. We propose a phenotype embedding-based approach that minimizes both without requiring labelled data from new settings.

Results: We conduct experiments on data from a large mental health registry to reuse NLP models in four phenotype-mention identification tasks. The proposed approach can choose the best model for a new task and identify up to 76% of phenotype mentions as duplicate waste, that is, mentions that need neither validation nor model retraining, while maintaining very good performance (93%-97% accuracy). It can also guide the validation and retraining of the selected model for novel language patterns in new tasks, saving around 80% of the effort (imbalance waste) required by "blind" model-adaptation approaches.

Conclusions: Adapting pretrained NLP models to new tasks can be more efficient and effective if the language pattern landscapes of the old and new settings are made explicit and comparable. Our experiments show that phenotype-mention embedding is an effective way to model language patterns for phenotype-mention identification tasks and that it can guide efficient NLP model reuse.
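A sketch of the model-selection idea follows, under stated assumptions: the paper does not publish this code, and the embedding source, centroid summary, and cosine comparison below are hypothetical stand-ins. The idea it illustrates is the one the abstract names: make the language-pattern landscapes of old and new settings explicit and comparable, then reuse the pretrained model whose source setting is closest to the new one.

```python
import numpy as np

def centroid(embeddings: np.ndarray) -> np.ndarray:
    """Mean vector summarizing a setting's language-pattern landscape."""
    return embeddings.mean(axis=0)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two landscape summaries."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def choose_model(new_mentions: np.ndarray, pretrained: dict) -> str:
    """Pick the pretrained model whose source setting most resembles the new one.

    `pretrained` maps model name -> embeddings of phenotype mentions from that
    model's original setting. Mentions whose patterns are already covered by
    the chosen model correspond to "duplicate waste": they need no
    revalidation or retraining.
    """
    target = centroid(new_mentions)
    return max(pretrained, key=lambda name: cosine(centroid(pretrained[name]), target))

# Toy usage with random vectors standing in for real phenotype-mention embeddings.
rng = np.random.default_rng(0)
models = {"model_A": rng.normal(0, 1, (50, 64)), "model_B": rng.normal(2, 1, (50, 64))}
new_setting = rng.normal(2, 1, (30, 64))
print(choose_model(new_setting, models))  # expected: "model_B"
```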

