Interaction Extraction: Recently Published Documents

Total documents: 125 (41 in the last five years)
H-index: 19 (4 in the last five years)

Biomolecules, 2021, Vol. 11(11), 1591
Author(s): Prashant Srivastava, Saptarshi Bej, Kristina Yordanova, Olaf Wolkenhauer

For any molecule, network, or process of interest, keeping up with new publications is becoming increasingly difficult. For many cellular processes, the number of molecules and interactions that need to be considered can be very large. Automated mining of publications can support the curation of large-scale molecular interaction maps and databases. Text mining and Natural Language Processing (NLP)-based techniques are finding applications in mining the biological literature, handling problems such as Named Entity Recognition (NER) and Relationship Extraction (RE). Both rule-based and Machine Learning (ML)-based NLP approaches have been popular in this context, with multiple research and review articles examining the scope of such models in Biological Literature Mining (BLM). In this review article, we explore self-attention-based models, a special type of neural-network (NN)-based architecture that has recently revitalized the field of NLP, applied to biological texts. We cover self-attention models operating either at the sentence level or the abstract level, in the context of molecular interaction extraction, published from 2019 onwards. We conducted a comparative study of the models in terms of their architecture. Moreover, we discuss limitations in the field of BLM and identify opportunities for the extraction of molecular interactions from biological text.
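To make the sentence-level setting discussed above concrete, the following is a minimal sketch, not taken from any of the reviewed papers, of relation classification with a pretrained self-attention encoder via the Hugging Face transformers library. The checkpoint name, the entity-marker scheme, and the two-label setup are illustrative assumptions.

```python
# Illustrative sketch only: sentence-level molecular interaction extraction
# with a self-attention (transformer) encoder. The checkpoint, the entity
# markers, and the binary label set are assumptions, not any paper's setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CHECKPOINT = "dmis-lab/biobert-base-cased-v1.1"  # assumed biomedical encoder
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(
    CHECKPOINT, num_labels=2  # "no interaction" vs "interaction"; head untrained here
)

# Mark the candidate entity pair so the self-attention layers can attend to
# the two mentions whose relation is being classified.
tokenizer.add_tokens(["[E1]", "[/E1]", "[E2]", "[/E2]"])
model.resize_token_embeddings(len(tokenizer))

sentence = "[E1] TP53 [/E1] activates transcription of [E2] CDKN1A [/E2] after DNA damage."
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)

with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # meaningful only after fine-tuning on an interaction corpus
```

In practice such a model is fine-tuned on an annotated interaction corpus before its predictions are meaningful; abstract-level variants feed longer spans or multiple sentences to the same kind of encoder.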


2021, Vol. 22(1)
Author(s): Chengkun Wu, Xinyi Xiao, Canqun Yang, JinXiang Chen, Jiacai Yi, ...

Abstract
Background: Interactions of microbes and diseases are of great importance for biomedical research. However, a large number of microbe–disease interactions remain hidden in the biomedical literature, and structured databases for microbe–disease interactions are scarce. In this paper, we aim to construct a large-scale database of microbe–disease interactions automatically. We attained this goal by applying text mining methods based on a deep learning model, at a moderate curation cost. We also built a user-friendly web interface that allows researchers to navigate and query the required information.
Results: First, we manually constructed a gold-standard corpus and a silver-standard corpus (SSC) for microbe–disease interactions. We then proposed a text mining framework for microbe–disease interaction extraction based on the pretrained model BERE. We applied named entity recognition tools to detect microbe and disease mentions in free biomedical text. After that, we fine-tuned BERE, which was originally built for drug–target and drug–drug interactions, to recognize relations between the targeted entities. The introduction of the SSC for model fine-tuning greatly improved detection performance for microbe–disease interactions, with an average reduction in error of approximately 10%. The resulting MDIDB website offers data browsing, custom searches for specific diseases or microbes, and batch downloads.
Conclusions: Evaluation results demonstrate that our method outperforms the baseline model (rule-based PKDE4J) with an average F1-score of 73.81%. For further validation, we randomly sampled nearly 1000 interactions predicted by our model and manually checked each one, yielding an accuracy of 73%. The MDIDB website is freely available at http://dbmdi.com/index/
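As a worked illustration of the kind of evaluation reported above, the following is a small sketch, not the authors' code, of scoring predicted microbe–disease interaction pairs against a manually curated gold set with precision, recall, and F1-score; the toy pairs are hypothetical.

```python
# Illustrative sketch (not the authors' code): score predicted
# (microbe, disease) pairs against a curated gold set with
# precision, recall, and F1, the metric reported in the abstract.
def prf1(predicted, gold):
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)                      # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical toy data: (microbe, disease) interaction pairs.
gold = {("Helicobacter pylori", "gastric ulcer"),
        ("Clostridioides difficile", "colitis")}
pred = {("Helicobacter pylori", "gastric ulcer"),
        ("Escherichia coli", "colitis")}
print(prf1(pred, gold))  # (0.5, 0.5, 0.5)
```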


2021, pp. 691-706
Author(s): Shrinivas V. Shanbhag, Pratyush Karmakar, P. Prajwala, Nagamma Patil
