The Treasure Chest for Text Mining: Piling Available Resources for Powerful Biomedical Text Mining

BioChem ◽  
2021 ◽  
Vol 1 (2) ◽  
pp. 60-80
Author(s):  
Nícia Rosário-Ferreira ◽  
Catarina Marques-Pereira ◽  
Manuel Pires ◽  
Daniel Ramalhão ◽  
Nádia Pereira ◽  
...  

Text mining (TM) is a semi-automated, multi-step process able to turn unstructured text into structured data. Its relevance has grown with the application of machine learning (ML) and deep learning (DL) algorithms in its various steps. When applied to biomedical literature, text mining is called biomedical text mining, and its specificity lies both in the type of documents analyzed and in the language and concepts retrieved. The documents that can be used range from scientific literature to patents or clinical data, and the biomedical concepts include, but are not limited to, genes, proteins, drugs, and diseases. This review aims to gather the leading tools for biomedical TM, briefly describing and systematizing them. We also surveyed several resources to compile the most valuable ones for each category.
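As a concrete, hedged illustration of the unstructured-to-structured step described above (not taken from the review; the lexicon entries and the Mention record are illustrative assumptions), the following Python sketch tags a few gene, drug, and disease mentions in a sentence and emits them as structured records. The pipelines surveyed in the review rely on curated vocabularies and ML/DL models rather than toy lexicons.

    import re
    from dataclasses import dataclass

    # Structured record produced from unstructured text (illustrative, not from the review).
    @dataclass
    class Mention:
        text: str
        label: str
        start: int
        end: int

    # Toy lexicons standing in for the curated gene/drug/disease vocabularies
    # that real biomedical TM pipelines use.
    LEXICON = {
        "GENE": ["BRCA1", "TP53"],
        "DRUG": ["tamoxifen", "imatinib"],
        "DISEASE": ["breast cancer", "leukemia"],
    }

    def tag(sentence: str) -> list[Mention]:
        # Scan the sentence for every lexicon term and emit one Mention per hit.
        mentions = []
        for label, terms in LEXICON.items():
            for term in terms:
                for m in re.finditer(re.escape(term), sentence, flags=re.IGNORECASE):
                    mentions.append(Mention(m.group(), label, m.start(), m.end()))
        return sorted(mentions, key=lambda x: x.start)

    print(tag("BRCA1 mutations increase breast cancer risk; tamoxifen is often prescribed."))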

2020 ◽  
Author(s):  
Samir Gupta ◽  
Shruti Rao ◽  
Trisha Miglani ◽  
Yasaswini Iyer ◽  
Junxia Lin ◽  
...  

Abstract Interpretation of a given variant's pathogenicity is one of the most profound challenges to realizing the promise of genomic medicine. A large amount of information about associations between variants and diseases, used by curators and researchers to interpret variant pathogenicity, is buried in the biomedical literature. Text-mining tools that can extract the relevant information from the literature will speed up and assist the variant interpretation curation process. In this work, we present a text-mining tool, MACE2k, that extracts evidence sentences containing associations between variants and diseases from full-length PMC Open Access articles. We use different machine learning models (classical and deep learning) to identify evidence sentences with variant-disease associations. Evaluation shows promising results, with a best F1-score of 82.9% and AUC-ROC of 73.9%. Classical ML models had better recall (96.6% for Random Forest) than the deep learning models. The deep learning model, a Convolutional Neural Network, had the best precision (75.6%), which is essential for any curation task.
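As a rough, hedged sketch of the kind of classical baseline the paper compares (this is not the MACE2k implementation; the toy sentences and labels are invented for illustration), the snippet below trains a TF-IDF plus Random Forest sentence classifier with scikit-learn and reports F1 and AUC-ROC, the metrics used in the evaluation.

    # Illustrative classical-ML baseline for evidence-sentence classification
    # (not the MACE2k code): TF-IDF features + Random Forest.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics import f1_score, roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    # Hypothetical toy data; the paper uses sentences from PMC Open Access articles.
    sentences = [
        "The BRCA1 c.68_69delAG variant is associated with hereditary breast cancer.",
        "Patients were enrolled between 2012 and 2015 at three clinical sites.",
        "TP53 R175H correlates with poor prognosis in ovarian carcinoma.",
        "Statistical analysis was performed with R version 3.6.",
    ] * 25
    labels = [1, 0, 1, 0] * 25  # 1 = evidence sentence, 0 = background

    X_train, X_test, y_train, y_test = train_test_split(
        sentences, labels, test_size=0.25, random_state=0, stratify=labels
    )

    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        RandomForestClassifier(n_estimators=200, random_state=0),
    )
    clf.fit(X_train, y_train)

    pred = clf.predict(X_test)
    proba = clf.predict_proba(X_test)[:, 1]
    print("F1:", f1_score(y_test, pred))
    print("AUC-ROC:", roc_auc_score(y_test, proba))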


Author(s):  
Jinhyuk Lee ◽  
Wonjin Yoon ◽  
Sungdong Kim ◽  
Donghyeon Kim ◽  
Sunkyu Kim ◽  
...  

Abstract
Motivation: Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, directly applying the advancements in NLP to biomedical text mining often yields unsatisfactory results due to a word distribution shift from general domain corpora to biomedical corpora. In this article, we investigate how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora.
Results: We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain-specific language representation model pre-trained on large-scale biomedical corpora. With almost the same architecture across tasks, BioBERT largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks when pre-trained on biomedical corpora. While BERT obtains performance comparable to that of previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.62% F1 score improvement), biomedical relation extraction (2.80% F1 score improvement) and biomedical question answering (12.24% MRR improvement). Our analysis results show that pre-training BERT on biomedical corpora helps it to understand complex biomedical texts.
Availability and implementation: We make the pre-trained weights of BioBERT freely available at https://github.com/naver/biobert-pretrained, and the source code for fine-tuning BioBERT available at https://github.com/dmis-lab/biobert.
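A minimal, hedged sketch of how the released weights can be loaded for a downstream classification task with the Hugging Face transformers library follows; the hub identifier dmis-lab/biobert-base-cased-v1.1 is an assumption (the abstract only points to the GitHub repositories), and the @GENE$/@DRUG$ placeholder sentence merely mimics a relation-extraction style input.

    # Hedged sketch: load BioBERT weights and run a forward pass for sequence
    # classification. Real fine-tuning would wrap this in a training loop or the
    # Trainer API; the model name below is an assumed hub mirror of the weights.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    MODEL_NAME = "dmis-lab/biobert-base-cased-v1.1"  # assumption, not stated in the abstract

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

    inputs = tokenizer(
        "Mutations in @GENE$ confer resistance to @DRUG$.",
        return_tensors="pt",
        truncation=True,
    )
    with torch.no_grad():
        logits = model(**inputs).logits  # untrained head, so the scores are arbitrary
    print(logits.softmax(dim=-1))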


2020 ◽  
Vol 29 (01) ◽  
pp. 225-225

Guan J, Li R, Yu S, Zhang X. A method for generating synthetic electronic medical record text. IEEE/ACM Trans Comput Biol Bioinform 2019. https://ieeexplore.ieee.org/document/8880542
Lee J, Yoon W, Kim S, Kim D, Kim S, Ho So C, Kang J. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 2019;36(4):1234-40. https://academic.oup.com/bioinformatics/article/36/4/1234/5566506
Rosemblat G, Fiszman M, Shin D, Kılıçoğlu H. Towards a characterization of apparent contradictions in the biomedical literature using context analysis. J Biomed Inform 2019;98:103275. https://www.sciencedirect.com/science/article/abs/pii/S1532046419301947?via%3Dihub


Author(s):  
Mohamed Nadif ◽  
François Role

Abstract Biomedical scientific literature is growing at a very rapid pace, which makes it increasingly difficult for human experts to spot the most relevant results hidden in the papers. Automated information extraction tools based on text mining techniques are therefore needed to assist them in this task. In the last few years, techniques based on deep neural networks have significantly advanced the state of the art in this research area. Although the contribution made to this progress by supervised methods is relatively well known, this is less so for other kinds of learning, namely unsupervised and self-supervised learning. Unsupervised learning does not require the cost of creating labels, which is very useful in the exploratory stages of a biomedical study, where agile techniques are needed to rapidly explore many paths. In particular, clustering techniques applied to biomedical text mining make it possible to gather large sets of documents into more manageable groups, and deep learning techniques have produced new clustering-friendly representations of the data. Self-supervised learning, on the other hand, is a kind of supervised learning in which the labels do not have to be created manually by humans but are derived automatically from relations found in the input texts. In combination with innovative network architectures (e.g. transformer-based architectures), self-supervised techniques have made it possible to design increasingly effective vector-based word representations (word embeddings). We show in this survey how word representations obtained in this way interact successfully with common supervised modules (e.g. classification networks) and greatly contribute to their performance.
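To make the clustering use case concrete, here is a minimal, hedged sketch (not from the survey; the toy abstracts are invented) that groups a few biomedical abstracts using TF-IDF vectors and k-means. The survey's argument is precisely that deep or self-supervised representations, such as transformer embeddings, can replace the TF-IDF step with more clustering-friendly representations.

    # Minimal unsupervised clustering of biomedical abstracts (illustrative only):
    # TF-IDF vectors + k-means into two groups.
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    abstracts = [  # hypothetical toy corpus
        "BRCA1 variants and hereditary breast cancer risk.",
        "Deep learning for biomedical named entity recognition.",
        "Tamoxifen resistance mechanisms in breast cancer cells.",
        "Transformer language models for clinical text mining.",
    ]

    X = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    for doc, cluster in zip(abstracts, km.labels_):
        print(cluster, doc)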


2018 ◽  
pp. 129-154
Author(s):  
Boya Xie ◽  
Qin Ding ◽  
Di Wu

Driven by rapidly advancing techniques and growing interest in biology and medicine, about 2,000 to 4,000 references are added daily to MEDLINE, the US national biomedical bibliographic database. Even for a specific research topic, extracting useful and comprehensive information from this huge pool of literature is challenging. Text mining techniques become extremely useful when dealing with such an abundance of biomedical information, and they have been applied to various areas of biomedical research. Instead of providing a brief overview of all text mining techniques and every major biomedical text mining application, this chapter explores in depth the microRNA profiling area and the related text mining tools. As an illustrative example, a rule-based text mining system developed by the authors is discussed in detail. The chapter also discusses the challenges and potential research areas in biomedical text mining.
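As a hedged illustration of the rule-based approach the chapter discusses (this is not the authors' system; the regular expression is a simplified assumption), the sketch below spots microRNA mentions such as miR-21 or hsa-miR-155-5p in a MEDLINE-style sentence.

    # Simplified rule-based extraction of microRNA mentions (illustrative only).
    import re

    # Matches an optional species prefix (e.g. "hsa-"), "miR"/"let", a number,
    # an optional letter suffix, and an optional -3p/-5p arm.
    MIRNA_PATTERN = re.compile(
        r"\b(?:[a-z]{3}-)?(?:miR|let)-\d+[a-z]?(?:-[35]p)?\b", re.IGNORECASE
    )

    sentence = ("Expression profiling showed that hsa-miR-155-5p and miR-21 "
                "were upregulated in tumor tissue.")
    print(MIRNA_PATTERN.findall(sentence))  # ['hsa-miR-155-5p', 'miR-21']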


Author(s):  
Princy Baby ◽  
Krishnapriya B

Sentiment analysis, an active field of research in text mining, is the computational treatment of opinions, sentiments, and the subjectivity of text. This survey briefly presents many recently proposed algorithmic enhancements and various sentiment analysis applications, and discusses the related fields that have recently attracted researchers. Its main aim is to give a nearly complete picture of sentiment analysis techniques and the related fields in brief detail. In recent years, machine learning has received greater attention with the success of deep learning. Deep learning can build deep models of complex multivariate structures in structured data. Although deep learning can be characterized in several different ways, its most important property is that it can learn higher-order interactions among features using a cascade of many layers. Realized in neural networks, deep learning has been applied across many fields, with significant successes in many applications. Convolutional neural networks, deep belief networks, and many other approaches have been proposed to enhance the abilities of deep-structured networks.
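As a hedged sketch of the convolutional approach mentioned above (purely illustrative; the survey provides no implementation and all layer sizes are assumptions), the following PyTorch snippet defines a tiny text CNN: an embedding layer, a one-dimensional convolution over token positions, global max-pooling, and a linear layer producing positive/negative sentiment logits.

    # Tiny convolutional text classifier for sentiment (illustrative assumptions only).
    import torch
    import torch.nn as nn

    class TextCNN(nn.Module):
        def __init__(self, vocab_size: int, embed_dim: int = 64, num_classes: int = 2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.conv = nn.Conv1d(embed_dim, 32, kernel_size=3, padding=1)
            self.fc = nn.Linear(32, num_classes)

        def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
            x = self.embed(token_ids)        # (batch, seq_len, embed_dim)
            x = x.transpose(1, 2)            # Conv1d expects (batch, channels, seq_len)
            x = torch.relu(self.conv(x))
            x = x.max(dim=2).values          # global max-pooling over positions
            return self.fc(x)                # sentiment logits

    model = TextCNN(vocab_size=1000)
    fake_batch = torch.randint(0, 1000, (4, 20))  # 4 sequences of 20 token ids
    print(model(fake_batch).shape)                # torch.Size([4, 2])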

