medical language
Recently Published Documents


TOTAL DOCUMENTS: 271 (FIVE YEARS: 75)
H-INDEX: 20 (FIVE YEARS: 3)

2021, Vol 268, pp. 552-561
Author(s): Katherine M Reitz, Daniel E Hall, Myrick C Shinall, Paula K Shireman, Jonathan C Silverstein

2021
Author(s): Ilya Tyagin, Ilya Safro

In this paper we present an approach for interpretable visualization of scientific hypotheses based on the idea of semantic concept interconnectivity, combining network-based and topic modeling methods. Our visualization approach has numerous adjustable parameters, which gives domain experts additional flexibility in their decision-making process. We also make use of Unified Medical Language System metadata by integrating it directly into the resulting topics and adding variability to hypothesis resolution. To demonstrate the proposed approach in action, we deployed the end-to-end hypothesis generation pipeline AGATHA, which was evaluated by BioCreative VII experts on COVID-19-related queries.


2021, pp. 7-22
Author(s): Agnieszka Kiełkiewicz-Janowiak

Communication between patients and medical professionals is characterised by frequent misunderstandings due to medical, psychological, and relational considerations arising to a large extent from unfamiliarity with specialised medical language. Such processes can be exemplified by the phrase pacjent nie współpracuje (lit. the patient is not cooperating), which has a specific meaning in medical language and can be interpreted by patients as evaluating them negatively. Understanding in this communication must be reached through negotiation of the meaning of specialist words, expressions, and phrases.


2021, Vol 37 (2), pp. 257-274
Author(s): Magdalena Łomzik

For over a decade, actions have been undertaken in Poland to facilitate communication between citizens and public offices based on plain English principles; texts formulated in this way are referred to as texts in plain Polish. In Germany, such studies and initiatives also cover medical records. This article attempts to formulate a fragment of an epicrisis in plain Polish and to define the techniques used. The paper presents the key principles of formulating texts in plain language, drawing on transformations of various types of specialist texts known from the literature, selected German initiatives to simplify medical documentation, and the characteristics of Polish medical language and medical documentation. Next, a fragment of the epicrisis was simplified and the techniques used were described. To verify the effectiveness of the changes made in the text, the understandability of both versions was analyzed using the available applications.


2021, Vol XXV (1), pp. 15-28
Author(s): Olimpia Orządała

The aim of the paper was to depict the language of erotic scenes in Polish crime novels after 2000. The writers describe sex scenes in different ways, using, among others, vulgarisms and euphemistic, metaphorical, or biological and medical language. The analysis of the language was based on the books of writers such as Paulina Świst, Katarzyna Bonda, Małgorzata and Michał Kuźmińscy, and Gaja Grzegorzewska. Moreover, it was essential to describe these scenes' functions and to determine what the writers show in them and how they characterise the protagonists. The article also draws on, among other sources, Georges Bataille's theory of eroticism.


2021, Vol 17 (30), p. 24
Author(s): Franca Daniele

Medical communication and health communication are two close relatives in the field of communication, where medical communication is the mother and health communication is the offspring. Medical communication involves the delivery of scientific, medical, pharmaceutical, and biotechnological information and data to health professionals such as doctors, pharmacists, and nurses. This information includes updates on the latest discoveries provided by the international scientific community; its source is therefore medical and scientific publications reporting data generated from basic science and clinical research. Health communications, by contrast, are targeted toward the general public, and their source is health communicators and journalists. In health communications, information is the result of a kind of intra-language translation that transforms the original medical language into common language. Health communication thus derives from rewritings of a complex medical language that cannot always be modified and adapted to serve the general public. The aim of the present work was to evaluate, in medical communications, the linguistic elements that represent the hard core of difficulty for the general public. A qualitative evaluation was carried out on medical abstracts, assessing medical terminology and compound phrases. The results of this investigation indicate that these two linguistic traits of medical language are especially difficult for the general public due to their particularly specialized nature.


2021, Vol 30 (01), pp. 189-189

Le DH. UFO: A tool for unifying biomedical ontology-based semantic similarity calculation, enrichment analysis and visualization. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0235670

Robinson PN, Ravanmehr V, Jacobsen JOB, Danis D, Zhang XA, Carmody LC, Gargano MA, Thaxton CL, Core UNCB, Karlebach G, Reese J, Holtgrewe M, Kohler S, McMurry JA, Haendel MA, Smedley D. Interpretable Clinical Genomics with a Likelihood Ratio Paradigm. https://www.cell.com/ajhg/fulltext/S0002-9297(20)30230-5

Slater LT, Gkoutos GV, Hoehndorf R. Towards semantic interoperability: finding and repairing hidden contradictions in biomedical ontologies. https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-020-01336-2

Zheng F, Shi J, Yang Y, Zheng WJ, Cui L. A transformation-based method for auditing the IS-A hierarchy of biomedical terminologies in the Unified Medical Language System. https://pubmed.ncbi.nlm.nih.gov/32918476/


2021, Vol 21 (S2)
Author(s): Feihong Yang, Xuwen Wang, Hetong Ma, Jiao Li

Abstract

Background: Transformer is an attention-based architecture that has proven to be the state-of-the-art model in natural language processing (NLP). To reduce the difficulty of beginning to use transformer-based models in medical language understanding and to expand the capability of the scikit-learn toolkit in deep learning, we proposed an easy-to-learn Python toolkit named transformers-sklearn. By wrapping the interfaces of transformers in only three functions (i.e., fit, score, and predict), transformers-sklearn combines the advantages of the transformers and scikit-learn toolkits.

Methods: In transformers-sklearn, three Python classes were implemented: BERTologyClassifier for the classification task, BERTologyNERClassifier for the named entity recognition (NER) task, and BERTologyRegressor for the regression task. Each class contains three methods: fit for fine-tuning transformer-based models with the training dataset, score for evaluating the performance of the fine-tuned model, and predict for predicting the labels of the test dataset. transformers-sklearn is a user-friendly toolkit that (1) is customizable via a few parameters (e.g., model_name_or_path and model_type), (2) supports multilingual NLP tasks, and (3) requires less coding. The input data format is generated automatically by transformers-sklearn from the annotated corpus; newcomers only need to prepare the dataset, while the model framework and training methods are predefined in transformers-sklearn.

Results: We collected four open-source medical language datasets: TrialClassification for Chinese medical trial text multi-label classification, BC5CDR for English biomedical text named entity recognition, DiabetesNER for Chinese diabetes entity recognition, and BIOSSES for English biomedical sentence similarity estimation. Across the four medical NLP tasks, the average code size of our script is 45 lines per task, one-sixth the size of the corresponding transformers script. The experimental results show that transformers-sklearn based on pretrained BERT models achieved macro F1 scores of 0.8225, 0.8703, and 0.6908 on the TrialClassification, BC5CDR, and DiabetesNER tasks, respectively, and a Pearson correlation of 0.8260 on the BIOSSES task, consistent with the results of transformers.

Conclusions: The proposed toolkit could help newcomers easily address medical language understanding tasks using the scikit-learn coding style. The code and tutorials of transformers-sklearn are available at https://doi.org/10.5281/zenodo.4453803. In the future, more medical language understanding tasks will be supported to improve the applications of transformers-sklearn.
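The three-method interface described in the abstract (fit, score, predict) is the standard scikit-learn estimator pattern. As a rough illustration of that pattern only, not of transformers-sklearn's actual internals, here is a minimal self-contained sketch in which a trivial majority-class "model" stands in for a fine-tuned transformer:

```python
from collections import Counter

class MajorityClassifier:
    """Toy classifier exposing the scikit-learn-style three-method
    interface (fit / score / predict) that transformers-sklearn wraps
    around transformer models. The 'training' is deliberately trivial:
    it just memorizes the most frequent label in the training set."""

    def fit(self, texts, labels):
        # Stand-in for fine-tuning: remember the majority label.
        self.majority_ = Counter(labels).most_common(1)[0][0]
        return self  # returning self allows chained calls, as in scikit-learn

    def predict(self, texts):
        # Predict the memorized majority label for every input text.
        return [self.majority_ for _ in texts]

    def score(self, texts, labels):
        # Accuracy of predictions against the gold labels.
        preds = self.predict(texts)
        return sum(p == y for p, y in zip(preds, labels)) / len(labels)

clf = MajorityClassifier().fit(["trial A", "trial B", "trial C"], [1, 1, 0])
print(clf.predict(["trial D"]))                   # [1]
print(clf.score(["trial D", "trial E"], [1, 0]))  # 0.5
```

In the real toolkit, per the abstract, fit would instead fine-tune a transformer selected via parameters such as model_name_or_path and model_type; the calling code, however, keeps this same three-call shape, which is what makes the toolkit approachable for newcomers.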

