O-JMeSH: creating a bilingual English-Japanese controlled vocabulary of MeSH UIDs through machine translation and mutual information

2021 ◽  
Vol 19 (3) ◽  
pp. e26
Author(s):  
Felipe Soares ◽  
Yuka Tateisi ◽  
Terue Takatsuki ◽  
Atsuko Yamaguchi

Previous approaches to creating a controlled vocabulary for Japanese have relied on existing bilingual dictionaries and transformation rules to enable such mappings. However, given the new terms introduced by coronavirus disease 2019 (COVID-19) and the emphasis on respiratory and infection-related terms, coverage is not guaranteed. In this work, we propose creating a bilingual Japanese controlled vocabulary based on MeSH terms assigned to COVID-19-related publications. To do so, we combined manual curation of several bilingual dictionaries with a computational approach based on machine translation of sentences containing such terms and ranking of candidate translations for the individual terms by mutual information. Our results show that we achieved nearly 99% occurrence coverage in LitCovid, while the computational approach reached an average accuracy of 63.33% across all terms and 84.51% for drugs and chemicals.
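
As an illustration of the ranking step described above, the following is a minimal sketch (not the authors' released code) of scoring candidate Japanese translations of an English MeSH term by pointwise mutual information over machine-translated sentence pairs; the function and variable names are hypothetical.

```python
import math

def pmi_rank(english_term, candidates, sentence_pairs):
    """sentence_pairs: list of (english_sentence, japanese_translation) tuples.
    Returns candidates sorted by PMI with the English term (highest first)."""
    n = len(sentence_pairs)
    en_count = sum(1 for en, _ in sentence_pairs if english_term in en)
    scores = {}
    for cand in candidates:
        ja_count = sum(1 for _, ja in sentence_pairs if cand in ja)
        joint = sum(1 for en, ja in sentence_pairs
                    if english_term in en and cand in ja)
        if joint == 0 or en_count == 0 or ja_count == 0:
            scores[cand] = float("-inf")  # never co-occurs: rank last
            continue
        # PMI = log [ P(term, cand) / (P(term) * P(cand)) ]
        scores[cand] = math.log((joint / n) / ((en_count / n) * (ja_count / n)))
    return sorted(candidates, key=lambda c: scores[c], reverse=True)
```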

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Friedemann Krentel ◽  
Franziska Singer ◽  
María Lourdes Rosano-Gonzalez ◽  
Ewan A. Gibb ◽  
Yang Liu ◽  
...  

Abstract Improved and cheaper molecular diagnostics allow the shift from "one size fits all" therapies to personalized treatments targeting the individual tumor. However, the wealth of potential targets produced by comprehensive sequencing remains an unsolved challenge that prevents its routine use in clinical practice. We therefore designed a workflow that selects the most promising treatment targets based on multi-omics sequencing and in silico drug prediction. In this study we demonstrate the workflow with a focus on bladder cancer (BLCA), as there are, to date, no reliable diagnostics available to predict the potential benefit of a therapeutic approach. Within the TCGA-BLCA cohort, our workflow identified a panel of 21 genes and 72 drugs that suggested personalized treatment for 95% of patients, including five genes not yet reported as prognostic markers for clinical testing in BLCA. The automated predictions were complemented by manually curated data, allowing for accurate sensitivity- or resistance-directed drug response predictions. We discuss potential improvements of drug-gene interaction databases on the basis of pitfalls identified during manual curation.
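
The selection step described above can be pictured with a small sketch; the record layout and the example drug-gene pairs below are illustrative assumptions, not the published panel or pipeline.

```python
# Illustrative sketch only: select candidate drugs for a patient from a curated
# drug-gene interaction table, separating sensitivity- from resistance-associated
# interactions (schema assumed for this example).

def suggest_drugs(altered_genes, interactions):
    """interactions: iterable of dicts with keys 'gene', 'drug', 'effect',
    where 'effect' is 'sensitivity' or 'resistance'."""
    suggested, avoid = [], []
    for rec in interactions:
        if rec["gene"] in altered_genes:
            (suggested if rec["effect"] == "sensitivity" else avoid).append(rec["drug"])
    return sorted(set(suggested)), sorted(set(avoid))

# Example usage with made-up records:
panel = [
    {"gene": "FGFR3", "drug": "erdafitinib", "effect": "sensitivity"},
    {"gene": "RB1", "drug": "palbociclib", "effect": "resistance"},
]
print(suggest_drugs({"FGFR3", "RB1"}, panel))
```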


Author(s):  
Jamie C. Gorman ◽  
David A. Grimm ◽  
Ronald H. Stevens ◽  
Trysha Galloway ◽  
Ann M. Willemsen-Dunlap ◽  
...  

Objective A method for detecting real-time changes in team cognition in the form of significant communication reorganizations is described. We demonstrate the method in the context of scenario-based simulation training. Background We present the dynamical view that individual- and team-level aspects of team cognition are temporally intertwined in a team's real-time response to challenging events. We suggest that this real-time response represents a fundamental team cognitive skill regarding the rapidity and appropriateness of the response, and that methods and metrics are needed to track this skill. Method Communication data from medical teams (Study 1) and submarine crews (Study 2) were analyzed for significant communication reorganization in response to training events. Mutual information between team members informed post hoc filtering to identify which team members contributed to reorganization. Results Significant communication reorganizations corresponding to challenging training events were detected for all teams. Less experienced teams tended to show delayed and sometimes ineffective responses that more experienced teams did not. Mutual information and post hoc filtering identified the individual-level inputs driving reorganization and the potential mechanisms (e.g., leadership emergence, role restructuring) underlying it. Conclusion The ability of teams to rapidly and effectively reorganize their coordination patterns as the situation demands is a team cognitive skill that can be measured and tracked. Application Potential applications include team monitoring and assessment that would allow visualization of a team's real-time response and provide individualized feedback based on team members' contributions to the team response.
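
A minimal sketch of the mutual-information measure mentioned in the Method, assuming each team member's communication is coded as a discrete (e.g., speaking/not speaking) time series; this is an illustration, not the authors' exact metric or filtering procedure.

```python
import math
from collections import Counter

def mutual_information(x, y):
    """Mutual information (in bits) between two equal-length discrete sequences."""
    n = len(x)
    joint = Counter(zip(x, y))          # joint counts of (x_t, y_t) pairs
    px, py = Counter(x), Counter(y)     # marginal counts
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        mi += p_ab * math.log2(p_ab / ((px[a] / n) * (py[b] / n)))
    return mi

# Example: two members' speaking (1) / silent (0) series over ten time bins.
print(mutual_information([1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
                         [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]))
```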


Author(s):  
Jiayu Zhou ◽  
Shi Wang ◽  
Cungen Cao

Chinese information processing is a critical step toward cognitive linguistic applications such as machine translation. Lexical hyponymy, which exists in some East Asian languages such as Chinese, is a kind of hyponymy that can be inferred directly from the lexical composition of concepts and is of great importance in ontology learning. However, a key problem is that lexical hyponymy is so commonsensical that it is rarely stated explicitly and therefore cannot be discovered by existing acquisition methods. In this paper, we systematically define the lexical hyponymy relationship and its linguistic features, and we propose a computational approach to semi-automatically learning hierarchical lexical hyponymy relations from a large-scale concept set, instead of analyzing the lexical structures of individual concepts. Our approach discovers lexical hyponymy relations by examining statistical features in a Common Suffix Tree. Experimental results show that our approach can correctly discover most lexical hyponymy relations in a given large-scale concept set.
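
A simplified illustration of the suffix-based idea (not the paper's Common Suffix Tree implementation): if one concept in the set appears as a proper suffix of many other concepts, the longer concepts are candidate lexical hyponyms of the shorter one.

```python
from collections import defaultdict

def candidate_hyponyms(concepts, min_support=2):
    """Return {hypernym: [hyponym, ...]} where each hypernym is itself a concept
    and appears as a proper suffix of at least `min_support` other concepts."""
    concept_set = set(concepts)
    pairs = defaultdict(list)
    for c in concepts:
        # Examine every proper suffix of the concept string.
        for i in range(1, len(c)):
            suffix = c[i:]
            if suffix in concept_set:
                pairs[suffix].append(c)
    return {h: hypos for h, hypos in pairs.items() if len(hypos) >= min_support}

# Example: "肺癌" (lung cancer) and "胃癌" (stomach cancer) both end with
# "癌" (cancer), so both are suggested as lexical hyponyms of "癌".
print(candidate_hyponyms(["癌", "肺癌", "胃癌", "肝炎"]))
```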


Author(s):  
Edmund T. Rolls

In Chapter 5, Edmund T. Rolls builds on evidence and theories he has developed elsewhere about the neural basis of emotions and explores what they can tell us about purpose, meaning, and morals. He argues that meaning can be achieved by neural representations not only if these representations have mutual information with objects and events in the world, but also by virtue of the goals of the "selfish" genes and of the individual reasoner. This, he proposes, provides a means for even symbolic representations to be grounded in the world. He concludes by arguing that morals can be considered as principles underpinned by the (sometimes different) biological goals specified by the genes and by the reasoning (rational) system. Given that what is "natural" does not correspond to what is "right," he argues that these conflicts within and between individuals can be addressed by a social contract.


1986 ◽  
Vol 30 ◽  
pp. 373-382 ◽  
Author(s):  
W. Parrish ◽  
M. Hart ◽  
T. C. Huang ◽  
M. Bellotto

Abstract A method for using synchrotron-radiation parallel-beam X-ray diffractometry for the precision measurement of scattering angles and lattice parameters is described. The important advantages of the method are the high peak-to-background ratio (P/B) made possible by wavelength selection and high source intensity, the symmetrical profiles, and the absence of most systematic errors, which makes it unnecessary to use standards. Profile fitting with a pseudo-Voigt function is used to determine 2θ to 0.0001°. The zero-angle correction and lattice parameter were determined from least-squares refinement, and the average accuracy of observed minus calculated 2θ values was 0.0020°. Average values of ∆d/d = ∆a/a calculated directly from the individual hkl measurements ranged from 2 × 10⁻⁵ to 5.7 × 10⁻⁵. The precision estimated from the standard deviation of the mean is in the 10⁻⁶ range, and 1 ppm precision was obtained for Si. The determination of the exact wavelength selected remains to be solved, but ratios of lattice spacings to standards such as NBS SRM 640a can be determined.
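
For readers unfamiliar with the profile function, a minimal sketch of a pseudo-Voigt peak and a least-squares fit for the peak centre (the refined 2θ) is given below; the parameter names and starting values are illustrative assumptions, not the authors' refinement code.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt(two_theta, center, fwhm, eta, amplitude, background):
    """Pseudo-Voigt profile: a linear mix of a Gaussian and a Lorentzian
    of equal full width at half maximum (FWHM), plus a flat background."""
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    gauss = np.exp(-((two_theta - center) ** 2) / (2 * sigma ** 2))
    lorentz = 1 / (1 + ((two_theta - center) / (fwhm / 2)) ** 2)
    return amplitude * (eta * lorentz + (1 - eta) * gauss) + background

# Fit observed intensities to locate the peak centre (initial guesses assumed):
# popt, _ = curve_fit(pseudo_voigt, two_theta_obs, intensity_obs,
#                     p0=[28.44, 0.02, 0.5, 1e4, 10.0])
```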


Genus ◽  
2020 ◽  
Vol 76 (1) ◽  
Author(s):  
Han Lin Shang ◽  
Heather Booth

Abstract Accurate fertility forecasting has proved challenging and warrants renewed attention. One way to improve accuracy is to combine the strengths of a set of existing models through model averaging. The model-averaged forecast is derived using empirical model weights that optimise forecast accuracy at each forecast horizon based on historical data. We apply model averaging to fertility forecasting for the first time, using data for 17 countries and six models. Four model-averaging methods are compared: frequentist, Bayesian, model confidence set, and equal weights. We compute individual-model and model-averaged point and interval forecasts at horizons of one to 20 years. We demonstrate gains in average accuracy of 4–23% for point forecasts and 3–24% for interval forecasts, with greater gains from the frequentist and equal weights approaches at longer horizons. Data for England and Wales are used to illustrate model averaging in forecasting age-specific fertility to 2036. The advantages and further potential of model averaging for fertility forecasting are discussed. As the accuracy of model-averaged forecasts depends on the accuracy of the individual models, there is an ongoing need to develop better models of fertility for use in forecasting and model averaging. We conclude that model averaging holds considerable promise for the improvement of fertility forecasting in a systematic way using existing models and warrants further investigation.
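
A minimal sketch of one possible weighting scheme is given below (weights inversely proportional to historical forecast error at a given horizon); it is an assumption for illustration, not the paper's frequentist, Bayesian, model confidence set, or equal-weights method.

```python
import numpy as np

def model_averaged_forecast(forecasts, historical_errors):
    """forecasts: each model's point forecast at the target horizon.
    historical_errors: each model's mean absolute error at the same horizon
    on held-out historical data (both length n_models)."""
    forecasts = np.asarray(forecasts, dtype=float)
    errors = np.asarray(historical_errors, dtype=float)
    weights = (1 / errors) / np.sum(1 / errors)  # smaller error -> larger weight
    return float(np.dot(weights, forecasts)), weights

# Example: three models forecasting the total fertility rate at one horizon.
print(model_averaged_forecast([1.62, 1.70, 1.58], [0.05, 0.12, 0.08]))
```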


2021 ◽  
Vol 03 (02) ◽  
pp. 204-208
Author(s):  
Ielaf O. Abdul-Majjed DAHL

In the past decade, the field of facial expression recognition has attracted the attention of scientists because of its role in enhancing interaction between humans and computers. Facial expression recognition is not a simple machine-learning problem, because expression differs from one person to another depending on context, background and lighting. The objective of the current work was to attain the highest classification rate with computer vision algorithms for two facial expressions ("happy" and "sad"). This was accomplished through several phases, starting with image pre-processing, followed by Gabor filter extraction, from which the most important features were selected using mutual information. The expression was finally recognized by a support vector classifier. The system was trained and tested on the Cohn-Kanade and JAFFE databases, achieving rates of 81.09% and 92.85%, respectively.
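
A sketch of the described pipeline (Gabor filtering, mutual-information feature selection, support vector classification) using OpenCV and scikit-learn; the filter-bank parameters and the number of selected features are assumptions, since the original settings are not given.

```python
import cv2
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def gabor_features(gray_image, ksize=21):
    """Apply a small bank of Gabor filters at four orientations and
    flatten the filter responses into one feature vector."""
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 4):
        kernel = cv2.getGaborKernel((ksize, ksize), 4.0, theta, 10.0, 0.5)
        feats.append(cv2.filter2D(gray_image, cv2.CV_32F, kernel).ravel())
    return np.concatenate(feats)

# X: rows of Gabor feature vectors, y: labels ("happy" / "sad").
clf = make_pipeline(SelectKBest(mutual_info_classif, k=500), SVC(kernel="rbf"))
# clf.fit(X_train, y_train); clf.score(X_test, y_test)
```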


2020 ◽  
Author(s):  
Terry Gao

Abstract To speed up the discovery of COVID-19 disease mechanisms from X-ray images, this research developed a new diagnosis platform using a deep convolutional neural network (CNN) that can assist radiologists by distinguishing COVID-19 pneumonia from non-COVID-19 pneumonia on the basis of chest X-ray classification and analysis. Such a tool can save time in interpreting chest X-rays, increase accuracy, and thereby enhance medical capacity for the detection and diagnosis of COVID-19. The idea is that a set of X-ray lung images (normal, bacterial infection, and viral infection including COVID-19) is used to train a deep CNN that can distinguish noise from useful information and then interpret new images by recognizing patterns that indicate particular diseases, such as coronavirus infection, in the individual images. Supervised learning is used, with the labeled training dataset acting much like a doctor supervising the learning process. The model becomes more accurate as the number of analyzed images grows, and its average accuracy is above 95%. In this way it imitates the training of a doctor, but because it is capable of learning from a far larger set of images than any human, it has the potential to be more accurate.
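
An illustrative small CNN in Keras with three output classes (normal, bacterial pneumonia, viral pneumonia including COVID-19); the paper's exact architecture and training settings are not reproduced here.

```python
from tensorflow.keras import layers, models

def build_cnn(input_shape=(224, 224, 1), n_classes=3):
    """A small image classifier: stacked convolution/pooling blocks
    followed by a dense softmax head over the three classes."""
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_cnn(); model.fit(train_images, train_labels, epochs=10)
```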

