Modifier Ontologies for frequency, certainty, degree, and coverage phenotype modifier

2018 ◽  
Vol 6 ◽  
Author(s):  
Lorena Endara ◽  
Anne Thessen ◽  
Heather Cole ◽  
Ramona Walls ◽  
Georgios Gkoutos ◽  
...  

Background: When phenotypic characters are described in the literature, they may be constrained or clarified with additional information, such as the location or degree of expression; these terms are called “modifiers”. With efforts underway to convert narrative character descriptions into computable data, ontologies for such modifiers are needed. Such ontologies can also be used to guide term usage in future publications. Spatial and method modifiers are the subjects of ontologies that have already been developed or are under development. In this work, frequency (e.g., rarely, usually), certainty (e.g., probably, definitely), degree (e.g., slightly, extremely), and coverage (e.g., sparsely, entirely) modifiers are collected, reviewed, and used to create two modifier ontologies with different design considerations. The basic goal is to express the sequential relationships within a type of modifier, for example, that usually is more frequent than rarely, in order to allow data annotated with ontology terms to be classified accordingly.

Method: Two designs are proposed for the ontology, both using the list pattern: a closed ordered list (i.e., five-bin) design and an open ordered list design. The five-bin design puts the modifier terms into a set of 5 fixed bins with interval object properties, for example, one_level_more/less_frequently_than, where new terms can only be added as synonyms to existing classes. The open list approach starts with 5 bins but supports extensibility of the list via ordinal properties, for example, more/less_frequently_than, allowing new terms to be inserted as new classes anywhere in the list. The consequences of the different design decisions are discussed in the paper. CharaParser was used to extract modifiers from plant, ant, and other taxonomic descriptions. After manual screening, 130 modifier words were selected as candidate terms for the modifier ontologies. Four curators/experts (three biologists and one information scientist specializing in biosemantics) reviewed and categorized the terms into 20 bins using the Ontology Term Organizer (OTO) (http://biosemantics.arizona.edu/OTO). Inter-curator variations were reviewed and expressed in the final ontologies.

Results: Frequency, certainty, degree, and coverage terms with complete agreement among all curators were used as class labels or exact synonyms. Terms with different interpretations were either excluded or included using “broader synonym” or “not recommended” annotation properties. These annotations explicitly make users aware of the semantic ambiguity associated with such terms and of whether they should be used with caution or avoided. Expert categorization results showed that 16 out of 20 bins contained terms with full agreement, suggesting that differentiating the modifiers into 5 levels/bins balances the need to differentiate modifiers against the need for the ontology to reflect user consensus. The two ontologies, developed using the Protege ontology editor, are made available as OWL files and can be downloaded from https://github.com/biosemantics/ontologies.

Contribution: We built the first two modifier ontologies, following a consensus-based approach, with terms commonly used in taxonomic literature. The five-bin ontology has been used in the Explorer of Taxon Concepts web toolkit to compute the similarity between characters extracted from the literature, facilitating taxon concept alignment. The two ontologies will also be used in an ontology-informed authoring tool for taxonomists to promote consistency in modifier term usage.
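To make the difference between the two list designs concrete, here is a minimal Python sketch (not from the paper; the bin labels, term placements, and method names are illustrative) contrasting a closed five-bin list, where a new term can only become a synonym of an existing bin, with an open ordered list, where a new term can be inserted as a new class between existing ones:

```python
# Illustrative sketch of the two ordered-list designs (hypothetical names).

FIVE_BINS = ["never", "rarely", "sometimes", "usually", "always"]  # closed list

class ClosedList:
    """Five fixed bins; new terms may only be added as synonyms of a bin."""
    def __init__(self, bins):
        self.bins = list(bins)
        self.synonyms = {b: set() for b in bins}

    def add_term(self, term, bin_label):
        # one_level_more/less_frequently_than is implied by adjacent bins
        self.synonyms[bin_label].add(term)

    def more_frequent_than(self, a, b):
        return self.bins.index(self._bin_of(a)) > self.bins.index(self._bin_of(b))

    def _bin_of(self, term):
        if term in self.bins:
            return term
        return next(b for b, syns in self.synonyms.items() if term in syns)

class OpenList:
    """Ordered list; new terms may be inserted as new classes anywhere."""
    def __init__(self, terms):
        self.terms = list(terms)

    def insert_between(self, term, lower, upper):
        # an ordinal property (more/less_frequently_than) only requires an
        # order, so a new class can sit between any two existing neighbours
        i = self.terms.index(lower)
        assert self.terms.index(upper) == i + 1, "must insert between neighbours"
        self.terms.insert(i + 1, term)

    def more_frequent_than(self, a, b):
        return self.terms.index(a) > self.terms.index(b)

closed = ClosedList(FIVE_BINS)
closed.add_term("seldom", "rarely")                      # synonym of a fixed bin
opened = OpenList(FIVE_BINS)
opened.insert_between("often", "sometimes", "usually")   # new class in the order
print(closed.more_frequent_than("usually", "seldom"))    # True
print(opened.more_frequent_than("often", "sometimes"))   # True
```

The trade-off the abstract describes falls out directly: the closed design keeps the level structure stable at the cost of forcing every new term into an existing bin, while the open design preserves fine distinctions at the cost of an ever-growing order.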

Author(s):  
Zengwei Huo ◽  
Xin Geng

Zero-shot learning predicts new classes even when no training data is available for those classes. Solutions to conventional zero-shot learning usually depend on side information such as attributes or text corpora, but such side information is not easy to obtain or use. Fortunately, in many classification tasks the class labels are ordered and therefore closely related to each other. This paper deals with zero-shot learning for ordinal classification. The key idea is to use label relevance to expand supervision information from seen labels to unseen labels. The proposed method, SIDL, generates a supervision intensity distribution (SID) that contains each label's supervision intensity, and then learns a mapping from instance to SID. Experiments on two typical ordinal classification problems, i.e., head pose estimation and age estimation, show that SIDL performs significantly better than the compared regression methods. Furthermore, SIDL appears much more robust against an increase in unseen labels than the other compared baselines.
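The abstract does not give the exact form of the SID, but the idea of spreading supervision from a seen label to its ordinal neighbours can be sketched as follows (a hypothetical Gaussian-kernel construction, not necessarily the one SIDL actually uses):

```python
import numpy as np

def supervision_intensity(seen_label, labels, sigma=1.0):
    """Spread supervision from one seen label across an ordered label set.

    Nearby labels (including unseen ones) receive high intensity, distant
    labels receive low intensity; the distribution is normalized to sum to 1.
    """
    d = np.array([abs(l - seen_label) for l in labels], dtype=float)
    intensity = np.exp(-d**2 / (2 * sigma**2))  # hypothetical Gaussian kernel
    return intensity / intensity.sum()

# Ages 20..29 as the ordered label set; only age 24 is seen in training,
# yet its supervision leaks to the unseen neighbouring ages.
labels = list(range(20, 30))
sid = supervision_intensity(24, labels, sigma=1.5)
for l, s in zip(labels, sid):
    print(f"age {l}: intensity {s:.3f}")
```

Because the kernel is defined over label distance, a regressor trained to map an instance to this distribution still carries gradient signal for labels that never appear in the training set.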


2018 ◽  
Vol 6 ◽  
pp. e21282 ◽  
Author(s):  
Maria Mora ◽  
José Araya

Taxonomic literature keeps records of the planet's biodiversity and gives access to the knowledge needed for its sustainable management. Unfortunately, most taxonomic information is available in scientific publications in text format. The number of publications generated is very large; processing them to obtain highly structured texts would therefore be complex and very expensive. Approaches like citizen science may help the process by selecting whole fragments of texts dealing with morphological descriptions, but a deeper analysis, compatible with accepted ontologies, will require specialised tools. The Biodiversity Heritage Library (BHL) estimates that there are more than 120 million pages published in over 5.4 million books since 1469, plus about 800,000 monographs and 40,000 journal titles (12,500 of these are current titles). It is necessary to develop standards and software tools to extract, integrate, and publish this information into existing free and open-access repositories of biodiversity knowledge to support science, education, and biodiversity conservation.

This document presents an algorithm based on computational linguistics techniques to extract structured information from morphological descriptions of plants written in Spanish. The algorithm builds on the work of Dr. Hong Cui from the University of Arizona; it uses semantic analysis, ontologies, and a repository of knowledge acquired from the same descriptions. The algorithm was applied to the books Trees of Costa Rica Volume III (TCRv3) and Trees of Costa Rica Volume IV (TCRv4) and to a subset of descriptions from the Manual of Plants of Costa Rica (MPCR), with very competitive results (more than 92.5% average performance). The system receives the morphological descriptions in tabular format and generates XML documents. The XML schema allows documenting structures, characters, and relations between characters and structures. Each extracted object is associated with attributes such as name, value, modifiers, restrictions, and ontology term id, among other attributes.

The implemented tool is free software. It was developed in Java and integrates existing technology such as FreeLing, the Plant Ontology (PO), the Plant Glossary, the Ontology Term Organizer (OTO), and the Flora Mesoamericana English-Spanish Glossary.
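The abstract names the kinds of objects and attributes the XML schema captures without showing the schema itself; the following Python sketch illustrates what such structure/character output could look like (element names, attribute names, and the example taxon are hypothetical, not taken from the paper):

```python
import xml.etree.ElementTree as ET

# Hypothetical rendering of one extracted description: element and attribute
# names are illustrative, not the schema actually used by the tool.
description = ET.Element("description", taxon="Ocotea veraguensis")
structure = ET.SubElement(description, "structure",
                          name="leaf", ontology_term_id="PO:0025034")
ET.SubElement(structure, "character",
              name="shape", value="elliptic")
ET.SubElement(structure, "character",
              name="length", value="7-12", unit="cm", modifier="usually")

print(ET.tostring(description, encoding="unicode"))
```

The point of such a format is that every character is anchored to the structure it describes and, where possible, to an ontology term id, so downstream tools can align descriptions across floras rather than re-parsing free text.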


Author(s):  
Umamaheswari G. ◽  
Ramya T. ◽  
Chaitra V.

Background: Nonimmune hydrops foetalis (NIHF) is a terminal catastrophic event of pregnancy caused by numerous diverse etiologies. The aim of this study was to assess the significance of foetal autopsy and to compare prenatal ultrasound (USG) and foetal autopsy findings in cases of NIHF.

Methods: All perinatal autopsies performed at the department of pathology between March 2011 and February 2018 were retrospectively reviewed. Of the 130 autopsies received, twenty cases of NIHF were identified; their records were collected and correlated with maternal medical history, prenatal imaging, and autopsy findings.

Results: The malformations with hydrops foetalis were classified according to the organ system involved: cardiothoracic (7/20 cases), genitourinary (3/20 cases), gastrointestinal lesions (1/20 cases), chromosomal (4/20 cases), and multisystem anomaly/syndromic association (5/20 cases). Complete agreement between USG and autopsy was seen in 8 (40%) cases. In 5 (25%) cases, autopsy findings were in total disagreement with the USG diagnosis. In the remaining 7 (35%) cases, autopsy revealed additional information, and in two of these it changed the recurrence risk.

Conclusions: The present study demonstrates a high rate of discordance between USG and autopsy examination in cases complicated by NIHF. Foetal autopsy confirms the USG findings (quality control/audit), adds findings, or changes the final diagnosis, which helps in redefining the recurrence risk and enables plausible genetic counselling for future pregnancies. Hence the present study underscores the need for autopsy in all cases of NIHF.


2000 ◽  
Vol 34 (6) ◽  
pp. 798-801 ◽  
Author(s):  
Kevin J Chapple ◽  
Anne E Hendrick ◽  
Michelle W McCarthy

OBJECTIVE: To evaluate the efficacy of zanamivir in the prevention and treatment of influenza. DATA SOURCES: Medical literature was accessed through MEDLINE (1966–June 1999). Key search terms included zanamivir, GG167, and influenza. Additional information was obtained from GlaxoWellcome, Inc. DATA SYNTHESIS: Zanamivir is the first in a new class of drugs to be developed for the treatment of influenza. An evaluation of clinical trials using inhaled zanamivir was conducted to determine its efficacy. CONCLUSIONS: Zanamivir appears to shorten the median duration of influenza symptoms by up to 2.5 days when compared with placebo. It was well tolerated in clinical trials, with mild adverse effects occurring in a small percentage of subjects.


2013 ◽  
Vol 63 (3) ◽  
Author(s):  
Johannes Fähndrich ◽  
Sebastian Ahrndt ◽  
Sahin Albayrak

This work advocates self-explanation as one foundation of self-* properties, arguing that for system components to become more self-explanatory, the underlying foundation is an awareness of themselves and their environment. In the research area of adaptive software, self-* properties have shifted into focus because of the tendency to push ever more design decisions to the application's runtime, fostering new paradigms for system development such as intelligent and learning agents. This work surveys state-of-the-art methods of self-explanation in software systems and distills a definition of self-explanation. Additionally, we introduce a measure to compare explanations and propose an approach for the first steps towards extending descriptions to become more explanatory. The conclusion shows that an explanation is a special kind of description: one that provides additional information about a subject of interest and is understandable to the audience of the explanation. Further, an explanation depends on the context in which it is used, so one explanation can convey different information in different contexts. The proposed measure reflects these requirements.


2010 ◽  
Vol 132 (10) ◽  
Author(s):  
Chiradeep Sen ◽  
Farhad Ameri ◽  
Joshua D. Summers

This paper presents a mathematical model for quantifying the uncertainty of a discrete design solution and for monitoring it through the design process. In the presented entropic view, uncertainty is highest at the beginning of the process, as little information is known about the solution. As additional information is acquired or generated, the solution becomes increasingly well-defined and uncertainty reduces, finally diminishing to zero at the end of the process when the design is fully defined. In previous research, three components of design complexity, namely size, coupling, and solvability, were identified. In this research, these metrics are used to model solution uncertainty based on the search spaces of the variables (size) and the compatibility between variable values (coupling). Solvability of the variables is assumed uniform for simplicity. Design decisions are modeled as choosing a value, or a reduced set of values, from the existing search space of a variable, thus reducing its uncertainty. Coupling is measured as the reduction of a variable's search space as an effect of reducing the search space of another variable. This model is then used to monitor uncertainty reduction through a design process, leading to three strategies that prescribe deciding the variables in the order of their uncertainty, their number of dependents, or their influence on other variables. Comparison between these strategies shows how the size and coupling of variables in a design can be used to determine a task-sequencing strategy for fast design convergence.
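A minimal numeric sketch of the entropic view (illustrative only; the variable names, coupling fractions, and exact formulation are assumptions, not the paper's model): treat each variable's uncertainty as the log of its remaining search-space size, and let a decision shrink a search space, with coupling propagating part of that shrinkage to dependent variables.

```python
import math

# Hypothetical discrete design problem: each variable has a search space.
search_space = {"material": 8, "cross_section": 16, "joint_type": 4}

# Hypothetical coupling: deciding `src` removes incompatible options of `dst`,
# expressed as the fraction of dst's search space that survives the decision.
survives = {("material", "cross_section"): 0.5,
            ("material", "joint_type"): 0.75}

def uncertainty(space):
    """Total uncertainty in bits: sum of log2(remaining options) per variable."""
    return sum(math.log2(n) for n in space.values())

def decide(space, var):
    """Fix `var` to one value and propagate coupling to dependent variables."""
    space[var] = 1  # a decision leaves a single option
    for (src, dst), frac in survives.items():
        if src == var:
            space[dst] = max(1, round(space[dst] * frac))

print(f"initial uncertainty: {uncertainty(search_space):.2f} bits")
decide(search_space, "material")   # deciding a high-influence variable first
print(f"after deciding material: {uncertainty(search_space):.2f} bits")
decide(search_space, "cross_section")
decide(search_space, "joint_type")
print(f"fully defined: {uncertainty(search_space):.2f} bits")  # 0.00
```

Running the sketch shows the monotone decay the paper describes: 9 bits initially, a large drop when the highly coupled variable is decided first, and exactly zero once every variable is fixed, which is why decision ordering matters for convergence speed.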


Author(s):  
Chiradeep Sen ◽  
Farhad Ameri ◽  
Joshua D. Summers

Early stages of engineering design processes are characterized by high levels of uncertainty due to incomplete knowledge. As the design progresses, additional information is externally added or internally generated within the design process. As a result, the design solution becomes increasingly well-defined and the uncertainty of the problem reduces, diminishing to zero at the end of the process when the design is fully defined. In this research, a measure of uncertainty is proposed for a class of engineering design problems called discrete design problems. Previously, three components of complexity in engineering design, namely size, coupling, and solvability, were identified. In this research, uncertainty is measured in terms of the number of design variables (size) and the dependency between the variables (coupling). The solvability of each variable is assumed to be uniform for the sake of simplicity. The dependency between two variables is measured as the effect of a decision made on one variable on the solution options available to the other variable. A measure of uncertainty is developed based on this premise and applied to an example problem to monitor uncertainty reduction through the design process. Results are used to identify and compare three task-sequencing strategies in engineering design.
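The dependency measure described above can also be sketched numerically (the variables, values, and compatibility table below are hypothetical; the paper's actual formulation may differ): couple two variables through a compatibility relation and measure coupling as the share of one variable's options eliminated by a decision on the other.

```python
# Hypothetical example: two design variables coupled by a compatibility relation.
bolt_sizes = ["M6", "M8", "M10", "M12"]
plate_thicknesses = [3, 4, 6, 8, 10, 12]  # mm

# Hypothetical compatibility rule: which thicknesses each bolt size can clamp.
compatible = {
    "M6":  [3, 4],
    "M8":  [3, 4, 6],
    "M10": [6, 8, 10],
    "M12": [8, 10, 12],
}

def coupling(decision):
    """Fraction of the other variable's options removed by one decision."""
    remaining = [t for t in plate_thicknesses if t in compatible[decision]]
    return 1 - len(remaining) / len(plate_thicknesses)

for size in bolt_sizes:
    print(f"deciding bolt={size} removes "
          f"{coupling(size):.0%} of thickness options")
```

A decision that removes a large fraction of another variable's options indicates strong coupling, which is exactly the signal the task-sequencing strategies exploit when choosing which variable to decide first.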


1979 ◽  
Vol 46 ◽  
pp. 368 ◽
Author(s):  
Clinton B. Ford

A “new charts program” for the American Association of Variable Star Observers was instigated in 1966 via the gift to the Association of the complete variable star observing records, charts, photographs, etc. of the late Prof. Charles P. Olivier of the University of Pennsylvania (USA). Adequate material covering about 60 variables, not previously charted by the AAVSO, was included in this original data, and was suitably charted in reproducible standard format.

Since 1966, much additional information has been assembled from other sources, and three Catalogs have been issued which list the new or revised charts produced and specify how copies of same may be obtained. The latest such Catalog is dated June 1978 and lists 670 different charts covering a total of 611 variables, none of which was charted in reproducible standard form previous to 1966.


Author(s):  
G. Lehmpfuhl

Introduction: In electron microscopic investigations of crystalline specimens, direct observation of the electron diffraction pattern gives additional information about the specimen. The quality of this information depends on the quality of the crystals or the crystal area contributing to the diffraction pattern. By selected area diffraction in a conventional electron microscope, specimen areas as small as 1 µm in diameter can be investigated. It is well known that crystal areas of that size, which must be thin enough (on the order of 1000 Å) for electron microscopic investigations, are normally somewhat distorted by bending, or they are not homogeneous. Furthermore, the crystal surface is not well defined over such a large area. These facts reduce the information contained in the diffraction pattern. The intensity of a diffraction spot, for example, depends on the crystal thickness. If the thickness is not uniform over the investigated area, one observes an averaged intensity, so that the intensity distribution in the diffraction pattern cannot be used for an analysis unless additional information is available.
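For context (this relation is standard two-beam dynamical diffraction theory, not stated in the abstract): at the exact Bragg condition, the diffracted intensity oscillates with crystal thickness, so averaging over a non-uniform thickness washes out exactly the signal one would want to analyse.

```latex
% Two-beam dynamical diffraction at the exact Bragg condition (s = 0):
% the diffracted intensity I_g oscillates with crystal thickness t,
% where \xi_g is the extinction distance of reflection g.
I_g(t) = \sin^2\!\left(\frac{\pi t}{\xi_g}\right), \qquad
I_0(t) = 1 - I_g(t)
```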


Author(s):  
Eva-Maria Mandelkow ◽  
Eckhard Mandelkow ◽  
Joan Bordas

When a solution of microtubule protein is changed from non-polymerising to polymerising conditions (e.g., by temperature jump or by mixing with GTP), there is a series of structural transitions preceding microtubule growth. These have been detected by time-resolved X-ray scattering using synchrotron radiation, and they may be classified into pre-nucleation and nucleation events. X-ray patterns are good indicators of the average behavior of the particles in solution, but they are difficult to interpret unless additional information on their structure is available. We therefore studied the assembly process by electron microscopy under conditions approaching those of the X-ray experiment. There are two difficulties in the EM approach. One is that the particles important for assembly are usually small and not very regular and therefore tend to be overlooked. Secondly, EM specimens require low concentrations, which favor disassembly of the particles one wants to observe, since there is a dynamic equilibrium between polymers and subunits.

