All the Clades in the World: Building a Semantically-Rich and Testable Ontology of Phylogenetic Clade Definitions

2018 ◽  
Vol 2 ◽  
pp. e25776
Author(s):  
Gaurav Vaidya ◽  
Guanyang Zhang ◽  
Hilmar Lapp ◽  
Nico Cellinese

Taxonomic names are ambiguous as identifiers of biodiversity data, as they refer to a particular concept of a taxon in an expert’s mind (Kennedy et al. 2005). This ambiguity is particularly problematic when attempting to reconcile taxonomic names from disparate sources with clades on a phylogeny. Currently, such reconciliation requires expert interpretation, which is necessarily subjective, difficult to reproduce, and refractory to scaling. In contrast, phylogenetic clade definitions are a well-developed method for unambiguously defining the semantics of a clade concept in terms of shared evolutionary ancestry (Queiroz and Gauthier 1990, Queiroz and Gauthier 1994), and these semantics allow locating clades on any phylogeny. Although a few software tools have been created for resolving clade definitions, including for definitions expressed in the Mathematical Markup Language (e.g. Names on Nodes in Keesey 2007) and as lists of GenBank accession numbers (e.g. mor in Hibbett et al. 2005), these are application-specific representations that do not provide formal definitions with well-defined semantics for every component of a clade definition. Being able to create such machine-interpretable definitions would allow computers to store, compare, distribute and resolve semantically-rich clade definitions. To this end, the Phyloreferencing project (http://phyloref.org, Cellinese and Lapp 2015) is working on a specification for encoding phylogenetic clade definitions as ontologies using the Web Ontology Language (OWL in W3C OWL Working Group 2012). Our specification allows the semantics of these definitions, which we call phyloreferences, to be described in terms of shared ancestor and excluded lineage properties. The aim of this effort is to allow any OWL-DL reasoner to resolve phyloreferences on a phylogeny that has itself been translated into a compatible OWL representation. We have developed a workflow that allows us to curate phyloreferences from phylogenetic clade definitions published in natural language, and to resolve the curated phyloreference against the phylogeny upon which the definition was originally created, allowing us to validate that the phyloreference reflects the authors’ original intent. We have started work on curating dozens of phyloreferences from publications and the clade definition database RegNum (http://phyloregnum.org), which will provide an online catalog of all clade definitions that are part of the Phylonym Volume, to be published together with the PhyloCode (https://www.ohio.edu/phylocode/). We will comprehensively curate these definitions into a reusable and fully computable ontology of phyloreferences. In our presentation, we will provide an overview of phyloreferencing and will describe the model and workflow we use to encode clade definitions in OWL, based on concepts and terms taken from the Comparative Data Analysis Ontology (Prosdocimi et al. 2009), Darwin-SW (Baskauf and Webb 2016) and Darwin Core (Wieczorek et al. 2012). We will demonstrate how phyloreferences can be visualized, resolved and tested on the phylogeny that they were originally described on, and how they resolve on one of the largest synthetic phylogenies available, the Open Tree of Life (Hinchliff et al. 2015). We will conclude with a discussion of the problems we faced in referring to taxonomic units in phylogenies, which is one of the key challenges in enabling better integration of phylogenetic information into biodiversity analyses.
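The core resolution step can be pictured without any OWL machinery. Below is a minimal Python sketch, with invented class and function names, of resolving a clade definition given by shared-ancestor (internal) and excluded-lineage (external) specifiers on a rooted tree: the clade is the most recent common ancestor of the internal specifiers, and resolution fails if an excluded lineage falls inside it. The project itself expresses this in OWL so that a DL reasoner performs the equivalent inference.

```python
# Hypothetical sketch, not the Phyloreferencing toolchain: resolve a
# definition with internal (shared-ancestor) and external (excluded)
# specifiers on a rooted tree.

class Node:
    def __init__(self, label=None, children=()):
        self.label = label
        self.children = list(children)
        self.parent = None
        for child in self.children:
            child.parent = self

    def leaves(self):
        if not self.children:
            return {self.label}
        return set().union(*(c.leaves() for c in self.children))

    def path_to_root(self):
        node, path = self, []
        while node is not None:
            path.append(node)
            node = node.parent
        return path

def iter_nodes(node):
    yield node
    for child in node.children:
        yield from iter_nodes(child)

def mrca(tree, labels):
    """Most recent common ancestor of the named leaves."""
    tips = [n for n in iter_nodes(tree) if n.label in labels]
    paths = [list(reversed(t.path_to_root())) for t in tips]
    ancestor = None
    for nodes in zip(*paths):  # walk down from the root in lock-step
        if all(n is nodes[0] for n in nodes):
            ancestor = nodes[0]
        else:
            break
    return ancestor

def resolve(tree, internal, external):
    """A phyloreference resolves to the MRCA of its internal specifiers,
    but only if no excluded lineage falls inside the resulting clade."""
    clade = mrca(tree, internal)
    if clade is None or clade.leaves() & set(external):
        return None  # the definition does not resolve on this phylogeny
    return clade

# ((A,B),C): the clade defined by internal {A,B}, excluding C,
# resolves to the (A,B) node.
tree = Node(children=[Node(children=[Node("A"), Node("B")]), Node("C")])
clade = resolve(tree, {"A", "B"}, {"C"})
print(sorted(clade.leaves()))  # ['A', 'B']
```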

Communicology ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 138-148
Author(s):  
NATALIA MALSHINA

This study examines ontological problems in terms of the relationship between different cognitive practices and their mutual conditioning in the context of communication, together with their socio-cultural prerequisites; this is possible only if the traditional approach to the distinction between epistemology and faith is revised. Proceeding from the idea that cognitive practices share common grounds, “belief” is included in the understanding of interpretation in the communicative situation, as a condition of true knowledge in each mode of being. In the philosophical tradition, belief reveals the ontological foundations of hermeneutics. Three reflections are synthesised: the hermeneutic concept of understanding, the structuralist concept of language, and the psychoanalytic concept of personality. The method of phenomenological reduction must be applied to the ontological substantiation of hermeneutics in the Christian Orthodox tradition. Hence the meeting of semantics, linguistics, and onomatodoxy with Heidegger's ontology of language, whose origins lie in Husserl's phenomenology, seems entirely natural. Fundamental ontology, linguistics, and the philosophy of cult each open, in their different ways, horizons for the substantiation of hermeneutics. This justification begins with the hermeneutic problem in Christianity, which arose as a consequence of the question of the relationship between the two Covenants, or two Unions. The author attempts to identify the stages in the construction of Pavel Florensky's philosophical concept. As a result, the grounding of the birth of the world in consciousness through the cult is revealed. In Florensky, the ontological nature of words is disclosed through symbols. The symbol makes the transition from a smaller energy to a larger one, from a lesser informational saturation to a greater one, acting as a clearing of being: through the name we hear reality. The word comes into contact with the world that lies beyond our own psychological states. The word, the symbol, shifts continually from the subjective to the objective. The communicative model acts as a common point uniting these traditions. The religious approach, as part of the semiotic approach, reveals the horizons of the ontological conditionality of language and words, and, among words, of the name, since the name plays a central role in the accumulation and transmission of information and in understanding the commonality of this conditionality in phenomenology and in the Christian Orthodox tradition.


1994 ◽  
Vol 05 (05) ◽  
pp. 805-809 ◽  
Author(s):  
SALIM G. ANSARI ◽  
PAOLO GIOMMI ◽  
ALBERTO MICOL

On 3 November 1993, ESIS announced its homepage on the World Wide Web (WWW) to the user community. Ever since then, ESIS has steadily increased its Web support to the astronomical community to include a bibliographic service, the ESIS catalogue documentation, and the ESIS Data Browser. More functionality will be added in the near future. All these services share a common ESIS structure that is also used by other ESIS user paradigms, such as the ESIS Graphical User Interface (Giommi and Ansari, 1993) and the ESIS Command Line Interface. Following a forms-based paradigm, each ESIS Web application interfaces to the hypertext transfer protocol (http), translating queries from/to the hypertext markup language (html) format understood by the NCSA Mosaic interface. In this paper, we discuss the ESIS system and show how each ESIS service works on the World Wide Web client.
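The forms-based pattern is easy to picture in modern terms: a query is URL-encoded, sent over HTTP, and answered with an HTML page the browser renders. The sketch below uses a placeholder host and invented parameter names; it is not the actual ESIS interface.

```python
# Illustrative only: the endpoint and parameter names are hypothetical.
from urllib.parse import urlencode
from urllib.request import urlopen
from urllib.error import URLError

# Hypothetical data-browser query: object name and search radius.
params = urlencode({"object": "3C 273", "radius_arcmin": 5})
url = "https://example.org/esis/data-browser?" + params  # placeholder host
print(url)

try:
    with urlopen(url, timeout=10) as response:
        html = response.read().decode("utf-8", errors="replace")
        print(html[:200])  # start of the returned HTML results page
except URLError as err:
    print("query failed:", err)
```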


Author(s):  
Mary Barkworth ◽  
Mushtaq Ahmad ◽  
Mudassir Asrar ◽  
Raza Bhatti ◽  
Neil Cobb ◽  
...  

In 2017, funding from the Biodiversity Information Fund for Asia accelerated data mobilization and georeferencing by Pakistani herbaria. The funding directly benefited only two herbaria but, by the end of the project, 9 herbaria were involved in sharing data, 2 through GBIF (ISL 2019, SINDH 2019; codes according to Index Herbariorum) and 6 others (BANNU 2019, BGH 2019, PUP 2019, QUETTA 2019, RAW 2019, SWAT 2019) through OpenHerbarium, a Symbiota-based network. Eventually, all collections in OpenHerbarium are expected to become GBIF data providers. Additional Pakistani herbaria are being introduced to data mobilization, and several individuals have expressed interest in learning to use OpenHerbarium to generate documented checklists for teaching and research, and others in learning to link information in OpenHerbarium to other resources. These are the first steps toward developing a “large group of individuals … to train, mentor, and champion [biodiversity] data use” in Pakistan, but it is important to remember that good biodiversity data starts in the field. We need to provide today's collectors and educators with easy access to a) information about what constitutes a high-quality herbarium specimen; b) tools that make it easier to record and provide high-quality specimen data; c) simple mechanisms for sharing data in ways that provide immediately useful resources; and d) help in learning to make use of the data becoming available. OpenHerbarium addresses the third and fourth needs and also makes it simple for collections to become GBIF data providers. This year, the focus will be on the first two of the needs identified. Introduction of the new resources will be used to introduce collectors and educators to the ideas underlying the provision of biodiversity data that is fit for use and reuse. When Symbiota2 is functional, OpenHerbarium will be moved to that system, which will encourage development of additional tools for using biodiversity data. All these activities are essential to spreading understanding of the concepts integral to biodiversity informatics. It is, of course, possible “to train, build, and champion data use” using data for other parts of the world, or provided by institutions from other parts of the world, but embedding good biodiversity data practices into the fabric of a country's biodiversity education and research activities better benefits the country if a substantial portion of the data is generated from within the country. It also helps spread knowledge of the country's biodiversity among its students. Consequently, our focus in developing Pakistan's capacity in biodiversity informatics is on engaging collections and collectors in sharing biodiversity data, then helping them discover, use, and create methods for developing the insights needed to encourage wise use of the country's biological resources, and encouraging interaction. This will lead to a “community of practice” within Pakistan that can both benefit from and contribute to an international “community of practice”.


Author(s):  
Lauren Weatherdon

Ensuring that we have the data and information necessary to make informed decisions is a core requirement in an era of increasing complexity and anthropogenic impact. With cumulative challenges such as the decline in biodiversity and accelerating climate change, the need for spatially-explicit and methodologically-consistent data that can be compiled to produce useful and reliable indicators of biological change and ecosystem health is growing. Technological advances—including satellite imagery—are beginning to make this a reality, yet uptake of biodiversity information standards and scaling of data to ensure its applicability at multiple levels of decision-making are still in progress. The complementary Essential Biodiversity Variables (EBVs) and Essential Ocean Variables (EOVs), combined with Darwin Core and other data and metadata standards, provide the underpinnings necessary to produce data that can inform indicators. However, perhaps the largest challenge in developing global, biological change indicators is achieving consistent and holistic coverage over time, with recognition of biodiversity data as global assets that are critical to tracking progress toward the UN Sustainable Development Goals and Targets set by the international community (see Jensen and Campbell (2019) for discussion). Through this talk, I will describe some of the efforts towards producing and collating effective biodiversity indicators, such as those based on authoritative datasets like the World Database on Protected Areas (https://www.protectedplanet.net/), and work achieved through the Biodiversity Indicators Partnership (https://www.bipindicators.net/). I will also highlight some of the characteristics of effective indicators, and global biodiversity reporting and communication needs as we approach 2020 and beyond.


Author(s):  
Tino Jahnke ◽  
Juergen Seitz

In order to solve the intellectual property problems of the digital age, two basic procedures are used: “buy and drop,” linked to the shutdown of various peer-to-peer services, and “subpoena and fear,” the creation of artificial social fear through specific legislation. Although customers around the world are willing to buy digital products over networks, the industry is still using conventional procedures to push this decisive customer impulse back into existing, conventional markets. Digital watermarking is described as a way to bridge the gap between copyright and digital distribution. It is based on steganographic techniques and enables useful rights-protection mechanisms. Digital watermarks are mostly inserted as a plain bit sample or a transformed digital signal into the source data, using a key-based embedding algorithm and a pseudo-noise pattern. The embedded information is hidden in low-value bits or least significant bits of picture pixels, in frequency or other value domains, and linked inseparably with the source data structure. For the optimal application of watermarking technology, a trade-off has to be made between competing criteria such as robustness, non-perceptibility, non-detectability, and security. Most watermarking algorithms are resistant only to selected, application-specific attacks; even friendly attacks, in the form of routine file and data modifications, can therefore easily destroy or falsify the watermark. This chapter gives an overview of watermarking technologies, classification, methodology, applications, and problems.
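As a rough illustration of the embedding scheme just described, the following Python fragment hides watermark bits in the least significant bits of pseudo-randomly selected pixels, with the positions derived from a key standing in for the key-based pseudo-noise pattern. All names and values are illustrative; a production scheme would add synchronization, error correction, and robustness to compression and other transformations.

```python
# Minimal LSB-watermarking sketch; pixel values are 8-bit greyscale.
import random

def embed(pixels, bits, key):
    """Hide `bits` in the least significant bits of pseudo-randomly
    chosen pixels; the key seeds the pseudo-noise position pattern."""
    out = list(pixels)
    positions = random.Random(key).sample(range(len(out)), len(bits))
    for pos, bit in zip(positions, bits):
        out[pos] = (out[pos] & ~1) | bit  # overwrite the low-value bit
    return out

def extract(pixels, n_bits, key):
    """Recover the watermark; only the key holder knows the positions."""
    positions = random.Random(key).sample(range(len(pixels)), n_bits)
    return [pixels[pos] & 1 for pos in positions]

image = [200, 13, 77, 54, 91, 148, 222, 35]   # toy 8-pixel "image"
mark = [1, 0, 1, 1]
stego = embed(image, mark, key="secret")
assert extract(stego, len(mark), key="secret") == mark
print(stego)  # imperceptible: each chosen pixel changes by at most 1
```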


1995 ◽  
Vol 1 (2) ◽  
pp. 195-203
Author(s):  
John Fletcher ◽  
John Latham

Tourism activity and economic performance, particularly those of the major tourism-generating and receiving countries, are closely linked. The purpose of this section is to provide those indicators regarded as most relevant to international movements and spend. In one issue each year, information will be presented relating to the global picture and to each region, together with some detail on the most significant nations. The databank paper in Volume 1, Number 1 was the first in this category. In the remaining issues of the year, each issue will concentrate on a specific world region; in this case, economic indicators of tourism in Europe are provided, accompanied by a brief commentary. The main sources of data are the statistics published by the World Tourism Organisation (WTO), the World Travel and Tourism Council (WTTC), Eurostat, and the Organisation for Economic Cooperation and Development (OECD). The formal definitions associated with the tables presented are often detailed and lengthy, and so are not included here; the reader should consult the source material if necessary.


1998 ◽  
Vol 54 (6) ◽  
pp. 1065-1070 ◽  
Author(s):  
Peter Murray-Rust

The rapid growth of the World Wide Web provides major new opportunities for distributed databases, especially in macromolecular science. A new generation of technology, based on structured documents (SD), is being developed which will integrate documents and data in a seamless manner. This offers experimentalists the chance to publish and archive high-quality data from any discipline. Data and documents from different disciplines can be combined and searched using technology such as eXtensible Markup Language (XML) and its associated support for hypermedia (XLL), metadata (RDF) and stylesheets (XSL). Opportunities in crystallography and related disciplines are described.
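As a flavour of the structured-document approach, the fragment below builds and queries a small XML document in which narrative and numerical data share one tree. The element names are invented for illustration; a real crystallographic application would use a community schema such as CIF or CML.

```python
# Illustrative structured document: prose and data in one searchable tree.
import xml.etree.ElementTree as ET

doc = """
<paper>
  <title>Structure of a small-molecule crystal</title>
  <experiment>
    <cell a="5.43" b="5.43" c="5.43" units="angstrom"/>
    <atom element="Si" x="0.0" y="0.0" z="0.0"/>
    <atom element="Si" x="0.25" y="0.25" z="0.25"/>
  </experiment>
</paper>
"""

root = ET.fromstring(doc)
# The same tree answers document-style queries...
print(root.findtext("title"))
# ...and data-style ones, with no scraping of prose required.
for atom in root.iter("atom"):
    print(atom.get("element"), atom.get("x"), atom.get("y"), atom.get("z"))
```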


2007 ◽  
Vol 4 (3) ◽  
pp. 252-263 ◽  
Author(s):  
Allyson L. Lister ◽  
Matthew Pocock ◽  
Anil Wipat

Abstract The creation of quantitative, simulatable, Systems Biology Markup Language (SBML) models that accurately simulate the system under study is a time-intensive manual process that requires careful checking. Currently, the rules and constraints of model creation, curation, and annotation are distributed over at least three separate documents: the SBML schema document (XSD), the Systems Biology Ontology (SBO), and the “Structures and Facilities for Model Definition” document. The latter document contains the richest set of constraints on models, and yet it is not amenable to computational processing. We have developed a Web Ontology Language (OWL) knowledge base that integrates these three documents and contains a representative sample of the information within them. This Model Format OWL (MFO) performs both structural and constraint integration and can be reasoned over and validated. SBML models are represented as individuals of OWL classes, resulting in a single computationally amenable resource for model checking. Knowledge that was previously accessible only to humans is now explicitly and directly available for computational approaches. The integration of all structural knowledge for SBML models into a single resource creates a new style of model development and checking.
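The modelling style described, SBML elements as OWL individuals of classes drawn from a format ontology, can be sketched with a generic RDF library. The namespace, class, and property names below are invented for illustration and are not MFO's actual terms.

```python
# Hedged sketch using rdflib, not the authors' MFO toolchain.
from rdflib import Graph, Namespace, RDF, Literal
from rdflib.namespace import OWL, RDFS

MFO = Namespace("http://example.org/mfo#")     # placeholder namespace
MODEL = Namespace("http://example.org/model#")

g = Graph()
g.bind("mfo", MFO)

# A class from the (hypothetical) format ontology...
g.add((MFO.Species, RDF.type, OWL.Class))
# ...and a model element represented as an individual of that class,
# making it visible to a reasoner or validator.
g.add((MODEL.glucose, RDF.type, MFO.Species))
g.add((MODEL.glucose, MFO.hasInitialConcentration, Literal(5.0)))
g.add((MODEL.glucose, RDFS.label, Literal("glucose")))

print(g.serialize(format="turtle"))
```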


2007 ◽  
Vol 22 (3) ◽  
pp. 255-268 ◽  
Author(s):  
MARCELO TALLIS ◽  
RAND WALTZMAN ◽  
ROBERT BALZER

Abstract We exploit the spreadsheet metaphor to make deductive problem-solving methods available to the vast population of spreadsheet end-users. In particular, we show how the function-based problem-solving capabilities of spreadsheets can be extended to include logical deductive methods in a way that is consistent with the existing spreadsheet ‘look and feel’. The foundation of our approach is the integration of a standard deductive logic system into a successful Commercial-Off-The-Shelf (COTS) spreadsheet. We have demonstrated this by designing and implementing an extension to Excel that manages the integration of Excel and a deductive logic engine based on the World Wide Web Consortium (W3C) standard ontology language OWL + SWRL.
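To make the idea concrete, here is a toy sketch, in Python rather than Excel, of deduction layered over spreadsheet-style cells: facts live in cells, and a forward-chaining rule derives new “cells” to a fixed point. It stands in for, and is far simpler than, the OWL + SWRL engine the paper integrates.

```python
# Toy deductive layer over spreadsheet-style cells (illustrative only).
cells = {
    "A1": ("Socrates", "is_a", "human"),
    "A2": ("human", "subclass_of", "mortal"),
}

def derive(facts):
    """Apply one rule to a fixed point:
    x is_a C, C subclass_of D  =>  x is_a D."""
    facts = set(facts)
    while True:
        new = {
            (x, "is_a", d)
            for (x, p1, c) in facts if p1 == "is_a"
            for (c2, p2, d) in facts if p2 == "subclass_of" and c2 == c
        } - facts
        if not new:
            return facts
        facts |= new

derived = derive(cells.values())
print(("Socrates", "is_a", "mortal") in derived)  # True: a deduced "cell"
```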


1996 ◽  
Vol 26 (3) ◽  
pp. 333-366 ◽  
Author(s):  
John D. Norton

Whatever the original intent, the introduction of the term ‘thought experiment’ has proved to be one of the great public relations coups of science writing. For generations of readers of scientific literature, the term has planted the seed of hope that the fragment of text they have just read is more than mundane. Because it was a thought experiment, does it not tap into that infallible font of all wisdom in empiricist science, the experiment? And because it was conducted in thought, does it not miraculously escape the need for the elaborate laboratories and bloated budgets of experimental science? These questions in effect pose the epistemological problem of thought experiments in the sciences: thought experiments are supposed to give us information about our physical world. From where can this information come? One enticing response to the problem is to imagine that thought experiments draw from some special source of knowledge of the world that transcends our ordinary epistemic resources.

