Avenues into Integration: Communicating taxonomic intelligence from sender to recipient

Author(s):  
Beckett Sterner ◽  
Nathan Upham ◽  
Atriya Sen ◽  
Nico Franz

“What is crucial for your ability to communicate with me… pivots on the recipient’s capacity to interpret—to make good inferential sense of the meanings that the declarer is able to send” (Rescher 2000, p. 148). Conventional approaches to reconciling taxonomic information in biodiversity databases have been based on string matching for unique taxonomic name combinations (Kindt 2020, Norman et al. 2020). However, in their original context, these names pertain to specific usages or taxonomic concepts, which can subsequently vary for the same name as applied by different authors. Name-based synonym matching is a helpful first step (Guala 2016, Correia et al. 2018), but may still leave considerable ambiguity regarding proper usage (Fig. 1). The bioinformatic challenge of "taxonomic intelligence" is therefore to adequately represent, and subsequently propagate, this complex name/usage interaction across trusted biodiversity data networks. How do we ensure that senders and recipients of biodiversity data not only can share messages but do so with “good inferential sense” of their respective meanings? Key obstacles have involved the complexity of taxonomic name/usage modifications through time, both in accounting for and in digitally representing the long histories of taxonomic change in most lineages. An important critique of proposals to use name-to-usage relationships for data aggregation has been the difficulty of scaling them up to comprehensive coverage, in contrast to name-based global taxonomic hierarchies (Bisby 2011). The Linnaean system of nomenclature has some unfortunate design limitations in this regard: taxonomic names are not unique identifiers, their meanings may change over time, and a name as a string of characters does not encode its proper usage, i.e., the name “Genus species” does not specify a source defining how to use the name correctly (Remsen 2016, Sterner and Franz 2017). In practice, many people provide taxonomic names in their datasets or publications but not a source specifying a usage. The information needed to map the relationships between names and usages in taxonomic monographs or revisions is typically not presented in a machine-readable format. New approaches are making progress on these obstacles. Theoretical advances in the representation of taxonomic intelligence have made it increasingly possible to implement efficient querying and reasoning methods on name-usage relationships (Chen et al. 2014, Chawuthai et al. 2016, Franz et al. 2015). Perhaps most importantly, growing efforts by data providers and taxonomic authorities to produce name-usage mappings at medium scale suggest that an all-or-nothing approach is not required. Multiple high-profile biodiversity databases have implemented internal tools for explicitly tracking conflicting or dynamic taxonomic classifications, including eBird, using concept relationships from AviBase (Lepage et al. 2014); NatureServe, in its Biotics database; iNaturalist, using its taxon framework (Loarie 2020); and the UNITE database for fungi (Nilsson et al. 2019). Other ongoing projects incorporating taxonomic intelligence include the Flora of Alaska (Flora of Alaska 2020), the Mammal Diversity Database (Mammal Diversity Database 2020) and PollardBase for butterfly population monitoring (Campbell et al. 2020).
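The name/usage distinction at the heart of this abstract can be made concrete as a data structure. The sketch below is a minimal Python illustration, not drawn from any of the cited systems: the names, sources, and the "properly_includes" alignment label are hypothetical stand-ins for the set-based concept relations used in this literature.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaxonConcept:
    """A taxonomic name as used by a specific source ("Name sec. Source")."""
    name: str    # Linnaean name string, e.g. "Genus species"
    source: str  # publication or checklist defining the usage

# Hypothetical example: the same name string used in two different senses.
usage_a = TaxonConcept("Peromyscus maniculatus", "Osgood 1909")
usage_b = TaxonConcept("Peromyscus maniculatus", "MDD 2020")

# Concept-level alignments carry the "taxonomic intelligence" that the
# name string alone cannot: here, the later usage is narrower because
# some populations were split off into other species.
alignments = {
    (usage_a, usage_b): "properly_includes",  # sensu lato vs. sensu stricto
}

def same_meaning(c1: TaxonConcept, c2: TaxonConcept) -> bool:
    """String equality of names does not imply congruent concepts."""
    return c1 == c2 or alignments.get((c1, c2)) == "congruent"

print(same_meaning(usage_a, usage_b))  # False: same name, different usage
```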

2021 ◽  
Vol 22 (14) ◽  
pp. 7590
Author(s):  
Liza Vinhoven ◽  
Frauke Stanke ◽  
Sylvia Hafkemeyer ◽  
Manuel Manfred Nietert

Several causative therapeutics for cystic fibrosis (CF) patients have been developed, but there are still no mutation-specific therapeutics for some patients, especially those with rare CFTR mutations. For this purpose, high-throughput screens have been performed, yielding various candidate compounds with mostly unclear modes of action. In order to elucidate the mechanism of action of promising candidate substances and to predict possible synergistic effects of substance combinations, we used a systems biology approach to create a model of the CFTR maturation pathway in cells in a standardized, human- and machine-readable format. It is composed of a core map, manually curated from small-scale experiments in human cells, and a coarse map including interactors identified in large-scale efforts. The manually curated core map includes 170 different molecular entities and 156 reactions from 221 publications. The coarse map encompasses 1384 unique proteins from four publications. The overlap between the two data sources amounts to 46 proteins. The CFTR Lifecycle Map can be used to support the identification of potential targets inside the cell and to elucidate the mode of action of candidate substances. It thereby provides a backbone to structure available data as well as a tool to develop hypotheses regarding novel therapeutics.
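The reported 46-protein overlap between the core and coarse maps is, computationally, a set intersection over protein identifiers. A minimal sketch, with an invented handful of proteins standing in for the real maps:

```python
# Hypothetical sketch: the overlap between the manually curated core map
# and the large-scale coarse map is a set intersection over protein IDs.
core_map_proteins = {"CFTR", "HSPA8", "CANX", "DNAJB1"}    # invented subset
coarse_map_proteins = {"CFTR", "CANX", "SEC24A", "RNF5"}   # invented subset

overlap = core_map_proteins & coarse_map_proteins
print(sorted(overlap))  # ['CANX', 'CFTR']
print(len(overlap))     # for the actual maps, this count is 46
```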


Author(s):  
Kia Ng

This chapter describes an optical document imaging system that transforms paper-based music scores and manuscripts into a machine-readable format, together with a restoration system that touches up small imperfections (for example, broken stave lines and stems) to restore deteriorated master copies for reprinting. The chapter presents a brief background of this field, discusses the main obstacles, and presents the processes involved in printed music score processing, using a divide-and-conquer approach to sub-segment compound musical symbols (e.g., chords) and inter-connected groups (e.g., beamed quavers) into lower-level graphical primitives (e.g., lines and ellipses) before recognition and reconstruction. This is followed by a discussion of the development of a handwritten-manuscript prototype with a segmentation approach that separates handwritten musical primitives. Issues and approaches for recognition, reconstruction and revalidation using basic music syntax and high-level domain knowledge, and data representation are also presented.
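As a rough illustration of the low-level processing such a system performs, the sketch below locates (possibly broken) stave lines via a horizontal projection profile of a binarized score image. This is a common technique in optical music recognition, assumed here for illustration; it is not necessarily the chapter's actual implementation.

```python
import numpy as np

def stave_line_rows(binary_img: np.ndarray, threshold: float = 0.5) -> list[int]:
    """Return row indices likely to contain stave lines.

    binary_img: 2-D array with 1 for black (ink) pixels, 0 for white.
    A row whose ink density exceeds `threshold` of the page width is
    treated as part of a (possibly broken) stave line.
    """
    row_density = binary_img.sum(axis=1) / binary_img.shape[1]
    return [i for i, d in enumerate(row_density) if d > threshold]

# Toy 6x8 "image" with two solid horizontal lines (rows 1 and 4).
img = np.zeros((6, 8), dtype=int)
img[1, :] = 1
img[4, :] = 1
print(stave_line_rows(img))  # [1, 4]
```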


2009 ◽  
pp. 451-468
Author(s):  
Roberto Paiano ◽  
Anna Lisa Guido

In this chapter the focus is on business process design as the midpoint between requirements elicitation and the implementation of a Web information system. We address two problems: which notation to adopt in order to represent the business process simply, and how to represent the design formally in a machine-readable format. We adopt Semantic Web technology to represent the process, and we explain how this technology has been used to reach our goals.
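The chapter does not prescribe a concrete serialization, so the following is only one possible reading: a fragment of a business process expressed as RDF triples using the rdflib Python library, with an invented namespace and process vocabulary.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Hypothetical namespace; the chapter does not prescribe one.
BP = Namespace("http://example.org/businessprocess#")

g = Graph()
g.bind("bp", BP)

# A process with two activities and their ordering, as machine-readable triples.
g.add((BP.OrderFulfilment, RDF.type, BP.Process))
g.add((BP.ReceiveOrder, RDF.type, BP.Activity))
g.add((BP.ShipOrder, RDF.type, BP.Activity))
g.add((BP.ReceiveOrder, BP.partOf, BP.OrderFulfilment))
g.add((BP.ShipOrder, BP.partOf, BP.OrderFulfilment))
g.add((BP.ReceiveOrder, BP.followedBy, BP.ShipOrder))
g.add((BP.ReceiveOrder, RDFS.label, Literal("Receive customer order")))

print(g.serialize(format="turtle"))
```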


2020 ◽  
pp. 016555152097203
Author(s):  
Artem Chumachenko ◽  
Boris Kreminskyi ◽  
Iurii Mosenkis ◽  
Alexander Yakimenko

In the present era of information, the problem of effective knowledge retrieval from a collection of scientific documents is especially important for continuous scientific progress. The information available in scientific publications traditionally consists of bibliometric metadata and semantic components such as the title, abstract and text. While the former has a machine-readable format and is usually used for knowledge mapping and pattern recognition, the latter is designed for human interpretation and analysis. Only a few studies use full-text analysis, based on a carefully selected scientific ontology, to map the actual structure of scientific knowledge or uncover similarities between documents. Unfortunately, the presence of common (basic) concepts across semantically unrelated documents creates spurious connections between different topics. We revise the known method, based on an entropic information-theoretic measure, for selecting basic concepts, and propose analysing the dynamics of Shannon entropy for a more rigorous sorting of concepts by their generality.
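The entropy criterion can be sketched briefly: a concept whose occurrences are spread evenly across the corpus (high Shannon entropy) behaves like a general, basic concept, while one concentrated in a few documents is topic-specific. The corpus counts below are invented, and the published method's exact normalization may differ.

```python
import math

def shannon_entropy(doc_counts: list[int]) -> float:
    """Entropy of a concept's occurrence distribution over documents.

    High entropy -> occurrences spread evenly across the corpus,
    which flags the concept as a general (basic) one.
    """
    total = sum(doc_counts)
    probs = [c / total for c in doc_counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Invented occurrence counts per document for two concepts.
counts = {
    "model": [5, 4, 6, 5, 5, 4],   # spread evenly: general concept
    "CFTR":  [23, 1, 0, 0, 0, 0],  # concentrated: specific concept
}
for concept, per_doc in counts.items():
    print(concept, round(shannon_entropy(per_doc), 3))
# "model" scores ≈ 2.57 (near log2(6)); "CFTR" scores ≈ 0.25.
```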


2018 ◽  
Vol 2 ◽  
pp. e25914
Author(s):  
Holger Dettki ◽  
Peggy Newman ◽  
Sarah Davidson ◽  
Francesca Cagnacci

In recent years, bio-logging data, automatically gathered by sensors deployed on animals, has become one of the fastest growing sources of biodiversity data. This is largely due to the steadily declining mass, size and cost of sensors, which continuously opens new opportunities to monitor new species. While previously 'tracking data'—data from spatially enabled sensors such as GPS sensors—was most prominent, currently almost 70% of all bio-logging data is non-spatial data, e.g., physiological data. In contrast to the biodiversity data community, where standards to mobilize and exchange data are relatively well established, the bio-logging community still lacks standards to transport data from sensors into repositories, or to mobilize data in a standardized format from different repositories, to enable cooperation between users, shared software tools, data aggregation for meta-analysis, or a consistent format for long-term archiving. To set the stage for a discussion about standards for bio-logging data to be developed or adapted, we present a mind map describing the different pathways of bio-logging data during its life cycle, and the opportunities for standardization within this cycle. As an example, we present the use of the Open Geospatial Consortium (OGC) 'SensorML' and 'Observations & Measurements' standards to transfer bio-logging data from a sensor to a repository and ultimately to a user for subsequent analysis. These standards provide machine-readable methods for describing bio-logging sensors and the measurements they collect, offering a standardized structure that can be customized by the bio-logging community (e.g., with standardized vocabularies) to achieve interoperability.
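The Observations & Measurements model is serialization-neutral; as a rough sketch, one bio-logging measurement can be mapped onto O&M's core roles (procedure, observed property, feature of interest, phenomenon time, result). All identifiers and vocabulary URIs below are invented for illustration.

```python
import json
from datetime import datetime, timezone

# One heart-rate measurement from a tagged animal, mapped onto the core
# O&M observation roles. Identifiers and vocabulary URIs are invented.
observation = {
    "type": "Observation",
    "procedure": "urn:example:sensor:hr-logger-042",  # the deployed sensor
    "observedProperty": "http://example.org/vocab/heartRate",
    "featureOfInterest": "urn:example:animal:moose-117",
    "phenomenonTime": datetime(2018, 6, 1, 4, 30, tzinfo=timezone.utc).isoformat(),
    "result": {"value": 62, "uom": "beats/min"},
}

print(json.dumps(observation, indent=2))
```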


2021 ◽  
Author(s):  
Yan Hu ◽  
Shujian Sun ◽  
Thomas Rowlands ◽  
Tim Beck ◽  
Joram Matthias Posma

Motivation: The availability of improved natural language processing (NLP) algorithms and models enables researchers to analyse larger corpora using open-source tools. Text mining of biomedical literature is one area for which NLP has been used in recent years, with large untapped potential. However, in order to generate corpora that can be analysed using machine learning NLP algorithms, these corpora need to be standardized. Summarizing data from the literature for storage in databases typically requires manual curation, especially for extracting data from result tables. Results: We present here an automated pipeline that cleans HTML files from the biomedical literature. The output is a single JSON file that contains the text for each section, table data in a machine-readable format, and lists of phenotypes and abbreviations found in the article. We analysed a total of 2,441 Open Access articles from PubMed Central, from both Genome-Wide and Metabolome-Wide Association Studies, and developed a model to standardize the section headers based on the Information Artifact Ontology. Extraction of table data was developed on PubMed articles and fine-tuned using the equivalent publisher versions. Availability: The Auto-CORPus package is freely available with detailed instructions from GitHub at https://github.com/jmp111/AutoCORPus/.
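The single JSON output described above lends itself to straightforward downstream processing. The field names in this sketch ("sections", "tables", "phenotypes", "abbreviations") are assumptions for illustration only; consult the Auto-CORPus documentation for the actual schema.

```python
import json

# Hypothetical shape of the single-JSON-per-article output described above.
article_json = """
{
  "sections": [{"header": "Results", "text": "..."}],
  "tables": [{"caption": "Table 1", "rows": [["SNP", "p-value"]]}],
  "phenotypes": ["body mass index"],
  "abbreviations": {"GWAS": "genome-wide association study"}
}
"""

article = json.loads(article_json)
for section in article["sections"]:
    print(section["header"], "->", len(section["text"]), "chars")
print("tables:", len(article["tables"]))
print("phenotypes:", article["phenotypes"])
print("abbreviations:", article["abbreviations"])
```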


2021 ◽  
pp. 79-90
Author(s):  
Christian Zinke-Wehlmann ◽  
Amit Kirschenbaum ◽  
Raul Palma ◽  
Soumya Brahma ◽  
Karel Charvát ◽  
...  

Data is the basis for creating information and knowledge. Having data in a structured, machine-readable format facilitates its processing and analysis. Moreover, metadata—data about the data—can help in discovering data based on features such as by whom they were created, when, or for which purpose. These associated features make the data more interpretable and assist in turning them into useful information. This chapter briefly introduces the concepts of metadata and Linked Data—highly structured and interlinked data—along with their standards and usages, with some elaboration on the role of Linked Data in the bioeconomy.
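As a minimal illustration of metadata expressed as Linked Data, the sketch below attaches Dublin Core terms to a dataset using the rdflib Python library; the dataset URI and literal values are invented.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

# Invented dataset URI; DCTERMS is the standard Dublin Core vocabulary.
dataset = URIRef("http://example.org/data/field-survey-2020")
DCAT = Namespace("http://www.w3.org/ns/dcat#")

g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("dcat", DCAT)

# Metadata triples: by whom, when, and for which purpose the data was created.
g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.creator, Literal("Example Project")))
g.add((dataset, DCTERMS.issued, Literal("2020-05-01")))
g.add((dataset, DCTERMS.description,
       Literal("Machine-readable metadata makes the dataset discoverable.")))

print(g.serialize(format="turtle"))
```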

