The Living Atlases Community of Practice

Author(s):  
Marie-Elise Lecoq ◽  
Anne-Sophie Archambeau ◽  
Rui Figueira ◽  
David Martin ◽  
Sophie Pamerlon ◽  
...  

The power and configurability of the Atlas of Living Australia (ALA) tools have enabled more and more institutions and participants of the Global Biodiversity Information Facility to adapt and install biodiversity platforms. Over six years, we have demonstrated that a community around this platform was needed and ready for its adoption. During the symposium organized at SPNHC+TDWG 2018, we started a discussion that led to the creation of a more structured and sustainable community of practice. We want to create a community that follows the structure of open-source technical projects such as the Linux Foundation or the Apache Software Foundation. After the GBIF Governing Board meeting (GB25), the Kilkenny accord was agreed among eight country and institution partners and early adopters of the ALA platform to outline the scope of the new Living Atlases community. Building on this accord, we have begun to set up a new structure based on the Community of Practice (CoP) model. In summary, governance will be held by a Management Committee and a Technical Advisory Committee; in addition, the Living Atlases community will have two coordinators with technical and administrative duties. This presentation will briefly summarise the community history leading up to the agreement of the Kilkenny accord and provide context for the key points it contains. We will then present and launch the new Living Atlases Community of Practice. Through this presentation, we aim to collect lessons learned and good practices from other CoPs on topics such as governance, communications, and sustainability, and to incorporate them into the consolidation process of the Living Atlases community.

Author(s):  
Mélianie Raymond ◽  
Andrew Rodrigues ◽  
Laura Anne Russell

Biodiversity Information for Development (BID) is a programme funded by the European Union and led by the Global Biodiversity Information Facility (GBIF), aiming to increase the amount of biodiversity information available for use in scientific research and policymaking. In its first phase, between 2015 and 2019, BID provided funding to 61 projects in the nations of sub-Saharan Africa, the Caribbean and the Pacific, with a strong focus on developing capacity to mobilize, manage and use biodiversity data within the project teams and their institutions, and through the establishment and strengthening of national nodes. The capacity development approach centred on establishing a community of practice to draw on the expertise of the broader GBIF network in supporting the project teams to meet their goals. This involved designing curricula for two workshops in the areas of data mobilization and data use for decision making; developing activities and materials to strengthen a base of mentors and trainers; establishing technical helpdesk support; and matchmaking to provide mentoring support to the funded projects. The community of practice, through mentoring and reuse of the workshop materials, has been expanded to support capacity development needs in other programmes, reaching other regions, including Asia, South-East Europe and Eurasia. During this presentation, we will review the main findings of the BID impact study and guiding examples from within the BID programme to identify the key successes and lessons learned relating to capacity development. As this approach has wider application to the biodiversity community, we invite discussion on how we can build on the experience gained through the BID programme to further develop our community of practice, narrowing knowledge gaps between the various groups of biodiversity professionals.


2018 ◽  
Vol 2 ◽  
pp. e25738 ◽  
Author(s):  
Arturo Ariño ◽  
Daniel Noesgaard ◽  
Angel Hjarding ◽  
Dmitry Schigel

Standards set up by Biodiversity Information Standards-Taxonomic Databases Working Group (TDWG), initially developed as a way to share taxonomic data, greatly facilitated the establishment of the Global Biodiversity Information Facility (GBIF) as the largest index to digitally accessible primary biodiversity information records (PBR) held by many institutions around the world. The level of detail and coverage of the body of standards that later became the Darwin Core terms enabled increasingly precise retrieval of relevant records, adding to the pool of digitally accessible knowledge (DAK) that, in turn, may help address ecologically relevant questions. After more than a decade of data accrual and release, an increasing number of papers and reports cite GBIF either as a source of data or as a pointer to the original datasets. GBIF has curated a list of over 5,000 such citations, which were examined for content and tagged with additional keywords describing that content. The list now provides a window on what users want to accomplish with such DAK. We performed a preliminary word-frequency analysis of this literature, which refers to GBIF as a resource, starting with titles. Through standardization and mapping of terms, we examined how the facility-enabled data seem to have been used by scientists and other practitioners over time: which concepts and issues are pervasive, which taxon groups are addressed most often, and whether data concentrate around specific geographical or biogeographical regions. We hoped to shed light on which types of ecological problems the community believes are amenable to study through judicious use of this data commons, and found that, indeed, a few themes were mentioned distinctly more frequently than others. Among those, generally perceived issues such as climate change and its effect on biodiversity at global and regional scales seemed prevalent. Taxonomic groups were also mentioned unevenly, with birds and plants being the most frequently named. However, the list of potential subjects addressed with GBIF-enabled data is now quite wide, showing that the availability of well-structured data has spawned a widening spectrum of use cases. Among them, some enjoy an early and continuous presence (e.g. species, biodiversity, climate) while others only started to appear later, once a critical mass of data seemed to have been attained (e.g. ecosystems, suitability, endemism). Biodiversity information in the form of standards-compliant DAK may thus already have become a commodity enabling insight into an increasingly complex and diverse body of science. Paraphrasing Tennyson, more things were wrought by data than TDWG dreamt of.
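
As a rough illustration of the title-based word-frequency analysis described above (a sketch, not the authors' actual pipeline), the code below counts standardized terms in a hypothetical CSV export of the curated citation list; the file name "gbif_citations.csv", the "title" column, and the small stopword list are assumptions.

```python
# Minimal sketch: term frequencies across paper titles citing GBIF.
# The CSV path and column name are hypothetical placeholders.
import csv
import re
from collections import Counter

STOPWORDS = {"the", "of", "and", "in", "a", "an", "for", "on", "to", "with", "from"}

def term_frequencies(csv_path, title_column="title"):
    """Count lower-cased, stopword-filtered terms across titles."""
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            # Lowercase and keep only alphabetic tokens before counting.
            tokens = re.findall(r"[a-z]+", row[title_column].lower())
            counts.update(t for t in tokens if t not in STOPWORDS)
    return counts

if __name__ == "__main__":
    for term, n in term_frequencies("gbif_citations.csv").most_common(20):
        print(f"{term}\t{n}")
```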


Author(s):  
Laurence Bénichou ◽  
Isabelle Gerard ◽  
Chloé Chester ◽  
Donat Agosti

The European Journal of Taxonomy (EJT) was initiated by a consortium of European natural history publishers to take advantage of the shift from paper to electronic-only publishing (Benichou et al. 2011). While publishing in PDF format was originally considered the state of the art, it has recently become obvious that complementary dissemination channels help to disseminate taxonomic data - one of the pillars of research at natural history institutions - more widely and efficiently (Côtez et al. 2018). The adoption of semantic markup and the assignment of persistent identifiers to content allow more comprehensive citation of an article and of elements within it, such as images, taxonomic treatments, and materials citations. They also allow more in-depth analyses and visualization of the contributions of collections, authors, or specimens to taxonomic output, and enable third parties, such as the Global Biodiversity Information Facility, to reuse the data or to build the Catalogue of Life. In this presentation, EJT will be used to outline the nature of natural history publishers and their technical set-up. This is followed by a description of the post-publishing workflow using the Plazi workflow and dissemination via the Biodiversity Literature Repository (BLR) and TreatmentBank. The presentation outlines switching the publishing workflow to an increased use of the Extensible Markup Language (XML) and visualization of the output, and concludes with publishing guidelines that enable more efficient text and data mining of the content of taxonomic publications.
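
To illustrate the kind of text and data mining that such markup enables, a minimal sketch follows; the element name "treatment" and the file name "article.xml" are placeholders, not the actual EJT or Plazi schema and workflow.

```python
# Minimal sketch: extract the plain text of marked-up taxonomic treatments
# from an XML article. Tag and file names are hypothetical placeholders.
import xml.etree.ElementTree as ET

def extract_treatments(xml_path, treatment_tag="treatment"):
    """Yield the concatenated text of each treatment element."""
    root = ET.parse(xml_path).getroot()
    for elem in root.iter(treatment_tag):
        # itertext() walks all nested text nodes of the treatment element.
        yield " ".join(elem.itertext()).strip()

if __name__ == "__main__":
    for i, text in enumerate(extract_treatments("article.xml"), start=1):
        print(f"--- treatment {i} ---")
        print(text[:200])
```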


2021 ◽  
Author(s):  
Spencer Shirk ◽  
Danielle Kerr ◽  
Crystal Saraceni ◽  
Garret Hand ◽  
Michael Terrenzi ◽  
...  

Upon the U.S. FDA approval in early November of a monoclonal antibody shown to potentially mitigate adverse outcomes from coronavirus disease 2019 (COVID-19) infections, our small overseas community hospital, U.S. Naval Hospital Rota, Spain (USNH Rota), requested and received a limited number of doses. Concurrently, our host nation, which had previously reported the highest number of daily deaths from COVID-19, was deep within a second wave of infections, with increasing hospital admissions, intensive care units near capacity, and rising deaths. As USNH Rota was not normally equipped for the complex infusion center required to deliver the monoclonal antibody effectively, we coordinated a multi-directorate and multidisciplinary effort to set up an infusion room dedicated to the fight against COVID-19. Led by a physician team lead, with subject matter experts from nursing, pharmacy, facilities, and the enlisted corpsman staff, our team ensured that all requisite steps were in place in advance so that appropriate patients could be identified proactively and treated safely with an infusion clinically proven to decrease hospital admissions and mortality. Additional benefits included the establishment of an additional negative-pressure room near our emergency room for both COVID-19 patients and, when needed, the monoclonal antibody infusion. In mid-January, a COVID-19-positive patient meeting the clinical criteria for monoclonal antibody infusion was safely administered this potentially life-saving medication, a first for small overseas hospitals. Here, we describe the preparation, challenges, obstacles, lessons learned, and successful outcomes in effectively using the monoclonal antibody overseas.


Author(s):  
Katharine Barker ◽  
Jonas Astrin ◽  
Gabriele Droege ◽  
Jonathan Coddington ◽  
Ole Seberg

Most successful research programs depend on easily accessible and standardized research infrastructures. Until recently, access to tissue or DNA samples with standardized metadata and of sufficiently high quality has been a major bottleneck for genomic research. The Global Genome Biodiversity Network (GGBN) fills this critical gap by offering standardized, legal access to samples. Presently, GGBN’s core activity is enabling access to searchable DNA and tissue collections across natural history museums and botanic gardens. Activities are gradually being expanded to encompass all kinds of biodiversity biobanks, such as culture collections, zoological gardens, aquaria, arboreta, and environmental biobanks. Broadly speaking, these collections all provide long-term storage and standardized public access to samples useful for molecular research. GGBN facilitates sample search and discovery for its distributed member collections through a single entry point. It stores standardized information on mostly geo-referenced, vouchered samples, their physical location, availability, quality, and the necessary legal information, covering over 50,000 species of Earth’s biodiversity, from unicellular to multicellular organisms. The GGBN Data Portal and the GGBN Data Standard are complementary to existing infrastructures such as the Global Biodiversity Information Facility (GBIF) and the International Nucleotide Sequence Database Collaboration (INSDC). Today, many well-known open-source collection management databases, such as Arctos, Specify, and Symbiota, are implementing the GGBN data standard. GGBN continues to grow its collections strategically, based on the needs of the research community, adding over 1.3 million online records in 2018 alone; today, two million sample records are available through GGBN. Together with the Consortium of European Taxonomic Facilities (CETAF), the Society for the Preservation of Natural History Collections (SPNHC), Biodiversity Information Standards (TDWG), and Synthesis of Systematic Resources (SYNTHESYS+), GGBN provides best practices for biorepositories on meeting the requirements of the Nagoya Protocol on Access and Benefit Sharing (ABS). Through collaboration with the Biodiversity Heritage Library (BHL), GGBN is exploring options for tagging publications that reference GGBN collections and associated specimens, made searchable through GGBN’s document library. Through its collaborative efforts, standards, and best practices, GGBN aims to facilitate trust and transparency in the use of genetic resources.


Author(s):  
Erica Krimmel ◽  
Austin Mast ◽  
Deborah Paul ◽  
Robert Bruhn ◽  
Nelson Rios ◽  
...  

Genomic evidence suggests that the causative virus of COVID-19 (SARS-CoV-2) was introduced to humans from horseshoe bats (family Rhinolophidae) (Andersen et al. 2020) and that species in this family, as well as in the closely related families Hipposideridae and Rhinonycteridae, are reservoirs of several SARS-like coronaviruses (Gouilh et al. 2011). Specimens collected over the past 400 years and curated by natural history collections around the world provide an essential reference as we work to understand the distributions, life histories, and evolutionary relationships of these bats and their viruses. While the importance of biodiversity specimens to emerging infectious disease research is clear, empowering disease researchers with specimen data is a relatively new goal for the collections community (DiEuliis et al. 2016). Recognizing this, a team from Florida State University is collaborating with partners at GEOLocate, Bionomia, the University of Florida, the American Museum of Natural History, and Arizona State University to produce a deduplicated, georeferenced, vetted, and versioned data product of the world's specimens of horseshoe bats and relatives for researchers studying COVID-19. The project will serve as a model for future rapid deployments of data products about biodiversity specimens. It underscores the value of the biodiversity data aggregators iDigBio and the Global Biodiversity Information Facility (GBIF), which, as of July 2020, are sources for 58,617 and 79,862 records, respectively, of horseshoe bat and relative specimens held by over one hundred natural history collections. Although much of the specimen-based biodiversity data served by iDigBio and GBIF is of high quality, it can be considered raw data and therefore often requires additional wrangling, standardizing, and enhancement to be fit for specific applications. The project will create efficiencies for the coronavirus research community by producing an enhanced, research-ready data product, which will be versioned and published through Zenodo, an open-access repository (see doi.org/10.5281/zenodo.3974999). In this talk, we highlight lessons learned from the initial phases of the project, including deduplicating specimen records, standardizing country information, and enhancing taxonomic information. We also report on our progress to date in enhancing information about agents (e.g., collectors or determiners) associated with these specimens and in georeferencing specimen localities. We further explore how far the added agent information (i.e., ORCID iDs and Wikidata Q identifiers) can inform our georeferencing efforts and support crediting those who collected and identified the specimens. The project will georeference approximately one third of our specimen records, based on those lacking geospatial coordinates but containing textual locality descriptions. We furthermore provide an overview of our holistic approach to enhancing specimen records, which we hope will maximize the value of the bat specimens at the center of what has recently been termed the "extended specimen network" (Lendemer et al. 2020). The centrality of the physical specimen in the network reinforces the importance of archived materials for reproducible research. Recognizing this, we view the collections providing data to iDigBio and GBIF as essential partners, as we expect that they will be responsible for the long-term management of enhanced data associated with the physical specimens they curate. We hope that this project can provide a model for better facilitating the reintegration of enhanced data back into local specimen data management systems.
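
As a hedged illustration of the deduplication step mentioned above (a sketch, not the project's published method), the code below merges two aggregator exports and keeps one row per specimen; the input file names and the use of institutionCode, collectionCode, and catalogNumber as the specimen key are assumptions.

```python
# Minimal sketch: deduplicate specimen records pulled from two aggregators.
# File names and key fields are hypothetical; real exports may differ.
import pandas as pd

KEY_FIELDS = ["institutionCode", "collectionCode", "catalogNumber"]

def deduplicate(idigbio_csv, gbif_csv):
    """Concatenate two Darwin Core exports and keep one row per specimen key."""
    frames = [pd.read_csv(path, dtype=str) for path in (idigbio_csv, gbif_csv)]
    merged = pd.concat(frames, ignore_index=True)
    # Normalize the key fields so trivial formatting differences do not
    # prevent matching, then drop records that share the same key.
    for col in KEY_FIELDS:
        merged[col] = merged[col].str.strip().str.lower()
    return merged.drop_duplicates(subset=KEY_FIELDS, keep="first")

if __name__ == "__main__":
    specimens = deduplicate("idigbio_rhinolophidae.csv", "gbif_rhinolophidae.csv")
    print(f"{len(specimens)} unique specimen records")
```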


2020 ◽  
Vol 245 ◽  
pp. 07003
Author(s):  
Christoph Beyer ◽  
Thomas Finnern ◽  
Martin Flemming ◽  
Andreas Gellrich ◽  
Thomas Hartmann ◽  
...  

Within WLCG, the DESY site in Hamburg is one of the largest Tier-2 sites, with about 18,500 CPU cores for Grid workloads. Additionally, about 8,000 CPU cores are available for interactive user analyses in the National Analysis Factory (NAF). After migrating these two batch systems onto a common HTCondor-based set-up over the previous four years, we recapitulate the lessons learned during the transition, especially since the two use cases differ in their workloads. For Grid jobs, start-up latencies are negligible and the primary focus is on optimal utilization of the resources. By contrast, users of the NAF expect high responsiveness from the batch system as well as from the storage used for interactive analyses. In this document, we also give an outlook on future developments and concepts for DESY high-throughput computing. In the ongoing evolution of the HTC batch system, we are exploring how to integrate anonymous jobs with the batch system as a back-end for Function-as-a-Service workflows, as well as options for dynamic expansion to remote computing resources.
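
For readers unfamiliar with HTCondor, the sketch below shows a plain job submission through the HTCondor Python bindings on a set-up of this kind; it is illustrative only, not DESY's configuration, and the accounting group, resource requests, and script name are assumptions.

```python
# Minimal sketch: submit one batch job via the HTCondor Python bindings.
# All values below are illustrative placeholders, not DESY settings.
import htcondor

job = htcondor.Submit({
    "executable": "/usr/bin/python3",
    "arguments": "analysis.py",              # hypothetical user analysis script
    "request_cpus": "1",
    "request_memory": "2GB",
    "accounting_group": "naf_interactive",   # hypothetical accounting group
    "output": "job.out",
    "error": "job.err",
    "log": "job.log",
})

schedd = htcondor.Schedd()            # connect to the local scheduler
result = schedd.submit(job, count=1)  # returns a SubmitResult
print(f"submitted cluster {result.cluster()}")
```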

