How about Habitats and Darwin Core?

Author(s):  
Remy Jomier ◽  
Remy Poncet ◽  
Noemie Michez

As part of the Biodiversity Information System on Nature and Landscapes (SINP), the French National Museum of Natural History was appointed to develop biodiversity data exchange standards, with the goal of sharing French marine and terrestrial data nationally, meeting domestic and European requirements, e.g., the Infrastructure for Spatial Information in Europe Directive (INSPIRE Directive, European Commission 2007). Data standards are now recognised as useful to improve and share biodiversity knowledge (e.g., species distribution) and play a key role in data valorisation (e.g., vulnerability assessment, conservation policy). For example, in order to fulfil reporting obligations under the Fauna and Flora Habitats Directive (European Commission 1992) and the Marine Strategy Framework Directive (European Commission 2008), information about taxon and habitat occurrences is required periodically, involving data exchange and compilation at a national scale. National and international data exchange standards are focused on species, and only a few solutions exist when there is a need to deal with habitat data. Darwin Core was built to fit species data exchange needs and contains only one habitat attribute, which allows some leeway for such information transfer but is deemed one of the least standardized fields. Moreover, Darwin Core does not allow the transfer of habitat-only data, as the scientific name of the taxon is mandatory. The SINP standard for habitats was developed by a dedicated working group representative of biodiversity stakeholders in France. This standard focuses on core attributes that characterise habitat observation and monitoring. Interoperability remains to be achieved with the Darwin Core standard, or something similar on a world scale (e.g., Humboldt Core), as habitat data are regularly gathered irrespective of whether taxon occurrences are associated with them.
The results of the French initiative proved useful to compile and share data nationally, bringing together data providers that would otherwise have been excluded. However, at a global scale, it faces some challenges that have yet to be fully addressed, interoperability being the main one. Regardless of the problems that remain to be solved, some lessons can be learnt from this effort. With the ultimate goal of making biodiversity data readily available, these lessons should be kept in mind for future initiatives. The presentation deals with how this work was undertaken and how the required elements could be integrated into a French national standard to allow for comprehensive habitat data reporting. It will present hypotheses as to what could be added to Darwin Core to allow for a better understanding of habitats, whether or not at least one taxon is attached to them.
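The constraint discussed above can be sketched in a few lines. This is a hypothetical illustration, not SINP or TDWG code: the term names (scientificName, habitat) follow the public Darwin Core vocabulary, but the validation rule itself is an assumption made for demonstration.

```python
# Hypothetical sketch: a minimal Darwin Core-style occurrence record.
# scientificName is mandatory for an occurrence, which is why habitat-only
# data cannot travel through this schema unchanged; habitat is free text,
# one of the least standardized Darwin Core terms.

def validate_occurrence(record):
    """Return a list of problems with a Darwin Core-style record."""
    problems = []
    if not record.get("scientificName"):
        problems.append("missing scientificName: habitat-only data cannot be exchanged")
    if "habitat" in record and not isinstance(record["habitat"], str):
        problems.append("habitat must be free text")
    return problems

species_record = {
    "scientificName": "Posidonia oceanica",
    "habitat": "Mediterranean seagrass meadow",   # unconstrained free text
    "decimalLatitude": 43.1,
    "decimalLongitude": 5.9,
}
habitat_only = {"habitat": "Mediterranean seagrass meadow"}

print(validate_occurrence(species_record))  # passes: empty problem list
print(validate_occurrence(habitat_only))    # fails: no scientificName
```

The habitat-only record is exactly the case the SINP habitat standard addresses and Darwin Core currently rejects.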

Author(s):  
Jennifer Hammock ◽  
Katja Schulz

The Encyclopedia of Life currently hosts ~8M attribute records for ~400k taxa (March 2019, not including geographic categories, Fig. 1). Our aggregation priorities include Essential Biodiversity Variables (Kissling et al. 2018) and other global scale research data priorities. Our primary strategy remains partnership with specialist open data aggregators; we are also developing tools for the deployment of evolutionarily conserved attribute values that scale quickly for global taxonomic coverage, for instance: tissue mineralization type (aragonite, calcite, silica...); trophic guild in certain clades; sensory modalities. To support the aggregation and integration of trait information, data sets should be well structured, properly annotated and free of licensing or contractual restrictions so that they are ‘findable, accessible, interoperable, and reusable’ for both humans and machines (FAIR principles; Wilkinson et al. 2016). To this end, we are improving the documentation of protocols for the transformation, curation, and analysis of EOL data, and associated scripts and software are made available to ensure reproducibility. Proper acknowledgement of contributors and tracking of credit through derived data products promote both open data sharing and the use of aggregated resources. By exposing unique identifiers for data products, people, and institutions, data providers and aggregators can stimulate the development of automated solutions for the creation of contribution metrics. Since different aspects of provenance will be significant depending on the intended data use, better standardization of contributor roles (e.g., author, compiler, publisher, funder) is needed, as well as more detailed attribution guidance for data users. Global scale biodiversity data resources should resolve into a graph, linking taxa, specimens, occurrences, attributes, localities, and ecological interactions, as well as human agents, publications and institutions. 
Two key data categories for ensuring rich connectivity in the graph will be taxonomic and trait data. This graph can be supported by existing data hubs, if they share identifiers and/or create mappings between them, using standards and sharing practices developed by the biodiversity data community. Versioned archives of the combined graph could be published at intervals to appropriate open data repositories, and open source tools and training provided for researchers to access the combined graph of biodiversity knowledge from all sources. To achieve this, good communication among data hubs will be needed. We will need to share information about preferred vocabularies and identifier management practices, and collaborate on identifier mappings.
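The graph-of-hubs idea above can be sketched with plain triples and an identifier mapping. All identifiers and predicate names below are invented stand-ins, not EOL's or GBIF's actual data model; the point is only that a shared mapping lets records minted by different hubs resolve to one node.

```python
# Illustrative sketch: a biodiversity knowledge graph as
# subject-predicate-object triples, with a mapping table that lets two
# data hubs share identifiers for the same taxon.

triples = set()

def add(s, p, o):
    triples.add((s, p, o))

# Two hubs mint their own identifiers for the same taxon; a mapping
# (hypothetical values) records the equivalence.
same_as = {"gbif:2435099": "eol:328672"}

add("eol:328672", "hasTrait", "trophicGuild:carnivore")
add("specimen:XYZ-1", "occurrenceOf", "gbif:2435099")
add("specimen:XYZ-1", "collectedBy", "orcid:0000-0002-1825-0097")

def canonical(node):
    """Resolve a node through the identifier mapping."""
    return same_as.get(node, node)

# After canonicalisation, the trait and the occurrence attach to one node,
# so the specimen, the taxon, its trait and the collector form one graph.
merged = {(canonical(s), p, canonical(o)) for s, p, o in triples}
print(sorted(merged))
```

Without the `same_as` mapping, the trait record and the occurrence record would sit on disconnected nodes, which is the fragmentation the abstract warns about.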


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 708
Author(s):  
Wenbo Liu ◽  
Fei Yan ◽  
Jiyong Zhang ◽  
Tao Deng

The quality of detected lane lines has a great influence on the driving decisions of unmanned vehicles. However, during unmanned vehicle driving, changes in the driving scene cause much trouble for lane detection algorithms. Unclear and occluded lane lines cannot be reliably detected by most existing lane detection models in many complex driving scenes, such as crowded scenes, poor lighting conditions, etc. In view of this, we propose a robust lane detection model using vertical spatial features and contextual driving information in complex driving scenes. More effective use of contextual information and vertical spatial features enables the proposed model to detect unclear and occluded lane lines more robustly, via two designed blocks: a feature merging block and an information exchange block. The feature merging block provides increased contextual information to the subsequent network, which enables the network to learn more feature details to help detect unclear lane lines. The information exchange block is a novel block that combines the advantages of spatial convolution and dilated convolution to enhance the transfer of information between pixels. The addition of spatial information allows the network to better detect occluded lane lines. Experimental results show that our proposed model can detect lane lines more robustly and precisely than state-of-the-art models in a variety of complex driving scenarios.
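The benefit of dilated convolution that the information exchange block exploits can be seen in one dimension: with the same number of weights, a dilated kernel covers a wider span of pixels, so information travels farther per layer. This is a toy pure-Python sketch, not the paper's 2D block; the row of "lane-marking pixels" is invented.

```python
# Toy 1D dilated convolution (valid mode, no padding) to show how dilation
# widens the receptive field without adding weights.

def conv1d(signal, kernel, dilation=1):
    """Valid-mode 1D convolution with optional dilation."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # pixels seen by one output
    return [
        sum(kernel[j] * signal[i + j * dilation] for j in range(k))
        for i in range(len(signal) - span + 1)
    ]

row = [0, 0, 1, 0, 0, 0, 1, 0, 0]          # toy row of lane-marking pixels
kernel = [1, 1, 1]

print(conv1d(row, kernel, dilation=1))      # -> [1, 1, 1, 0, 1, 1, 1]
print(conv1d(row, kernel, dilation=2))      # -> [1, 0, 2, 0, 1]
```

With dilation 2 the same three weights span five pixels, and one output (value 2) already sees both separated markings, which is why dilated layers help bridge occluded gaps between lane-line fragments.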


2021 ◽  
Vol 92 (3) ◽  
pp. 1854-1875 ◽  
Author(s):  
Klaus Stammler ◽  
Monika Bischoff ◽  
Andrea Brüstle ◽  
Lars Ceranna ◽  
Stefanie Donner ◽  
...  

Germany has a long history in seismic instrumentation. The installation of the first station sites was initiated in those regions with seismic activity. Later on, with an increasing need for seismic hazard assessment, seismological state services were established over the course of several decades, using heterogeneous technology. In parallel, scientific research and international cooperation projects triggered the establishment of institutional and nationwide networks and arrays also focusing on topics other than monitoring local or regional areas, such as recording global seismicity or verifying compliance with the Comprehensive Nuclear-Test-Ban Treaty. At each of the observatories and data centers, an extensive analysis of the recordings is performed, providing high-level data products, for example earthquake catalogs, as a basis for supporting state or federal authorities, informing the public on topics related to seismology, and transferring information to international institutions. These data products are usually also accessible on the websites of the responsible organizations. The establishment of the European Integrated Data Archive (EIDA) led to a consolidation of existing waveform data exchange mechanisms and their definition as standards in Europe, along with a harmonization of the applied data quality assurance procedures. In Germany, the German Regional Seismic Network, as the national backbone network, and the state networks of Saxony, Saxony-Anhalt, Thuringia, and Bavaria spearheaded the national contributions to EIDA. The benefits of EIDA are attracting additional state and university networks, which are now about to join the EIDA community.


Author(s):  
Lyubomir Penev ◽  
Teodor Georgiev ◽  
Viktor Senderov ◽  
Mariya Dimitrova ◽  
Pavel Stoev

As one of the first advocates of open access and open data in the field of biodiversity publishing, Pensoft has adopted a multiple data publishing model, resulting in the ARPHA-BioDiv toolbox (Penev et al. 2017). ARPHA-BioDiv consists of several data publishing workflows and tools described in the Strategies and Guidelines for Publishing of Biodiversity Data and elsewhere: Data underlying research results are deposited in an external repository and/or published as supplementary file(s) to the article and then linked/cited in the article text; supplementary files are published under their own DOIs and bear their own citation details. Data are deposited in trusted repositories and/or supplementary files and described in data papers; data papers may be submitted in text format or converted into manuscripts from Ecological Metadata Language (EML) metadata. Integrated narrative and data publishing is realised by the Biodiversity Data Journal, where structured data are imported into the article text from tables or via web services and downloaded/distributed from the published article. Data are published in structured, semantically enriched, full-text XMLs, so that several data elements can thereafter easily be harvested by machines. Linked Open Data (LOD) are extracted from literature, converted into interoperable RDF triples in accordance with the OpenBiodiv-O ontology (Senderov et al. 2018) and stored in the OpenBiodiv Biodiversity Knowledge Graph.
The above-mentioned approaches are supported by a whole ecosystem of additional workflows and tools, for example: (1) pre-publication data auditing, involving both human and machine data quality checks (workflow 2); (2) web-service integration with data repositories and data centres, such as the Global Biodiversity Information Facility (GBIF), Barcode of Life Data Systems (BOLD), Integrated Digitized Biocollections (iDigBio), Data Observation Network for Earth (DataONE), Long Term Ecological Research (LTER), PlutoF, Dryad, and others (workflows 1, 2); (3) semantic markup of the article texts in the TaxPub format, facilitating further extraction, distribution and re-use of sub-article elements and data (workflows 3, 4); (4) server-to-server import of specimen data from GBIF, BOLD, iDigBio and PlutoF into manuscript text (workflow 3); (5) automated conversion of EML metadata into data paper manuscripts (workflow 2); (6) export of Darwin Core Archives and automated deposition in GBIF (workflow 3); (7) submission of individual images and supplementary data under their own DOIs to the Biodiversity Literature Repository, BLR (workflows 1-3); (8) conversion of key data elements from TaxPub articles and taxonomic treatments extracted by Plazi into RDF handled by OpenBiodiv (workflow 5).
These approaches represent different aspects of the prospective scholarly publishing of biodiversity data which, in combination with text and data mining (TDM) technologies for legacy literature (PDF) developed by Plazi, lay the ground for an entire data publishing ecosystem for biodiversity, supplying FAIR (Findable, Accessible, Interoperable and Reusable) data to several interoperable overarching infrastructures, such as GBIF, BLR, Plazi TreatmentBank and OpenBiodiv, and to various end users.
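The LOD step described above, extracting a statement from an article and serialising it as RDF, can be sketched as plain Turtle string building. The prefixes, class name and treatment identifier below are illustrative stand-ins, not the actual OpenBiodiv-O terms; only the dwc: namespace is the real Darwin Core one.

```python
# Hedged sketch of the literature-to-LOD workflow: serialise one taxonomic
# treatment as Turtle triples. Names other than dwc: are assumptions.

PREFIXES = """@prefix openbiodiv: <http://openbiodiv.net/> .
@prefix dwc: <http://rs.tdwg.org/dwc/terms/> .
"""

def to_turtle(treatment_id, scientific_name):
    """Serialize one treatment as a small block of Turtle."""
    return (
        f"openbiodiv:{treatment_id} a openbiodiv:Treatment ;\n"
        f'    dwc:scientificName "{scientific_name}" .\n'
    )

doc = PREFIXES + to_turtle("treatment-001", "Harmonia axyridis")
print(doc)
```

Each article processed this way contributes a few such triples, and the knowledge graph is simply their union under shared identifiers.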


2017 ◽  
Vol 15 (2) ◽  
pp. 301-320
Author(s):  
Maria Kaczorowska

The development of information technologies offers new possibilities for the use of information collected in public registers, such as land registers and cadastres, which play a significant role in establishing the infrastructure for spatial information. Efficient use of spatial information systems for sustainable land management should be based on ensuring the interconnection of different information resources, data exchange, as well as broad access to data. The role of land registration systems in the context of technological advancement was the subject of the Common Vision Conference 2016, "Migration to a Smart World", held on 5–7 June 2016 in Amsterdam. The conference was organized by Europe's five leading mapping, cadastre and land registry associations, cooperating within a "Common Vision" agreement: EuroGeographics, the Permanent Committee on Cadastre, the European Land Registries Association, the European Land Information Service and the Council of European Geodetic Surveyors. The discussion during the conference focused on topics regarding the idea of smart cities, marine cadastre, interoperability of spatial data, as well as the impact of land registers and cadastres on creating the infrastructure for spatial information and developing e-government, at both national and European levels. The paper aims to present an overview of issues covered by the conference and also to highlight some important problems arising from implementing advanced technology solutions in the field of land registration.


Author(s):  
Lauren Weatherdon

Ensuring that we have the data and information necessary to make informed decisions is a core requirement in an era of increasing complexity and anthropogenic impact. With cumulative challenges such as the decline in biodiversity and accelerating climate change, the need for spatially-explicit and methodologically-consistent data that can be compiled to produce useful and reliable indicators of biological change and ecosystem health is growing. Technological advances—including satellite imagery—are beginning to make this a reality, yet uptake of biodiversity information standards and scaling of data to ensure its applicability at multiple levels of decision-making are still in progress. The complementary Essential Biodiversity Variables (EBVs) and Essential Ocean Variables (EOVs), combined with Darwin Core and other data and metadata standards, provide the underpinnings necessary to produce data that can inform indicators. However, perhaps the largest challenge in developing global, biological change indicators is achieving consistent and holistic coverage over time, with recognition of biodiversity data as global assets that are critical to tracking progress toward the UN Sustainable Development Goals and Targets set by the international community (see Jensen and Campbell (2019) for discussion). Through this talk, I will describe some of the efforts towards producing and collating effective biodiversity indicators, such as those based on authoritative datasets like the World Database on Protected Areas (https://www.protectedplanet.net/), and work achieved through the Biodiversity Indicators Partnership (https://www.bipindicators.net/). I will also highlight some of the characteristics of effective indicators, and global biodiversity reporting and communication needs as we approach 2020 and beyond.


2019 ◽  
Vol 2 ◽  
Author(s):  
Lyubomir Penev

"Data ownership" is actually an oxymoron, because there could not be a copyright (ownership) on facts or ideas, hence no data onwership rights and law exist. The term refers to various kinds of data protection instruments: Intellectual Property Rights (IPR) (mostly copyright) asserted to indicate some kind of data ownership, confidentiality clauses/rules, database right protection (in the European Union only), or personal data protection (GDPR) (Scassa 2018). Data protection is often realised via different mechanisms of "data hoarding", that is witholding access to data for various reasons (Sieber 1989). Data hoarding, however, does not put the data into someone's ownership. Nonetheless, the access to and the re-use of data, and biodiversuty data in particular, is hampered by technical, economic, sociological, legal and other factors, although there should be no formal legal provisions related to copyright that may prevent anyone who needs to use them (Egloff et al. 2014, Egloff et al. 2017, see also the Bouchout Declaration). One of the best ways to provide access to data is to publish these so that the data creators and holders are credited for their efforts. As one of the pioneers in biodiversity data publishing, Pensoft has adopted a multiple-approach data publishing model, resulting in the ARPHA-BioDiv toolbox and in extensive Strategies and Guidelines for Publishing of Biodiversity Data (Penev et al. 2017a, Penev et al. 2017b). ARPHA-BioDiv consists of several data publishing workflows: Deposition of underlying data in an external repository and/or its publication as supplementary file(s) to the related article which are then linked and/or cited in-tex. Supplementary files are published under their own DOIs to increase citability). 
Description of data in data papers after they have been deposited in trusted repositories and/or as supplementary files; the system allows for data papers to be submitted both as plain text or converted into manuscripts from Ecological Metadata Language (EML) metadata. Import of structured data into the article text from tables or via web services and their subsequent download/distribution from the published article as part of the integrated narrative and data publishing workflow realised by the Biodiversity Data Journal. Publication of data in structured, semantically enriched, full-text XMLs where data elements are machine-readable and easy to harvest. Extraction of Linked Open Data (LOD) from literature, which is then converted into interoperable RDF triples (in accordance with the OpenBiodiv-O ontology) (Senderov et al. 2018) and stored in the OpenBiodiv Biodiversity Knowledge Graph.
In combination with text and data mining (TDM) technologies for legacy literature (PDF) developed by Plazi, these approaches show different angles to the future of biodiversity data publishing and lay the foundations of an entire data publishing ecosystem in the field, while also supplying FAIR (Findable, Accessible, Interoperable and Reusable) data to several interoperable overarching infrastructures, such as the Global Biodiversity Information Facility (GBIF), the Biodiversity Literature Repository (BLR), Plazi TreatmentBank and OpenBiodiv, as well as to various end users.
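The EML-to-manuscript conversion mentioned in the workflows can be sketched with the standard library: pull the dataset title and abstract out of an EML document and lay out a manuscript skeleton. The EML fragment is a simplified invention and the two fields chosen are assumptions; real EML documents are namespaced and far richer.

```python
# Illustrative sketch of the EML-to-data-paper step (assumed, simplified EML).

import xml.etree.ElementTree as ET

EML = """<eml>
  <dataset>
    <title>Ground beetles of the Bialowieza Forest</title>
    <abstract>Occurrence records collected 2005-2010.</abstract>
  </dataset>
</eml>"""

root = ET.fromstring(EML)
title = root.findtext("./dataset/title")
abstract = root.findtext("./dataset/abstract")

# Seed a manuscript skeleton from the metadata already written for the
# repository deposit, so the data paper costs the author little extra work.
manuscript = f"{title}\n\nAbstract\n\n{abstract}\n"
print(manuscript)
```

The attraction of this workflow is exactly this reuse: metadata authored once for a repository becomes the first draft of a citable data paper.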


Land ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 50
Author(s):  
Hae Ok Choi

In this study, we attempted to quantitatively determine the characteristics of keyword networks in the cadastre field using major contents of research drawn from international academic papers. Furthermore, we investigated the macroscopic evolution of cadastral research and examined its keyword network in detail (at a global scale) using semantic analysis. The analysis was carried out based on cadastral-research-related publications extracted from "Scopus" from 1987 to 2019. It was found that cadastre research has closely followed the recent trend of a growing interest in research on geospatial information and standardization. The results showed the advancement of technology innovation within the field of cadastres, as highlighted in the combination of relevant keywords (mostly those related to spatial information technology and participation of civilians). These new issues are expected to drive the evolution of the academic scope in the future through synthesis with other fields for smart land management policy.
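The keyword network analysis described above rests on a simple primitive: counting how often keyword pairs co-occur across papers. A toy sketch, with invented keyword lists standing in for the Scopus records:

```python
# Toy keyword co-occurrence network: papers' keyword lists become weighted
# edges between keywords; heavy edges mark the field's dominant pairings.

from collections import Counter
from itertools import combinations

papers = [
    ["cadastre", "GIS", "standardization"],
    ["cadastre", "spatial information", "GIS"],
    ["cadastre", "standardization", "e-government"],
]

edges = Counter()
for keywords in papers:
    # sort so each unordered pair has one canonical key
    for a, b in combinations(sorted(keywords), 2):
        edges[(a, b)] += 1

print(edges.most_common(2))
```

Run over decades of publications, the drift of the heaviest edges over time is what reveals the macroscopic evolution the study reports.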


Author(s):  
Josephine Wapakabulo Thomas

The focus of this research is to identify the factors and barriers critical to the adoption of data-exchange standards. Chapter Five identified these factors from an innovation-centric viewpoint, and the purpose of this chapter is to establish the factors that are relevant from an adopter-centric approach. This approach focuses on the adoption of an innovation, in this case standards, within an organization. The chosen organization for this research is the UK Ministry of Defence (MoD). However, in order to limit some of the bias of adopter-centric studies identified by West (1999), this chapter not only focuses on the adoption of an ISO data-exchange standard within the MoD, but also looks at the adoption of a regional and a UK national defence standard. It is hoped that by comparing the adoption of an ISO standard with a regional standard and a national standard, a better distinction can be made between the factors that are unique to the adoption of an ISO data-exchange standard, and those that are common to the adoption of any standard or innovation within the MoD.


2019 ◽  
Vol 11 (24) ◽  
pp. 2951 ◽  
Author(s):  
Soner Uereyen ◽  
Claudia Kuenzer

Regardless of political boundaries, river basins are a functional unit of the Earth's land surface and provide an abundance of resources for the environment and humans. They support livelihoods through the typical characteristics of large river basins, such as the provision of freshwater, irrigation water, and transport opportunities. At the same time, they are impacted, e.g., by human-induced environmental changes, boundary conflicts, and upstream–downstream inequalities. In the framework of water resource management, monitoring of river basins is therefore of high importance, in particular for researchers, stakeholders and decision-makers. However, land surface and surface water properties of many major river basins remain largely unmonitored at basin scale. Several inventories exist, yet consistent spatial databases describing the status of major river basins at global scale are lacking. Here, Earth observation (EO) is a potential source of spatial information providing large-scale data on the status of land surface properties. This review provides a comprehensive overview of existing research articles analyzing major river basins primarily using EO. Furthermore, this review proposes to exploit EO data together with relevant open global-scale geodata to establish a database and to enable consistent spatial analyses and evaluate past and current states of major river basins.

