An inventory of biodiversity data sources for conservation monitoring

PLoS ONE ◽  
2020 ◽  
Vol 15 (12) ◽  
pp. e0242923
Author(s):  
P. J. Stephenson ◽  
Carrie Stengel

Many conservation managers, policy makers, businesses and local communities cannot access the biodiversity data they need for informed decision-making on natural resource management. A handful of databases are used to monitor indicators against global biodiversity goals but there is no openly available consolidated list of global data sets to help managers, especially those in high-biodiversity countries. We therefore conducted an inventory of global databases of potential use in monitoring biodiversity states, pressures and conservation responses at multiple levels. We uncovered 145 global data sources, as well as a selection of global data reports, links to which we will make available on an open-access website. We describe trends in data availability and actions needed to improve data sharing. If the conservation and science community made a greater effort to publicise data sources, and make the data openly and freely available for the people who most need it, we might be able to mainstream biodiversity data into decision-making and help stop biodiversity loss.

2021 ◽  
pp. 527-553
Author(s):  
Agnes Zolyomi

Policy-makers define our lives to a great extent, and are therefore the people everybody wants to talk to. They receive hundreds of messages in various forms day by day, each aiming to make them decide for or against something. They are in an especially difficult situation as regards the so-called “wicked” or “diffuse” problems such as climate change and biodiversity loss (Millner and Olivier, 2015; Sharman and Mlambo, 2012; Zaccai and Adams, 2012). These problems are only partially tackled at the policy level despite their major socio-economic and environmental implications, which is often explained by their complexity and the perceived remoteness of their effects (Cardinale et al., 2012; WWF, 2018). Communicating advocacy or scientific messages about biodiversity is therefore both a challenge and an under-researched topic (Bekessy et al., 2018; Posner et al., 2016; Primmer et al., 2015; Wright et al., 2017), to which both the social and natural sciences, and both scientists and practitioners, need to contribute (Ainscough et al., 2019). To deliver messages successfully, communication needs to be not only self-explanatory and easy to consume but novel as well. It additionally helps if the message arrives in a more extraordinary format to draw even more attention. Based on experiences drawn from a conservation and advocacy NGO’s work, this chapter presents various socio-economic theories about creative methods, communication, and influencing decision-makers, through the lens of a campaign fighting for the preservation of key nature legislation. It demonstrates how different EU policy-makers, including representatives of the European Commission and Members of the European Parliament, the general public, and other stakeholders, were addressed with various messages and tools (e.g., short films, social media campaigns, fact sheets, involvement of champions). In addition to other key factors such as public support, knowledge of the target audience and political context, the probable impacts and limitations of these messages are also elaborated. The relevance of integrating better socio-economic theories into improved communication is straightforward. It is crucial to tailor future advocacy work on “wicked problems” such as biodiversity loss and climate change, since these are not usually backed by major lobby forces and are therefore financed inadequately relative to their significance. Understanding how policy-makers pick up or omit certain messages, and which framings, methods and channels are most effective in delivering messages to them, is pivotal for a more sustainable future.


Author(s):  
◽  
S. Saran ◽  
K. V. Ramana

Developing countries have to be very cautious in utilizing land, because land-use decisions affect food security, can damage the environment, and can create ecological imbalance in the process of establishing industries to raise people’s standard of living. India, a developing nation, currently has a sufficient amount of arable land and produces surplus food for its population, yet in recent decades it has been losing productive agricultural land to industry without proper scientific guidance. This is a major concern because it risks not only food scarcity but also dependency on other nations, however many industries the country has. A balance between the agricultural and manufacturing sectors must be maintained for the smooth running of the country’s economy. The purpose of this study is to assess recent land use changes in areas with potential for industrial establishment, using land suitability analysis (LSA), so as to support both agriculture and industry within sustainable development. Geographic Information Systems (GIS) and Multi-Criteria Decision Making (MCDM) are combined to identify suitable zones for industries. Six criteria in the Analytic Hierarchy Process (AHP) and nine criteria in the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) are evaluated by spatial analysis using ArcGIS software. A considerable amount of productive agricultural land was diverted to non-agricultural purposes during the last 12 years (2004–2016), and such land is usually the first to be taken for industrial establishment. The results obtained with this methodology showed considerable accuracy when previously established industries were cross-checked against the suitability regions. Thus GIS and MCDM can assist policy makers and planning officials in gaining a better overview of the resources they possess, so development can proceed with less damage to the environment and agricultural land.
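As an illustration of the TOPSIS ranking step described above, the following minimal Python sketch scores hypothetical candidate zones; the criteria, weights and scores are invented for demonstration and are not those used in the study.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix  : (n_alternatives, n_criteria) raw scores
    weights : criterion weights summing to 1
    benefit : True where larger is better, False where smaller is better
    """
    m = np.asarray(matrix, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    norm = m / np.linalg.norm(m, axis=0)
    v = norm * np.asarray(weights)
    # Ideal best/worst depend on whether a criterion is benefit or cost.
    best = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - best, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    # Closeness to the ideal solution: higher means more suitable.
    return d_worst / (d_best + d_worst)

# Three candidate zones scored on three illustrative criteria:
# distance to road (cost), slope (cost), distance to fertile farmland (benefit).
scores = [[2.0, 5.0, 8.0],
          [4.0, 2.0, 6.0],
          [1.0, 8.0, 3.0]]
closeness = topsis(scores, weights=[0.5, 0.3, 0.2], benefit=[False, False, True])
print(closeness.argsort()[::-1])  # zone indices, most to least suitable
```

In a GIS workflow each "alternative" would be a raster cell or zone, with criterion scores derived from spatial layers rather than typed in by hand.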


Author(s):  
Aaike De Wever ◽  
Astrid Schmidt-Kloiber ◽  
Vanessa Bremerich ◽  
Joerg Freyhof

Understanding biodiversity change and addressing questions in freshwater management and conservation requires access to biodiversity data and information. Unfortunately, large, comprehensive data sources on freshwater ecology and biodiversity are largely lacking. In this chapter, we explain how to take advantage of secondary data and improve data availability for supporting freshwater ecology research and biodiversity conservation. We emphasise the importance of secondary data, give an overview of existing databases (e.g., taxonomy, molecular or occurrence databases), discuss problems in understanding and caveats when using such data, and explain the need to make primary data publicly available.
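To illustrate one of the caveats the chapter raises about reusing secondary data, the sketch below filters a hypothetical occurrence table for two common artefacts: missing values and (0, 0) coordinates. The records are invented; the column names follow standard Darwin Core conventions.

```python
import pandas as pd

# Hypothetical occurrence table using common Darwin Core column names;
# the records are invented, not taken from any of the databases discussed.
occ = pd.DataFrame({
    "scientificName": ["Salmo trutta", "Salmo trutta", "Astacus astacus"],
    "decimalLatitude": [60.1, 0.0, None],
    "decimalLongitude": [24.9, 0.0, 25.3],
    "eventDate": ["2018-06-01", "2019-07-15", None],
})

# Typical caveats of secondary occurrence data: records lacking coordinates or
# dates, and placeholder (0, 0) coordinates, should be filtered or flagged.
clean = occ.dropna(subset=["decimalLatitude", "decimalLongitude", "eventDate"])
clean = clean[~((clean.decimalLatitude == 0) & (clean.decimalLongitude == 0))]
print(clean)
```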


2020 ◽  
Vol 117 (34) ◽  
pp. 20363-20371
Author(s):  
Nils Chr. Stenseth ◽  
Mark R. Payne ◽  
Erik Bonsdorff ◽  
Dorothy J. Dankel ◽  
Joël M. Durant ◽  
...  

The ocean is a lifeline for human existence, but current practices risk severely undermining ocean sustainability. Present and future social-ecological challenges necessitate the maintenance and development of knowledge and action by stimulating collaboration among scientists and between science, policy, and practice. Here we explore not only how such collaborations have developed in the Nordic countries and adjacent seas but also how knowledge from these regions contributes to an understanding of how to obtain a sustainable ocean. Our collective experience may be summarized in three points: 1) In the absence of long-term observations, decision-making is subject to high risk arising from natural variability; 2) in the absence of established scientific organizations, advice to stakeholders often relies on a few advisors, making them prone to biased perceptions; and 3) in the absence of trust between policy makers and the science community, attuning to a changing ocean will be subject to arbitrary decision-making with unforeseen and negative ramifications. Underpinning these observations, we show that collaboration across scientific disciplines and stakeholders and between nations is a necessary condition for appropriate actions.


2018 ◽  
Vol 2 ◽  
pp. e26367
Author(s):  
Yvette Umurungi ◽  
Samuel Kanyamibwa ◽  
Faustin Gashakamba ◽  
Beth Kaplin

Freshwater biodiversity is critically understudied in Rwanda, and to date there has been no efficient mechanism to integrate freshwater biodiversity information or make it accessible to decision-makers, researchers, the private sector or communities, where it is needed for planning, management and the implementation of the National Biodiversity Strategy and Action Plan (NBSAP). A framework to capture and distribute freshwater biodiversity data is crucial to understanding how economic transformation and environmental change are affecting freshwater biodiversity and the resulting ecosystem services. To optimize conservation efforts for freshwater ecosystems, detailed information is needed regarding current and historical species distributions and abundances across the landscape. From these data, specific conservation concerns can be identified, analyzed and prioritized. The purpose of this project is to establish and implement a long-term strategy for freshwater biodiversity data mobilization, sharing, processing and reporting in Rwanda. The expected outcome of the project is to support the mandates of the Rwanda Environment Management Authority (REMA), the national agency in charge of environmental monitoring and the implementation of Rwanda’s NBSAP, and the Center of Excellence in Biodiversity and Natural Resources Management (CoEB). The project also aligns with the mission of the Albertine Rift Conservation Society (ARCOS) to enhance sustainable management of natural resources in the Albertine rift region. Specifically, organizational structures, technology platforms, and workflows for biodiversity data capture and mobilization are being enhanced to promote data availability and accessibility, to improve Rwanda’s NBSAP and support other decision-making processes. The project is enhancing the capacity of technical staff from relevant government and non-government institutions in biodiversity informatics, strengthening the capacity of CoEB to achieve its mission as the Rwandan national biodiversity knowledge management center. Twelve institutions have been identified as data holders; the digitization of these data using Darwin Core standards is in progress, along with data cleaning for publication through the ARCOS Biodiversity Information System (http://arbmis.arcosnetwork.org/). The release of the first national State of Freshwater Biodiversity Report is the next step. CoEB is a registered publisher with the Global Biodiversity Information Facility (GBIF) and holds an Integrated Publishing Toolkit (IPT) account on the ARCOS portal. This project was developed for the African Biodiversity Challenge, a competition coordinated by the South African National Biodiversity Institute (SANBI) and funded by the JRS Biodiversity Foundation, which supports ongoing efforts to enhance the biodiversity information management activities of the GBIF Africa network. The project also aligns with SANBI’s Regional Engagement Strategy, and endeavors to strengthen both emerging biodiversity informatics networks and data management capacity on the continent in support of sustainable development.
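As a rough illustration of the digitization step, the sketch below maps a hypothetical local survey record onto standard Darwin Core terms and writes a CSV of the kind published through an IPT; the local field names, identifier scheme and record values are assumptions, not project data.

```python
import csv

# Hypothetical local survey record, as a data-holding institution might store it.
local_record = {
    "species": "Barbus pellegrini",
    "lat": -2.60, "lon": 29.74,
    "date": "2017-03-12",
    "site": "Lake Kivu shoreline",
}

# Mapping onto standard Darwin Core terms (http://rs.tdwg.org/dwc/terms/).
# The occurrenceID scheme here is an invented example.
dwc_record = {
    "occurrenceID": "RW-FW-000001",
    "basisOfRecord": "HumanObservation",
    "scientificName": local_record["species"],
    "decimalLatitude": local_record["lat"],
    "decimalLongitude": local_record["lon"],
    "eventDate": local_record["date"],
    "locality": local_record["site"],
    "country": "Rwanda",
}

# Write a one-row occurrence file ready for upload to an IPT instance.
with open("occurrence.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(dwc_record))
    writer.writeheader()
    writer.writerow(dwc_record)
```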


2019 ◽  
Vol 35 (1) ◽  
pp. 137-165
Author(s):  
Jack Lothian ◽  
Anders Holmberg ◽  
Allyson Seyb

The linking of disparate data sets across time, space and sources is probably the foremost issue currently facing Central Statistical Agencies (CSAs). A review of the current literature on the challenges facing CSAs reveals three issues that stand out: 1) using administrative data effectively; 2) big data and what it means for CSAs; and 3) integrating disparate data sets (such as health, education and wealth) to provide measurable facts that can guide policy makers. CSAs are being challenged to confront the same kinds of problems faced by Google, Facebook, and Yahoo, which are using graphical/semantic web models for organizing, searching and analysing data. Additionally, time and space (geography) are becoming more important dimensions (domains) for CSAs as they start to explore new data sources and ways to integrate them to study relationships. Central agency methodologists are being pushed to include these new perspectives in their standard theories, practices and policies. Like most methodologists, the authors see surveys and the publication of their results as a process in which estimation is the key tool for achieving the final goal of an accurate statistical output. Randomness and sampling exist to support this goal, and early on it was clear to us that the incoming “it-is-what-it-is” data sources were not randomly selected. These sources were obviously biased and would thus produce biased estimates. So, we set out to design a strategy to deal with this issue. This article presents a schema for integrating and linking traditional and non-traditional datasets. Like all survey methodologies, this schema addresses the fundamental issues of representativeness, estimation and total survey error measurement.
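One standard way of handling the representativeness problem described here is to reweight a non-random source against known population totals. The sketch below shows such a post-stratification adjustment with invented numbers; it is a generic illustration of the bias issue, not the authors' schema.

```python
import pandas as pd

# An "it-is-what-it-is" source that over-represents one group; values invented.
sample = pd.DataFrame({
    "age_group": ["<40", "<40", "40+", "40+", "40+"],
    "income":    [30.0, 35.0, 50.0, 55.0, 60.0],
})
population_counts = {"<40": 600, "40+": 400}   # assumed known census totals

# Weight each record so its group's total weight matches the population count.
cell_sizes = sample.groupby("age_group").size()
sample["weight"] = sample["age_group"].map(
    lambda g: population_counts[g] / cell_sizes[g]
)

# The weighted mean corrects the over-representation of the 40+ group:
# the naive mean is 46.0, the reweighted estimate is 41.5.
weighted_mean = (sample.income * sample.weight).sum() / sample.weight.sum()
print(round(weighted_mean, 2))
```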


2021 ◽  
Author(s):  
Lili Zhang ◽  
Himanshu Vashisht ◽  
Andrey Totev ◽  
Nam Trinh ◽  
Tomas Ward

Deep learning models, especially RNN models, are potentially powerful tools for representing the complex learning processes and decision-making strategies used by humans. Such neural network models make fewer assumptions about the underlying mechanisms, thus providing experimental flexibility in terms of applicability. However, this comes at the cost of a larger number of tunable parameters, requiring significantly more, and more representative, training data for effective learning. This presents practical challenges given that most computational modelling experiments involve relatively small numbers of subjects, which, while adequate for conventional modelling using low-dimensional parameter spaces, leads to sub-optimal model training when adopting deeper neural network approaches. Laboratory collaboration is a natural way of increasing data availability; however, data sharing barriers among laboratories, as necessitated by data protection regulations, encourage us to seek alternative methods for collaborative data science. Distributed learning, especially federated learning, which supports the preservation of data privacy, is a promising method for addressing this issue. To verify the reliability and feasibility of applying federated learning to train the neural network models used in the characterisation of human decision making, we conducted experiments on a real-world, many-labs data pool including experimentally significant data-sets from ten independent studies. The performance of single models trained on single laboratory data-sets was poor, especially for those with small numbers of subjects. This unsurprising finding supports the need for larger and more diverse data-sets to train more generalised and reliable models. To that end we evaluated four collaborative approaches for comparison purposes. The first approach represents conventional centralized data sharing (CL-based) and is the optimal approach, but it requires complete sharing of data, which we wish to avoid. Its results, however, establish a benchmark for the other three distributed approaches: federated learning (FL-based), incremental learning (IL-based), and cyclic incremental learning (CIL-based). We evaluate these approaches in terms of prediction accuracy and capacity to characterise human decision-making strategies in the context of the computational modelling experiments considered here. The results demonstrate that the FL-based model achieves performance closest to that of the centralized data sharing approach. This demonstrates that federated learning has value in scaling data science methods to data collected in computational modelling contexts in circumstances where data sharing is not convenient, practical or permissible.
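To make the federated (FL-based) approach concrete, here is a minimal federated-averaging sketch in which three "laboratories" train locally and only model parameters are shared with a central server. The data, the toy logistic model (standing in for the RNNs used in the paper) and the hyperparameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few epochs of logistic-regression gradient descent on one lab's data."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Three "laboratories" with different sample sizes; raw data never leaves a lab.
labs = []
for n in (30, 80, 200):
    X = rng.normal(size=(n, 4))
    y = (X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(size=n) > 0).astype(float)
    labs.append((X, y))

w_global = np.zeros(4)
for round_ in range(20):                     # communication rounds
    local_ws, sizes = [], []
    for X, y in labs:
        # Each lab starts from the current global model and trains locally.
        local_ws.append(local_update(w_global.copy(), X, y))
        sizes.append(len(y))
    # FedAvg: size-weighted average of the locally updated parameters.
    w_global = np.average(local_ws, axis=0, weights=sizes)

print(w_global)
```

The incremental (IL-based) and cyclic incremental (CIL-based) variants differ mainly in passing the model from lab to lab sequentially rather than averaging in parallel rounds.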


2021 ◽  
Vol 4 (1) ◽  
pp. 64-76
Author(s):  
Praja Bhakta Shrestha ◽  
Gangadhar Chaudhary

A disaster, a serious disruption of the functioning of society by natural or man-made causes, can happen anywhere. Devastating earthquakes, hurricanes, floods, droughts and fires are major disasters. Mitigating disaster risk, prompt rescue, and timely evacuation decisions during such disasters can prevent the loss of lives and property. The evacuation decision is the choice of people to stay away from the area of risk. This study analyzes people’s perceptions of evacuation decisions, as an element of disaster risk management, during flood disasters in the Saptari district of Nepal, which is affected by the Koshi River and its tributaries. According to the United Nations (2016), management refers to “the organization, planning and application of measures preparing for, responding to and recovering from disasters”. From the flood-affected site, 246 people were randomly selected and the factors influencing their evacuation decision-making were examined, along with their past experiences and perceptions. The factors examined were gender, destination of evacuation, warning condition, reasons for not evacuating, education, age, proximity of the residence to the river, land ownership, and people’s capacity; none was found to be associated with people’s evacuation decisions during flood disasters in the affected areas of Saptari district. These findings can help students, the disaster risk reduction field, government policy makers and other actors to minimize the loss of lives and property. The study also recommends future research on victims’ evacuation decision-making capability in other flood-prone areas of Nepal.
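The kind of factor-by-factor association the study reports as absent is commonly checked with a chi-square test of independence. The sketch below uses an invented 2x2 table of gender against evacuation decision, not the paper's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table (counts invented for illustration):
# rows = gender, columns = evacuated / stayed.
table = np.array([[52, 68],    # male
                  [49, 77]])   # female

chi2, p, dof, expected = chi2_contingency(table)
# A large p-value (> 0.05) gives no evidence of association between the
# factor and the evacuation decision, matching the pattern the study reports.
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```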



2020 ◽  
Author(s):  
Lucy Johnston ◽  
David AG Henderson ◽  
Jo Hockley ◽  
Susan D Shenkin

Care homes collect a large amount of data about their residents and the care provided, but there is a lack of consistency in how this information is collected. There is also a need to minimise the burden of data collection on staff, to ensure the information informs and supports person-centred care, and to make the data useful to regulatory agencies, policy makers and researchers. We examined the data collected in six care homes in Lothian, Scotland. We extracted the meta-data collected, cross-referenced definitions, and assessed the degree of current harmonisation between individual care homes and with data sets currently in use in Scotland and internationally. We interviewed the care home managers to identify data collection processes; views and experiences of current data availability, gaps and access; and issues of capacity and capability in relation to data management and analytics. Our work has illustrated the scale of the data collected by care homes and the varied formats and heterogeneity of scope and definition. The inventory of 15 core data items that emerged serves to expose in detail the foundations of care home data sets. This groundwork illuminated the heterogeneity in the tools and assessments used to generate the data, and showed that the intended use of the data affects how it is specified and how frequently it is collected. By making known the reality of how and why care home data is collected, we can better understand the nuances of each individual data item that collectively create a data platform. We make four recommendations for the development of a national care home data platform.
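To illustrate the kind of harmonisation such an inventory of core items enables, the sketch below maps two hypothetical care homes' local field names and codes onto a single shared core item; all names, codes and mappings are invented, not drawn from the study.

```python
# One shared core item (the study identified 15; this one is hypothetical).
CORE_ITEM = "mobility_status"

# Per-home mapping from local field names and local codes to the core item.
home_field_maps = {
    "home_A": {"field": "mobility", "codes": {"walks": "independent",
                                              "aided": "assisted",
                                              "chair": "immobile"}},
    "home_B": {"field": "Mobility Level", "codes": {"1": "independent",
                                                    "2": "assisted",
                                                    "3": "immobile"}},
}

def harmonise(home, record):
    """Translate one home's local field and codes into the shared core item."""
    spec = home_field_maps[home]
    raw = record[spec["field"]]
    return {CORE_ITEM: spec["codes"][raw]}

print(harmonise("home_A", {"mobility": "aided"}))       # {'mobility_status': 'assisted'}
print(harmonise("home_B", {"Mobility Level": "3"}))     # {'mobility_status': 'immobile'}
```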

