Reliability of Data Collected by Volunteers: A Nine-Year Citizen Science Study in the Red Sea

2021 ◽  
Vol 9 ◽  
Author(s):  
Marta Meschini ◽  
Mariana Machado Toffolo ◽  
Chiara Marchini ◽  
Erik Caroselli ◽  
Fiorella Prada ◽  
...  

The quality of data collected by non-professional volunteers in citizen science programs is crucial to render them valid for implementing environmental resource management and protection plans. This study assessed the reliability of data collected by non-professional volunteers during the citizen science project Scuba Tourism for the Environment (STE), carried out in mass tourism facilities of the Red Sea between 2007 and 2015. STE involved 16,164 volunteer recreational divers in data collection on marine biodiversity using a recreational citizen science approach. Through a specifically designed questionnaire, volunteers indicated which of the seventy-two surveyed marine taxa they observed during their recreational dive, giving an estimate of their abundance. To evaluate the validity of the collected data, a reference researcher randomly dived with the volunteers and filled in the project questionnaire separately. Correlation analyses between the records collected by the reference researcher and those collected by volunteers were performed based on 513 validation trials, testing 3,138 volunteers. Data reliability was analyzed through seven parameters. Consistency showed the lowest mean score (51.6%, 95% Confidence Interval CI 44.1–59.2%), indicating that volunteers could direct their attention to different taxa depending on personal interests; Percent Identified showed the highest mean score (66.7%, 95% CI 55.5–78.0%), indicating that volunteers can correctly identify most surveyed taxa. Overall, the results confirmed that the recreational citizen science approach can effectively yield reliable data for biodiversity monitoring when carefully tailored to the skills required of volunteers by the specific project. The use of a recreational approach enhances massive volunteer participation in citizen science projects, thus increasing the amount of sufficiently reliable data collected in a reduced time.
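A minimal sketch of this kind of validation scoring, assuming each trial yields the set of taxa recorded by the reference researcher and by a volunteer on the same dive; the Percent Identified definition, the normal-approximation confidence interval, and the taxon names are illustrative assumptions, not necessarily the study's exact formulas.

```python
# Illustrative sketch (not the study's exact parameters): scoring validation trials
# from paired researcher/volunteer questionnaires. All values below are toy data.
from statistics import mean, stdev
from math import sqrt

def percent_identified(researcher_taxa: set[str], volunteer_taxa: set[str]) -> float:
    """Share of taxa recorded by the reference researcher that the volunteer also reported."""
    if not researcher_taxa:
        return 0.0
    return 100.0 * len(researcher_taxa & volunteer_taxa) / len(researcher_taxa)

def mean_with_ci(scores: list[float], z: float = 1.96) -> tuple[float, float, float]:
    """Mean score with an approximate 95% confidence interval (normal approximation)."""
    m = mean(scores)
    half = z * stdev(scores) / sqrt(len(scores)) if len(scores) > 1 else 0.0
    return m, m - half, m + half

# Toy example: three volunteers validated against the same researcher record.
researcher = {"Pterois", "Chaetodon", "Tridacna", "Acropora"}
volunteers = [
    {"Pterois", "Chaetodon"},
    {"Pterois", "Chaetodon", "Tridacna", "Acropora"},
    {"Chaetodon", "Acropora", "Carcharhinus"},
]

scores = [percent_identified(researcher, v) for v in volunteers]
print(mean_with_ci(scores))  # mean Percent Identified with approximate 95% CI
```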

PLoS ONE ◽  
2021 ◽  
Vol 16 (6) ◽  
pp. e0253763
Author(s):  
Denise Jäckel ◽  
Kim G. Mortega ◽  
Ulrike Sturm ◽  
Ulrich Brockmeyer ◽  
Omid Khorramshahi ◽  
...  

Citizen science is an approach that has become increasingly popular in recent years. Despite this growing popularity, there is still widespread scepticism in the academic world about the validity and quality of data from citizen science projects. And although it might hold great potential, citizen science is rarely used in the field of bioacoustics. To better understand the possibilities, but also the limitations, we evaluated data generated in a citizen science project on nightingale song as a case study. We analysed the quantity and quality of song recordings made in a non-standardized way with a smartphone app by citizen scientists and of standardized recordings made with professional equipment by academic researchers. We compared the recordings of the two approaches, and compared among the user types of the app, to gain insights into the temporal recording patterns and the quantity and quality of the data. To quantify how the acoustic parameters in the smartphone and professional-device recordings deviated from the original songs, we conducted a playback test. Our results showed that, depending on the user group, citizen scientists produced large numbers of recordings of sufficient quality for further bioacoustic research. Differences between the recordings provided by the citizen and expert groups were mainly caused by the technical quality of the devices used, and to a lesser extent by the citizen scientists themselves. Especially when differences in spectral parameters are to be investigated, our results demonstrate that the use of the same high-quality recording devices and calibrated external microphones would most likely improve data quality. We conclude that many bioacoustic research questions may be addressed with the recordings of citizen scientists. We want to encourage academic researchers to get more involved in participatory projects to harness the potential of citizen science, and to share scientific curiosity and discoveries more directly with society.
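A minimal sketch of the kind of spectral comparison described above, assuming two WAV recordings of the same song; peak frequency stands in for the paper's acoustic parameters, and the file names are hypothetical placeholders rather than project data.

```python
# Illustrative comparison of one spectral parameter (peak frequency) between a
# smartphone recording and a professional recording of the same song.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def peak_frequency(path: str) -> float:
    """Dominant frequency (Hz) of a WAV file, estimated from Welch's power spectral density."""
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:                      # mix stereo down to mono
        samples = samples.mean(axis=1)
    freqs, psd = welch(samples.astype(float), fs=rate, nperseg=4096)
    return float(freqs[np.argmax(psd)])

f_pro = peak_frequency("nightingale_professional.wav")  # calibrated recorder (hypothetical file)
f_app = peak_frequency("nightingale_smartphone.wav")    # citizen-science app (hypothetical file)
print(f"peak frequency deviation: {abs(f_pro - f_app):.1f} Hz")
```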


2020 ◽  
pp. 1726-1741
Author(s):  
Colin Chapman ◽  
Crona Hodges

This chapter considers the potential for citizen science to contribute to policy development. A background to evidence-based policy making is given, and the requirement for data to be robust, reliable and, increasingly, cost-effective is noted. The potential for the use of 'co-design' strategies with stakeholders, to add value to their engagement as well as to provide more meaningful data that can contribute to policy development, is presented and discussed. Barriers to uptake can be institutional, and the quality of data used in evidence-based policy making will always need to be fully assured. Data must be appropriate to the decision-making process at hand, and there is potential for citizen science to fill important, existing data gaps.


2018 ◽  
Vol 26 (2) ◽  
pp. 130-147 ◽  
Author(s):  
Elizabeth Cherry

Sociological research on wildlife typically looks at how nonhuman animals in the wild are hunted, poached, or captured for entertainment, or how they play a symbolic role in people's lives. Within sociology, little research exists on how people appreciate nonhuman animals in the wild and how people contribute to wildlife conservation. I explore birding-related citizen science projects in the US. Citizen science refers to scientific projects carried out by amateurs. The literature on citizen science focuses on the perspective of professional scientists, with the assumption that only professional scientists are concerned with the quality of data from citizen science projects. My research showed that birders share this concern about data quality, but they still find satisfaction in participating in citizen science projects. This paper contributes to sociological understandings of wildlife conservation by showing how birders' participation in citizen science projects helps professional scientists study environmental problems such as climate change and its effects on wildlife.


Author(s):  
Robert Stevenson ◽  
Carl Merrill ◽  
Peter Burn

Each fall from 2017 to 2019, entering Honors students at the University of Massachusetts Boston were invited to attend a 2-day retreat on Thompson Island in Boston Harbor, Boston, Massachusetts, USA. As part of this retreat, students participated in a three-hour bioblitz using the iNaturalist platform. The educational goal of this exercise was to allow the students to observe nature and to participate in a Citizen Science project. These students were generally not science majors and had little or no experience with iNaturalist, and yet over 3 years they made over 2,000 biodiversity observations, including over 5,700 photographs. Using these data, we addressed the question, "Can naïve observers, using the iNaturalist platform, make useful contributions to our understanding of biodiversity?" For those unfamiliar with the iNaturalist platform, it facilitates this process by encouraging its online community of identifiers to provide species names, thus effectively integrating the collection and identification processes.
Observer training: A National Park Service educational team gave groups of 50 to 75 students a 20 to 30 minute introduction to bioblitzes, how to take pictures (especially close-ups with mobile phones), and how to use the iNaturalist app. The students then headed out in one- to four-person groups to preassigned quadrants of the island for 2 to 2.5 hours of observations.
Evaluation of observations: iNaturalist evaluates observations with a three-category system of "Casual", "Needs ID", and "Research Grade". In addition to the iNaturalist ratings, we evaluated other characteristics of the observations: we tallied the number of photographs per observation and developed a rubric to score the quality of images as good, OK, or poor; we identified whether or not the observer tried to identify the species being observed, and scored observations as to whether we thought an identification to species or genus was possible; we totaled the number of observations that were identified to the species and genus level by August 1st, 2020; finally, we evaluated the spatial quality of the observations.
Results: Over 50% of the observations were of plants and 40% of animals, mostly arthropods and mollusks. The remaining 10% were of fungi and seaweeds. A total of 202 unique species were identified from the student bioblitzes. The proportion of species common to each year was 19%. Forty-seven percent of the observations (945) were identified to species level, but only two-thirds of these (687) were confirmed by others to make them "Research Grade". Fifty-eight percent of the observations included three or four images, and 31% were judged to be of good quality, 54% OK, and 15% poor. We judged the majority of the observations (64%) to be identifiable to species or genus level; in 26% of the observations our expertise was insufficient to be confident of an identification, and we scored the final 10% as unidentifiable.
The location data for most of the observations met our expectations: marine species were located on the periphery of the island, and terrestrial species were found over land, concentrated along island pathways. However, about 2.7% of the observations did not make it into the official iNaturalist project because of errors in the GPS coordinates, sometimes placing the observation miles away. All observations were made on Thompson Island, but 60 different place names were given for the 2,000+ observations.
Discussion: A year-long biodiversity inventory of the Boston Harbor Islands using the iNaturalist approach, completed in 2017, found 475 species. The 202 species identified (by students and identifiers) on Thompson Island are a significant contribution considering the short, late-summer sampling period. The short field experience with naïve observers contributed to the relatively low (19%) proportion of species in common among the three years. The students were predictably attracted to species that were easily photographed, e.g., those that did not move or were of the right size; examples include herbs and shrubs that were flowering or fruiting, oysters, mussels, snail shells, and insects such as butterflies. The instructors encouraged the students to take photographs of the whole organism and its parts, but some images were out of focus or did not capture details essential for identification. We expected that using GPS technology within miles of downtown Boston would lead to precise and accurate species locations, and that was what we found. However, the errors associated with an observation can be large, and 2.7% of observations that should have been included in the project were initially not.
Conclusions: This bioblitz exercise was designed with an educational objective: to give college freshmen from the city the opportunity to observe nature and partake in a citizen science project. We conclude that a short instruction period, provided to naïve users armed with a digital native's expertise using smartphones, allowed them to collect observations that the iNaturalist community of species identifiers was able to turn into quality biodiversity observations. The students' observations are building a record that can be mined by scientists to answer a variety of questions.
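A rough sketch of how such an evaluation could be scripted from an observation export, assuming hypothetical column names (quality_grade, n_photos, latitude, longitude) and an approximate island centroid; this is not the authors' actual workflow.

```python
# Illustrative evaluation of a bioblitz export: quality-grade tallies, photographs
# per observation, and a flag for GPS coordinates that fall far from the island.
import math
import pandas as pd

ISLAND_LAT, ISLAND_LON = 42.317, -71.009   # approximate Thompson Island centroid

def km_from_island(lat: float, lon: float) -> float:
    """Great-circle (haversine) distance in km from the island centroid."""
    r = 6371.0
    dp = math.radians(lat - ISLAND_LAT)
    dl = math.radians(lon - ISLAND_LON)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(ISLAND_LAT)) * math.cos(math.radians(lat)) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

obs = pd.read_csv("bioblitz_observations.csv")            # hypothetical export, one row per observation
print(obs["quality_grade"].value_counts(normalize=True))  # Casual / Needs ID / Research Grade shares
print(obs["n_photos"].describe())                         # photographs per observation

# Flag records whose coordinates fall implausibly far from the island (GPS errors)
obs["km_off_island"] = obs.apply(lambda r: km_from_island(r["latitude"], r["longitude"]), axis=1)
print(obs.loc[obs["km_off_island"] > 2.0, ["quality_grade", "km_off_island"]])
```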


2021 ◽  
Vol 13 (17) ◽  
pp. 9925
Author(s):  
Maria Panitsa ◽  
Nikolia Iliopoulou ◽  
Emmanouil Petrakis

Citizen science can serve as a tool to address environmental and conservation issues. In the framework of the Erasmus+ project CS4ESD, this study focuses on promoting the importance of plants and of plant species and community diversity by using available web-based information, because of Covid-19 limitations, for the case study of the Olympus mountain Biosphere Reserve (Greece). A questionnaire was designed to collect the necessary information, aiming to investigate pupils' and students' willingness to distinguish and learn more about plant species and communities and to evaluate information found on the web. Pupils, students, and experts participated in this study. The results are indicative of young citizens' ability to evaluate environmental issues. They often underestimate plant species richness, endemism, plant communities, the importance of plants, and ecosystem services. They also use environmental or plant-based websites and online available data in a significantly different way than experts do. The age of the young citizens is a factor that may affect the quality of data. The essential issue of recognizing the importance of plants and plant communities, and of assisting in their conservation, is highlighted. Education for sustainable development is one of the most important tools that facilitate environmental knowledge and enhance awareness.



COVID ◽  
2021 ◽  
Vol 1 (1) ◽  
pp. 137-152
Author(s):  
Noah Farhadi ◽  
Hooshang Lahooti

When it comes to COVID-19, access to reliable data is vital. It is crucial for the scientific community to be able to use the data reported by independent territories worldwide. This study evaluates the reliability of the pandemic data disclosed by 182 countries worldwide. We collected and assessed the conformity of COVID-19 daily infections, deaths, tests, and vaccinations with Benford's law since the beginning of the coronavirus pandemic. It is commonly accepted that the frequency of leading digits of pandemic data should conform to Benford's law. Our analysis of Benfordness indicates that most countries distributed only partially reliable data over the past eighteen months. Notably, the UK, Australia, Spain, Israel, and Germany, followed by 22 other nations, provided the most reliable COVID-19 data within the same period. In contrast, twenty-six nations, including Tajikistan, Belarus, Bangladesh, and Myanmar, published less reliable data on the coronavirus spread. In this context, over 31% of countries worldwide seem to have improved reliability. Our measure of Benfordness correlates moderately with the Johns Hopkins Global Health Security Index, suggesting that the quality of data may depend on national healthcare policies and systems. We conclude that economically or politically distressed societies have declined in conformity to the law over time. Our results are particularly relevant for policymakers worldwide.
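Benford's law predicts that the leading digit d of naturally occurring counts appears with probability log10(1 + 1/d). The sketch below shows one common conformity check, a chi-square goodness-of-fit test on first-digit frequencies applied to a toy daily-case series; the authors' exact test statistic and country-level procedure may differ.

```python
# Hedged sketch of a first-digit Benford conformity check on a toy series.
import numpy as np
from collections import Counter
from scipy.stats import chisquare

BENFORD = np.log10(1 + 1 / np.arange(1, 10))   # P(d) = log10(1 + 1/d), d = 1..9; sums to 1

def leading_digit_counts(values):
    """Observed counts of leading digits 1..9, ignoring zero or missing values."""
    digits = [int(str(int(v))[0]) for v in values if v and int(v) > 0]
    tally = Counter(digits)
    return np.array([tally.get(d, 0) for d in range(1, 10)], dtype=float), len(digits)

daily_cases = [123, 98, 1450, 2012, 187, 305, 99, 1210, 760, 1880]  # toy daily counts
observed, n = leading_digit_counts(daily_cases)
stat, p = chisquare(observed, f_exp=BENFORD * n)
print(f"chi-square = {stat:.2f}, p = {p:.3f}")  # a low p-value suggests departure from Benford's law
```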


2001 ◽  
Vol 1779 (1) ◽  
pp. 162-172 ◽  
Author(s):  
Robert A. Scopatz

In 1999–2000, the American Automobile Association (AAA) Foundation for Traffic Safety conducted a research program to identify the barriers to analysis of large-truck safety experience in the United States. The primary focus was on so-called longer combination vehicles (LCVs), the doubles and triples running on major highways throughout the country. Five states (Florida, Idaho, Nevada, Oregon, and Utah) participated in a review and evaluation of their data-collection and analysis practices. Two of the states (Oregon and Utah) also participated in an audit of completed crash reports for crashes involving large trucks, specifically doubles and triples. The results show that none of the five states has a crash-reporting system that adequately supports the analysis of LCV safety. In general, there is a lack of reliable data on the exact configuration of vehicles involved in crashes and a lack of specific measures of exposure for LCVs. Without good data on configuration and good measures of exposure, the main question about LCV safety (i.e., are they more or less safe than other large commercial motor vehicles?) cannot be answered empirically. The report concludes with a series of recommendations for improving the quality of data for crashes involving large trucks and each state's ability to analyze LCV crashes specifically.


Author(s):  
Peter Brenton

Whether community created and driven, or developed and run by researchers, most citizen science projects operate on minimal budgets, so their capacity to invest in fully featured bespoke software and databases is usually very limited. Further, the increasing number of applications and citizen science options available for public participation creates a confusing landscape to navigate. Cloud-based platforms such as BioCollect, iNaturalist, eBird, CitSci.org, and Zooniverse provide an opportunity for citizen science projects to leverage highly featured, functional software at a fraction of the cost of developing their own, as well as a common channel through which the public can find and access projects. These platforms are also excellent vehicles for facilitating the implementation of data and metadata standards, which streamline interoperability and data sharing. Such services can also embed measures in their design that improve the descriptions and quality of data outputs, significantly amplifying their usability and value. In this presentation I outline the experiences of the Atlas of Living Australia on these issues and demonstrate how we are tackling them with the BioCollect and iNaturalist platforms. We also consider the differences and similarities of these two platforms with respect to standards and data structures, in relation to their suitability for different use cases. You are invited to join a discussion on the approaches being adopted and to offer insights for improved outcomes.
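As a minimal illustration of the standards alignment mentioned above, the sketch below maps a hypothetical project record onto a handful of Darwin Core occurrence terms; the source field names are invented, and the mapping is indicative rather than a description of either platform's internals.

```python
# Illustrative only: aligning a hypothetical project record with Darwin Core terms.
# The keys on the left are invented; the Darwin Core term names on the right are real.
project_record = {
    "obs_id": "TI-2019-0042",
    "species_guess": "Mytilus edulis",
    "seen_on": "2019-09-14",
    "lat": 42.317,
    "lon": -71.009,
    "recorder": "volunteer_017",
}

darwin_core_occurrence = {
    "occurrenceID": project_record["obs_id"],
    "basisOfRecord": "HumanObservation",
    "scientificName": project_record["species_guess"],
    "eventDate": project_record["seen_on"],
    "decimalLatitude": project_record["lat"],
    "decimalLongitude": project_record["lon"],
    "recordedBy": project_record["recorder"],
}
print(darwin_core_occurrence)  # ready for exchange with standards-aware aggregators
```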


2018 ◽  
Author(s):  
Florian Heigl ◽  
Daniel Dörler ◽  
Pamela Bartar ◽  
Robert Brodschneider ◽  
Marika Cieslinski ◽  
...  

The platform Österreich forscht (www.citizen-science.at) was founded in 2014 with the objectives of (1) connecting citizen science actors in Austria, (2) providing the broadest possible overview of citizen science projects in Austria, and (3) scientifically advancing citizen science as a method.
Following the initiative of the platform Österreich forscht, many of the institutions that are active in citizen science joined forces in the Citizen Science Network Austria in 2017, and thus agreed to advance the quality of citizen science in Austria (http://www.citizen-science.at/the-platform/the-network). An important step in this regard was the establishment of transparent criteria for projects wishing to be listed on the platform Österreich forscht. The objective of these criteria is to maintain and further improve the quality of the projects presented on the platform.
Between March 2017 and February 2018, a working group of the platform Österreich forscht consisting of representatives from 17 institutions developed criteria that allow for the transparent evaluation of projects applying to be listed on Österreich forscht. This was a multi-stage process, building both on the knowledge of the working group members and on feedback repeatedly provided by external experts from the respective research fields. Throughout October 2017, a version of the quality criteria was available for public online consultation on the platform Österreich forscht, so as to incorporate the knowledge of the general public into the criteria as well.
The final version of the quality criteria was presented at the 4th Austrian Citizen Science Conference, 1-3 February 2018, at which point the criteria also came into effect. Projects already listed on Österreich forscht can adapt to meet the criteria over the next year. Projects wishing to be newly listed on Österreich forscht must meet these criteria at the point of listing. Where necessary, the quality criteria will be adapted in the future, in order to respond to new challenges and developments. The version number, i.e. which version of the criteria a project corresponds to, will be indicated on the respective project page.
The first part of the criteria is primarily aimed at establishing what defines a citizen science project. Here, we decided on a negative list (i.e. projects that are NOT citizen science), in order to be as open as possible to different concepts and disciplines. This implies that we call all projects citizen science that are not excluded by this negative list. The professional background of the person leading the project is not crucial, as long as the project complies with the criteria. The criteria in the second part are to be understood as minimum standards which all projects listed on the platform Österreich forscht must fulfill. The evaluation will be carried out by the coordinators of the platform Österreich forscht in consultation with working group members.
Version 1.0 of the quality criteria can be found on the platform Zenodo: https://zenodo.org/record/1161953
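A minimal sketch of the listing logic described above, with invented criteria labels standing in for the actual Version 1.0 text: a project matching any item on the negative list is excluded, and a project is listed only if it fulfills every minimum criterion.

```python
# Illustrative listing check; the criteria labels below are invented placeholders,
# not the wording of the published quality criteria.
NEGATIVE_LIST = {"no_involvement_of_citizens", "pure_data_donation_without_feedback"}
MINIMUM_CRITERIA = {"named_contact_person", "data_handling_described", "results_reported_back"}

def eligible_for_listing(project_flags: set[str], criteria_met: set[str]) -> bool:
    """Apply the two-part scheme: exclusion by negative list, then minimum standards."""
    if project_flags & NEGATIVE_LIST:       # excluded: not citizen science per the negative list
        return False
    return MINIMUM_CRITERIA <= criteria_met  # listed only if all minimum criteria are fulfilled

print(eligible_for_listing(set(), MINIMUM_CRITERIA))                    # True: meets all minimum criteria
print(eligible_for_listing({"no_involvement_of_citizens"}, MINIMUM_CRITERIA))  # False: excluded
```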

