Human Computation
Latest Publications


TOTAL DOCUMENTS: 100 (five years: 33)

H-INDEX: 7 (five years: 2)

Published by: Thinksplash, LLC

ISSN: 2330-8001

2021 ◽  
Vol 8 (2) ◽  
pp. 15-32
Author(s):  
Jon Chamberlain ◽  
Benjamin Turpin ◽  
Maged Ali ◽  
Kakia Chatsiou ◽  
Kirsty O'Callaghan

The popularity and ubiquity of social networks have enabled a new form of decentralised online collaboration: groups of users gathering around a central theme and working together to solve problems, complete tasks and develop social connections. Groups that display such 'organic collaboration' have been shown to solve tasks faster and more accurately than other methods of crowdsourcing. They can also enable community action and resilience in response to different events, from casual requests to emergency response and crisis management. However, engaging such groups through formal agencies risks disconnect and disengagement by destabilising motivational structures. This paper explores case studies of this phenomenon, reviews models of motivation that can help design systems to harness these groups and proposes a framework for lightweight engagement using existing platforms and social networks.


2021 ◽  
Vol 8 (2) ◽  
pp. 5-14
Author(s):  
Marisa Ponti ◽  
Laure Kloetzer ◽  
Grant Miller ◽  
Frank O. Ostermann ◽  
Sven Schade

Responding to the continued and accelerating rise of Machine Learning (ML) in citizen science, we organized a discussion panel at the 3rd European Citizen Science 2020 Conference to initiate a dialogue on how citizen scientists interact and collaborate with algorithms. This brief summarizes a presentation about two Zooniverse projects which illustrate the impact that new developments in ML are having on citizen science projects involving visual inspection of large datasets. We also share the results of a poll to elicit opinions and ideas from the audience on two statements, one positive and one critical of using ML in citizen science. The discussion with the participants raised several issues that we grouped into four main themes: a) democracy and participation; b) skill-biased technological change; c) data ownership vs. public domain/digital commons; and d) transparency. All these issues warrant further research for those who are concerned about ML in citizen science.


2021 ◽  
Vol 8 (2) ◽  
pp. 33-53
Author(s):  
Manuel Portela

This paper assesses the use of conversational agents (chatbots) as an interface to enhance communication with participants in citizen science projects. After conducting a study of participants' engagement with and motivation to interact with chatbots, we analysed the results against the needs identified in the citizen science literature to assess the opportunities. We found that chatbots are effective communication platforms that can help engage participants through an all-in-one interface. Chatbots can benefit projects by reducing the need to develop a dedicated app, since they can be deployed across several platforms. Finally, we offer design suggestions to help citizen science practitioners incorporate such platforms into new projects. We encourage the development of more advanced interfaces by incorporating Machine Learning into several processes.
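As an illustration of the all-in-one interface idea described above, the sketch below shows a single platform-agnostic message handler that a citizen science project could expose through several messaging platforms instead of building a dedicated app. The commands, reply texts, and the `store` object are hypothetical examples, not taken from the paper.

```python
def handle_message(user_id: str, text: str, store) -> str:
    """Route one participant message to a simple project workflow.

    `store` is a hypothetical persistence object with a
    save_observation(user_id, observation) method.
    """
    text = text.strip().lower()
    if text.startswith("report "):
        # e.g. "report 3 starlings at the park"
        store.save_observation(user_id, text[len("report "):])
        return "Thanks! Your observation was recorded."
    if text == "tasks":
        return "Open tasks: classify 5 new images, verify 2 pending reports."
    if text == "help":
        return "Commands: 'report <observation>', 'tasks', 'help'."
    return "Sorry, I didn't understand that. Type 'help' to see the commands."
```

The same handler can then be wired to whatever webhook each messaging platform provides, which is what makes the chatbot cheaper to maintain than a standalone app.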


2021 ◽  
Vol 8 (2) ◽  
pp. 54-75
Author(s):  
Meredith S. Palmer ◽  
Sarah E. Huebner ◽  
Marco Willi ◽  
Lucy Fortson ◽  
Craig Packer

Camera traps - remote cameras that capture images of passing wildlife - have become a ubiquitous tool in ecology and conservation. Systematic camera trap surveys generate ‘Big Data’ across broad spatial and temporal scales, providing valuable information on environmental and anthropogenic factors affecting vulnerable wildlife populations. However, the sheer number of images amassed can quickly outpace researchers’ ability to manually extract data from these images (e.g., species identities, counts, and behaviors) in timeframes useful for making scientifically guided conservation and management decisions. Here, we present ‘Snapshot Safari’ as a case study for merging citizen science and machine learning to rapidly generate highly accurate ecological Big Data from camera trap surveys. Snapshot Safari is a collaborative cross-continental research and conservation effort with 1500+ cameras deployed at over 40 protected areas in eastern and southern Africa, generating millions of images per year. As one of the first and largest-scale camera trapping initiatives, Snapshot Safari spearheaded innovative developments in citizen science and machine learning. We highlight the advances made and discuss the issues that arose using each of these methods to annotate camera trap data. We end by describing how we combined human and machine classification methods (‘Crowd AI’) to create an efficient integrated data pipeline. Ultimately, by using a feedback loop in which humans validate machine learning predictions and machine learning algorithms are iteratively retrained on new human classifications, we can capitalize on the strengths of both methods of classification while mitigating the weaknesses. Using Crowd AI to quickly and accurately ‘unlock’ ecological Big Data for use in science and conservation is revolutionizing the way we take on critical environmental issues in the Anthropocene era.
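The feedback loop described above can be pictured as a simple human-in-the-loop routine: the model classifies a batch, uncertain images go to volunteers, and the validated labels feed the next round of training. The sketch below is a minimal illustration under assumed interfaces (`model.predict`, `crowd_classify`, `retrain`, and the confidence threshold are placeholders), not the Snapshot Safari pipeline itself.

```python
CONFIDENCE_THRESHOLD = 0.9  # below this, an image is escalated to volunteers

def crowd_ai_round(model, unlabeled_images, crowd_classify, retrain):
    """One Crowd-AI iteration: machine-classify a batch, route uncertain
    images to volunteers, then retrain on the newly validated labels."""
    accepted, needs_review = [], []
    for image in unlabeled_images:
        label, confidence = model.predict(image)   # ML prediction (assumed API)
        if confidence >= CONFIDENCE_THRESHOLD:
            accepted.append((image, label))        # trust the machine
        else:
            needs_review.append(image)             # escalate to humans

    # Volunteers validate or relabel the uncertain images.
    human_labels = [(img, crowd_classify(img)) for img in needs_review]

    # Retrain on the combined, human-validated data so the next survey
    # season starts from a better model.
    model = retrain(model, accepted + human_labels)
    return model, accepted + human_labels
```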


2021 ◽  
Vol 8 (2) ◽  
pp. 76-106
Author(s):  
Samreen Anjum ◽  
Ambika Verma ◽  
Brandon Dang ◽  
Danna Gurari

We investigate what, if any, benefits arise from employing hybrid algorithm-crowdsourcing approaches over conventional approaches of relying exclusively on algorithms or crowds to annotate images. We introduce a framework that enables users to investigate different hybrid workflows for three popular image analysis tasks: image classification, object detection, and image captioning. Three hybrid approaches are included that are based on having workers: (i) verify predicted labels, (ii) correct predicted labels, and (iii) annotate images for which algorithms have low confidence in their predictions. Deep learning algorithms are employed in these workflows since they offer high performance for image annotation tasks. Each workflow is evaluated with respect to annotation quality and worker time to completion on images coming from three diverse datasets (i.e., VOC, MSCOCO, VizWiz). Inspired by our findings, we offer recommendations regarding when and how to employ deep learning with crowdsourcing to achieve desired quality and efficiency for image annotation.
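Below is a minimal sketch of the three hybrid workflows named in the abstract, routing each image between an algorithm and crowd workers. The helper callables (`algorithm_predict`, `worker_verify`, `worker_correct`, `worker_annotate`) and the default confidence threshold are hypothetical placeholders, not the authors' framework.

```python
def hybrid_annotate(image, workflow, algorithm_predict,
                    worker_verify, worker_correct, worker_annotate,
                    confidence_threshold=0.5):
    """Annotate one image with one of the three hybrid workflows."""
    label, confidence = algorithm_predict(image)   # deep learning prediction

    if workflow == "verify":
        # (i) A worker accepts or rejects the predicted label; rejected
        # images fall back to full human annotation.
        return label if worker_verify(image, label) else worker_annotate(image)

    if workflow == "correct":
        # (ii) A worker edits the predicted label directly.
        return worker_correct(image, label)

    if workflow == "low_confidence":
        # (iii) Only low-confidence predictions are sent to workers.
        return label if confidence >= confidence_threshold else worker_annotate(image)

    raise ValueError(f"unknown workflow: {workflow}")
```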


2021 ◽  
Vol 8 (2) ◽  
Author(s):  
Frank O. Ostermann ◽  
Laure Kloetzer ◽  
Marisa Ponti ◽  
Sven Schade

This special issue editorial of Human Computation on the topic "Crowd AI for Good" motivates explorations at the intersection of artificial intelligence and citizen science, and introduces a set of papers that exemplify related community activities and new directions in the field.


2021 ◽  
Vol 8 ◽  
pp. 43-75
Author(s):  
Anoush Margaryan

This paper reports outcomes of a systematic scoping review of methodological approaches and analytical lenses used in empirical research on crowdwork. Over the past decade, a growing corpus of publications spanning the Social Sciences and Computer Science/HCI has empirically examined the nature of work practices and tasks within crowdwork; surfaced key individual and environmental factors underpinning workers’ decisions to engage in this form of work; developed and implemented tools to improve and extend various aspects of crowdwork, such as the design and allocation of tasks and incentives or workflows within the platforms; and contributed new techniques and know-how on data collection within crowdwork, for example, how to conduct large-scale surveys and experiments in behavioural psychology, economics or education drawing on crowdworker samples. Our initial reading of the crowdwork literature suggested that research had relied on a limited set of relatively narrow methodological approaches, mostly online experiments, surveys and interviews. Importantly, crowdwork research has tended to examine workers’ experiences as snapshots in time rather than studying these longitudinally or contextualising them historically, environmentally and developmentally. This piecemeal approach has given the research community initial descriptions and interpretations of crowdwork practices and provided an important starting point in a nascent field of study. However, the depth of research in the various areas, and the missing pieces, have yet to be systematically scoped out. Therefore, this paper systematically reviews the analytical-methodological approaches used in crowdwork research, identifying gaps in these approaches. We argue that to take crowdwork research to the next level it is essential to examine crowdwork practices within the context of both individual and historical-environmental factors impacting them. To this end, methodological approaches that bridge sociological, psychological, individual, collective, online, offline, and temporal processes and practices of crowdwork are needed. The paper proposes the Life Course perspective as an interdisciplinary framework that can help address these gaps and advance research on crowdwork. The paper concludes by proposing a set of Life Course-inspired research questions to guide future studies of crowdwork.


2021 ◽  
Vol 8 ◽  
pp. 25-42
Author(s):  
Lea Shanley ◽  
Pietro Michelucci ◽  
Krystal Tsosie ◽  
George Wyeth ◽  
Julia Kumari Drapkin ◽  
...  

This guest editorial briefly describes a history of activities related to engaging the U.S. federal government in citizen science, and presents the public comments that we submitted to the U.S. National Oceanic and Atmospheric Administration (NOAA) in response to its recently published draft citizen science strategy.


2021 ◽  
Vol 8 ◽  
Author(s):  
Masaki Kobayashi ◽  
Hiromi Morita ◽  
Masaki Matsubara ◽  
Nobuyuki Shimizu ◽  
Atsuyuki Morishima

Self-correction for crowdsourced tasks is a two-stage setting that allows a crowd worker to review the task results of other workers; the worker is then given a chance to update their results according to the review. Self-correction was proposed as a complementary approach to statistical algorithms, in which workers independently perform the same task. It can provide higher-quality results at low additional cost. However, thus far, the effects have only been demonstrated in simulations, and empirical evaluations are required. In addition, as self-correction provides feedback to workers, an interesting question arises: whether perceptual learning is observed in self-correction tasks. This paper reports our experimental results on self-correction with a real-world crowdsourcing service. We found that: (1) self-correction is effective for making workers reconsider their judgments; (2) self-correction is more effective if workers are shown the task results of higher-quality workers during the second stage; (3) a perceptual learning effect is observed in some cases, so self-correction can provide feedback that shows workers how to provide high-quality answers in future tasks; and (4) a perceptual learning effect is observed particularly with workers who moderately change their answers in the second stage, which suggests that we can measure the learning potential of workers. These findings imply that requesters and crowdsourcing services can construct a positive loop for improved task results through the self-correction approach. However, (5) no long-term effects of the self-correction task were transferred to other similar tasks in two different settings.
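The two-stage setting can be summarized in a few lines: the worker answers independently, then reviews other workers' results and may revise. The sketch below is an illustrative outline under assumed interfaces (`worker.answer`, `worker.revise`, a per-peer quality estimate, and the 0.8 cutoff are placeholders), not the authors' experimental platform.

```python
def self_correction(task, worker, peer_results, quality_cutoff=0.8):
    """Run one task through the two-stage self-correction setting.

    peer_results: list of (answer, estimated_quality) pairs from other
    workers who already completed the same task independently.
    """
    initial = worker.answer(task)                 # stage 1: independent judgment

    # Stage 2: show other workers' answers. Per finding (2), restricting the
    # review to higher-quality workers' results is more effective; the 0.8
    # cutoff here is purely illustrative.
    shown = [answer for answer, quality in peer_results if quality >= quality_cutoff]
    final = worker.revise(task, initial, shown)   # the worker may keep or update
    return initial, final
```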

