Human Computation
Recently Published Documents


TOTAL DOCUMENTS: 272 (five years: 28)
H-INDEX: 21 (five years: 1)

2021
Author(s): Pratheep Kumar Paranthaman, Anurag Sarkar, Seth Cooper

2021, Vol 8 (2)
Author(s): Frank O. Ostermann, Laure Kloetzer, Marisa Ponti, Sven Schade

This special issue editorial of Human Computation on the topic "Crowd AI for Good" motivates explorations at the intersection of artificial intelligence and citizen science, and introduces a set of papers that exemplify related community activities and new directions in the field.


2021, Vol 12 (1)
Author(s): Stefan Wojcik, Avleen S. Bijral, Richard Johnston, Juan M. Lavista Ferres, Gary King, ...

Abstract: While digital trace data from sources like search engines hold enormous potential for tracking and understanding human behavior, these streams of data lack information about the actual experiences of those individuals generating the data. Moreover, most current methods ignore or under-utilize human processing capabilities that allow humans to solve problems not yet solvable by computers (human computation). We demonstrate how behavioral research, linking digital and real-world behavior, along with human computation, can be utilized to improve the performance of studies using digital data streams. This study looks at the use of search data to track prevalence of Influenza-Like Illness (ILI). We build a behavioral model of flu search based on survey data linked to users' online browsing data. We then utilize human computation for classifying search strings. Leveraging these resources, we construct a tracking model of ILI prevalence that outperforms strong historical benchmarks using only a limited stream of search data and lends itself to tracking ILI in smaller geographic units. While this paper only addresses searches related to ILI, the method we describe has potential for tracking a broad set of phenomena in near real-time.
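To make the described pipeline concrete, here is a minimal sketch under stated assumptions: crowd workers vote on which search strings indicate illness, the labeled queries yield a weekly search signal, and a regularized regression maps that signal to ILI prevalence. All names (label_votes, weekly_counts, ili_history) and all values are illustrative toy data, not from the paper.

```python
# Illustrative sketch of the three steps; toy data throughout.
from collections import Counter
import numpy as np
from sklearn.linear_model import Ridge

def majority_label(votes):
    """Aggregate crowd votes ('flu'/'not_flu') for one search string."""
    return Counter(votes).most_common(1)[0][0]

# Step 1: human computation -- classify search strings by majority vote.
label_votes = {
    "flu symptoms":       ["flu", "flu", "not_flu"],
    "fever and chills":   ["flu", "flu", "flu"],
    "flu shot locations": ["not_flu", "not_flu", "flu"],  # awareness, not illness
}
flu_queries = {q for q, v in label_votes.items() if majority_label(v) == "flu"}

# Step 2: weekly signal -- the share of searches that are flu-related.
weekly_counts = [
    {"flu symptoms": 120, "fever and chills": 40, "flu shot locations": 300},
    {"flu symptoms": 400, "fever and chills": 180, "flu shot locations": 250},
]
signal = np.array([
    sum(c for q, c in wk.items() if q in flu_queries) / sum(wk.values())
    for wk in weekly_counts
]).reshape(-1, 1)

# Step 3: regress historical ILI prevalence on the signal (ridge penalty
# hedges against overfitting a short series), then nowcast the latest week.
ili_history = np.array([0.018, 0.041])  # toy ILI rates
model = Ridge(alpha=1.0).fit(signal, ili_history)
print("nowcast:", model.predict(signal[-1:]))
```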


Author(s): Marcello N. Amorim, Celso A. S. Santos, Orivaldo L. Tavares

Video annotation is an activity that supplements a multimedia object with additional content or information about its context, nature, content, quality, and other aspects. These annotations are the basis for building a variety of multimedia applications, with purposes ranging from entertainment to security. Manual annotation is a strategy that uses the intelligence and workforce of people in the annotation process and is an alternative for cases where automatic methods cannot be applied. However, manual video annotation can be a costly process: as the content to be annotated grows, so does the annotation workload. Crowdsourcing appears as a viable strategy in this context because it relies on outsourcing the tasks to a multitude of workers, who perform specific parts of the work in a distributed way. However, as the complexity of the required media annotations increases, it becomes necessary to employ workers who are skilled, or willing to perform larger, more complicated, and more time-consuming tasks. This makes crowdsourcing challenging to use, as experts demand higher pay and recruiting them tends to be difficult. To overcome this problem, strategies have emerged that decompose the main problem into a set of simpler subtasks suitable for crowdsourcing. These smaller tasks are organized in a workflow so that the execution process can be formalized and controlled. In this sense, this thesis presents a new framework that allows the use of crowdsourcing to create applications that require complex video annotation tasks. The developed framework covers the whole process, from the definition of the problem and the decomposition of the tasks to the construction, execution, and management of the workflow. This framework, called CrowdWaterfall, builds on the strengths of current proposals, incorporating new concepts, techniques, and resources to overcome some of their limitations.
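As an illustration of the decomposition idea (this is not the CrowdWaterfall API, which the abstract does not specify), the sketch below models a complex annotation job as an ordered workflow of microtasks, each simple enough for an untrained worker. Worker interaction is stubbed out with plain functions; the task names are hypothetical.

```python
# Hypothetical sketch: a complex video-annotation job as a workflow of
# simple microtasks, where each stage consumes the previous stage's output.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Microtask:
    name: str
    instructions: str
    run: Callable[[dict], dict]  # stand-in for posting to a crowd platform

@dataclass
class Workflow:
    tasks: list[Microtask] = field(default_factory=list)

    def execute(self, context: dict) -> dict:
        # No single worker needs to understand the whole annotation job;
        # each stage only sees the context produced so far.
        for task in self.tasks:
            context = task.run(context)
        return context

# "Annotate all goals in a soccer match" split into three simple stages.
wf = Workflow([
    Microtask("segment", "Mark scene boundaries",
              lambda c: {**c, "segments": [(0, 30), (30, 95)]}),
    Microtask("filter", "Flag segments that contain a goal",
              lambda c: {**c, "goals": [c["segments"][1]]}),
    Microtask("describe", "Write a one-line caption per goal segment",
              lambda c: {**c, "captions": ["Header from a corner kick"]}),
])
print(wf.execute({"video": "match.mp4"}))
```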


Author(s): Jiyi Li, Yasushi Kawase, Yukino Baba, Hisashi Kashima

Quality assurance is one of the most important problems in crowdsourcing and human computation, and it has been studied extensively from various perspectives. Typical approaches include unsupervised ones, such as introducing task redundancy (i.e., asking the same question to multiple workers and aggregating their answers), and supervised ones, such as using worker performance on past tasks or injecting qualification questions into tasks in order to estimate worker performance. In this paper, we propose to utilize worker performance as a global constraint for inferring the true answers; existing semi-supervised approaches do not consider such use of qualification questions. We also propose to utilize the constraint as a regularizer combined with existing statistical aggregation methods. Experiments using heterogeneous multiple-choice questions demonstrate that the performance constraint not only has the power to estimate the ground truths when used by itself, but also boosts the existing aggregation methods when used as a regularizer.
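A minimal sketch of this idea, assuming the simplest possible setting (not the authors' exact estimator): worker accuracy measured on qualification questions initializes log-odds vote weights, and after each round of weighted voting the re-estimated accuracy is shrunk back toward the qualification accuracy by a mixing coefficient lam, which plays the role of the regularizer. All names and data below are illustrative.

```python
# Illustrative constrained aggregation for multiple-choice crowd answers.
import math
from collections import defaultdict

def aggregate(answers, qual_acc, n_iter=5, lam=0.5):
    """answers: {(worker, item): label}; qual_acc: {worker: accuracy}."""
    workers = {w for w, _ in answers}
    items = {i for _, i in answers}
    acc = dict(qual_acc)  # initialize from qualification questions

    for _ in range(n_iter):
        # Weighted vote per item; weight log(p/(1-p)) is the standard
        # weight for a worker of accuracy p in weighted majority voting.
        truth = {}
        for i in items:
            scores = defaultdict(float)
            for (w, j), label in answers.items():
                if j == i:
                    p = min(max(acc[w], 1e-3), 1 - 1e-3)
                    scores[label] += math.log(p / (1 - p))
            truth[i] = max(scores, key=scores.get)
        # Re-estimate each worker's accuracy against the inferred truths,
        # then pull it toward the qualification accuracy (the regularizer).
        for w in workers:
            done = [(j, lab) for (u, j), lab in answers.items() if u == w]
            emp = sum(lab == truth[j] for j, lab in done) / len(done)
            acc[w] = lam * qual_acc[w] + (1 - lam) * emp
    return truth, acc

votes = {("w1", "q1"): "A", ("w2", "q1"): "B", ("w3", "q1"): "A",
         ("w1", "q2"): "C", ("w2", "q2"): "C", ("w3", "q2"): "D"}
truth, acc = aggregate(votes, {"w1": 0.9, "w2": 0.6, "w3": 0.7})
print(truth)  # {'q1': 'A', 'q2': 'C'}
```

Setting lam = 1 trusts the qualification questions completely (a hard constraint), while lam = 0 recovers a plain self-consistent weighted vote.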

