Beyond Micro-Tasks

2019
pp. 1403-1428
Author(s): Roman Lukyanenko, Jeffrey Parsons

The emergence of crowdsourcing as an important mode of information production has attracted increasing research attention. In this article, the authors review crowdsourcing research in the data management field. Most research in this domain can be termed task-based, focusing on micro-tasks that exploit scale and redundancy in crowds. The authors' review points to another important type of crowdsourcing – which they term observational – that can expand the scope of extant crowdsourcing data management research. Observational crowdsourcing consists of projects that harness human sensory ability to support long-term data acquisition. The authors consider the challenges in this domain, review approaches to data management for crowdsourcing, and suggest directions for future research that bridge the gaps between the two research streams.
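The redundancy that the abstract says micro-task crowdsourcing exploits can be illustrated with the simplest aggregation rule, majority voting over repeated judgments. The sketch below is illustrative only; the function name and labels are assumptions, not taken from the article:

```python
from collections import Counter

def majority_vote(worker_labels):
    """Aggregate redundant micro-task judgments by taking the
    most common label across workers (illustrative sketch)."""
    counts = Counter(worker_labels)
    label, _ = counts.most_common(1)[0]
    return label

# Three workers label the same item; redundancy absorbs one error:
print(majority_vote(["cat", "cat", "dog"]))  # prints "cat"
```

Scaling the number of redundant judgments per item is the basic lever task-based designs use to trade cost for data quality.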

2018
Vol 29 (1)
pp. 1-22
Author(s): Roman Lukyanenko, Jeffrey Parsons



Crowdsourcing
2019
pp. 1510-1535
Author(s): Roman Lukyanenko, Jeffrey Parsons



2010
Vol 365 (1555)
pp. 3177-3186
Author(s): Abraham J. Miller-Rushing, Toke Thomas Høye, David W. Inouye, Eric Post

Climate change is altering the phenology of species across the world, but what are the consequences of these phenological changes for the demography and population dynamics of species? Time-sensitive relationships, such as migration, breeding and predation, may be disrupted or altered, which may in turn alter the rates of reproduction and survival, leading some populations to decline and others to increase in abundance. However, finding evidence for disrupted relationships, or lack thereof, and their demographic effects, is difficult because the necessary detailed observational data are rare. Moreover, we do not know how sensitive species will generally be to phenological mismatches when they occur. Existing long-term studies provide preliminary data for analysing the phenology and demography of species in several locations. In many instances, though, observational protocols may need to be optimized to characterize timing-based multi-trophic interactions. As a basis for future research, we outline some of the key questions and approaches to improving our understanding of the relationships among phenology, demography and climate in a multi-trophic context. There are many challenges associated with this line of research, not the least of which is the need for detailed, long-term data on many organisms in a single system. However, we identify key questions that can be addressed with data that already exist and propose approaches that could guide future research.


2021
Vol 9 (1)
pp. 81-85
Author(s): Eric Kansa, Sarah Whitcher Kansa

OVERVIEW
Digital data play an increasingly important role in how we understand the present and the past. The challenges inherent in understanding and using digital data are as intellectually demanding as any other archaeological research endeavor. For these reasons, data management cannot be regarded as a simple compliance or technical issue. For data to be meaningfully preserved and used in intellectually rigorous ways, they need to be integrated fully into all aspects of archaeological practice, including ethics, teaching, and publishing. In this review, we highlight some of the significant and multifaceted challenges involved in managing data, including documentation, training, methodology, data modeling, trust, and ethical concerns. We then focus on the importance of building data literacy broadly among archaeologists so that we can manage and communicate the data our discipline creates. This involves more than learning to use a new tool or finding a data manager for one's excavation or survey. Long-term, responsible stewardship of data requires understanding the workflows and human roles in data management. Putting effort now into thoughtful data management and broad data-literacy training means we will be able to make the most of the "bigger" data that archaeologists now produce. An important aspect of this reorientation will be to look beyond the boundaries of our own research projects and information systems. Future research, teaching, and public engagement needs will also compel us to explore how our data articulate with wider contexts—within and beyond our discipline.


Shore & Beach
2020
pp. 17-22
Author(s): Kathryn Keating, Melissa Gloekler, Nancy Kinner, Sharon Mesick, Michael Peccini, ...

This paper presents a summary of collaborative work, lessons learned, and suggestions for next steps in coordinating long-term data management in the Gulf of Mexico in the years following the Deepwater Horizon oil spill (DWH). A decade of increased research and monitoring following the DWH has yielded a vast amount of diverse data collected from response and assessment efforts as well as ongoing restoration efforts. To maximize the benefits of these data through proper management and coordination, a cross-agency and organization Long-Term Data Management (LTDM) working group was established in 2017 with sponsorship from NOAA's Office of Response and Restoration (OR&R) and NOAA's National Marine Fisheries Service Restoration Center (NMFS RC) and facilitated by the University of New Hampshire's Coastal Response Research Center. This paper describes the LTDM working group's efforts to foster collaboration, data sharing, and best data management practices among the many state, federal, academic, and non-governmental entities working to restore and improve the coastal environment in the Gulf following the DWH. Through collaborative workshops and working groups, participants have helped to characterize region-specific challenges, identify areas for growth, leverage existing connections, and develop recommended actions for stakeholders at all organizational levels who share an interest in data coordination and management activities.


2012
Vol 131 (4)
pp. 3224-3224
Author(s): Yuichi Yonemoto, Masaharu Ohya, Hiroyuki Imaizumi, Kazutoshi Fujimoto, Ken Anai, ...

2016
Vol 37 (4)
pp. 377-390
Author(s): Christopher Jackson, John Stevens, Shijie Ren, Nick Latimer, Laura Bojke, ...

This article describes methods used to estimate parameters governing long-term survival, or times to other events, for health economic models. Specifically, the focus is on methods that combine shorter-term individual-level survival data from randomized trials with longer-term external data, thus using the longer-term data to aid extrapolation of the short-term data. This requires assumptions about how trends in survival for each treatment arm will continue after the follow-up period of the trial. Furthermore, using external data requires assumptions about how survival differs between the populations represented by the trial and external data. Study reports from a national health technology assessment program in the United Kingdom were searched, and the findings were combined with “pearl-growing” searches of the academic literature. We categorized the methods that have been used according to the assumptions they made about how the hazards of death vary between the external and internal data and through time, and we discuss the appropriateness of the assumptions in different circumstances. Modeling choices, parameter estimation, and characterization of uncertainty are discussed, and some suggestions for future research priorities in this area are given.
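One simple member of the family of methods the abstract categorizes is to apply the hazard estimated from trial follow-up until the end of the trial, then switch to the hazard implied by external data. The piecewise-exponential sketch below is a hedged illustration: the hazard values, cutoff, and function name are made-up, and in practice both hazards would be fitted from data rather than fixed:

```python
import math

def extrapolated_survival(t, trial_hazard, external_hazard, cutoff):
    """Piecewise-exponential survival: the trial hazard applies up to
    the end of follow-up (`cutoff`); afterwards survival decays at the
    hazard implied by external data (illustrative, not the article's model)."""
    if t <= cutoff:
        return math.exp(-trial_hazard * t)
    within_trial = math.exp(-trial_hazard * cutoff)
    return within_trial * math.exp(-external_hazard * (t - cutoff))

# 3 years of trial follow-up at hazard 0.10/yr, then external hazard 0.05/yr:
print(round(extrapolated_survival(10.0, 0.10, 0.05, 3.0), 3))  # prints 0.522
```

The assumption doing the work here is exactly the kind the article scrutinizes: that the external population's hazard is applicable to the trial population once follow-up ends.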

