Consistent Bayesian Aggregation

1995 ◽  
Vol 66 (2) ◽  
pp. 313-351 ◽  
Author(s):  
Philippe Mongin

Author(s):  
Matteo Venanzi ◽  
John Guiver ◽  
Gabriella Kazai ◽  
Pushmeet Kohli ◽  
Milad Shokouhi

2014 ◽  
Vol 109 (507) ◽  
pp. 1023-1039 ◽  
Author(s):  
Ke Deng ◽  
Simeng Han ◽  
Kate J. Li ◽  
Jun S. Liu

2019 ◽  
Vol 15 (S341) ◽  
pp. 99-103 ◽  
Author(s):  
Hugh Dickinson ◽  
Lucy Fortson ◽  
Claudia Scarlata ◽  
Melanie Beck ◽  
Mike Walmsley

Abstract: LSST and Euclid must address the daunting challenge of analyzing the unprecedented volumes of imaging and spectroscopic data that these next-generation instruments will generate. A promising approach to overcoming this challenge involves rapid, automatic image processing using appropriately trained Deep Learning (DL) algorithms. However, reliable application of DL requires large, accurately labeled samples of training data. Galaxy Zoo Express (GZX) is a recent experiment that simulated the use of Bayesian inference to dynamically aggregate, in real time, binary responses provided by citizen scientists via the Zooniverse crowd-sourcing platform. The GZX approach enables collaboration between human and machine classifiers and provides rapidly generated, reliably labeled datasets, thereby enabling online training of accurate machine classifiers. We present selected results from GZX and show how the Bayesian aggregation engine it uses can be extended to efficiently provide object-localization and bounding-box annotations of two-dimensional data with quantified reliability. DL algorithms that are trained using these annotations will facilitate numerous panchromatic data modeling tasks, including morphological classification and substructure detection in direct imaging, as well as decontamination and emission-line identification for slitless spectroscopy. Effectively combining the speed of modern computational analyses with the human capacity to extrapolate from few examples will be critical if the potential of forthcoming large-scale surveys is to be realized.
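The abstract does not spell out the aggregation engine, but its core step is a standard Bayesian update of a subject's label posterior given each volunteer's response and an estimate of that volunteer's reliability. Below is a minimal Python sketch of that update for the binary ("smooth" vs. "not smooth") case; the function name, the confusion-matrix reliabilities, and the prior are illustrative assumptions, not the published GZX parameters.

```python
# Minimal sketch of sequential Bayesian aggregation of binary crowd
# responses, in the spirit of the GZX approach described above. All
# reliability numbers below are illustrative assumptions.

def update_posterior(prior_smooth, votes):
    """Update P(label = 'smooth') after a sequence of volunteer votes.

    votes: iterable of (response, p_correct_smooth, p_correct_not), where
    the two probabilities form the volunteer's 2x2 confusion matrix:
    P(says 'smooth' | truly smooth) and P(says 'not' | truly not).
    """
    p = prior_smooth
    for response, p_cs, p_cn in votes:
        if response == "smooth":
            like_smooth, like_not = p_cs, 1.0 - p_cn
        else:
            like_smooth, like_not = 1.0 - p_cs, p_cn
        # Bayes' rule, assuming votes are conditionally independent
        # given the true label.
        num = like_smooth * p
        p = num / (num + like_not * (1.0 - p))
    return p

# Three volunteers of varying (assumed) reliability classify one galaxy.
votes = [("smooth", 0.90, 0.80), ("smooth", 0.60, 0.55), ("not", 0.70, 0.75)]
posterior = update_posterior(prior_smooth=0.3, votes=votes)
print(f"P(smooth | votes) = {posterior:.3f}")
```

In such a scheme a subject is retired, and added to the machine classifier's training set, once its posterior crosses a confidence threshold; this is what makes the aggregation dynamic, since volunteer effort is redirected to subjects that remain uncertain.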


2016 ◽  
Vol 119 ◽  
pp. 170-180 ◽  
Author(s):  
Philip Ernst ◽  
Robin Pemantle ◽  
Ville Satopää ◽  
Lyle Ungar

2016 ◽  
Vol 57 ◽  
pp. 195-206 ◽  
Author(s):  
Prateek Tandon ◽  
Peter Huggins ◽  
Rob Maclachlan ◽  
Artur Dubrawski ◽  
Karl Nelson ◽  
...  

1975 ◽  
Vol 8 (6) ◽  
pp. 365-372 ◽  
Author(s):  
James A. DeRuiter ◽  
William R. Ferrell ◽  
Corrine E. Kass

1971 ◽  
Vol 90 (2) ◽  
pp. 300-305 ◽  
Author(s):  
Stuart M. Keeley ◽  
Michael E. Doherty

Author(s):  
Alexandry Augustin ◽  
Matteo Venanzi ◽  
Alex Rogers ◽  
Nicholas R. Jennings

A key problem in crowdsourcing is the aggregation of judgments of proportions. For example, workers might be presented with a news article or an image and asked to identify the proportion of each topic, sentiment, object, or colour present in it. These varying judgments then need to be aggregated to form a consensus view of the document's or image's contents. Often, however, these judgments are skewed by workers who provide judgments randomly. Such spammers make the cost of acquiring judgments more expensive and degrade the accuracy of the aggregation. For such cases, we provide a new Bayesian framework for aggregating these responses (expressed in the form of categorical distributions) that for the first time accounts for spammers. We elicit 796 judgments about proportions of objects and colours in images. Experimental results show that, even when 60% of the workers are spammers, our framework achieves aggregation accuracy comparable to that which state-of-the-art approaches achieve when there are no spammers.
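One way to realize spammer-aware aggregation of categorical-distribution judgments is a two-component mixture: reliable workers are modelled as Dirichlet draws concentrated around the consensus, spammers as draws from the uniform Dirichlet, and the two estimates are refined alternately, EM-style. The sketch below follows that idea in outline only; the model, the function name, and the hyperparameters (spam_prior, kappa) are assumptions for illustration, not the authors' published framework.

```python
import math
import numpy as np
from scipy.stats import dirichlet

def aggregate(judgments, spam_prior=0.2, kappa=50.0, n_iters=20):
    """Aggregate proportion judgments, down-weighting likely spammers.

    judgments: (n_workers, n_categories) array; each row is one worker's
    categorical distribution over topics/objects/colours.
    Returns the consensus distribution and per-worker reliability weights.
    """
    n_workers, k = judgments.shape
    judgments = np.clip(judgments, 1e-6, None)
    judgments /= judgments.sum(axis=1, keepdims=True)  # keep rows on simplex

    consensus = judgments.mean(axis=0)  # naive initial consensus
    spam_like = math.gamma(k)           # uniform-Dirichlet density (constant)

    for _ in range(n_iters):
        # E-step: likelihood of each judgment under the "reliable" model
        # (Dirichlet concentrated around the consensus, concentration kappa)
        # vs. the spammer model (uniform over the simplex).
        alpha = kappa * consensus + 1e-3
        rel_like = np.array([dirichlet.pdf(j, alpha) for j in judgments])
        w = rel_like * (1.0 - spam_prior) / (
            rel_like * (1.0 - spam_prior) + spam_like * spam_prior)

        # M-step: re-estimate the consensus from reliability-weighted rows.
        consensus = (w[:, None] * judgments).sum(axis=0) / w.sum()

    return consensus, w

# Three broadly agreeing workers and one near-uniform (likely spam) worker
# judging colour proportions in an image.
J = np.array([[0.70, 0.20, 0.10],
              [0.65, 0.25, 0.10],
              [0.72, 0.18, 0.10],
              [0.33, 0.33, 0.34]])
consensus, weights = aggregate(J)
print(consensus, weights)
```

The near-uniform row receives a low reliability weight because its likelihood under the consensus-concentrated Dirichlet is small relative to the constant uniform density, so it contributes little to the final consensus.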

