Crowd Label Aggregation Under a Belief Function Framework

Author(s):  
Lina Abassi ◽  
Imen Boukhris
Author(s):  
Jianping Fan ◽  
Jing Wang ◽  
Meiqin Wu

The two-dimensional belief function (TDBF = (mA, mB)) uses an ordered pair of basic probability assignments to describe and process uncertain information. Here mB encodes the support degree, non-support degree, and unmeasured-reliability degree of mA, which makes it richer and more reasonable than a traditional discount coefficient for expressing expert evaluations. However, relying on an expert's assessment alone is one-sided; the mutual influence between the belief functions themselves must also be considered. A divergence measure between belief functions quantifies their difference, and from it the support degree, non-support degree, and unmeasured-reliability degree of each piece of evidence can be computed. Building on this divergence measure, this paper proposes an extended two-dimensional belief function that resolves certain evidence-conflict problems, is more objective, and handles a class of problems that the original TDBF cannot. Finally, numerical examples illustrate its effectiveness and rationality.
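The abstract above builds on Dempster-Shafer theory, where evidence is expressed as mass functions (basic probability assignments) and fused with Dempster's rule of combination; the conflict coefficient K in that rule is the simplest measure of disagreement between two pieces of evidence. The paper's specific TDBF construction and divergence measure are not reproduced here; the following is only a minimal sketch of the standard combination rule and conflict coefficient that such extensions build on, with an illustrative two-expert example.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule; return (combined mass, conflict K)."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    # Normalize by (1 - K), redistributing the conflicting mass
    return {s: w / (1.0 - conflict) for s, w in combined.items()}, conflict

# Two experts' evidence over the frame {a, b} (illustrative numbers)
m1 = {frozenset("a"): 0.6, frozenset("ab"): 0.4}
m2 = {frozenset("a"): 0.5, frozenset("b"): 0.3, frozenset("ab"): 0.2}
m12, K = dempster_combine(m1, m2)
```

Here K = 0.18, so the experts conflict mildly and the combined mass concentrates on {a}; a K close to 1 signals the kind of high-conflict situation that motivates divergence-based refinements like the one described above.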


SpringerPlus ◽  
2016 ◽  
Vol 5 (1) ◽  
Author(s):  
Kaijuan Yuan ◽  
Fuyuan Xiao ◽  
Liguo Fei ◽  
Bingyi Kang ◽  
Yong Deng

Mathematics ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. 875
Author(s):  
Jesus Cerquides ◽  
Mehmet Oğuz Mülâyim ◽  
Jerónimo Hernández-González ◽  
Amudha Ravi Shankar ◽  
Jose Luis Fernandez-Marquez

Over the last decade, hundreds of thousands of volunteers have contributed to science by collecting or analyzing data. This public participation in science, also known as citizen science, has contributed to significant discoveries and led to publications in major scientific journals. However, little attention has been paid to data quality issues. In this work we argue that being able to determine the accuracy of data obtained by crowdsourcing is a fundamental question and we point out that, for many real-life scenarios, mathematical tools and processes for the evaluation of data quality are missing. We propose a probabilistic methodology for the evaluation of the accuracy of labeling data obtained by crowdsourcing in citizen science. The methodology builds on an abstract probabilistic graphical model formalism, which is shown to generalize some already existing label aggregation models. We show how to make practical use of the methodology through a comparison of data obtained from different citizen science communities analyzing the earthquake that took place in Albania in 2019.
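The methodology described above generalizes existing label aggregation models within a probabilistic graphical model formalism. The paper's actual model is not reproduced here; the sketch below only shows the simplest probabilistic aggregator such formalisms subsume: a posterior over a binary true label given independent annotator votes with a single assumed accuracy. The `accuracy` and `prior` values are illustrative assumptions, not figures from the paper.

```python
def label_posterior(votes, accuracy=0.7, prior=0.5):
    """Posterior probability that the true binary label is 1, given
    independent votes (0/1) from annotators who are each assumed to
    report the true label with probability `accuracy`.
    Both `accuracy` and `prior` are illustrative, not from the paper."""
    k = sum(votes)  # number of votes for label 1
    n = len(votes)
    like1 = prior * accuracy**k * (1 - accuracy)**(n - k)
    like0 = (1 - prior) * (1 - accuracy)**k * accuracy**(n - k)
    return like1 / (like1 + like0)

# Three volunteers label one photo of earthquake damage: two say 1, one says 0.
p = label_posterior([1, 1, 0])
```

With a common accuracy this reduces to a soft majority vote; richer models in this family (e.g. Dawid-Skene-style aggregators) instead learn a per-annotator confusion matrix, which is the kind of structure the abstract's graphical-model formalism generalizes.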

