A Framework for Parallel Assessment of Reputation Management Systems

2021 ◽  
Vol 37 (1-4) ◽  
pp. 1-30
Author(s):  
Vincenzo Agate ◽  
Alessandra De Paola ◽  
Salvatore Gaglio ◽  
Giuseppe Lo Re ◽  
Marco Morana

Multi-agent distributed systems are characterized by autonomous entities that interact with each other to provide and/or request different kinds of services. In several contexts, especially when a reward is offered according to the quality of service, individual agents (or coordinated groups) may act in a selfish way. To prevent such behaviours, distributed Reputation Management Systems (RMSs) provide every agent with the capability of computing the reputation of the others according to direct past interactions, as well as indirect opinions reported by its neighbourhood. This reliance on gossiped information, however, is a weak point that makes RMSs vulnerable to malicious agents intent on disseminating false reputation values. Given the variety of application scenarios in which RMSs can be adopted, as well as the multitude of behaviours that agents can implement, designers need evaluation tools that allow them to predict the robustness of an RMS to security attacks before its actual deployment. To this aim, we present a simulation tool for the vulnerability evaluation of RMSs and illustrate three case studies in which it was effectively used to model and assess state-of-the-art RMSs.
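
To make the abstract's reputation mechanism concrete, the following Python fragment is a minimal sketch of a generic distributed RMS update, assuming a simple weighted-average scheme: direct experience weighted by a parameter alpha, and gossiped opinions discounted by the reporter's own reputation. All names and parameters here are illustrative assumptions, not the specific models evaluated by the framework.

```python
# Minimal sketch of a generic reputation update in a distributed RMS.
# The weighting scheme and parameter names (alpha, reporter-discounted
# gossip) are illustrative assumptions, not the paper's actual model.

class Agent:
    def __init__(self, agent_id, alpha=0.7):
        self.agent_id = agent_id
        self.alpha = alpha       # weight of direct experience vs. gossip
        self.direct = {}         # target_id -> running outcome average in [0, 1]
        self.reputation = {}     # target_id -> current reputation estimate

    def record_interaction(self, target, outcome):
        """Blend a new direct outcome (in [0, 1]) into the running average."""
        old = self.direct.get(target, outcome)
        self.direct[target] = 0.5 * old + 0.5 * outcome

    def update_reputation(self, target, neighbour_opinions):
        """Combine direct experience with opinions gossiped by neighbours.

        neighbour_opinions: list of (neighbour_id, reported_value) pairs.
        Each report is discounted by the reputation of its reporter,
        which limits, but does not eliminate, the impact of false reports.
        """
        own = self.direct.get(target, 0.5)   # 0.5 = neutral prior
        if not neighbour_opinions:
            self.reputation[target] = own
            return
        weights = [self.reputation.get(n, 0.5) for n, _ in neighbour_opinions]
        total = sum(weights) or 1.0
        gossip = sum(w * v for w, (_, v) in zip(weights, neighbour_opinions)) / total
        self.reputation[target] = self.alpha * own + (1 - self.alpha) * gossip
```

The vulnerability the abstract points to is visible even in this toy version: a coordinated group of reporters whose reputations have not yet been degraded can bias the gossip term, which is precisely the kind of attack a simulation tool should expose before deployment.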


2015 ◽  
Vol 23 (1) ◽  
pp. 81-110 ◽  
Author(s):  
Eva ZUPANCIC ◽  
Denis TRCEK

Trust is essential to economic efficiency. Trading partners choose each other and make decisions based on how much they trust one another. Assessing trust in e-commerce differs from doing so in brick-and-mortar businesses, as fewer indicators are available in online environments. One approach is to deploy trust and reputation management systems that collect feedback about partners' transactions. One problem within such systems is the presence of unfair ratings. In this paper, an innovative trust model, QADE, is presented, which assumes the existence of unfairly reported trust assessments. The subjective nature of trust is taken into account: differently reported trust values do not necessarily mean false trust values, but can also reflect differences in dispositions to trust. A method to identify and filter out presumably false values is defined. In this method, a trust evaluator finds other agents in the society that are similar to itself, taking into account both the pairwise similarity of trust values and the similarity of the agents' general mindsets. To reduce the effect of unfair ratings, the values reported by non-similar agents are excluded from the trust computation. Simulations were used to compare the effectiveness of algorithms for mitigating the effect of unfair ratings, carried out in environments with varying numbers of attackers and targeted agents, as well as with different kinds of attackers. The results showed significant improvements: on average, the proposed method detected 6% to 13% more unfair trust ratings, and it reduced the effect of unfair ratings on trust assessment by 26% to 57% on average, compared to the most effective existing filtering methods, those of Whitby and Teacy.
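
The filtering idea can be illustrated with a short sketch: exclude reports from agents whose past ratings diverge too much from the evaluator's own. The mean-absolute-difference similarity and the 0.6 threshold below are assumptions made for the example; QADE's actual similarity measure additionally compares agents' general mindsets, which is omitted here.

```python
# Illustrative sketch of similarity-based filtering of reported trust
# values, in the spirit of the QADE approach described above. The
# similarity measure and threshold are assumptions for the example,
# not the paper's actual definitions.

def similarity(own_ratings, other_ratings):
    """Pairwise similarity of trust values over commonly rated agents,
    mapped to [0, 1] (1 = identical opinions)."""
    common = set(own_ratings) & set(other_ratings)
    if not common:
        return 0.0
    mad = sum(abs(own_ratings[a] - other_ratings[a]) for a in common) / len(common)
    return 1.0 - mad  # ratings assumed to lie in [0, 1]

def filtered_trust(own_ratings, reports, target, threshold=0.6):
    """Estimate trust in `target`, excluding reports from agents whose
    rating behaviour is not similar enough to the evaluator's own.

    reports: dict mapping reporter_id -> that reporter's ratings dict.
    """
    accepted = []
    for reporter, ratings in reports.items():
        if target in ratings and similarity(own_ratings, ratings) >= threshold:
            accepted.append(ratings[target])
    if not accepted:
        return own_ratings.get(target)  # fall back to direct experience
    return sum(accepted) / len(accepted)

# Example: two honest reporters agree with the evaluator; one attacker
# badmouths agent "c" and is filtered out by the similarity check.
me = {"a": 0.8, "b": 0.7}
reports = {
    "honest1":  {"a": 0.75, "b": 0.70, "c": 0.90},
    "honest2":  {"a": 0.80, "b": 0.65, "c": 0.85},
    "attacker": {"a": 0.10, "b": 0.20, "c": 0.00},
}
print(filtered_trust(me, reports, "c"))  # ~0.875; attacker's 0.0 excluded
```

Note the design choice this reflects: a dissenting report is not treated as false per se, only as coming from an agent whose disposition to trust is too different to be informative, which matches the paper's subjective view of trust.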

