Managing Trust and Detecting Malicious Groups in Peer-to-Peer IoT Networks

Sensors, 2021, Vol 21 (13), pp. 4484
Author(s): Alanoud Alhussain, Heba Kurdi, Lina Altoaimy

Peer-to-peer (P2P) networking is becoming prevalent in Internet of Things (IoT) platforms due to its low-cost, low-latency advantages over cloud-based solutions. However, P2P networking suffers from several critical security flaws that expose devices to remote attacks, eavesdropping, and credential theft due to malicious peers who actively work to compromise networks. Therefore, trust and reputation management systems are emerging to address this problem. However, most systems struggle to identify new, smart models of malicious peers, especially those that cooperate to harm other peers. This paper proposes an intelligent trust management system, namely Trutect, to tackle this issue. Trutect exploits the power of neural networks to provide recommendations on the trustworthiness of each peer. The system identifies the specific model of an individual peer, whether good or malicious, and also detects malicious collectives and their suspicious group members. The experimental results show that, compared to rival trust management systems, Trutect raises the success rate of good peers while requiring a significantly lower running time. It is also capable of accurately identifying the peer model.
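
To make the idea concrete, the following minimal sketch shows how a neural-network classifier could map per-peer interaction features to a peer model, in the spirit of Trutect. The feature names, labels, toy training data, and the use of scikit-learn's MLPClassifier are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a Trutect-style peer-model classifier.
# Feature names, labels, and training data are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Assumed per-peer features: fraction of satisfactory transactions,
# fraction of positive recommendations given, and agreement of the
# peer's ratings with the network consensus.
X_train = np.array([
    [0.95, 0.90, 0.92],   # well-behaved peer
    [0.10, 0.15, 0.20],   # individually malicious peer
    [0.12, 0.95, 0.30],   # collective member boosting accomplices
    [0.90, 0.10, 0.25],   # slandering peer
])
y_train = ["good", "malicious", "collective", "malicious"]

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

new_peer = np.array([[0.15, 0.92, 0.28]])
print(model.predict(new_peer))   # predicted peer model for the new peer
```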

Electronics, 2021, Vol 10 (12), pp. 1442
Author(s): Amal Alqahtani, Heba Kurdi, Majed Abdulghani

Peer-to-peer (P2P) platforms are gaining increasing popularity due to their scalability, robustness, and self-organization. In P2P systems, peers interact directly with each other to share resources or exchange services without a central authority to manage the interaction. However, these features expose P2P platforms to malicious attacks that reduce the level of trust between peers and, in extreme situations, may cause the entire system to shut down. Therefore, it is essential to employ a trust management system that establishes trust relationships among peers. Current P2P trust management systems use binary categorization to classify peers as trustworthy or not trustworthy. However, in the real world, trustworthiness is a vague concept; peers have different levels of trustworthiness that affect their overall trust value. Therefore, in this paper, we developed a novel trust management algorithm for P2P platforms based on Hadith science, in which Hadiths are systematically classified into multiple levels of trustworthiness based on the quality of the narrator and the content. To benchmark our proposed system, HadithTrust, we used two state-of-the-art trust management systems, EigenTrust and InterTrust, with a no-trust algorithm as the baseline scenario. Various experimental results demonstrated the superiority of HadithTrust across eight performance measures.
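
A minimal sketch of the graded (non-binary) trust idea follows: a peer's reputation (narrator quality) and its content quality are combined into one of several trust levels. The weights, thresholds, and level names are assumptions for illustration, not HadithTrust's actual classification.

```python
# Illustrative sketch of graded (non-binary) trust levels; the weights,
# thresholds, and level names are assumptions, not the paper's scheme.
def trust_level(peer_reputation: float, content_quality: float) -> str:
    """Map peer (narrator) reputation and content quality, both in [0, 1],
    to one of several trust grades instead of a binary trusted/untrusted."""
    score = 0.6 * peer_reputation + 0.4 * content_quality
    if score >= 0.9:
        return "highly trustworthy"
    if score >= 0.7:
        return "trustworthy"
    if score >= 0.5:
        return "acceptable"
    if score >= 0.3:
        return "weak"
    return "untrustworthy"

print(trust_level(0.95, 0.80))  # -> "trustworthy" (score 0.89)
```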


Author(s): Govindaraj Ramya, Govindaraj Priya, Chowdhury Subrata, Dohyeun Kim, Duc Tan Tran, ...

<p class="0abstract">The extremely vibrant, scattered, and non–transparent nature of cloud computing formulate trust management a significant challenge. According to scholars the trust and security are the two issues that are in the topmost obstacles for adopting cloud computing. Also, SLA (Service Level Agreement) alone is not necessary to build trust between cloud because of vague and unpredictable clauses. Getting feedback from the consumers is the best way to know the trustworthiness of the cloud services, which will help them improve in the future. Several researchers have stated the necessity of building a robust management system and suggested many ideas to manage trust based on consumers' feedback. This paper has reviewed various reputation-based trust management systems, including trust management in cloud computing, peer-to-peer system, and Adhoc system. </p>


2018, Vol 2018, pp. 1-13
Author(s): Heba Kurdi, Bushra Alshayban, Lina Altoaimy, Shada Alsalamah

Cloud computing plays a major role in smart city development by facilitating the delivery of various services in an efficient and effective manner. In a Peer-to-Peer (P2P) federated clouds ecosystem, multiple Cloud Service Providers (CSPs) collaborate and share services among themselves when experiencing a shortage of certain resources. Hence, incoming requests for such a resource can be delegated to other members. Nevertheless, the lack of a preexisting trust relationship among CSPs in this distributed environment can affect the quality of service (QoS). Therefore, a trust management system is required to assist trustworthy peers in seeking reliable communication partners. We address this challenge by proposing TrustyFeer, a trust management system that allows peers to evaluate the trustworthiness of other peers based on subjective logic opinions formulated using peers' reputations and Service Level Agreements (SLAs). To demonstrate the utility of TrustyFeer, we evaluate the performance of our method against two long-standing trust management systems. The simulation results show that TrustyFeer is more robust in decreasing the percentage of services that do not conform to SLAs and in increasing the success rate of services exchanged by good CSPs conforming to SLAs. This should provide a trustworthy federated clouds ecosystem for a better, more sustainable future.
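
The sketch below illustrates how a binomial subjective-logic opinion could be formed from a CSP's record of SLA-conforming and SLA-violating interactions, following Jøsang's standard mapping; the variable names and base rate are assumptions rather than TrustyFeer's exact notation.

```python
# Sketch of a binomial subjective-logic opinion built from a CSP's record of
# SLA-conforming (r) and SLA-violating (s) interactions. The mapping is the
# standard one from subjective logic; names are assumptions, not the paper's.
from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5

    @property
    def expected_trust(self) -> float:
        # Probability expectation E = b + a * u
        return self.belief + self.base_rate * self.uncertainty

def opinion_from_history(r: int, s: int) -> Opinion:
    """r: interactions that conformed to the SLA; s: interactions that did not."""
    total = r + s + 2.0
    return Opinion(belief=r / total, disbelief=s / total, uncertainty=2.0 / total)

print(opinion_from_history(18, 2).expected_trust)  # ~0.86 for 18 good, 2 bad
```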


2005, Vol 12 (23)
Author(s): Karl Krukow, Mogens Nielsen, Vladimiro Sassone

In a reputation-based trust-management system, agents maintain information about the past behaviour of other agents. This information is used to guide future trust-based decisions about interaction. However, while trust management is a component in security decision-making, few existing reputation-based trust-management systems aim to provide any formal security guarantees. We provide a mathematical framework for a class of simple reputation-based systems. In these systems, decisions about interaction are taken based on policies that are exact requirements on agents' past histories. We present a basic declarative language, based on pure-past linear temporal logic, intended for writing simple policies. While the basic language is reasonably expressive, we extend it to encompass more practical policies, including several known from the literature. A natural problem that arises is how to efficiently re-evaluate a policy when new behavioural information becomes available. Efficient algorithms for the basic language are presented and analyzed, and we outline algorithms for the extended languages as well.
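
As a toy illustration of policies over past histories, the sketch below checks two pure-past operators ("historically" and "once") against an agent's interaction history; the concrete events and policy are assumptions, not the paper's declarative language.

```python
# Toy sketch of pure-past policy checking over an interaction history; the
# operators mirror past-time LTL, but the events and policy are assumptions.
def historically(history, prop):
    """Past-time 'historically': prop held in every recorded event."""
    return all(prop(e) for e in history)

def once(history, prop):
    """Past-time 'once': prop held in at least one recorded event."""
    return any(prop(e) for e in history)

# An agent's history of observed interaction outcomes, oldest first.
history = ["ok", "ok", "cheated", "ok"]

# Policy: interact if the agent never cheated, or if it has at least one
# successful interaction and did not cheat in the most recent one.
never_cheated = historically(history, lambda e: e != "cheated")
recently_reliable = once(history, lambda e: e == "ok") and history[-1] != "cheated"
print(never_cheated or recently_reliable)  # True for this sample history
```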


Electronics, 2020, Vol 9 (12), pp. 2190
Author(s): Fatimah Almuzaini, Sarah Alromaih, Alhanoof Althnian, Heba Kurdi

Online communication platforms face security and privacy challenges, especially in broad ecosystems such as online social networks, where users are unfamiliar with each other. Consequently, employing trust management systems is crucial to ensuring the trustworthiness of participants and, thus, of the content they share in the network. WhatsApp is one of the most popular message-based online social networks, with over one billion users worldwide. It is therefore an attractive platform for cybercriminals who spread malware to gain unauthorized access to users' accounts, steal their data, or corrupt the system. None of the few trust management systems proposed in the online social network literature has considered WhatsApp as a use case. To this end, this paper introduces WhatsTrust, a trust management system for WhatsApp that evaluates the trustworthiness of users. A trust value accompanies each message to help the receiver make an informed decision about how to deal with it. WhatsTrust is extensively evaluated through a strictly controlled empirical evaluation framework with two well-established trust management systems, namely the EigenTrust and trust network analysis with subjective logic (TNA-SL) algorithms, as benchmarks. The experimental results demonstrate WhatsTrust's superiority with respect to success rate and execution time.
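
A minimal sketch of the message-level idea: each message carries the sender's trust value so the receiver can decide how to handle it. The data model and warning threshold below are illustrative assumptions, not WhatsTrust's actual design.

```python
# Hypothetical sketch of a per-message trust value guiding the receiver;
# the fields and threshold are assumptions, not WhatsTrust's data model.
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    body: str
    sender_trust: float  # in [0, 1], supplied by the trust management system

def handle(msg: Message, warn_below: float = 0.4) -> str:
    """Return the message body, or a warning if the sender's trust is low."""
    if msg.sender_trust < warn_below:
        return f"WARNING: low-trust sender {msg.sender}; handle links and attachments with care"
    return msg.body

print(handle(Message("alice", "meeting at 10", sender_trust=0.9)))
print(handle(Message("mallory", "click this link", sender_trust=0.2)))
```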


2021, Vol 37 (1-4), pp. 1-30
Author(s): Vincenzo Agate, Alessandra De Paola, Giuseppe Lo Re, Marco Morana

Multi-agent distributed systems are characterized by autonomous entities that interact with each other to provide, and/or request, different kinds of services. In several contexts, especially when a reward is offered according to the quality of service, individual agents (or coordinated groups) may act in a selfish way. To prevent such behaviours, distributed Reputation Management Systems (RMSs) provide every agent with the capability of computing the reputation of the others according to direct past interactions, as well as indirect opinions reported by their neighbourhood. This reliance on gossiped information introduces a weakness that makes RMSs vulnerable to malicious agents intent on disseminating false reputation values. Given the variety of application scenarios in which RMSs can be adopted, as well as the multitude of behaviours that agents can implement, designers need RMS evaluation tools that allow them to predict the robustness of the system to security attacks before its actual deployment. To this end, we present a simulation software for the vulnerability evaluation of RMSs and illustrate three case studies in which this tool was effectively used to model and assess state-of-the-art RMSs.
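
As a toy example of the kind of attack such a tool can simulate, the sketch below shows how a naive averaging RMS is skewed when a coordinated group reports inflated reputation values; the agent counts and noise parameters are illustrative assumptions.

```python
# Toy simulation of a false-reputation (ballot-stuffing) attack on a naive
# averaging RMS; agent counts and noise parameters are illustrative assumptions.
import random

random.seed(0)
TRUE_QUALITY = 0.2  # the rated peer actually delivers poor service

# 80 honest agents report noisy observations of the true quality;
# 20 colluders always report a perfect score to inflate the reputation.
honest = [min(1.0, max(0.0, random.gauss(TRUE_QUALITY, 0.05))) for _ in range(80)]
colluders = [1.0] * 20

naive_reputation = sum(honest + colluders) / (len(honest) + len(colluders))
print(f"reputation under attack: {naive_reputation:.2f} vs true quality {TRUE_QUALITY}")
```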

