SPTM-EC: A Security and Privacy-Preserving Task Management in Edge Computing for IIoT

Author(s): A. S. M. Sanwar Hosen, P. K. Sharma, In-Ho Ra, Gi Hwan Cho
IEEE Access, 2018, Vol. 6, pp. 18209-18237
Author(s): Jiale Zhang, Bing Chen, Yanchao Zhao, Xiang Cheng, Feng Hu
2019, Vol. 2019, pp. 1-17
Author(s): Upul Jayasinghe, Gyu Myoung Lee, Áine MacDermott, Woo Seop Rhee

Recent advancements in the Internet of Things (IoT) have enabled the collection, processing, and analysis of various forms of data, including personal data from billions of objects, to generate valuable knowledge and enable more innovative services for its stakeholders. Yet this paradigm continually suffers from numerous security and privacy concerns, mainly due to its massive scale, distributed nature, and scarcity of resources towards the edge of IoT networks. Blockchain-based techniques offer strong countermeasures to protect data from tampering while supporting the distributed nature of the IoT. However, the enormous energy consumption required to verify each block of data makes blockchains difficult to use with resource-constrained IoT devices and real-time IoT applications. Moreover, although a blockchain secures data from alteration, its public ledger can expose the privacy of stakeholders. Edge computing offers a potential alternative to centralized processing, supporting real-time applications at the edge and reducing the privacy concerns associated with cloud computing. Hence, this paper proposes a novel privacy-preserving blockchain called TrustChain, which combines blockchains with trust concepts to eliminate issues associated with traditional blockchain architectures. This work investigates how TrustChain can be deployed in the edge computing environment at different levels of abstraction to eliminate the delays and privacy concerns associated with centralized processing and to preserve resources in IoT networks.
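The core idea the abstract describes, replacing energy-hungry block verification with trust-based consensus, can be sketched as follows. This is a minimal illustration, not the TrustChain design itself: the `Block` class, the `pick_validator` rule, and the node names are all hypothetical.

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class Block:
    """A hash-chained block validated by a trusted edge node."""
    def __init__(self, index, prev_hash, payload, validator):
        self.index = index
        self.prev_hash = prev_hash      # links block to its predecessor
        self.payload = payload          # e.g., sensor readings from an edge node
        self.validator = validator      # node that signed off on this block
        self.hash = self.compute_hash()

    def compute_hash(self):
        header = json.dumps({
            "index": self.index,
            "prev_hash": self.prev_hash,
            "payload": self.payload,
            "validator": self.validator,
        }, sort_keys=True).encode()
        return sha256(header)

def pick_validator(trust_scores: dict) -> str:
    # The highest-trust node validates the next block: no proof-of-work
    # hashing contest, hence no mining energy cost.
    return max(trust_scores, key=trust_scores.get)

trust = {"edge-A": 0.91, "edge-B": 0.64, "edge-C": 0.78}
genesis = Block(0, "0" * 64, {"msg": "genesis"}, "network")
nxt = Block(1, genesis.hash, {"temp_c": 21.4}, pick_validator(trust))
print(nxt.validator)  # edge-A
```

The point of the sketch is the consensus rule: integrity still comes from hash chaining, but the right to append is granted by a trust score rather than by solving a computational puzzle.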


Author(s): Longxiang Gao, Tom H. Luan, Bruce Gu, Youyang Qu, Yong Xiang
2021

Author(s): Zhihua Wang, Chaoqi Guo, Jiahao Liu, Jiamin Zhang, Yongjian Wang, ...
2021, Vol. 11 (3-4), pp. 1-22
Author(s): Qiang Yang

With the rapid advance of Artificial Intelligence (AI) technologies and applications, there is increasing concern about the development and application of responsible AI. Building AI technologies or machine-learning models often requires massive amounts of data, which may include sensitive, private user information collected from different sites or countries. Privacy, security, and data-governance constraints rule out a brute-force approach to acquiring and integrating these data, so it is a serious challenge to protect user privacy while achieving high-performance models. This article reviews recent progress of federated learning in addressing this challenge in the context of privacy-preserving computing. Federated learning allows global AI models to be trained and used across multiple decentralized data sources with strong security and privacy guarantees, as well as sound incentive mechanisms. The article presents the background, motivations, definitions, architectures, and applications of federated learning as a new paradigm for building privacy-preserving, responsible AI ecosystems.
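The training scheme the abstract summarizes, clients train locally and only model updates (never raw data) are aggregated by a server, is commonly realized as federated averaging (FedAvg). A minimal sketch on a linear-regression task, with all function names illustrative rather than taken from the article:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One client's local gradient steps; raw (X, y) never leave the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: average client models weighted by data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
global_w = np.zeros(2)
# Two clients holding private datasets of different sizes.
client_data = []
for n in (50, 80):
    X = rng.normal(size=(n, 2))
    client_data.append((X, X @ true_w))

for _ in range(10):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in client_data]
    global_w = fed_avg(updates, [len(y) for _, y in client_data])

print(np.round(global_w, 2))  # converges toward true_w
```

Only the weight vectors cross the network; this is the privacy boundary federated learning draws, which the surveyed systems then harden further with secure aggregation and incentive mechanisms.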


Author(s): Hongyang Li, Qingfeng Cheng, Xinghua Li, Siqi Ma, Jianfeng Ma
2021, pp. 1-10

Author(s): J. Andrew Onesimu, Karthikeyan J., D. Samuel Joshua Viswas, Robin D. Sebastian

Deep learning has become a central topic of recent research owing to its advantages in fields such as healthcare, medicine, and automobiles. Because deep learning requires huge amounts of data to achieve good accuracy, it is important to protect that data from security and privacy breaches. This chapter presents a comprehensive survey of security and privacy challenges in deep learning. Security attacks such as poisoning attacks, evasion attacks, and black-box attacks are explored together with their prevention and defence techniques, and a comparative analysis of techniques for protecting data against such attacks is given. Privacy is another major challenge in deep learning. The authors present an in-depth survey of privacy-preserving techniques for deep learning, including differential privacy, homomorphic encryption, secret sharing, and secure multi-party computation, together with a detailed table comparing these techniques and approaches.
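Of the privacy-preserving techniques the chapter surveys, differential privacy is the simplest to illustrate. Below is a hedged sketch (not taken from the chapter) of the standard Laplace mechanism: a count query has sensitivity 1, so adding Laplace noise with scale 1/ε makes the released answer ε-differentially private.

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng):
    """Release a differentially private count of records matching predicate."""
    true_count = sum(1 for x in data if predicate(x))
    sensitivity = 1.0  # adding/removing one record changes a count by at most 1
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

rng = np.random.default_rng(42)
ages = [23, 45, 31, 62, 54, 29, 41]
# Privately answer "how many people are over 40?" (true answer: 4).
for eps in (0.1, 1.0, 10.0):
    print(f"eps={eps}: {laplace_count(ages, lambda a: a > 40, eps, rng):.1f}")
```

Smaller ε means stronger privacy but noisier answers; the comparison tables in surveys like this one weigh exactly that utility/privacy trade-off against the cost profiles of homomorphic encryption and secure multi-party computation.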

