A Federated Graph Neural Network Framework for Privacy-Preserving Personalization

Author(s):  
Yongfeng Huang ◽  
Chuhan Wu ◽  
Fangzhao Wu ◽  
Lingjuan Lyu ◽  
Tao Qi ◽  
...  

Abstract Graph neural networks (GNNs) are effective at modeling high-order interactions and have been widely used in various personalized applications such as recommendation. However, mainstream personalization methods rely on centralized GNN learning over global graphs, which carries considerable privacy risks due to the privacy-sensitive nature of user data. Here, we present a federated GNN framework named FedGNN for effective and privacy-preserving personalization. Through a privacy-preserving model update method, GNN models can be collaboratively trained on decentralized graphs inferred from local data. To further exploit graph information beyond local interactions, we introduce a privacy-preserving graph expansion protocol that incorporates high-order information under privacy protection. Experimental results on six datasets for personalization in different scenarios show that FedGNN achieves 4.0%~9.6% lower errors than state-of-the-art federated personalization methods while providing good privacy protection. FedGNN offers a novel direction for mining decentralized graph data in a privacy-preserving manner for responsible and intelligent personalization.
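A minimal sketch of the kind of privacy-preserving model update described above, assuming gradient clipping plus Gaussian noise in the style of local differential privacy; the function names, parameters, and noise mechanism are illustrative assumptions, not FedGNN's exact protocol:

```python
import math
import random

def local_update(grad, noise_scale=0.01, clip=1.0, seed=0):
    """One client's privacy-preserving contribution: clip the local gradient
    to bound its sensitivity, then add Gaussian noise before sharing it."""
    rnd = random.Random(seed)
    norm = math.sqrt(sum(g * g for g in grad))
    if norm > clip:                       # clipping bounds each client's influence
        grad = [g * clip / norm for g in grad]
    return [g + rnd.gauss(0.0, noise_scale) for g in grad]

def server_aggregate(weights, client_grads, lr=0.1):
    """Server averages the noisy client gradients and takes one SGD step."""
    n = len(client_grads)
    avg = [sum(col) / n for col in zip(*client_grads)]
    return [w - lr * a for w, a in zip(weights, avg)]

# toy round: three clients collaboratively updating a shared 4-dim model
w = [0.0] * 4
noisy = [local_update([g] * 4, seed=i) for i, g in enumerate((0.5, 1.0, 1.5))]
w = server_aggregate(w, noisy)
```

Because the server only ever sees clipped, noised gradients, no single client's raw interaction graph is exposed.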

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Yang Bai ◽  
Yu Li ◽  
Mingchuang Xie ◽  
Mingyu Fan

In recent years, machine learning approaches have been widely adopted for many applications, including classification. Machine learning models that deal with collective sensitive data are usually trained on a remote public cloud server, for instance in a machine learning as a service (MLaaS) system. In this setting, users upload their local data and use the server's computation capability to train models, or directly query models trained by MLaaS. Unfortunately, recent work reveals that both the curious server (which trains the model with users' sensitive local data and is curious to learn information about individuals) and the malicious MLaaS user (who abuses queries to the MLaaS system) pose privacy risks. The adversarial method, as one typical mitigation, has been studied in several recent works. However, most of them focus on privacy preservation against the malicious user; in other words, they commonly treat the data owner and the model provider as one role. Under this assumption, privacy leakage risks from the curious server are neglected. Differential privacy methods can defend against privacy threats from both the curious server and the malicious MLaaS user by directly adding noise to the training data. Nonetheless, differential privacy heavily decreases the classification accuracy of the target model. In this work, we propose a generic privacy-preserving framework based on the adversarial method that defends against both the curious server and the malicious MLaaS user. The framework can work with several adversarial algorithms to generate adversarial examples directly from data owners' original data, thereby hiding sensitive information about the original data. We then explore the constraint conditions of this framework, which help us find the balance between privacy protection and model utility.
Experimental results show that our defense framework with the AdvGAN method is effective against membership inference attacks (MIA), and that our framework with the FGSM method can protect sensitive data from direct content exposure attacks. In addition, our method achieves a better privacy-utility balance than the existing method.
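The FGSM-style protection mentioned above can be sketched as a single signed-gradient perturbation of the data owner's record before it leaves the device; the linear model, values, and function names here are illustrative toys, not the paper's actual networks or framework API:

```python
def fgsm_perturb(x, grad_x, eps=0.1):
    """FGSM-style perturbation: shift each feature by eps in the sign
    direction of the loss gradient w.r.t. the input."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad_x)]

def linear_loss_grad(w, x, y):
    """Gradient of the squared error (w.x - y)^2 with respect to x,
    standing in for backpropagation through a real model."""
    err = sum(wi * xi for wi, xi in zip(w, x)) - y
    return [2 * err * wi for wi in w]

w = [1.0, -2.0, 0.5]          # hypothetical model weights
x = [0.2, 0.4, 0.6]           # data owner's original record
g = linear_loss_grad(w, x, y=0.0)
x_priv = fgsm_perturb(x, g, eps=0.1)   # perturbed record shared instead of x
```

The perturbed record keeps the data usable for training while obscuring the exact original feature values, which is the balance the framework's constraint conditions are meant to tune.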


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Juan Zhang ◽  
Changsheng Wan ◽  
Chunyu Zhang ◽  
Xiaojun Guo ◽  
Yongyong Chen

To determine whether images on the crowdsourcing server meet the mobile user’s requirements, an auditing protocol is needed to check these images. However, before paying for images, the mobile user typically cannot download them for checking. Moreover, since mobile devices are usually low-power and the crowdsourcing server has to handle a large number of mobile users, the auditing protocol should be lightweight. To address these security and efficiency issues, we propose a novel noninteractive lightweight privacy-preserving auditing protocol on images in mobile crowdsourcing networks, called NLPAS. Since NLPAS allows the mobile user to check images on the crowdsourcing server without downloading them, the protocol provides privacy protection for these images. At the same time, NLPAS uses a binary convolutional neural network to extract features from images and designs a novel privacy-preserving Hamming distance computation algorithm to determine whether images on the crowdsourcing server meet the mobile user’s requirements. Since both techniques are lightweight, NLPAS can audit images on the crowdsourcing server in a privacy-preserving manner while still enjoying high efficiency. Experimental results show that NLPAS is feasible for real-world applications.
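Setting the cryptographic protection aside, the underlying Hamming distance computation on binary CNN features reduces to XOR plus popcount; the sketch below shows only that plain computation on hypothetical feature bits, not NLPAS's privacy-preserving version:

```python
def bits_to_int(bits):
    """Pack a binary feature vector (list of 0/1) into a single integer."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

def hamming_distance(a, b):
    """Hamming distance between two packed binary feature vectors:
    XOR exposes the differing bit positions, popcount counts them."""
    return bin(a ^ b).count("1")

query = bits_to_int([1, 0, 1, 1, 0, 0, 1, 0])      # user's binary CNN features
candidate = bits_to_int([1, 1, 1, 0, 0, 0, 1, 1])  # server-side image features
d = hamming_distance(query, candidate)             # small d = similar images
```

A small distance indicates the server's image matches the user's requirement; NLPAS's contribution is computing this distance without revealing either bit vector.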


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Baodong Wen ◽  
Yujue Wang ◽  
Yong Ding ◽  
Haibin Zheng ◽  
Hai Liang ◽  
...  

Data supervision is an effective method to ensure the legality of user data on a blockchain. However, the massive growth of data makes it difficult to achieve data supervision in existing blockchain applications. Moreover, data supervision often leads to problems such as disclosure of transaction data and users’ private information. To address these issues, this paper proposes a privacy-preserving blockchain supervision system (BSS) in the multiparty setting, where a supervision chain is introduced to realize data supervision on the blockchain. All sensitive information, such as user information in the supervised data, is encrypted using attribute-based encryption (ABE), so that both privacy protection and access control over user data can be achieved. Theoretical analysis and comparison show that the proposed BSS scheme is efficient, and experimental analysis indicates the practicality of our BSS scheme.
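The access-control behavior that ABE enforces cryptographically can be modeled, at the logic level only, as a policy-satisfaction check over attribute sets: decryption succeeds only when the supervisor's attributes satisfy the ciphertext's policy. This toy omits all encryption, and the policy shape and names are assumptions, not the BSS scheme's construction:

```python
def satisfies(policy, attrs):
    """Check whether an attribute set satisfies a threshold access policy.
    policy = (k, [attr, ...]) means 'at least k of these attributes'.
    Real CP-ABE enforces this inside the ciphertext; this models the logic."""
    k, required = policy
    return len(set(required) & set(attrs)) >= k

policy = (2, ["supervisor", "auditor", "regulator"])   # 2-of-3 threshold
ok = satisfies(policy, {"supervisor", "regulator"})    # enough attributes
denied = satisfies(policy, {"auditor"})                # too few attributes
```

Only parties whose attribute keys satisfy the policy can recover the sensitive supervision data, which is how ABE yields both privacy protection and fine-grained access control.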


2021 ◽  
Vol 13 (2) ◽  
pp. 23
Author(s):  
Angeliki Kitsiou ◽  
Eleni Tzortzaki ◽  
Christos Kalloniatis ◽  
Stefanos Gritzalis

Social Networks (SNs) bring new types of privacy risks for users, which developers should be aware of when designing the respective services. Aiming to safeguard users’ privacy more effectively within SNs, self-adaptive privacy-preserving schemes have been developed that consider the importance of users’ social and technological context and the specific privacy criteria that should be satisfied. However, current self-adaptive privacy approaches do not thoroughly examine users’ social landscape in relation to their privacy perceptions and practices, especially as far as users’ social attributes are concerned. This study elaborates on this examination in depth, in order to identify the users’ social characteristics and privacy perceptions that can affect self-adaptive privacy design, and to indicate self-adaptive privacy-related requirements that should be satisfied for users’ protection in SNs. The study was based on an interdisciplinary research instrument, adopting constructs and metrics from both the sociological and the privacy literature. The results of the survey led to a pilot taxonomic analysis for self-adaptive privacy within SNs and to the proposal of specific privacy-related requirements for this domain. To further establish our interdisciplinary approach, a case study scenario was formulated, which underlines the importance of the identified self-adaptive privacy-related requirements. In this regard, the study provides further insight for developing the behavioral models that will enhance the optimal design of self-adaptive privacy-preserving schemes in SNs, and supports designers in applying the principle of privacy by design (PbD) from a technical perspective.


2018 ◽  
Vol 34 (4) ◽  
pp. 316-332 ◽  
Author(s):  
Tuan Nguyen ◽  
Alireza Kashani ◽  
Tuan Ngo ◽  
Stéphane Bordas

2021 ◽  
Vol 10 (3) ◽  
pp. 283-306
Author(s):  
Yannic Meier ◽  
Johanna Schäwel ◽  
Nicole C. Krämer

Using privacy-protecting tools and reducing self-disclosure can decrease the likelihood of experiencing privacy violations. Whereas previous studies found people’s online self-disclosure to be the result of privacy risk and benefit perceptions, the present study extended this so-called privacy calculus approach by additionally focusing on privacy protection by means of a tool. Furthermore, it is important to understand contextual differences in privacy behaviors as well as characteristics of privacy-protecting tools that may affect usage intention. Results of an online experiment (N = 511) supported the basic notion of the privacy calculus and revealed that perceived privacy risks were strongly related to participants’ desired privacy protection which, in turn, was positively related to the willingness to use a privacy-protecting tool. Self-disclosure was found to be context dependent, whereas privacy protection was not. Moreover, participants would rather forgo using a tool that records their data, although it was described as enhancing privacy protection.


2021 ◽  
Author(s):  
Arulmozhiselvan L ◽  
Uma E ◽  
Jayasri R

Worldwide, human health and the economy have been affected by the ongoing coronavirus (COVID-19) pandemic. The major COVID-19 challenges are prevention, monitoring, and FDA-approved vaccines. IoT and cloud computing play a vital role in epidemic prevention and in blocking COVID-19 transmission. The lungs and heart are affected most often, but many other organs can be affected as well, which is not a prominent conversational cue. In this paper, we propose a smart system that detects effects on the pancreas, kidney, and intestine. It detects acute pancreatitis, protein leak, microscopic blood leak, post-infectious dysmotility, and gastrointestinal bleeding. Data from the edge devices are collected and mapped into the cloud layer. The cloud holds COVID-19 patients’ medical records, against which the user’s data are compared. Once the data match, the system sends a warning message to the user about the affected organs. Based on the result from the KPI system, it analyzes all the data and uses a deep convolutional neural network (CNN) to classify whether the pancreas, kidney, and intestine are affected by COVID-19.
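The CNN's core feature-extraction step can be illustrated with a valid-mode 1-D convolution over a sensor signal plus a toy decision head; the kernel, readings, and threshold below are hypothetical stand-ins, not the paper's trained model:

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation), the basic CNN
    operation used to extract local features from an input signal."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    """Standard CNN nonlinearity: zero out negative activations."""
    return [max(0.0, x) for x in xs]

def classify(signal, kernel, threshold):
    """Toy decision head: flag 'affected' if any activated feature
    exceeds the threshold (stand-in for the CNN's final classifier)."""
    features = relu(conv1d(signal, kernel))
    return max(features) > threshold

edge_reading = [0.1, 0.2, 0.9, 0.8, 0.1, 0.0]   # hypothetical edge-device values
spike_kernel = [-1.0, 2.0, -1.0]                # responds to sharp spikes
flag = classify(edge_reading, spike_kernel, threshold=0.5)
```

A real deep CNN stacks many such convolution layers with learned kernels, but the per-layer computation follows this same pattern.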


Author(s):  
Houjun Liu

In this experiment, an efficient and accurate network for detecting automatically disseminated (bot) content on social platforms is devised. Through a parallel convolutional neural network (CNN) that processes variable n-grams of text 15, 20, and 25 tokens in length, encoded by Byte Pair Encoding (BPE), the complexities of linguistic content on social platforms are effectively captured and analysed. Validated on two sets of previously unexposed data, the model achieved accuracies of around 96.6% and 97.4% respectively, meeting or exceeding the performance of other comparable supervised ML solutions to this problem. Through testing, it is concluded that this method of text processing and analysis is an effective way of classifying potentially artificially synthesized user data, aiding the security and integrity of social platforms.
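The parallel n-gram branches can be sketched as sliding windows of 15, 20, and 25 BPE tokens, one window set per CNN branch; the integer token ids here are placeholders, and the function names are illustrative rather than the experiment's actual code:

```python
def token_ngrams(tokens, n):
    """All contiguous n-token windows of a BPE token sequence; each window
    is one input to the corresponding CNN branch."""
    return [tokens[i:i + n] for i in range(len(tokens) - n + 1)]

def parallel_windows(tokens, sizes=(15, 20, 25)):
    """Mimic the parallel-branch setup: one window set per n-gram size,
    matching the 15/20/25-token branches described above."""
    return {n: token_ngrams(tokens, n) for n in sizes}

tokens = list(range(30))                 # stand-in for 30 BPE token ids
branches = parallel_windows(tokens)
counts = {n: len(w) for n, w in branches.items()}
```

Running branches over several window sizes in parallel lets the network capture both short local patterns and longer phrase-level structure before the features are merged for classification.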

