PARS: Privacy-Aware Reward System for Mobile Crowdsensing Systems

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7045
Author(s):  
Zhong Zhang ◽  
Dae Hyun Yum ◽  
Minho Shin

Crowdsensing systems have been developed for wide-area sensing tasks because human-carried smartphones are now prevalent and increasingly capable. To encourage more people to participate in sensing tasks, various incentive mechanisms have been proposed. However, participating in sensing tasks and claiming rewards can put users' privacy at risk and discourage their participation. In particular, the rewarding process can expose participants' sensor data and possibly link sensitive data to their identities. In this work, we propose a privacy-preserving reward system for crowdsensing based on blind signatures. The proposed scheme protects participants' privacy by decoupling contributions from reward claims. Our experimental results show that the proposed mechanism is feasible and efficient.
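The abstract names blind signatures as the decoupling mechanism. A minimal sketch of that idea, using textbook RSA blinding rather than the paper's exact protocol (the token format, variable names, and key generation via the `cryptography` library are our own illustration): the platform signs a blinded reward token when a contribution is accepted, so the later reward claim carries a valid signature but cannot be linked back to the contribution.

```python
import hashlib
import secrets
from cryptography.hazmat.primitives.asymmetric import rsa

# Platform's RSA key; participants know (n, e), only the platform knows d.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
priv = key.private_numbers()
n, e, d = priv.public_numbers.n, priv.public_numbers.e, priv.d

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big")

# Participant: create a reward token and blind its hash with random r.
token = b"reward-token:" + secrets.token_bytes(16)  # hypothetical token format
m = h(token) % n
r = secrets.randbelow(n - 2) + 2        # blinding factor, coprime to n w.h.p.
blinded = (m * pow(r, e, n)) % n

# Platform: signs the blinded value when the contribution is accepted,
# without learning which token it is endorsing.
blind_sig = pow(blinded, d, n)

# Participant: unblind; the result is an ordinary RSA signature on m.
sig = (blind_sig * pow(r, -1, n)) % n
assert pow(sig, e, n) == m              # redeemable with the public key alone
```

Because the platform never sees `m` in the clear, it cannot match the signature presented at reward time to the sensing report that earned it, which is exactly the decoupling the abstract describes.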

2018 ◽  
Vol 17 (8) ◽  
pp. 1851-1864 ◽  
Author(s):  
Jian Lin ◽  
Dejun Yang ◽  
Ming Li ◽  
Jia Xu ◽  
Guoliang Xue

2018 ◽  
Vol 17 (3) ◽  
pp. 47-57 ◽  
Author(s):  
Xinglin Zhang ◽  
Lingyu Liang ◽  
Chengwen Luo ◽  
Long Cheng

2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Tao Wan ◽  
Shixin Yue ◽  
Weichuan Liao

Incentive mechanisms are crucial for motivating a sufficient number of users to provide reliable data in mobile crowdsensing (MCS) systems. However, the privacy leakage of most existing incentive mechanisms makes users unwilling to participate in sensing tasks. In this paper, we propose a privacy-preserving incentive mechanism based on truth discovery. Specifically, we use a secure truth discovery scheme to calculate the ground truth and the weight of each user's data while protecting their privacy. In addition, to ensure the accuracy of the MCS results, a data eligibility assessment protocol removes the sensing data of unreliable users before the truth discovery scheme is run. Finally, we distribute rewards to users based on their data quality. Our analysis shows that the model protects users' privacy and prevents malicious behavior by both users and task publishers. The experimental results further demonstrate high performance, reasonable reward distribution, and robustness to users dropping out.
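For readers unfamiliar with truth discovery, a minimal plaintext sketch of the iterative weighted-aggregation pattern (CRH-style) follows; this assumes numeric sensing tasks and omits the secure-computation layer the paper adds on top. Users whose readings track the estimated ground truth earn larger weights, and hence larger rewards.

```python
import numpy as np

def truth_discovery(readings: np.ndarray, iters: int = 10):
    """readings[i, j] = user i's reading for task j."""
    n_users, _ = readings.shape
    weights = np.ones(n_users) / n_users
    for _ in range(iters):
        # Truth update: per-task weighted average of the users' readings.
        truth = weights @ readings / weights.sum()
        # Weight update: users with smaller total error get larger weights.
        dist = ((readings - truth) ** 2).sum(axis=1) + 1e-12
        weights = np.log(dist.sum() / dist)
    return truth, weights / weights.sum()

# Two accurate users and one noisy user on two sensing tasks.
readings = np.array([[20.1, 35.2], [20.3, 35.0], [25.0, 30.0]])
truth, w = truth_discovery(readings)
print(truth, w)   # the noisy user ends up with the smallest weight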


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Qi Dou ◽  
Tiffany Y. So ◽  
Meirui Jiang ◽  
Quande Liu ◽  
Varut Vardhanabhuti ◽  
...  

Data privacy mechanisms are essential for rapidly scaling medical training databases to capture the heterogeneity of patient data distributions toward robust and generalizable machine learning systems. In the current COVID-19 pandemic, a major focus of artificial intelligence (AI) is interpreting chest CT, which can be readily used in the assessment and management of the disease. This paper demonstrates the feasibility of a federated learning method for detecting COVID-19-related CT abnormalities, with external validation on patients from a multinational study. We recruited 132 patients from seven different multinational centers: three internal hospitals from Hong Kong for training and testing, and four external, independent datasets from Mainland China and Germany for validating model generalizability. We also conducted case studies on longitudinal scans for automated estimation of lesion burden in hospitalized COVID-19 patients. We explore federated learning algorithms to develop a privacy-preserving AI model for COVID-19 medical image diagnosis with good generalization capability on unseen multinational datasets. Federated learning could provide an effective mechanism during pandemics to rapidly develop clinically useful AI across institutions and countries, overcoming the burden of centrally aggregating large amounts of sensitive data.
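The abstract does not spell out the authors' training setup, but the generic federated averaging (FedAvg) pattern behind such multi-site studies can be sketched as follows, assuming a PyTorch model and one data loader per hospital (all names here are illustrative): each site trains locally on its own scans, and only model parameters travel to the aggregator.

```python
import copy
import torch
import torch.nn as nn

def local_update(global_model, loader, epochs=1, lr=1e-3):
    """Train a copy of the global model on one hospital's private data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for x, y in loader:          # x: CT features, y: abnormality labels
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict(), len(loader.dataset)

def fed_avg(global_model, site_loaders, rounds=10):
    """Raw scans never leave a site; only weights are aggregated."""
    for _ in range(rounds):
        states, sizes = [], []
        for loader in site_loaders:
            state, n = local_update(global_model, loader)
            states.append(state)
            sizes.append(n)
        total = sum(sizes)
        # Weight each site's parameters by its share of the training data.
        avg = {k: sum(s[k] * (n / total) for s, n in zip(states, sizes))
               for k in states[0]}
        global_model.load_state_dict(avg)
    return global_model
```

The weighting by `n / total` keeps sites with more patients from being drowned out by small ones, which matters when, as here, cohort sizes differ across hospitals.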


Author(s):  
Chuan Zhang ◽  
Liehuang Zhu ◽  
Chang Xu ◽  
Jianbing Ni ◽  
Cheng Huang ◽  
...  

Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1367
Author(s):  
Raghida El Saj ◽  
Ehsan Sedgh Gooya ◽  
Ayman Alfalou ◽  
Mohamad Khalil

Privacy-preserving deep neural networks have become essential and have attracted the attention of many researchers due to the need to maintain the privacy and confidentiality of personal and sensitive data. Their importance has grown with the widespread use of neural networks as a service in unsecured cloud environments. Different methods have been proposed and developed to solve the privacy-preserving problem for deep neural networks operating on encrypted data. In this article, we review some of the most relevant and well-known computational and perceptual image encryption methods. We present and compare these methods and their results, and discuss the conditions of their use as well as the durability and robustness of some of them against attacks. Some of the methods have demonstrated an ability to hide information and make it difficult for adversaries to retrieve, while maintaining high classification accuracy. Based on the obtained results, we suggest developing and using some of the cited privacy-preserving methods in applications beyond classification.
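As a concrete illustration of the perceptual encryption family this review covers, a minimal block-scrambling sketch follows; it is a generic keyed block permutation of our own construction, not any specific scheme from the article. The image becomes visually unrecognizable, yet a classifier can still be trained directly on images scrambled with the same key.

```python
import numpy as np

def encrypt_blocks(img: np.ndarray, key: int, block: int = 8) -> np.ndarray:
    """Shuffle non-overlapping blocks of an (H, W, C) image with a keyed RNG."""
    h, w, c = img.shape
    bh, bw = h // block, w // block
    # Split the image into (bh * bw) blocks of shape (block, block, c).
    blocks = (img[:bh * block, :bw * block]
              .reshape(bh, block, bw, block, c)
              .swapaxes(1, 2)
              .reshape(bh * bw, block, block, c))
    perm = np.random.default_rng(key).permutation(len(blocks))  # secret key
    shuffled = blocks[perm]
    # Reassemble the shuffled blocks into a full image.
    return (shuffled.reshape(bh, bw, block, block, c)
            .swapaxes(1, 2)
            .reshape(bh * block, bw * block, c))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
enc = encrypt_blocks(img, key=42)   # scrambled view sent to the cloud model
```

Inverting the permutation with the same key recovers the original image, which is why possession of the key separates the data owner from the untrusted cloud service.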


Author(s):  
Zhihua Wang ◽  
Chaoqi Guo ◽  
Jiahao Liu ◽  
Jiamin Zhang ◽  
Yongjian Wang ◽  
...  

2019 ◽  
Vol 140-141 ◽  
pp. 38-60 ◽  
Author(s):  
Josep Domingo-Ferrer ◽  
Oriol Farràs ◽  
Jordi Ribes-González ◽  
David Sánchez
