Differential privacy EV charging data release based on variable window

2021 ◽  
Vol 7 ◽  
pp. e481
Author(s):  
Rixuan Qiu ◽  
Xiong Liu ◽  
Rong Huang ◽  
Fuyong Zheng ◽  
Liang Liang ◽  
...  

In the V2G network, the release and sharing of real-time data are of great value for data mining. However, publishing these data directly to service providers may reveal users' privacy. It is therefore necessary that a data release model with a privacy protection mechanism protect user privacy while preserving data utility. In this paper, we propose a privacy protection mechanism based on differential privacy to protect the release of data in V2G networks. To improve the utility of the data, we define a variable sliding window, which can dynamically and adaptively adjust its size according to the data. In addition, to allocate the privacy budget reasonably within the variable window, we consider the sampling interval and the proportion of the window. Through experimental analysis on real data sets and comparison with two representative w-event privacy protection methods, we show that the proposed method is superior to existing schemes and improves the utility of the data.
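A minimal sketch of the variable-window idea in Python. The names are illustrative, and one loud assumption is made: the per-window budget is split evenly across the samples the window covers, since the abstract's actual allocation rule (based on sampling interval and window proportion) is not spelled out here.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via the inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def release_with_variable_window(stream, epsilon, threshold=0.1, max_window=8, seed=0):
    """Release a noisy count stream using a variable sliding window.

    The window grows while successive values stay within `threshold`
    (relative change) of each other; the window mean is then published
    with Laplace noise. Even budget splitting inside each window is an
    assumption, not the paper's exact allocation rule.
    """
    rng = random.Random(seed)
    released = []
    i = 0
    while i < len(stream):
        # Grow the window while adjacent values are similar.
        j = i + 1
        while (j < len(stream) and j - i < max_window
               and abs(stream[j] - stream[j - 1])
                   <= threshold * max(abs(stream[j - 1]), 1.0)):
            j += 1
        window = stream[i:j]
        eps_per_sample = epsilon / len(window)   # even split (assumption)
        scale = 1.0 / eps_per_sample             # sensitivity 1 for counts
        noisy_mean = sum(window) / len(window) + laplace_noise(scale, rng)
        released.extend([noisy_mean] * len(window))
        i = j
    return released
```

Stable stretches of the stream are thus summarized by one noisy mean instead of many independent noisy samples, which is where the utility gain over fixed windows comes from.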

2019 ◽  
Vol 2019 (1) ◽  
pp. 26-46 ◽  
Author(s):  
Thee Chanyaswad ◽  
Changchang Liu ◽  
Prateek Mittal

Abstract A key challenge facing the design of differential privacy in the non-interactive setting is to maintain the utility of the released data. To overcome this challenge, we utilize the Diaconis-Freedman-Meckes (DFM) effect, which states that most projections of high-dimensional data are nearly Gaussian. Hence, we propose the RON-Gauss model that leverages the novel combination of dimensionality reduction via random orthonormal (RON) projection and the Gaussian generative model for synthesizing differentially-private data. We analyze how RON-Gauss benefits from the DFM effect, and present multiple algorithms for a range of machine learning applications, including both unsupervised and supervised learning. Furthermore, we rigorously prove that (a) our algorithms satisfy the strong ɛ-differential privacy guarantee, and (b) RON projection can lower the level of perturbation required for differential privacy. Finally, we illustrate the effectiveness of RON-Gauss under three common machine learning applications – clustering, classification, and regression – on three large real-world datasets. Our empirical results show that (a) RON-Gauss outperforms previous approaches by up to an order of magnitude, and (b) loss in utility compared to the non-private real data is small. Thus, RON-Gauss can serve as a key enabler for real-world deployment of privacy-preserving data release.
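A simplified sketch of the RON-Gauss pipeline using NumPy. It perturbs only the projected mean (the paper also privatizes the covariance), and the sensitivity bound assumes rows are pre-normalized to unit L2 norm; treat it as an illustration, not the authors' exact algorithm.

```python
import numpy as np

def ron_gauss_release(X, d_proj, epsilon, seed=0):
    """Project data onto a random orthonormal (RON) basis, fit a
    Gaussian to the projection with a Laplace-perturbed mean, and
    sample synthetic records.

    Assumptions: rows of X have unit L2 norm, so the L1 sensitivity of
    the projected mean is bounded by 2*sqrt(d_proj)/n; the covariance
    is left non-private here for brevity.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Random orthonormal projection: QR of a Gaussian matrix.
    W, _ = np.linalg.qr(rng.standard_normal((d, d_proj)))
    Z = X @ W                      # n x d_proj; near-Gaussian by the DFM effect
    sensitivity = 2.0 * np.sqrt(d_proj) / n
    mu = Z.mean(axis=0) + rng.laplace(scale=sensitivity / epsilon, size=d_proj)
    cov = np.cov(Z, rowvar=False)  # non-private in this sketch
    synth = rng.multivariate_normal(mu, cov, size=n)
    return synth, W
```

The point of the RON step is visible in the `sensitivity` line: projecting down to `d_proj` dimensions shrinks the perturbation needed for the same ε.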


Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2403 ◽  
Author(s):  
Puning Zhang ◽  
Jie Ma

Advances in information and communication technologies in medical areas have led to the emergence of the wireless body area network (WBAN). The high accessibility of the medium in WBAN can easily lead to malicious eavesdropping or tampering attacks, which may steal private data or inject false data. However, existing privacy protection mechanisms in WBAN depend on third-party key management systems and have a complex key exchange process. To enhance user privacy at low cost and with high flexibility, a channel-characteristic-aware privacy protection mechanism is proposed for WBAN. In the proposed mechanism, the similarity of received signal strength (RSS) is measured to authenticate nodes. The key extraction technique reduces the cost of the key distribution process. Due to the half-duplex communication mode of sensors, biased random sequences are extracted from the RSS of sensor nodes and the coordinator. To reduce inconsistency, we propose n-dimension quantification and fuzzy extraction, which can quickly encrypt the transmitted information and effectively identify malicious nodes. Simulation results show that the proposed mechanism can effectively protect user privacy against tampering and eavesdropping attacks.
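The quantization step can be illustrated with a common guard-band heuristic from the RSS key-extraction literature; this is a stand-in sketch, not the paper's n-dimension quantification or fuzzy extractor, and the threshold scheme (mean ± α·std) is an assumption.

```python
import statistics

def quantize_rss(samples, alpha=0.2):
    """Turn a shared RSS trace into bits: values above mean + alpha*std
    map to 1, values below mean - alpha*std map to 0, and values in the
    guard band are dropped. The dropped indices are returned so both
    endpoints can discard the same positions and keep their bit strings
    aligned.
    """
    mean = statistics.fmean(samples)
    std = statistics.pstdev(samples)
    upper, lower = mean + alpha * std, mean - alpha * std
    bits, dropped = [], []
    for i, v in enumerate(samples):
        if v > upper:
            bits.append(1)
        elif v < lower:
            bits.append(0)
        else:
            dropped.append(i)   # ambiguous sample: both sides skip it
    return bits, dropped
```

The guard band trades key length for agreement rate: widening `alpha` drops more samples but makes the two sides' bit strings more likely to match, reducing the inconsistency the fuzzy-extraction stage must repair.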


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Dawei Jiang ◽  
Guoquan Shi

With the close integration of science, technology, and health, the broad application prospects of health interconnection bring revolutionary changes to health services. Health and medical wearable devices can collect real-time data related to user health, such as user behavior, mood, and sleep, which have great commercial and social value. Healthcare wearable devices, as important network nodes for health interconnection, connect patients and hospitals through the Internet of Things and sensing technology to form a huge medical network. Because wearable devices can collect user data regardless of time and place, uploading data to the cloud can easily leave the wearable device's system vulnerable to attacks and data leakage. Defects in technology can also cause problems such as a lack of control over data flow links in wearable devices, making data and privacy leaks more likely. In this regard, how to ensure data security and user privacy while using healthcare wearable devices to collect data is a problem worth studying. This article examines data from healthcare wearable devices from technical, management, and legal perspectives, and studies data security and privacy protection issues for healthcare wearable devices, in order to protect data security and user privacy and to promote both the sustainable development of the healthcare wearable device industry and the scientific use of the data collected.


Author(s):  
Huichuan Liu ◽  
Yong Zeng ◽  
Jiale Liu ◽  
Zhihong Liu ◽  
Jianfeng Ma ◽  
...  

Abstract In recent years, with the development of mobile terminals, geographic location has attracted the attention of many researchers because of its convenience of collection and its ability to reflect user profiles. To protect user privacy, researchers have adopted local differential privacy in the data collection process. However, most existing methods assume that locations have already been discretized, which, we found, may introduce substantial noise if not done carefully, lowering the utility of the collected results. Thus, in this paper, we design a differentially private location division module that automatically discretizes locations according to the access density of each region. However, as the discretized regions may be large, directly applying existing local-differential-privacy-based attribute collection methods may completely destroy the overall utility of the collected results. Thus, we further improve the optimized binary local hashing method, based on personalized differential privacy, to collect users' visit frequency for each discretized region. This solution improves the accuracy of the collected results while preserving the privacy of users' geographic locations. Through experiments on synthetic and real data sets, this paper shows that the proposed method achieves higher accuracy than the best known method under the same privacy budget.
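The frequency-collection step can be illustrated with plain k-ary randomized response in place of the paper's optimized binary local hashing: a simpler local-DP primitive with the same interface (each user perturbs their own region locally; the server debiases the aggregate). All names here are illustrative.

```python
import math
import random

def krr_perturb(region, k, epsilon, rng):
    """k-ary randomized response: report the true region with probability
    e^eps / (e^eps + k - 1), otherwise a uniformly random other region."""
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if rng.random() < p:
        return region
    other = rng.randrange(k - 1)
    return other if other < region else other + 1

def estimate_frequencies(reports, k, epsilon):
    """Unbiased frequency estimates from the perturbed reports."""
    n = len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = (1 - p) / (k - 1)          # prob. of reporting a specific wrong region
    counts = [0] * k
    for r in reports:
        counts[r] += 1
    return [((c / n) - q) / (p - q) for c in counts]
```

The debiased estimates sum to exactly 1 by construction, and their variance grows with k, which is precisely why the abstract's density-aware discretization (keeping the number of regions small where it matters) pays off.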


2021 ◽  
Vol 2021 ◽  
pp. 1-22
Author(s):  
Mingzhen Li ◽  
Yunfeng Wang ◽  
Yang Xin ◽  
Hongliang Zhu ◽  
Qifeng Tang ◽  
...  

As a review system, the Crowd-Sourced Local Businesses Service System (CSLBSS) allows users to publicly publish reviews for businesses that include a display name, avatar, and review content. While these reviews can maintain business reputations and provide valuable references for others, an adversary can also legitimately obtain a user's display name and a large number of historical reviews. For this problem, we show that the adversary can launch a connecting user identities attack (CUIA) and a statistical inference attack (SIA) to obtain user privacy by exploiting the acquired display names and historical reviews. However, existing methods based on anonymity and review suppression cannot resist these two attacks; moreover, suppressing reviews may prevent some reviews of higher usefulness from being published. To solve these problems, we propose a cross-platform strong privacy protection mechanism (CSPPM) based on partial publication and complete anonymity. In CSPPM, based on the consistency between the user score and the business score, we propose a partial publication mechanism that publishes reviews of higher usefulness and filters out false or untrue reviews, ensuring that the mechanism does not suppress useful reviews and improves system utility. We also propose a complete anonymity mechanism that anonymizes the display names and avatars of publicly published reviews, ensuring that the adversary cannot obtain user privacy through CUIA or SIA. Finally, we evaluate CSPPM both theoretically and experimentally. The results show that it resists CUIA and SIA and improves system utility.
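A toy sketch of the two mechanisms combined. The consistency test here is a single absolute-gap threshold between the star rating and the business average, which is a loud simplification of the paper's usefulness model; the field names are invented for illustration.

```python
def partial_publish(reviews, business_avg, max_gap=1.5):
    """Publish only reviews whose star rating is consistent with the
    business's average score; large gaps are treated as likely false
    reviews and withheld. Published reviews are fully anonymized
    (display names stripped), mirroring the complete anonymity step.

    reviews: list of (display_name, stars, text) tuples.
    """
    published, withheld = [], []
    for user, stars, text in reviews:
        if abs(stars - business_avg) <= max_gap:
            published.append((user, stars, text))
        else:
            withheld.append((user, stars, text))
    # Complete anonymity: strip display names from what is published.
    return [("anonymous", s, t) for _, s, t in published], withheld
```

Because the published reviews carry no stable display name, an adversary has no handle for linking identities across platforms (CUIA) or accumulating one user's history for inference (SIA).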


2021 ◽  
Vol 17 (2) ◽  
pp. 155014772199340
Author(s):  
Xiaohui Li ◽  
Yuliang Bai ◽  
Yajun Wang ◽  
Bo Li

Suppressing the trajectory data to be released can effectively reduce the risk of user privacy leakage. However, globally suppressing the data set to satisfy traditional privacy models reduces the availability of trajectory data. Therefore, we propose a trajectory data differential privacy protection algorithm based on local suppression, Trajectory Privacy protection based on Local Suppression (TPLS), to provide users with the ability and flexibility to protect data through local suppression. The main contributions of this article are as follows: (1) introducing a privacy protection method into trajectory data release, (2) performing effective local suppression judgment on the points in the minimum violation sequences of the trajectory data set, and (3) proposing a differential privacy protection algorithm based on local suppression. In the algorithm, we reduce the maximal frequent sequence (MFS) loss rate in the trajectory data set through effective local suppression judgment and updating of the minimum violation sequence set, and then build a classification tree and add noise to its leaf nodes to improve the security of the data to be published. Simulation results show that the proposed algorithm is effective: it reduces the data loss rate and improves data availability while reducing the risk of user privacy leakage.
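The noise-addition step can be sketched on its own: count queries over the classification tree's leaves have sensitivity 1, so Laplace(1/ε) noise per leaf suffices. The local-suppression stage that produces the tree is not reproduced here, and clamping negatives to zero is a post-processing choice of this sketch.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via the inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_leaf_counts(leaf_counts, epsilon, seed=0):
    """Add Laplace(1/epsilon) noise to each leaf count of the
    classification tree before release (sensitivity 1 for counts),
    clamping negatives to zero as post-processing."""
    rng = random.Random(seed)
    return [max(0.0, c + laplace_noise(1.0 / epsilon, rng)) for c in leaf_counts]
```

Post-processing (the clamp) cannot weaken the differential privacy guarantee, so the released counts remain ε-differentially private.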


PLoS ONE ◽  
2021 ◽  
Vol 16 (3) ◽  
pp. e0248737
Author(s):  
Yaling Zhang ◽  
Jin Han

The fuzzy C-means clustering algorithm is one of the typical clustering algorithms in data mining applications. However, due to sensitive information in the dataset, there is a risk of user privacy being leaked during the clustering process. Fuzzy C-means clustering with differential privacy protection can protect individual privacy while mining data rules; however, the decline in availability caused by data perturbation is a common problem of these algorithms. Aiming at the problem that randomly initializing the membership matrix of fuzzy C-means reduces algorithm accuracy, this paper first uses the maximum distance method to determine the initial center points. Then, the Gaussian value of each cluster center point is used to calculate the privacy budget allocation ratio, and Laplace noise is added to complete the differential privacy protection. The experimental results demonstrate that the clustering accuracy and effectiveness of the proposed algorithm are higher than those of the baselines under the same privacy protection intensity.
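The initialization step can be sketched as a deterministic farthest-point heuristic, which is one common reading of the "maximum distance method": start from the two farthest points, then repeatedly add the point farthest from all chosen centers. The abstract does not spell out its exact variant, so treat this as an assumption.

```python
def max_distance_centers(points, k):
    """Pick k initial cluster centers deterministically: start from the
    two mutually farthest points, then repeatedly add the point whose
    minimum distance to the chosen centers is largest. Replaces random
    membership-matrix initialization in fuzzy C-means."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # Farthest pair (O(n^2); fine for a sketch).
    best = max(((i, j) for i in range(len(points))
                       for j in range(i + 1, len(points))),
               key=lambda ij: d2(points[ij[0]], points[ij[1]]))
    centers = [points[best[0]], points[best[1]]]
    while len(centers) < k:
        nxt = max(points, key=lambda p: min(d2(p, c) for c in centers))
        centers.append(nxt)
    return centers
```

Because the output depends only on the data, repeated runs start from the same centers, removing the run-to-run accuracy variance that random initialization adds on top of the DP noise.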


Computers ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 34 ◽  
Author(s):  
Mashael M. Alsulami ◽  
Arwa Yousef Al-Aama

The high volume of user-generated content caused by the popular use of online social network services exposes users to different kinds of content that can be harmful or unwanted. Solutions to protect user privacy from such unwanted content cannot be generalized, because each individual perceives differently what counts as unwanted. Thus, there is a substantial need to design a personalized privacy protection mechanism that takes into consideration differences in users' privacy requirements. To achieve personalization, a user's attitude about certain content must be acknowledged by the automated protection system. In this paper, we investigate the relationship between user attitude and user behavior among users from the Makkah region in Saudi Arabia to determine the applicability of using users' behaviors as indicators of their attitudes towards unwanted content. We propose a semi-explicit attitude measure to infer user attitude from user-selected examples. Results revealed that, for our sample, semi-explicit attitude represents users' actual attitudes more reliably than self-reported preferences. In addition, results show a statistically significant relationship between a user's commenting behavior and the user's semi-explicit attitude within our sample. Thus, commenting behavior is an effective indicator of the semi-explicit attitude towards unwanted content for users from the Makkah region in Saudi Arabia. We believe that our findings can have positive implications for designing an effective automated personalized privacy protection mechanism, and that reproducing the study with other populations would extend their generalizability.


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Weiqi Zhang ◽  
Guisheng Yin ◽  
Yuhai Sha ◽  
Jishen Yang

The rapid development of Global Positioning System (GPS) devices and location-based services (LBSs) facilitates the collection of huge amounts of personal information by untrusted or unknown LBS providers. This phenomenon raises serious privacy concerns. However, most existing solutions aim at perturbing locations in static scenes or at a single timestamp, without considering the correlation between location transfer and time for moving users; such solutions are therefore vulnerable to various inference attacks. Traditional privacy protection methods rely on trusted third-party service providers, but in reality we cannot be sure whether the third party is trustworthy. In this paper, we propose a systematic solution for preserving location information. The protection provides a rigorous privacy guarantee without assuming the credibility of third parties. The user's historical trajectory information is used as the basis of a hidden Markov model prediction, and the user's possible prospective location is used as the model's output to protect the user's trajectory privacy. To formalize the privacy-protecting guarantee, we propose a new definition, the L&A-location region, based on k-anonymity and differential privacy. Based on the proposed privacy definition, we design a novel mechanism to provide a privacy protection guarantee for the users' identity trajectories. We simulate the proposed mechanism on a dataset collected in real practice. The simulation results show that the proposed algorithm can provide privacy protection to a high standard.
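The prediction idea can be illustrated with a first-order Markov model over location cells, standing in for the paper's hidden Markov model: train on the user's historical trajectories, then release the predicted next cell rather than the true position. Cell names are illustrative.

```python
from collections import Counter, defaultdict

def train_markov(trajectories):
    """First-order Markov model over location cells: count observed
    transitions in the user's historical trajectories. A simplified
    stand-in for a hidden Markov model (no hidden states)."""
    trans = defaultdict(Counter)
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            trans[a][b] += 1
    return trans

def predict_next(trans, current):
    """Most likely next cell given the current one; the prediction
    (not the true position) is what would be released. Returns None
    for cells never seen as a transition source."""
    if current not in trans:
        return None
    return trans[current].most_common(1)[0][0]
```

Because the released cell is a function of the historical model rather than the live GPS fix, an observer of the output learns the user's habitual pattern at best, not the exact current location.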


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Weiya Wang ◽  
Geng Yang ◽  
Lin Bao ◽  
Ke Ma ◽  
Hao Zhou ◽  
...  

Many application services based on location data bring great convenience to people's daily lives. However, publishing location data may divulge sensitive individual information. Because location records may be scattered across the database, some existing privacy protection schemes have difficulty protecting location data in data mining. In this paper, we propose a travel trajectory data record privacy protection scheme (TMDP) based on a differential privacy mechanism, which employs a trajectory graph model of the location database and frequent subgraph mining on weighted graphs. Time series are introduced into the location data, and a weighted trajectory model is designed to obtain the travel trajectory graph database. We thus upgrade the mining of location data to the mining of frequent trajectory graphs, which can discover relationships among location data in the database while protecting the location data being mined. In particular, to improve the identification efficiency of frequent trajectory graphs, we design a weighted trajectory graph support calculation algorithm based on canonical codes and subgraph structure. Moreover, to improve data utility under the premise of protecting user privacy, we propose a double process of adding noise to the subgraph mining process with the Laplace mechanism and selecting the final data with the exponential mechanism. Through formal privacy analysis, we prove that our TMDP framework satisfies ε-differential privacy. Compared with other schemes, experiments show that the proposed scheme achieves higher data availability and more effective privacy protection.
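The final selection step can be sketched with the standard exponential mechanism: choose a candidate with probability proportional to exp(ε·u/(2Δu)), where u is its utility (here, subgraph support). The candidate names and support values below are illustrative, not from the paper.

```python
import math
import random

def exponential_mechanism(candidates, utility, epsilon, sensitivity=1.0, seed=0):
    """Select one candidate with probability proportional to
    exp(epsilon * utility / (2 * sensitivity)). Here it stands in for
    TMDP's final selection among frequent trajectory subgraphs, with
    support counts as the utility function (sensitivity 1)."""
    rng = random.Random(seed)
    weights = [math.exp(epsilon * utility(c) / (2.0 * sensitivity))
               for c in candidates]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return c
    return candidates[-1]   # guard against float rounding
```

High-support subgraphs are exponentially favored but never certain, which is what lets the selection itself satisfy ε-differential privacy rather than deterministically leaking the top pattern.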

