privacy level
Recently Published Documents

TOTAL DOCUMENTS: 52 (five years: 25)
H-INDEX: 7 (five years: 2)

2021, Vol 6 (4), pp. 349-359
Author(s): Maysa Abubaker Yousif, Aniza Abdul Aziz

Visual privacy is one of the vital aspects of Islamic house design. This paper analyzes the level of visual privacy in the layouts of residential apartment unit samples in Khartoum, Sudan, based on Islamic values and Sudanese culture, and examines how modern apartment unit designs respond to these needs. The samples comprised four units from courtyard-villas and two units from apartment buildings. The architectural layout plans, spatial relations, functions, and space zoning were used to assess the level of visual privacy of each unit. Findings showed that the courtyard-villas offered a higher degree of privacy and cultural value, reflecting the Sudanese lifestyle more closely than the apartment units, even though the apartment unit designs pay more attention to nuclear-family privacy. This study can assist designers in enhancing visual privacy in apartment unit layouts by highlighting the factors that diminish or enhance the level of visual privacy, so as to create appropriate designs for Sudanese Muslims and Muslims in general.


2021, Vol 2021, pp. 1-12
Author(s): Rong Hu, Binru Zhang

This paper investigates a constrained distributed optimization problem under differential privacy, where the underlying network is time-varying and modeled by unbalanced digraphs. To solve this problem, we first propose a differentially private online distributed algorithm that injects adaptively adjustable Laplace noise. The proposed algorithm not only protects the privacy of participants without relying on a trusted third party, but can also be implemented on more general time-varying unbalanced digraphs. Under mild conditions, we then show that the proposed algorithm achieves a sublinear expected regret bound for general local convex objective functions. The result reveals a trade-off between optimization accuracy and privacy level. Finally, numerical simulations are conducted to validate the efficiency of the proposed algorithm.
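The Laplace-noise-injection step this abstract describes can be sketched as follows; `privatize_state` and its `sensitivity`/`epsilon` parameters are hypothetical names for illustration, not the paper's actual algorithm:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def privatize_state(state: float, sensitivity: float, epsilon: float) -> float:
    """Perturb a node's local state before sharing it with neighbours.

    The Laplace scale is sensitivity / epsilon: a smaller epsilon gives
    stronger privacy but larger noise, which is exactly the
    accuracy/privacy trade-off the abstract refers to.
    """
    return state + laplace_noise(sensitivity / epsilon)
```

Because the noise has zero mean, it averages out across many exchanges, so the perturbed states remain useful for consensus-style averaging while individual contributions stay masked.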


Entropy, 2021, Vol 23 (8), pp. 961
Author(s): Mijung Park, Margarita Vinaroz, Wittawat Jitkrittum

We developed a novel approximate Bayesian computation (ABC) framework, ABCDP, which produces differentially private (DP) approximate posterior samples. Our framework takes advantage of the sparse vector technique (SVT), widely studied in the differential privacy literature. SVT incurs a privacy cost only when a condition (whether a quantity of interest is above or below a threshold) is met. If the condition is met only sparsely during repeated queries, SVT can drastically reduce the cumulative privacy loss, unlike the usual setting where every query incurs a privacy loss. In ABC, the quantity of interest is the distance between observed and simulated data, and only when the distance falls below a threshold do we take the corresponding prior sample as a posterior sample. Hence, applying SVT to ABC is an organic way to transform an ABC algorithm into a privacy-preserving variant with minimal modification, while yielding posterior samples with a high privacy level. We theoretically analyzed the interplay between the noise added for privacy and the accuracy of the posterior samples. We apply ABCDP to several data simulators and show the efficacy of the proposed framework.
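A minimal sketch of the accept/reject loop this describes: a noisy distance is compared against a noisy threshold, and only the sparse "accept" answers dominate the privacy cost. The function name `abcdp_sample`, the noise scales, and the omission of SVT's cap on the number of acceptances are illustrative simplifications, not the paper's actual ABCDP algorithm:

```python
import math
import random

def laplace(scale: float) -> float:
    """Sample from Laplace(0, scale)."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def abcdp_sample(observed, simulate, prior, distance, threshold, epsilon, n_draws):
    """SVT-style ABC: compare a noisy distance against a noisy threshold.

    Only draws whose noisy distance falls below the noisy threshold
    become posterior samples. Full SVT also halts after a fixed number
    of acceptances, which is omitted here for brevity.
    """
    noisy_threshold = threshold + laplace(2.0 / epsilon)
    accepted = []
    for _ in range(n_draws):
        theta = prior()
        d = distance(observed, simulate(theta))
        if d + laplace(4.0 / epsilon) <= noisy_threshold:
            accepted.append(theta)
    return accepted
```

With a trivial simulator that returns the parameter itself, the accepted samples concentrate around the observed value, as in plain rejection ABC.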


2021, Vol 2021, pp. 1-17
Author(s): Asad Khan, Muhammad Mehran Arshad Khan, Muhammad Awais Javeed, Muhammad Umar Farooq, Adeel Akram, ...

Traditional approaches generally focus on the privacy of a user's identity in a smart IoT environment. The privacy of a user's behavior pattern, which consists of their daily living activities, is an equally important research issue for smart technologies that aim to improve users' lives. Sensor nodes interact directly with the user's activities and forward sensing data to a service provider server (SPS). While availing themselves of the services a server provides, users may lose privacy, since untrusted devices hold information about the user's behavior pattern and may share it with an adversary. To resolve this problem, we propose a multilevel privacy controlling scheme (MPCS) that differs from traditional approaches. MPCS is divided into two parts. (i) Behavior pattern privacy degree (BehaviorPrivacyDeg) works as follows: first, a frequent pattern mining-based time-duration algorithm (FPMTA) finds the normal pattern of an activity by unsupervised learning; second, a pattern compaction algorithm (PCA) stores and compacts the mined pattern in each sensor device; then, an abnormal activity detection time-duration algorithm (AADTA) is run by the currently triggered sensors to compare the current activity with the normal activity by computing the similarity between them. (ii) The multilevel privacy design model divides user privacy in the smart IoT environment into four levels, which the server can use to configure the privacy level for each user according to their concerns. This model consists of a privacy-level configuration protocol (PLCP) and an activity design model. PLCP provides fine-grained privacy controls by letting users set their own privacy level; within PLCP, we introduce a level concern privacy algorithm (LCPA) and a location privacy algorithm (LPA), so that an adversary cannot compromise the data of a user's behavior pattern.
Experiments were performed to evaluate the accuracy and feasibility of MPCS in both simulation and real-case studies. The results show that the proposed scheme can effectively protect the user's behavior pattern by detecting abnormality in real time.
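The abnormality check at the heart of such a scheme can be sketched as a comparison of the current activity against a mined profile of typical start times and durations. The `NORMAL_PATTERNS` table, its field layout, and the tolerance are hypothetical illustrations, not the actual FPMTA/AADTA algorithms:

```python
# Hypothetical mined profile: activity -> (typical start hour, typical duration in minutes).
NORMAL_PATTERNS = {
    "sleep": (22, 480),
    "breakfast": (7, 30),
    "shower": (8, 15),
}

def is_abnormal(activity: str, start_hour: int, duration_min: int,
                tolerance: float = 0.5) -> bool:
    """Flag an activity that deviates from its mined profile.

    An unknown activity, an unusual start time, or a duration deviating
    by more than `tolerance` (a fraction of the typical duration)
    counts as abnormal.
    """
    if activity not in NORMAL_PATTERNS:
        return True
    typical_start, typical_dur = NORMAL_PATTERNS[activity]
    if abs(start_hour - typical_start) > 2:
        return True
    return abs(duration_min - typical_dur) > tolerance * typical_dur
```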


2021, Vol 48 (4), pp. 8-11
Author(s): Jefferson E. Simoes, Eduardo Ferreira, Daniel S. Menasché, Carlos A. V. Campos

Cryptocurrencies typically aim to preserve the privacy of their users. Different cryptocurrencies preserve privacy to different degrees, some requiring users to rely on additional strategies to raise the privacy level to their needs. Among those strategies, we focus on two: merge avoidance and mixing services. Such strategies may be adopted on top of virtually any blockchain-based cryptocurrency. In this paper, we show that whereas optimal merge avoidance leads to an NP-hard optimization problem, incentive-compatible mixing services are subject to a certain class of impossibility results. Together, our results contribute to the body of work on fundamental limits of privacy mechanisms in blockchain-based cryptocurrencies.


Author(s): Waleed M. Ead, et al.

To build an anonymization, the data anonymizer must settle three issues: first, which data should be preserved? Second, what adversary background knowledge could be used to disclose the anonymized data? Third, what is the intended usage of the anonymized data? Different answers to these three questions, reflecting different adversary background knowledge and information usage (information utility), lead to different anonymization techniques, and hence to different levels of information loss. In this paper, we propose a general framework for utility-based anonymization that minimizes the information loss in published data while guaranteeing that the required privacy level is achieved.
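As an illustration of trading utility against privacy, the sketch below picks the least general generalization level (lowest information loss) that still achieves k-anonymity on a single quasi-identifier. The age hierarchy and helper names are hypothetical, not the framework proposed in the paper:

```python
from collections import Counter

def is_k_anonymous(values, k: int) -> bool:
    """Every generalized value must be shared by at least k records."""
    return all(count >= k for count in Counter(values).values())

def least_general_level(ages, hierarchy, k: int):
    """Try generalization levels from most specific to most general and
    return the first level (minimal information loss) satisfying k-anonymity."""
    for level in sorted(hierarchy):
        generalized = [hierarchy[level](a) for a in ages]
        if is_k_anonymous(generalized, k):
            return level, generalized
    return None, None

# Hypothetical 3-level hierarchy for an "age" quasi-identifier.
AGE_HIERARCHY = {
    0: lambda a: a,                   # exact age: no information loss
    1: lambda a: f"{a // 10 * 10}s",  # decade bucket
    2: lambda a: "*",                 # fully suppressed
}
```

Stopping at the first level that passes is what makes the anonymization utility-based: it never generalizes further than the privacy requirement forces it to.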


2021, Vol 1 (2 (109)), pp. 24-34
Author(s): Alexander Zadereyko, Yuliia Prokop, Olena Trofymenko, Natalia Loginova, Olha Plachinda

To identify the ways in which data are collected from user communication devices, an analysis of the interaction between DNS clients and the Internet domain name space was carried out. It was established that a communication device's DNS traffic is logged by the provider's DNS servers, which poses a threat to user privacy. A comprehensive algorithm for protection against the collection of user data, consisting of two modules, was developed and tested. The first module redirects the communication device's DNS traffic through DNS proxy servers with a predefined anonymity class, based on the proposed multitest. To ensure a smooth and sustainable connection, the module automatically connects to the DNS proxy server with the minimal response time among those available in the compiled list. The second module blocks the acquisition of data collected by the developers of the software installed on the user's communication device, as well as by specialized Internet services owned by IT companies. The proposed algorithm lets users choose their preferred level of privacy when communicating with the Internet, thereby limiting the possibility of manipulating information about them. The DNS traffic of various fixed and mobile communication devices was audited. The analysis of this DNS traffic made it possible to identify and structure the DNS requests responsible for collecting user data on behalf of Internet services owned by IT companies. The identified DNS queries were blocked, and it was experimentally confirmed that the performance of the basic and application software on the communication devices was not compromised.
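The two modules can be sketched as follows; the probe callable, the blocklist entries, and the function names are hypothetical placeholders, not the authors' implementation:

```python
import time

def measure_latency(probe, proxy, timeout: float = 2.0) -> float:
    """Time one round-trip through `probe(proxy)`; unreachable -> infinity."""
    start = time.monotonic()
    try:
        probe(proxy)
    except OSError:
        return float("inf")
    return time.monotonic() - start

def pick_fastest_proxy(proxies, probe):
    """Module 1: connect to the DNS proxy with minimal response time."""
    return min(proxies, key=lambda p: measure_latency(probe, p))

# Module 2: block DNS queries known to serve data collection (hypothetical hosts).
BLOCKED_HOSTS = {"telemetry.example.com", "metrics.example.net"}

def query_allowed(hostname: str) -> bool:
    return hostname not in BLOCKED_HOSTS
```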


2021, Vol 11 (1)
Author(s): Albert Cheu, Adam Smith, Jonathan Ullman

Local differential privacy is a widely studied restriction on distributed algorithms that collect aggregates about sensitive user data, and is now deployed in several large systems. We initiate a systematic study of a fundamental limitation of locally differentially private protocols: they are highly vulnerable to adversarial manipulation. While any algorithm can be manipulated by adversaries who lie about their inputs, we show that any noninteractive locally differentially private protocol can be manipulated to a much greater extent: when the privacy level is high, or the domain size is large, a small fraction of users in the protocol can completely obscure the distribution of the honest users' input. We also construct protocols that are optimally robust to manipulation for a variety of common tasks in local differential privacy. Finally, we give simple experiments validating our theoretical results, and demonstrating that protocols that are optimal without manipulation can have dramatically different levels of robustness to manipulation. Our results suggest caution when deploying local differential privacy and reinforce the importance of efficient cryptographic techniques for the distributed emulation of centrally differentially private mechanisms.
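The manipulation effect can be illustrated on the simplest local-DP protocol, binary randomized response; this toy demo (the function names and the attacker fraction are illustrative, not the paper's constructions) shows a small coalition of liars inflating the debiased estimate:

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1)."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p else 1 - bit

def debiased_mean(reports, epsilon: float) -> float:
    """Unbias the aggregate of randomized-response reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    raw = sum(reports) / len(reports)
    return (raw - (1.0 - p)) / (2.0 * p - 1.0)
```

Because the correction divides by 2p - 1, which shrinks toward zero as epsilon decreases, every dishonest report that skips the randomizer and always sends 1 is amplified in the final estimate, matching the claim that higher privacy levels worsen manipulability.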


2021, Vol 3 (518), pp. 23-28
Author(s): K. O. Kaverina, A. S. Sholom

The existence of tax havens is an integral phenomenon of the modern stage of world economic development. Tax havens compete with each other on tax rates, privacy level, and the quality and speed of the services they offer. Determining the impact of tax havens is now a rather pressing issue, interest in which has been deepened largely by leaks of information, particularly the Panama Papers. However, leaks of information from tax havens are not sufficiently covered in the research of either domestic or foreign scholars. The article is aimed at examining the impact of tax havens and the Panama Papers information leak on the world economy. The definition of tax havens and their scale under modern conditions of the globalized economy are considered. In systematizing the scientific literature, the various leaks of information from tax havens were compared and the largest of them was identified. The essence of the activities of Mossack Fonseca, the company that was the victim of the data leak, is disclosed. The dynamics of offshore companies registered within the scope of the Panama Papers leak are substantiated. With the help of a mathematical model, the authors computed the trend in the number of companies using the services of offshore zones. Regression trend analysis showed that the number of companies registered by Mossack Fonseca tends to grow (by an average of 292 units annually). This indicates that, despite the publication of classified information, the popularity of tax havens continues to increase. The main intermediary countries most popular in tax speculation are listed. It is determined that the simplicity of forming and registering companies and the lack of control over subsidiaries of multinational business groups are key attributes of tax havens that contribute to their use for avoiding income taxes and laundering money.
The consequences caused by the activities of tax havens and information leaks from them are formulated; the most important among them are sanctions provisions and the monitoring of offshore activities.
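The regression trend mentioned above amounts to an ordinary least-squares slope over yearly registration counts. A minimal sketch on synthetic data (the series below is invented for illustration, not the Panama Papers figures):

```python
def trend_slope(years, counts):
    """Ordinary least-squares slope: the average annual change in the series."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(counts) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, counts))
    den = sum((x - mean_x) ** 2 for x in years)
    return num / den
```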


2021, Vol 2021 (1), pp. 64-84
Author(s): Ashish Dandekar, Debabrota Basu, Stéphane Bressan

The calibration of noise for a privacy-preserving mechanism depends on the sensitivity of the query and the prescribed privacy level. A data steward must make the non-trivial choice of a privacy level that balances the requirements of users and the monetary constraints of the business entity.
Firstly, we analyse the roles of the sources of randomness involved in the design of a privacy-preserving mechanism, namely the explicit randomness induced by the noise distribution and the implicit randomness induced by the data-generation distribution. This finer analysis enables us to provide stronger privacy guarantees with quantifiable risks. We thus propose privacy at risk, a probabilistic calibration of privacy-preserving mechanisms. We provide a composition theorem that leverages privacy at risk, and we instantiate the probabilistic calibration for the Laplace mechanism by providing analytical results.
Secondly, we propose a cost model that bridges the gap between the privacy level and the compensation budget estimated by a GDPR-compliant business entity. The convexity of the proposed cost model leads to a unique fine-tuning of the privacy level that minimises the compensation budget. We show its effectiveness by illustrating a realistic scenario that avoids overestimating the compensation budget by using privacy at risk for the Laplace mechanism. We quantitatively show that composition using the cost-optimal privacy at risk provides a stronger privacy guarantee than the classical advanced composition. Although the illustration is specific to the chosen cost model, it naturally extends to any convex cost model. We also provide realistic illustrations of how a data steward uses privacy at risk to balance the trade-off between utility and privacy.
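The unique minimiser guaranteed by convexity can be found numerically; the sketch below uses ternary search on a made-up convex cost curve (the quadratic is purely illustrative, not the paper's GDPR cost model):

```python
def minimize_convex(cost, lo: float, hi: float, iters: int = 200) -> float:
    """Ternary search: a convex function on [lo, hi] has a unique minimiser,
    so discarding a third of the interval at each step converges to it."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if cost(m1) < cost(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0
```

With a convex compensation budget expressed as a function of the privacy level, the returned point is the cost-optimal privacy level.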

