privacy problem
Recently Published Documents


TOTAL DOCUMENTS: 56 (five years: 18)

H-INDEX: 4 (five years: 1)

2022 ◽  
Vol 12 (2) ◽  
pp. 842
Author(s):  
Junxin Huang ◽  
Yuchuan Luo ◽  
Ming Xu ◽  
Bowen Hu ◽  
Jian Long

Online ride-hailing (ORH) services allow people to enjoy on-demand transportation through their mobile devices with a short response time. Despite the great convenience, users must submit their location information to the ORH service provider, which may incur unexpected privacy problems. In this paper, we study the privacy and utility of the ride-sharing system, which enables multiple riders to share one driver. To solve the privacy problem and reduce detouring waste in ride-sharing, we propose a privacy-preserving ride-sharing system named pShare. To hide users’ precise locations from the service provider, we apply a zone-based travel time estimation approach that computes privately over sensitive data while cloaking each rider’s location within a zone. To compute the matching results along the least-detouring route, the service provider first computes the shortest path for each eligible rider combination, then compares the additional traveling time (ATT) of all combinations, and finally selects the combination with the minimum ATT. We design a secure comparison protocol based on garbled circuits, which enables the ORH server to execute the protocol with a crypto server without privacy leakage. Moreover, we apply a data packing technique by which multiple data items can be packed into one to reduce communication and computation overhead. Through theoretical analysis and evaluation, we show that pShare is a practical ride-sharing scheme that finds sharing riders with the minimum ATT at acceptable accuracy while protecting users’ privacy.
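The combination-selection step the abstract describes can be sketched in plaintext as follows. This is a minimal illustrative sketch only: in pShare the comparison runs inside a garbled circuit so the ORH server never sees raw travel times, and the zone travel-time table, rider names, and the single candidate shared route below are all assumptions, not the paper's construction.

```python
# Plaintext sketch of the combination-selection step: compute each eligible
# rider pair's additional traveling time (ATT) and keep the minimum.
# Travel times and names are hypothetical; the real system does this
# comparison under garbled circuits without revealing the values.
from itertools import combinations

# Hypothetical zone-to-zone travel times (minutes), as a zone-based
# estimation step might produce them.
travel_time = {
    ("A", "B"): 10, ("B", "C"): 7, ("A", "C"): 15,
    ("A", "D"): 12, ("D", "C"): 6, ("B", "D"): 9,
}

def tt(a, b):
    """Symmetric lookup into the zone travel-time table."""
    return travel_time.get((a, b), travel_time.get((b, a)))

def att(solo_times, shared_time):
    """ATT = shared-route time minus the longest solo trip it replaces."""
    return shared_time - max(solo_times)

def best_combination(riders):
    """riders: {name: (pickup_zone, dropoff_zone)}; pairs share one driver."""
    best = None
    for r1, r2 in combinations(riders, 2):
        p1, d1 = riders[r1]
        p2, d2 = riders[r2]
        # One candidate shared route: p1 -> p2 -> d1 -> d2.
        shared = tt(p1, p2) + tt(p2, d1) + tt(d1, d2)
        a = att([tt(p1, d1), tt(p2, d2)], shared)
        if best is None or a < best[1]:
            best = ((r1, r2), a)
    return best

print(best_combination({"alice": ("A", "C"), "bob": ("B", "D")}))
# -> (('alice', 'bob'), 8): route A->B->C->D takes 23 min vs. 15 min solo
```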


2021 ◽  
Author(s):  
Ngoc Hong Tran ◽  
Tri Nguyen ◽  
Quoc Binh Nguyen ◽  
Susanna Pirttikangas ◽  
M-Tahar Kechadi

This paper investigates situations in which Internet access is unavailable in specific areas while users there need instant advice from others nearby. In such cases, a peer-to-peer network can be established by connecting neighbouring mobile devices so that they can exchange questions and recommendations. However, not all received recommendations are reliable, as users may be unknown to each other. Therefore, the trustworthiness of advice is evaluated based on the advisor's reputation score, which is stored locally on the user’s mobile device. A locally stored score cannot be fully trusted, however, since its owner may manipulate it for the wrong ends. A further privacy problem is how the questioning user can honestly audit the reputation score of the advising user. This work therefore proposes a security model, named Crystal, for securely managing distributed reputation scores and preserving user privacy. Crystal ensures that reputation scores can be verified, computed, and audited in secret. Another significant point is that devices in the peer-to-peer network have limited physical resources such as bandwidth, power, and memory; to address this, Crystal applies lightweight Elliptic Curve Cryptographic algorithms so that it consumes fewer device resources. The experimental results show that the proposed model's performance is promising.
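The core audit idea, a locally stored score that its owner cannot silently change, can be illustrated with a plain hash commitment. Note this is a deliberately simplified stand-in: Crystal itself uses elliptic-curve primitives, and the function names and score encoding here are hypothetical.

```python
# Simplified illustration of auditable, locally stored reputation scores.
# A hash commitment stands in for Crystal's ECC machinery: once the digest
# is shared with peers, the advisor cannot reveal a different score later.
import hashlib
import secrets

def commit(score: int):
    """Advisor commits to a score; the digest can be shared with peers."""
    nonce = secrets.token_bytes(16)
    digest = hashlib.sha256(nonce + score.to_bytes(4, "big")).digest()
    return digest, nonce

def audit(digest: bytes, score: int, nonce: bytes) -> bool:
    """Questioner checks a revealed score against the earlier commitment."""
    return hashlib.sha256(nonce + score.to_bytes(4, "big")).digest() == digest

digest, nonce = commit(42)
print(audit(digest, 42, nonce))   # True: revealed score matches commitment
print(audit(digest, 99, nonce))   # False: a tampered score is detected
```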


2021 ◽  
Author(s):  
◽  
Kent Newman

<p>Documentary reality television is hugely successful. The genre, which includes shows like Police Ten 7, Coastwatch and Border Patrol, consistently outperforms other television formats and fills free-to-air television schedules. In these shows ride-along film crews and body-worn cameras record agencies as they go about their tasks. Often these agencies are public authorities and their tasks are statutory functions. The purpose of this paper is to examine the genre’s privacy implications. It concludes that the genre is systemically unlawful. It is unlawful because it breaches the privacy rights of involuntary participants. The paper considers the privacy implications by examining the genre against the shared features of the publication tort and the Privacy Broadcasting Standard. Both of these consider that it is a breach of privacy to broadcast material subject to a reasonable expectation of privacy, where that broadcast is highly offensive unless there is an applicable defence. While the material broadcast represents the work of agencies, it also represents the personal stories of everyday people going about their lives. Often the moments captured are significant life events and intimate moments for those people. By agreeing to contribute to the genre, agencies agree to broadcast these life events without the active involvement of the participants. Research has also found that this is often occurring without informed consent. While the focus of this paper is on the private law implications of the genre, it identifies that some public authorities’ involvement in the genre may also be ultra vires. The paper finishes by considering why, if the genre is systemically unlawful, people are not suing. It considers that general issues with access to civil justice and the powers of the Broadcasting Standards Authority stand in the way of potential complainants. It finishes by considering some solutions that could improve the situation.</p>




2021 ◽  
Vol 11 (23) ◽  
pp. 11529
Author(s):  
Tai-Lin Chin ◽  
Wan-Ni Shih

With the advent of cloud computing, low-cost, high-capacity cloud storage has attracted people to move their data from local computers to remote facilities, where they can access and share it with others at any time, from anywhere. However, the convenience of cloud storage also brings new problems and challenges. This paper investigates the problem of secure document search on the cloud. Traditional search schemes use a long index for each document to facilitate keyword search in a large dataset, but long indexes reduce search efficiency and waste space. Another concern that deters people from using cloud storage is security and privacy. Since cloud services are usually run by third-party providers, data owners wish to avoid leaking their confidential information, and data users wish to protect their privacy when searching. A trivial solution is to encrypt the data before outsourcing it to the cloud; however, encryption makes searching by plaintext keywords difficult. This paper proposes a secure multi-keyword search scheme with a condensed index for encrypted cloud documents. The proposed scheme simultaneously resolves the issue of long document indexes and the problem of searching over encrypted data. Extensive simulations show the improvements in time and space efficiency for cloud data search.
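One common way to build a fixed-size, keyword-hiding index of the kind described above is a keyed Bloom filter, sketched below. This is an assumption for illustration, not the paper's exact construction: the filter size, number of hashes, and key handling are all hypothetical.

```python
# Sketch of a "condensed index" for encrypted multi-keyword search: each
# document gets one fixed-size Bloom-filter bit array of keyed keyword
# hashes, so the server can test membership without long per-keyword
# indexes and without seeing plaintext keywords.
import hashlib
import hmac

M = 256                  # bits per document index (condensed, fixed size)
K = 3                    # hash positions per keyword
KEY = b"shared-secret"   # known to data owner and authorized users only

def positions(word: str):
    """Derive K pseudorandom bit positions for a keyword via HMAC."""
    for i in range(K):
        mac = hmac.new(KEY, f"{i}:{word}".encode(), hashlib.sha256)
        yield int.from_bytes(mac.digest()[:4], "big") % M

def build_index(keywords):
    bits = [0] * M
    for w in keywords:
        for p in positions(w):
            bits[p] = 1
    return bits

def matches(bits, query_words):
    """Server-side test: all query keywords present (up to false positives)."""
    return all(bits[p] for w in query_words for p in positions(w))

idx = build_index(["cloud", "privacy", "search"])
print(matches(idx, ["cloud", "privacy"]))  # True
print(matches(idx, ["blockchain"]))        # almost certainly False
```

The server only ever sees bit positions, never keywords; the trade-off of the condensed index is a small false-positive probability that shrinks as M grows.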


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7806
Author(s):  
Jinmyeong Shin ◽  
Seok-Hwan Choi ◽  
Yoon-Ho Choi

As the amount of data collected and analyzed by machine learning technology increases, data that can identify individuals is also being collected in large quantities. In particular, as deep learning technology, which requires large amounts of training data, is deployed in various service fields, the risk of exposing users’ sensitive information grows, and the user privacy problem is greater than ever. As a solution to this data privacy problem, homomorphic encryption, an encryption technology that supports arithmetic operations on encrypted data, has in recent years been applied to various fields, including finance and health care. This raises the question: can users enjoy deep learning services while preserving their data privacy by applying homomorphic encryption to their data? In this paper, we are the first to propose three attack methods that infringe users’ data privacy by exploiting possible security vulnerabilities in the use of homomorphic encryption-based deep learning services. To specify and verify the feasibility of exploiting these vulnerabilities, we propose three attacks: (1) an adversarial attack exploiting the communication link between the client and the trusted party; (2) a reconstruction attack using paired input and output data; and (3) a membership inference attack by a malicious insider. In addition, we describe real-world exploit scenarios for financial and medical services. Our experimental evaluation shows that the adversarial example and reconstruction attacks are practical threats to homomorphic encryption-based deep learning models: the adversarial attack decreased average classification accuracy from 0.927 to 0.043, and the reconstruction attack achieved an average reclassification accuracy of 0.888.
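The intuition behind attack (1) can be shown on a toy model: an attacker who can tamper with the client's input before it is encrypted adds a small gradient-sign perturbation that flips the model's decision. The classifier, weights, and step size below are illustrative assumptions, not the paper's experimental setup, and encryption itself is omitted since the perturbation happens upstream of it.

```python
# Toy FGSM-style adversarial perturbation on a logistic classifier,
# illustrating why tampering with inputs before encryption defeats
# homomorphic protection of the inference itself.
import math

w, b = [2.0, -3.0], 0.5          # hypothetical toy logistic classifier

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))  # probability of the positive class

x = [0.4, 0.2]                   # benign input, classified positive
eps = 0.3
# Gradient-sign step: push each feature against the positive class.
x_adv = [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

print(predict(x) > 0.5)      # True: benign input is positive
print(predict(x_adv) > 0.5)  # False: small perturbation flips the label
```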


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Daniele Romanini ◽  
Sune Lehmann ◽  
Mikko Kivelä

Abstract: The ability to share social network data at the level of individual connections is beneficial to science: not only for reproducing results, but also for researchers who may wish to use the data for purposes not foreseen by the releaser. Sharing such data, however, can lead to serious privacy issues, because individuals could be re-identified not only from possible node attributes but also from the structure of the network around them. The risk of re-identification can be measured, and it is more serious in some networks than in others. While various optimization algorithms have been proposed to anonymize networks, there is still only a limited theoretical understanding of which network features are important for the privacy problem. Using network models and real data, we show that the average degree of a network is a crucial parameter for the severity of the re-identification risk from nodes’ neighborhoods. Dense networks are more at risk, and, apart from a small band of average degree values, either almost all nodes are uniquely re-identifiable or they are all safe. Our results allow researchers to assess privacy risk from a small number of network statistics that are available even before the data is collected; as a rule of thumb, the privacy risks are high if the average degree is above 10. Guided by these results, we explore edge sampling as a strategy to mitigate the re-identification risk of nodes. This approach can be implemented during the data collection phase, and its effect on various network measures can be estimated and corrected using sampling theory. The new understanding of neighborhood uniqueness in networks presented in this work can support the development of privacy-aware network data collection procedures, anonymization methods, and ways of sharing network data.
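The density effect described above is easy to reproduce in the small. The sketch below uses the multiset of a node's neighbors' degrees as a simple proxy signature for its neighborhood; this proxy and the graph sizes are illustrative assumptions, not the paper's exact uniqueness measure.

```python
# Quick experiment in the spirit of the result: in random (Erdős–Rényi)
# graphs, the fraction of nodes with a unique neighborhood signature
# (here, the sorted multiset of neighbor degrees) rises sharply with
# average degree, i.e., dense networks are easier to re-identify in.
import random
from collections import Counter

def er_graph(n, avg_deg, seed=0):
    """Build an undirected Erdős–Rényi graph with the given average degree."""
    rng = random.Random(seed)
    p = avg_deg / (n - 1)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def unique_fraction(adj):
    """Fraction of nodes whose neighbor-degree multiset is unique."""
    sigs = [tuple(sorted(len(adj[v]) for v in adj[u]))
            for u in range(len(adj))]
    counts = Counter(sigs)
    return sum(1 for s in sigs if counts[s] == 1) / len(adj)

for avg_deg in (2, 10, 20):
    adj = er_graph(300, avg_deg)
    print(avg_deg, round(unique_fraction(adj), 2))
```

With these toy parameters the unique fraction is small at average degree 2 and approaches 1 by average degree 20, consistent with the paper's rule of thumb.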


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Anokhy Desai

Americans have felt the impacts of data breaches annually for over a decade. In the past few years, the impact and number of those breaches have increased, compromising millions of Americans’ informational privacy. This Article examines the privacy protections available to Americans and the issues arising from the lack of regulations that specifically protect data privacy. Section I of this Article offers an overview of privacy in American legal history and case law, global regulatory models, and some notable privacy regulations. Section II explores where those regulatory models and the consumer experience are lacking. Section III takes lessons learned from existing privacy regulations and proposes a suggested mitigation for the national data privacy problem. Finally, Section IV provides concluding thoughts.


Author(s):  
Imen Khabou ◽  
Mohsen Rouached ◽  
Alexandre Viejo ◽  
David Sánchez

Modeling business processes with web service composition is common practice in industry. Web service composition separates a given process into different activities that must be executed in a certain order. Each activity has its own set of inputs and outputs and is executed by a web service hosted by a service provider, which can be completely independent. Among the many applications of web service composition, this article focuses on a cloud-based scenario in which a business wishes to outsource the execution of a complex service in exchange for economic compensation. For this reason, among the composition approaches that exist in the literature, this article focuses on the orchestrated approach, in which a broker coordinates the composition. One of the main issues of orchestrated systems is that the broker receives, and thus learns, all the input data needed to perform the requested complex service. This behavior may represent a serious privacy problem, depending on the nature of the business process to be executed. In this article, a new privacy-preserving orchestrated web service composition system based on a symmetric searchable encryption primitive is proposed. Its main goal is to protect the privacy of businesses that wish to outsource their operations to a cloud-based solution in which the broker is honest but curious, that is, an entity that tries to analyze data and message flows in order to learn as much sensitive information as possible from the other participants in the system.
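The searchable-encryption idea at the heart of such a design can be sketched with keyed tokens: the client publishes only HMAC tokens of its keywords, and the honest-but-curious broker matches trapdoors against them without seeing plaintext. This is a minimal sketch under assumed names and key handling, not the article's actual primitive or protocol.

```python
# Minimal symmetric-searchable-encryption sketch: the broker routes by
# matching keyed keyword tokens (trapdoors) without learning plaintext.
import hashlib
import hmac

KEY = b"client-secret"   # shared among the business's services, not the broker

def token(word: str) -> bytes:
    """Deterministic keyed token; only key holders can produce trapdoors."""
    return hmac.new(KEY, word.encode(), hashlib.sha256).digest()

# Client side: payloads are encrypted (elided); only keyword tokens are
# published for the broker to route on.
encrypted_index = {token(w) for w in ("invoice", "q3", "eu-region")}

# Broker side: matches a trapdoor blindly against the token set.
def broker_match(trapdoor: bytes) -> bool:
    return trapdoor in encrypted_index

print(broker_match(token("invoice")))   # True: match found, keyword unseen
print(broker_match(token("payroll")))   # False: no match, nothing learned
```

The broker learns only which opaque tokens repeat across requests, which is the usual access-pattern leakage accepted by searchable-encryption schemes.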

