Privacy Risks: Recently Published Documents

Total documents: 378 (five years: 215)
H-index: 27 (five years: 7)

2022 · Vol 22 (1) · pp. 1-22
Author(s): David Major, Danny Yuxing Huang, Marshini Chetty, Nick Feamster

Many Internet of Things devices have voice user interfaces. One of the most popular is Amazon's Alexa, which supports more than 50,000 third-party applications ("skills"). We study how Alexa's integration of these skills may confuse users. Our survey of 237 participants found that users do not understand that skills are often operated by third parties, that they often confuse third-party skills with native Alexa functions, and that they are unaware of the functions that the native Alexa system supports. Surprisingly, users who interact with Alexa more frequently are more likely to conclude that a third-party skill is a native Alexa function. This potential for misunderstanding creates new security and privacy risks: attackers can develop third-party skills that operate without users' knowledge or that masquerade as native Alexa functions. To mitigate this threat, we make design recommendations to help users better distinguish native functionality from third-party skills, including audio and visual indicators of native and third-party contexts, as well as a consistent design standard to help users learn which functions are and are not possible on Alexa.
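As a concrete illustration of why third-party skills are hard to distinguish, the sketch below shows how little code a third-party skill needs: its reply is rendered in Alexa's own voice, with no audible marker of third-party origin. It uses the public ask-sdk-core Python SDK; the skill's behavior and prompt text are hypothetical, not taken from the paper.

```python
# Minimal third-party Alexa skill sketch (Python ASK SDK). The spoken reply
# carries no indication that it comes from third-party code rather than a
# native Alexa function.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.utils import is_request_type
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model import Response

sb = SkillBuilder()

@sb.request_handler(can_handle_func=is_request_type("LaunchRequest"))
def launch_handler(handler_input: HandlerInput) -> Response:
    speech = "Here is your daily briefing."  # hypothetical prompt text
    return handler_input.response_builder.speak(speech).response

handler = sb.lambda_handler()  # deployed as an AWS Lambda entry point
```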


2023 · Vol 55 (1) · pp. 1-39
Author(s): Kinza Sarwar, Sira Yongchareon, Jian Yu, Saeed Ur Rehman

Despite the rapid growth and advancement of the Internet of Things (IoT), critical challenges must be addressed before the IoT can be fully adopted. Data privacy is one such hurdle, as users' data and identities may be misused in IoT applications. Several researchers have proposed approaches to reduce these privacy risks, but most existing solutions still suffer from drawbacks such as high bandwidth utilization and network latency, heavyweight cryptosystems, and policies applied at sensor devices and in the cloud. To address these issues, fog computing has been introduced at the IoT network edge, providing low-latency computation and storage services. In this survey, we comprehensively review and classify privacy requirements for an in-depth understanding of privacy implications in IoT applications. Based on this classification, we highlight ongoing research efforts and the limitations of existing privacy-preservation techniques, and we map existing IoT schemes to fog-enabled IoT schemes to elaborate on the benefits and improvements that fog-enabled IoT can bring to data privacy in IoT applications. Lastly, we enumerate key research challenges and point out future research directions.
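To make the fog pattern concrete, here is a minimal sketch, with hypothetical field names, of a fog node that aggregates readings from nearby sensors and strips identifiers before anything is forwarded to the cloud, illustrating how edge processing can reduce both bandwidth and privacy exposure. Real deployments would add encryption and access-control policies on top of this.

```python
# Sketch of fog-node aggregation: only an anonymized summary leaves the edge.
from statistics import mean

def fog_aggregate(readings):
    # readings: dicts from nearby sensors, e.g.
    # {"device_id": "s1", "patient": "alice", "temp_c": 36.8}
    return {
        "n_devices": len({r["device_id"] for r in readings}),
        "mean_temp_c": round(mean(r["temp_c"] for r in readings), 2),
    }  # identifiers ("device_id", "patient") never leave the edge

readings = [
    {"device_id": "s1", "patient": "alice", "temp_c": 36.8},
    {"device_id": "s2", "patient": "bob", "temp_c": 37.1},
]
print(fog_aggregate(readings))  # summary forwarded to the cloud, not raw records
```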


Science · 2022 · Vol 375 (6577) · pp. 129-129
Author(s): Matthew Hutson

Software that identifies unique styles poses privacy risks


2022
Author(s): Yongfeng Huang, Chuhan Wu, Fangzhao Wu, Lingjuan Lyu, Tao Qi, ...

Graph neural networks (GNNs) are effective at modeling high-order interactions and have been widely used in personalized applications such as recommendation. However, mainstream personalization methods rely on centralized GNN learning over global graphs, which carries considerable privacy risk because user data is privacy-sensitive. Here, we present FedGNN, a federated GNN framework for effective and privacy-preserving personalization. Through a privacy-preserving model-update method, we collaboratively train GNN models on decentralized graphs inferred from local data. To exploit graph information beyond local interactions, we further introduce a privacy-preserving graph-expansion protocol that incorporates high-order information under privacy protection. Experimental results on six datasets for personalization in different scenarios show that FedGNN achieves 4.0% to 9.6% lower errors than state-of-the-art federated personalization methods while providing good privacy protection. FedGNN offers a novel direction for mining decentralized graph data in a privacy-preserving manner for responsible and intelligent personalization.
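The abstract does not reproduce the exact FedGNN protocol, but the general federated-update pattern it builds on can be sketched as follows: each client computes a gradient on its own data, clips and perturbs it for local differential privacy, and a server averages the protected updates. The least-squares gradient below is a stand-in for a local GNN step; the clip bound and noise scale are illustrative assumptions, not the paper's settings.

```python
# Sketch of a privacy-preserving federated update round (not FedGNN itself).
import numpy as np

rng = np.random.default_rng(0)

def local_gradient(w, x, y):
    # Stand-in for a local GNN gradient: least-squares on the client's data.
    return 2.0 * x.T @ (x @ w - y) / len(y)

def private_update(w, x, y, clip=1.0, noise_scale=0.1):
    g = local_gradient(w, x, y)
    g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # clip sensitivity
    return g + rng.laplace(0.0, noise_scale, g.shape)     # LDP perturbation

def federated_round(w, clients, lr=0.1):
    # Server averages the privacy-protected client updates.
    updates = [private_update(w, x, y) for x, y in clients]
    return w - lr * np.mean(updates, axis=0)

# Usage: three simulated clients, each holding private local data.
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(5)
for _ in range(50):
    w = federated_round(w, clients)
```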


2022 · Vol 12 (1)
Author(s): Stefano Bennati, Aleksandra Kovacevic

Mobility patterns of vehicles and people are powerful data sources for location-based services such as fleet optimization and traffic-flow analysis. Location-based service providers must balance the value they extract from trajectory data against the privacy of the individuals behind those trajectories. Reaching this goal requires accurately measuring both utility and privacy. Current measurement approaches assume adversaries with perfect knowledge and thus overestimate the privacy risk. To address this issue, we introduce a model of an adversary with imperfect knowledge about the target. The model is based on equivalence areas: spatio-temporal regions with a semantic meaning, e.g. the target's home, whose size and accuracy determine the adversary's skill. We then derive the standard privacy metrics of k-anonymity, l-diversity, and t-closeness from the definition of equivalence areas. These metrics can be computed on any dataset, irrespective of whether, and what kind of, anonymization has been applied to it. This work is highly relevant to all service providers acting as processors of trajectory data who want to manage privacy risks and optimize the privacy-utility trade-off of their services.
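A minimal sketch of how the three metrics can be computed once records are grouped by equivalence area, treating each area as an equivalence class. Total variation distance is used here as a simple stand-in for the distance measure in t-closeness (the original t-closeness definition uses Earth Mover's Distance), and the example data are hypothetical.

```python
# k-anonymity, l-diversity, and t-closeness over equivalence areas.
from collections import Counter

def privacy_metrics(records):
    # records: list of (equivalence_area_id, sensitive_value) pairs
    groups = {}
    for area, value in records:
        groups.setdefault(area, []).append(value)

    k = min(len(vals) for vals in groups.values())       # k-anonymity
    l = min(len(set(vals)) for vals in groups.values())  # l-diversity

    # t-closeness: max distance between a group's sensitive-value
    # distribution and the global distribution (total variation here).
    n = len(records)
    global_dist = Counter(v for _, v in records)
    t = 0.0
    for vals in groups.values():
        local = Counter(vals)
        dist = 0.5 * sum(abs(local[v] / len(vals) - global_dist[v] / n)
                         for v in global_dist)
        t = max(t, dist)
    return k, l, t

# Usage: two equivalence areas (e.g. "home", "work") with inferred activities.
records = [("home", "sleep"), ("home", "sleep"), ("home", "sport"),
           ("work", "meeting"), ("work", "sport"), ("work", "sport")]
print(privacy_metrics(records))  # -> (3, 2, 0.333...)
```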


2022 · pp. 103624
Author(s): Tommaso Crepax, Victor Muntés-Mulero, Jabier Martinez, Alejandra Ruiz

2022 · Vol 196 · pp. 191-198
Author(s): Sitesh Mohanty, Kathryn Cormican, Chandrasekhar Dhanapathi

Author(s): Lev Velykoivanenko, Kavous Salehzadeh Niksirat, Noé Zufferey, Mathias Humbert, Kévin Huguenin, ...

Fitness trackers are increasingly popular. The data they collect provides substantial benefits to users, but it also creates privacy risks. In this work, we investigate how fitness-tracker users perceive the utility of the features their devices provide and the associated privacy-inference risks. We conduct a longitudinal study consisting of a four-month period of fitness-tracker use (N = 227), followed by an online survey (N = 227) and interviews (N = 19). We assess users' knowledge of concrete privacy threats that fitness-tracker users are exposed to (as demonstrated by previous work), the privacy-preserving actions users can take, and their perceptions of the utility of the features the trackers provide. We also study the potential for data minimization and users' mental models of how the fitness-tracking ecosystem works. Our findings show that participants are aware that some types of information might be inferred from the data their trackers collect; for instance, participants correctly guessed that sexual activity could be inferred from heart-rate data. However, participants did not realize that non-physiological information could also be inferred from the data. Our findings demonstrate a high potential for data minimization, either by processing data locally or by decreasing the temporal granularity of the data sent to the service provider. Furthermore, we identify participants' lack of understanding of, and common misconceptions about, how the Fitbit ecosystem works.
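The temporal-granularity finding lends itself to a short sketch: coarsening per-second heart-rate samples to per-minute averages on the device before upload, so that the fine-grained patterns that enable sensitive inferences never leave the tracker. The sampling rate and record format below are assumptions for illustration, not Fitbit's actual data model.

```python
# On-device data minimization by reducing temporal granularity.
from statistics import mean

def coarsen(samples, window=60):
    # samples: list of (unix_timestamp, heart_rate) at fine granularity
    buckets = {}
    for ts, hr in samples:
        buckets.setdefault(ts // window, []).append(hr)
    # One averaged reading per window; per-second peaks are never uploaded.
    return [(w * window, round(mean(v), 1)) for w, v in sorted(buckets.items())]

# Usage: 3 minutes of 1 Hz readings reduced to 3 uploaded values.
raw = [(t, 60 + (t % 7)) for t in range(180)]
print(coarsen(raw))
```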

