User Privacy Concerns and Preferences in Smart Buildings

Author(s):  
Scott Harper ◽  
Maryam Mehrnezhad ◽  
John C. Mace


Author(s):  
Eko Wahyu Tyas Darmaningrat ◽  
Hanim Maria Astuti ◽  
Fadhila Alfi

Background: Teenagers in Indonesia are open by nature and satisfy their desire for recognition by uploading photos or videos and writing posts on Instagram. The habit of uploading photos, videos, or posts containing personal information can be dangerous and can cause user privacy problems; several criminal cases caused by information misuse have occurred in Indonesia. Objective: This paper investigates information privacy concerns among Instagram users in Indonesia, specifically college students, the largest group of Instagram users in the country. Methods: This study applied the Internet Users' Information Privacy Concerns (IUIPC) model, collecting data through online questionnaires and analyzing the data using Structural Equation Modelling (SEM). Results: The findings show that even though students are mindful of the potential danger of information misuse on Instagram, this does not affect their intention to use Instagram. Other factors that influence Indonesian college students' trust are Instagram's reputation, the number of Instagram users, the ease of using Instagram, students' skills and knowledge of Instagram, and Instagram's privacy settings. Conclusion: Indonesian college students' awareness of and concern for information privacy significantly increase their awareness of information privacy risks. However, increased risk awareness does not directly affect Indonesian college students' behavior in posting their private information on Instagram.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Vedran Sekara ◽  
Laura Alessandretti ◽  
Enys Mones ◽  
Håkan Jonsson

Large-scale collection of human behavioural data by companies raises serious privacy concerns. We show that behaviour captured in the form of application usage data collected from smartphones is highly unique, even in large datasets encompassing millions of individuals. This makes behaviour-based re-identification of users across datasets possible. We study 12 months of data from 3.5 million people in 33 countries and show that, although four apps are enough to uniquely re-identify 91.2% of individuals using a simple strategy based on public information, there are considerable seasonal and cultural variations in re-identification rates. We find that people have more unique app-fingerprints during summer months, making them easier to re-identify. Further, we find significant variations in uniqueness across countries, and reveal that American users are the easiest to re-identify, while Finns have the least unique app-fingerprints. We show that differences across countries can largely be explained by two characteristics of the country-specific app ecosystems: the popularity distribution and the size of app-fingerprints. Our work highlights problems with current policies intended to protect user privacy and emphasizes that policies cannot directly be ported between countries. We anticipate this will nuance the discussion around re-identifiability in digital datasets and improve digital privacy.
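The core finding above — that a handful of apps can act as a quasi-identifier — can be illustrated with a toy sketch. The `unique_fraction` helper, the app names, and the brute-force subset search below are illustrative assumptions, not the authors' actual strategy (which relies on public popularity information over millions of users): a user counts as re-identifiable if some k-app subset of their fingerprint matches no other user.

```python
from itertools import combinations

def unique_fraction(fingerprints, k):
    """Fraction of users uniquely re-identifiable from some k-app subset.

    fingerprints: dict mapping user id -> frozenset of installed apps.
    A user is re-identifiable if at least one k-app subset of their
    fingerprint is contained in no other user's fingerprint.
    """
    users = list(fingerprints)
    unique = 0
    for u in users:
        apps = fingerprints[u]
        if len(apps) < k:
            continue
        for subset in combinations(sorted(apps), k):
            s = set(subset)
            if all(not s <= fingerprints[v] for v in users if v != u):
                unique += 1
                break
    return unique / len(users)

# Hypothetical three-user dataset: u2 and u3 are "twins".
toy = {
    "u1": frozenset({"maps", "chess", "radio", "bank"}),
    "u2": frozenset({"maps", "chess", "radio", "news"}),
    "u3": frozenset({"maps", "chess", "radio", "news"}),
}
print(unique_fraction(toy, 2))  # 1/3: only u1 has a distinguishing app pair
```

Real-world rates like the 91.2% above come from the long tail of rare apps: the more skewed the popularity distribution, the more distinguishing subsets exist.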


2021 ◽  
Author(s):  
Daria Ilkina

This thesis investigates the privacy risks that m-learning app users face by identifying the personal information that m-learning apps collect from their users and by examining the privacy policies of these apps. It reveals that most m-learning apps have similar privacy policies, which seem to protect the interests of the providers rather than the users. The Privacy by Design framework is reviewed to determine whether it can help developers address user privacy practices. The results from a sample of 260 participants suggest that users are less concerned about the collection of personal information that is non-identifiable. The survey also revealed that users are more concerned when an app shares their personal information with third parties for commercial purposes than when it is shared with the government.


2021 ◽  
Author(s):  
John S. Seberger ◽  
Sameer Patil

BACKGROUND Smartphone-based apps designed and deployed to mitigate the ongoing COVID-19 pandemic are poised to become an infrastructure for post-pandemic public health surveillance. Yet people frequently identify deep-seated privacy concerns about such apps, invoking rationalizations such as contributing to ‘the greater good’ to justify their privacy-related discomfort. We adopt a future-oriented lens and consider participant perceptions of the potential routinization of such apps as a general public health surveillance infrastructure. This work focuses on the need to temper the surveillant achievement of public health with consideration for potential colonization of public health by the exploitative mechanisms of surveillance capitalism. OBJECTIVE This study develops an understanding of people’s perceptions of the potential routinization of apps as an infrastructure for public health surveillance after the COVID-19 pandemic has ended. METHODS We conducted scenario-based interviews (n = 19) with adults in the United States in order to understand how people perceive the short- and long-term privacy concerns associated with a fictional smart-thermometer app deployed to mitigate the ‘outbreak of a contagious disease.’ The scenario indicated that the app would continue functioning ‘after the disease outbreak has dissipated.’ We analyzed participant interviews using reflexive thematic analysis (TA). RESULTS Participants contextualized their perceptions of the app in a core trade-off between public health and personal privacy. They further evidenced the widespread expectation that data collected through health-surveillant apps would be shared with unknown third parties for financial gain. This expectation suggests a perceived alignment between health-surveillant technologies and the broader economics of surveillance capitalism.
Because of such expectations, participants routinely rationalized the use of the fictional app, which they viewed as always already privacy-invasive, by invoking ‘the greater good.’ We uncover that ‘the greater good’ is multi-faceted and self-contradictory, evidencing participants’ worry that health surveillance apps will contribute to an expansion of exploitative forms of surveillance. CONCLUSIONS While apps may be an effective means of pandemic mitigation and preparedness, such apps are not exclusively beneficial in their outcomes. The potential routinization of apps as an infrastructure of general public health surveillance fosters end-user exploitation. Through its alignment with surveillance capitalism, such exploitation potentially erodes patient trust in the health care systems and providers that care for them. The inroads to such exploitation are present in participants’ manifestation of digital resignation, hyperbolic scaling, expectation of an infrastructure that works ‘too well,’ and generalized privacy fatalism.


2021 ◽  
Vol 13 (16) ◽  
pp. 3127
Author(s):  
Ramtin Rabiee ◽  
Johannes Karlsson

Knowledge about indoor occupancy is an important source of information for designing smart buildings, and some applications require the number of occupants in each zone. However, developing occupancy monitoring systems faces many challenges, such as user privacy, communication limits, and the sensors' computational capability. In this work, a people-flow counting algorithm has been developed that uses low-resolution thermal images to avoid any privacy concern. Moreover, the proposed scheme is designed to be applicable to wireless sensor networks based on an Internet-of-Things platform. Simple, low-complexity image processing techniques are used to detect possible objects in the sensor's field of view. To tackle noisy detection measurements, a multi-Bernoulli target tracking approach is used to track, and finally to count, the number of people passing the area of interest in different directions. Depending on the sensor node's processing capability, one can deploy either a centralized or a fully in situ people-flow counting system. Performing the tracking either in the sensor node or in a fusion center trades computational complexity against transmission rate, so the developed system can serve a wide range of applications with different processing and transmission constraints. The accuracy and robustness of the proposed method are evaluated with real measurements from several conducted trials and an open-source dataset.
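As a rough illustration of the detection-plus-counting pipeline described above (thresholding low-resolution thermal frames, then counting directional crossings of a virtual line), here is a minimal single-target sketch. All names and the toy frames are hypothetical, and it deliberately omits the paper's multi-Bernoulli tracker, which is what handles noisy, multi-target data association:

```python
def warm_blobs(frame, threshold):
    """Connected components of above-threshold pixels in a low-res
    thermal frame; returns a list of blob centroids (row, col)."""
    rows, cols = len(frame), len(frame[0])
    seen, blobs = set(), []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and (r, c) not in seen:
                stack, pixels = [(r, c)], []
                seen.add((r, c))
                while stack:                      # iterative flood fill
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= threshold
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                blobs.append((sum(p[0] for p in pixels) / len(pixels),
                              sum(p[1] for p in pixels) / len(pixels)))
    return blobs

def count_crossings(frames, threshold, line_col):
    """Count left-to-right / right-to-left crossings of a virtual line,
    naively assuming at most one person in view (no data association)."""
    lr = rl = 0
    prev = None
    for frame in frames:
        blobs = warm_blobs(frame, threshold)
        if not blobs:
            prev = None
            continue
        col = blobs[0][1]
        if prev is not None:
            if prev < line_col <= col:
                lr += 1
            elif col <= line_col < prev:
                rl += 1
        prev = col
    return lr, rl

# A warm pixel moving left to right across a 3x4 "thermal" grid.
frames = [
    [[0, 0, 0, 0], [9, 0, 0, 0], [0, 0, 0, 0]],
    [[0, 0, 0, 0], [0, 9, 0, 0], [0, 0, 0, 0]],
    [[0, 0, 0, 0], [0, 0, 9, 0], [0, 0, 0, 0]],
    [[0, 0, 0, 0], [0, 0, 0, 9], [0, 0, 0, 0]],
]
print(count_crossings(frames, 5, 1.5))  # (1, 0): one left-to-right crossing
```

In the paper's setting, the centralized-versus-in-situ trade-off corresponds to running the tracking step in a fusion center (raw detections transmitted) or on the sensor node (only counts transmitted).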


Cyber Crime ◽  
2013 ◽  
pp. 534-556
Author(s):  
Amr Ali Eldin

Despite the expected benefits of context-awareness and the need to develop more and more context-aware applications, we argue that privacy represents a major challenge for the success and widespread adoption of these services. This is due to the collection of huge amounts of users' contextual information, which seriously threatens their privacy. Controlling the collection of their information is a logical way to let users become more comfortable with context-aware services. Such control, however, requires users to make consent decisions under a high degree of uncertainty, owing to the nature of this environment and users' lack of experience with information collectors' privacy policies. Therefore, intelligent techniques are required to deal with this uncertainty. In this chapter, the authors propose a consent decision-making mechanism, ShEM, which allows users to exert automatic and manual control over their private information. An enhanced fuzzy logic approach was developed for the automatic decision-making process. The proposed mechanism has been prototyped and integrated into a UMTS location-based services testbed on a university campus, where users experienced the services in real time. A survey of users' responses to the privacy functionality was carried out and analyzed; users' response was positive. Additionally, the results showed that an approach combining manual and automatic privacy control is more likely to be accepted than a purely automatic or purely manual one.
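The flavour of an automatic fuzzy consent decision can be gestured at with a minimal sketch. The membership functions, the two-rule base, and the `consent_score` name below are illustrative assumptions and not the ShEM mechanism itself:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def consent_score(trust, sensitivity):
    """Fuzzy sketch: high trust in the collector and low data sensitivity
    favour automatic consent. Inputs in [0, 1]; returns a score in [0, 1]
    where values above 0.5 suggest granting consent automatically.
    """
    low_t  = tri(trust, -0.5, 0.0, 0.6)
    high_t = tri(trust,  0.4, 1.0, 1.5)
    low_s  = tri(sensitivity, -0.5, 0.0, 0.6)
    high_s = tri(sensitivity,  0.4, 1.0, 1.5)
    grant = min(high_t, low_s)   # rule 1: high trust AND low sensitivity -> grant
    deny  = min(low_t, high_s)   # rule 2: low trust AND high sensitivity -> deny
    if grant + deny == 0:
        return 0.5               # no rule fires: defer to manual control
    return grant / (grant + deny)  # weighted-average defuzzification
```

The "defer to manual control" branch mirrors the chapter's finding that a combination of automatic and manual control is preferred: uncertain cases fall back to the user.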


2018 ◽  
Vol 29 (3) ◽  
pp. 698-722 ◽  
Author(s):  
Esther Gal-Or ◽  
Ronen Gal-Or ◽  
Nabita Penmetsa


2016 ◽  
Vol 14 (4) ◽  
pp. 364-382 ◽  
Author(s):  
Aqdas Malik ◽  
Kari Hiekkanen ◽  
Amandeep Dhir ◽  
Marko Nieminen

Purpose The popularity of Facebook photo sharing has not only seen a surge in the number of photos shared but has also raised various issues concerning user privacy and self-disclosure. Recent literature has documented the increasing interest of the research community in understanding various privacy issues concerning self-disclosures on Facebook. However, little is known about how different privacy issues, trust and activity influence users’ intentions to share photos on Facebook. To bridge this gap, a research model was developed and tested to better understand the impact of privacy concerns, privacy awareness and privacy-seeking on trust and actual photo sharing activity and subsequently on photo sharing intentions. This study aims to examine the consequences of various facets of privacy associated with photo sharing activity on Facebook. Design/methodology/approach Cross-sectional data from 378 respondents were collected and analysed using partial least squares modelling. Findings The results revealed a significant relationship between various aspects of privacy, including awareness and protective behaviour, with trust and activity. Furthermore, trust and users’ photo sharing activity significantly impact photo sharing intentions on Facebook. Originality/value This study contributes new knowledge concerning various privacy issues and their impact on photo sharing activity and trust. The study also proposes implications that are highly relevant for social networking sites, media agencies and organisations involved in safeguarding the privacy of online users.


Sensors ◽  
2018 ◽  
Vol 18 (12) ◽  
pp. 4383 ◽  
Author(s):  
Hongchen Wu ◽  
Mingyang Li ◽  
Huaxiang Zhang

Privacy intrusion has become a major bottleneck for current trust-aware social sensing, since the proliferation of the Internet of Things (IoT) and online social media allows anybody to disclose their personal information widely. State-of-the-art social sensing still suffers from severe privacy threats, since it collects users’ personal data and disclosure behaviors, and the data integration required for personalization can raise user privacy concerns. In this paper, we propose a trust-aware model, called the User and Item Similarity Model with Trust in Diverse Kinds (UISTD), to enhance the personalization of social sensing while reducing users’ privacy concerns. UISTD utilizes user-to-user and item-to-item similarities to generate multiple kinds of personalized items with common tags. UISTD also applies a modified k-means clustering algorithm to select the core users among trust relationships; the core users’ preferences and disclosure behaviors are then regarded as the predicted disclosure pattern. Experimental results on three real-world datasets demonstrate that target users are more likely to: (1) follow the core users’ interests in diverse kinds of items and their disclosure behaviors, thereby outperforming the compared methods; and (2) disclose more information with lower intrusion awareness and privacy concern.
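The core-user selection step can be sketched with plain k-means over user preference vectors, picking the member nearest each centroid as that cluster's "core user". Note the paper uses a modified k-means over trust relationships, so everything below (function name, toy profiles) is an illustrative simplification:

```python
import math
import random

def kmeans_core_users(profiles, k, iters=50, seed=0):
    """Run plain k-means on user vectors and return, per cluster, the
    user closest to the centroid (a stand-in for UISTD's core users).

    profiles: dict user_id -> list[float] (preference/disclosure vector)
    """
    rng = random.Random(seed)
    users = list(profiles)
    centroids = [list(profiles[u]) for u in rng.sample(users, k)]

    for _ in range(iters):
        # Assign each user to the nearest centroid.
        clusters = {i: [] for i in range(k)}
        for u in users:
            i = min(range(k), key=lambda i: math.dist(profiles[u], centroids[i]))
            clusters[i].append(u)
        # Move each centroid to the mean of its members.
        for i, members in clusters.items():
            if members:
                dim = len(centroids[i])
                centroids[i] = [sum(profiles[u][d] for u in members) / len(members)
                                for d in range(dim)]
    return [min(members, key=lambda u: math.dist(profiles[u], centroids[i]))
            for i, members in clusters.items() if members]

# Two obvious groups of hypothetical users; one core user emerges per group.
profiles = {
    "a": [0.0, 0.1], "b": [0.1, 0.0], "c": [0.05, 0.05],
    "x": [1.0, 0.9], "y": [0.9, 1.0],
}
core = kmeans_core_users(profiles, 2)
print(core)  # one representative from {a, b, c} and one from {x, y}
```

In UISTD's setting, the selected core users' preferences and disclosure behaviors would then serve as the predicted disclosure pattern for the rest of their cluster.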

