Preventative Nudges: Introducing Risk Cues for Supporting Online Self-Disclosure Decisions

Information ◽  
2020 ◽  
Vol 11 (8) ◽  
pp. 399 ◽  
Author(s):  
Nicolás E. Díaz Ferreyra ◽  
Tobias Kroll ◽  
Esma Aïmeur ◽  
Stefan Stieglitz ◽  
Maritta Heisel

As in the real world, perceptions of risk influence the behavior and decisions that people make on online platforms. Users of Social Network Sites (SNSs) like Facebook make continuous decisions about their privacy, since these are spaces designed for sharing private information with large and diverse audiences. In particular, deciding whether or not to disclose such information depends largely on each individual’s ability to assess the corresponding privacy risks. However, SNSs often lack awareness instruments that inform users about the consequences of unrestrained self-disclosure practices. Such an absence of risk information can lead to poor assessments and, consequently, undermine users’ privacy behavior. This work elaborates on the use of risk scenarios as a strategy for promoting safer privacy decisions in SNSs. Specifically, we investigate, through an online survey, the effects of communicating the risks associated with online self-disclosure. Furthermore, we analyze users’ perceived severity of privacy threats and its importance for the definition of personalized risk-awareness mechanisms. Based on our findings, we introduce the design of preventative nudges as an approach for providing individual privacy support and guidance in SNSs.
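
As a concrete illustration of how such a personalized risk cue could work, the sketch below selects the risk scenario that a given user rates as most severe for the data types detected in a draft post. The data types, scenario texts, and severity scale are hypothetical assumptions for illustration; this is not the authors' implementation.

```python
# Illustrative sketch (not the authors' design): a preventative nudge that
# picks a risk scenario to show before a post is published, weighting each
# scenario by the user's self-reported perceived severity.

# Hypothetical mapping from disclosed data types to associated risk scenarios.
RISK_SCENARIOS = {
    "location": "Strangers could learn where you are right now.",
    "phone_number": "Your number could be harvested for spam or scam calls.",
    "vacation_plans": "An empty home could be targeted while you are away.",
}

def select_nudge(disclosed_types, perceived_severity):
    """Return the risk cue for the scenario this user rates as most severe.

    disclosed_types: data types detected in the draft post, e.g. {"location"}.
    perceived_severity: per-user ratings (e.g. 1-5) for each data type.
    Returns None when nothing risky is disclosed, i.e. no nudge is shown.
    """
    candidates = [t for t in disclosed_types if t in RISK_SCENARIOS]
    if not candidates:
        return None
    # Personalization: prioritize the threat this user perceives as most severe.
    worst = max(candidates, key=lambda t: perceived_severity.get(t, 0))
    return RISK_SCENARIOS[worst]

print(select_nudge({"location", "vacation_plans"},
                   {"location": 2, "vacation_plans": 5}))
```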

Author(s):  
Lev Velykoivanenko ◽  
Kavous Salehzadeh Niksirat ◽  
Noé Zufferey ◽  
Mathias Humbert ◽  
Kévin Huguenin ◽  
...  

Fitness trackers are increasingly popular. The data they collect provides substantial benefits to their users, but it also creates privacy risks. In this work, we investigate how fitness-tracker users perceive the utility of the features these devices provide and the associated privacy-inference risks. We conduct a longitudinal study composed of a four-month period of fitness-tracker use (N = 227), followed by an online survey (N = 227) and interviews (N = 19). We assess the users' knowledge of concrete privacy threats that fitness-tracker users are exposed to (as demonstrated by previous work), the privacy-preserving actions users can take, and the perceived utility of the features provided by the fitness trackers. We also study the potential for data minimization and the users' mental models of how the fitness-tracking ecosystem works. Our findings show that the participants are aware that some types of information might be inferred from the data collected by the fitness trackers. For instance, the participants correctly guessed that sexual activity could be inferred from heart-rate data. However, they did not realize that non-physiological information could also be inferred from the data. Our findings demonstrate a high potential for data minimization, either by processing data locally or by decreasing the temporal granularity of the data sent to the service provider. Furthermore, we identify the participants' lack of understanding of, and common misconceptions about, how the Fitbit ecosystem works.
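
As a minimal sketch of the data-minimization idea mentioned above (reducing the temporal granularity of data before it leaves the device), the example below averages per-second heart-rate samples into one value per minute. The window size and data layout are illustrative assumptions, not the actual Fitbit pipeline.

```python
# Minimal sketch: coarsen heart-rate samples on the device before upload so
# the service provider receives less fine-grained signal.

from statistics import mean

def downsample(samples, window=60):
    """Average per-second heart-rate samples into one value per `window` seconds."""
    return [round(mean(samples[i:i + window]))
            for i in range(0, len(samples), window)]

# Example: 5 minutes of per-second readings collapse to 5 coarse values,
# which still support activity summaries but leak far less detail.
per_second = [60 + (i % 7) for i in range(300)]
print(downsample(per_second))  # 5 values instead of 300
```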


2020 ◽  
Author(s):  
Imdad Ullah ◽  
Roksana Boreli ◽  
Salil S. Kanhere

Targeted advertising has transformed marketing by creating new opportunities for advertisers to reach prospective customers with personalised ads, delivered through an infrastructure of intermediary entities and technologies. Advertising and analytics companies collect, aggregate, process and trade large amounts of users' personal data, which has prompted serious privacy concerns among individuals and organisations. This article presents a detailed survey of the privacy risks, covering the information flow between advertising platforms and ad/analytics networks, the profiling process, the advertising sources and criteria, measurement analyses of targeted advertising based on users' interests and profiling context, and the ad-delivery process for both in-app and in-browser targeted ads. We discuss the challenges of preserving user privacy, including the privacy threats posed by advertising and analytics companies, how private information is extracted and exchanged among advertising entities, threats from third-party tracking, re-identification of private information and the associated privacy risks, and we give an overview of data-sharing and tracking technologies. We then present various techniques for preserving user privacy, provide a comprehensive analysis of proposals founded on those techniques, and compare them based on their underlying architectures, privacy mechanisms and deployment scenarios. Finally, we discuss potential research challenges and open research issues.
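
To make one family of such privacy-preserving proposals concrete, the hedged sketch below illustrates local, on-device ad selection: the network sends a batch of candidate ads with coarse topic labels, and the ranking against the user's interest profile happens on the device, so the profile is never uploaded. The names and scoring are illustrative assumptions, not a specific system from the survey.

```python
# Rough sketch of client-side ad selection: score a batch of candidate ads
# against a locally kept interest profile; only the chosen ad (or nothing)
# needs to be reported back, not the profile itself.

def rank_ads_locally(candidate_ads, local_profile):
    """Score candidate ads on-device against a local interest profile.

    candidate_ads: list of dicts like {"id": "ad1", "topics": {"sports"}}.
    local_profile: dict mapping topic -> interest weight, kept on the device.
    """
    def score(ad):
        return sum(local_profile.get(topic, 0.0) for topic in ad["topics"])
    return sorted(candidate_ads, key=score, reverse=True)

ads = [{"id": "ad1", "topics": {"sports"}},
       {"id": "ad2", "topics": {"travel", "food"}}]
profile = {"travel": 0.9, "sports": 0.2}  # never leaves the device
print([ad["id"] for ad in rank_ads_locally(ads, profile)])  # ['ad2', 'ad1']
```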


2019 ◽  
Vol 28 (2) ◽  
pp. 183-197 ◽  
Author(s):  
Paola Mavriki ◽  
Maria Karyda

Purpose: User profiling with big data raises significant issues regarding privacy. Privacy studies typically focus on individual privacy; however, in the era of big data analytics, users are also targeted as members of specific groups, thus challenging their collective privacy with unidentified implications. Overall, this paper argues that in the age of big data there is a need to consider the collective aspects of privacy as well, and to develop new ways of calculating privacy risks and identifying the privacy threats that emerge.
Design/methodology/approach: Focusing on the collective level, the authors conducted an extensive literature review related to information privacy and concepts of social identity. They also examined numerous automated data-driven profiling techniques, analyzing at the same time the privacy issues they raise for groups.
Findings: This paper identifies privacy threats for collective entities that stem from data-driven profiling, and it argues that privacy-preserving mechanisms are required to protect the privacy interests of groups as entities, independently of the interests of their individual members. Moreover, this paper concludes that collective privacy threats may differ from the threats individuals face when they are not members of a group.
Originality/value: Although research evidence indicates that in the age of big data privacy as a collective issue is becoming increasingly important, the pluralist character of privacy has not yet been adequately explored. This paper contributes to filling this gap and provides new insights into threats to group privacy and their impact on collective entities and society.


2018 ◽  
Vol 13 (10) ◽  
pp. 96 ◽  
Author(s):  
Basmah Emad ALQadheeb ◽  
Othman Ibraheem Alsalloum

Millions of people worldwide visit social network sites (SNSs) such as Facebook, Twitter, and Snapchat every day. We examined a model based on privacy calculus theory to better understand what motivates users to disclose personal information on SNSs in Saudi Arabia. A total of 550 respondents participated in an online survey. The analysis results indicate that Saudi SNS users are primarily motivated by the convenience of maintaining and developing new relationships, self-presentation, and platform enjoyment. The results also indicate that privacy risks are a critical barrier to information disclosure. However, users become less concerned about privacy risks, and are thus more likely to disclose personal information, if they trust other SNS members and the service provider. Trust in the service provider increases when privacy control options are provided. In addition, the results show that gender influences the motivations to self-disclose personal information. Based on the analysis results, recommendations for service providers are provided.


2020 ◽  
Author(s):  
Weihua Yang ◽  
Bo Zheng ◽  
Maonian Wu ◽  
Shaojun Zhu ◽  
Hongxia Zhou ◽  
...  

BACKGROUND: Artificial intelligence (AI) is widely applied in the medical field, especially in ophthalmology. As ophthalmic AI develops, some problems worthy of attention have gradually emerged, among which recognition issues are particularly prominent; in other words, there is currently a lack of research into people's familiarity with, and attitudes toward, ophthalmic AI.
OBJECTIVE: This survey aims to assess medical workers' and other professional technicians' familiarity with AI, as well as their attitudes toward and concerns about ophthalmic AI.
METHODS: An electronic questionnaire was designed with the Questionnaire Star app, an online survey and questionnaire tool, and sent to relevant professional workers through WeChat, China's counterpart to Facebook and WhatsApp. Participation was voluntary and anonymous. The questionnaire consisted of four parts: the participant's background, basic understanding of AI, attitude toward AI, and concerns about AI. A total of 562 questionnaires were returned, all of them valid, and the results were compiled in Excel 2003.
RESULTS: A total of 562 professional workers completed the questionnaire, of whom 291 were medical workers and 271 were other professional technicians. About 37.9% of the participants understood AI, and 31.67% understood ophthalmic AI. The percentages of people who understood ophthalmic AI among medical workers and other professional technicians were about 42.61% and 15.6%, respectively. About 66.01% of the participants thought that ophthalmic AI would partly replace doctors, and about 59.07% had a relatively high acceptance of ophthalmic AI. Among the participants with experience of ophthalmic AI applications (30.6%), about 84.25% of medical workers and 73.33% of other professional technicians fully accepted ophthalmic AI. The participants expressed concerns that ophthalmic AI might bring about issues such as an unclear definition of medical responsibilities, difficulty in ensuring service quality, and medical ethics risks. Among the medical workers and other professional technicians who understood ophthalmic AI, 98.39% and 95.24%, respectively, said that more study of medical ethics issues in the ophthalmic AI field is needed.
CONCLUSIONS: Analysis of the questionnaire results shows that medical workers have a better understanding of ophthalmic AI than other professional technicians, making it necessary to popularize ophthalmic AI education among the latter. Most participants had no experience with ophthalmic AI but generally showed a relatively high acceptance of it, believing that it would partly replace doctors and that research into the field's medical ethics issues should be strengthened.


2010 ◽  
Vol 25 (2) ◽  
pp. 109-125 ◽  
Author(s):  
Hanna Krasnova ◽  
Sarah Spiekermann ◽  
Ksenia Koroleva ◽  
Thomas Hildebrand

On online social networks such as Facebook, massive self-disclosure by users has attracted the attention of industry players and policymakers worldwide. Despite the impressive scope of this phenomenon, very little is understood about what motivates users to disclose personal information. Integrating focus group results into a theoretical privacy calculus framework, we develop and empirically test a structural equation model of self-disclosure with 259 subjects. We find that users are primarily motivated to disclose information because of the convenience of maintaining and developing relationships and platform enjoyment. Countervailing these benefits, privacy risks represent a critical barrier to information disclosure. However, users’ perception of risk can be mitigated by their trust in the network provider and availability of control options. Based on these findings, we offer recommendations for network providers.
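
The privacy-calculus trade-off tested here can be summarized schematically as follows (a hedged sketch: the linear form and coefficient symbols are illustrative placeholders, not the paper's estimated structural equation model):

```latex
% Schematic (illustrative) form of the privacy calculus described above;
% the coefficients are placeholders, not the paper's estimates.
\[
  \text{Disclosure} = \beta_1\,\text{Convenience} + \beta_2\,\text{Enjoyment}
                      - \beta_3\,\text{PerceivedRisk} + \varepsilon ,
\]
\[
  \text{PerceivedRisk} = \gamma_0 - \gamma_1\,\text{TrustInProvider}
                         - \gamma_2\,\text{PerceivedControl} + \nu .
\]
```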


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Shamima Yesmin ◽  
S.M. Zabed Ahmed

Purpose: The purpose of this paper is to investigate Library and Information Science (LIS) students’ understanding of infodemic and related terminologies and their ability to categorize COVID-19-related problematic information types using examples from social media platforms.
Design/methodology/approach: The participants of this study were LIS students from a public-funded university located on the south coast of Bangladesh. An online survey was conducted which, in addition to demographic and study information, asked students to identify the correct definition of infodemic and related terminologies and to categorize COVID-related problematic social media posts based on their inherent problem characteristics. The correct answer for each definition and task question was assigned a score of “1”, whereas a wrong answer was coded as “0”. The percentages of correctness scores, in total and for each category of definition and task-specific questions, were computed. Independent-samples t-tests and ANOVA were run to examine differences in total and category-specific scores between student groups.
Findings: The findings revealed that students’ knowledge of the definition of infodemic and related terminologies and of the categorization of COVID-19-related problematic social media posts was poor. There was no significant difference in correctness scores between student groups in terms of gender, age and study levels.
Originality/value: To the best of the authors’ knowledge, this is the first effort to understand LIS students’ recognition and classification of problematic information. The findings can assist LIS departments in revising and improving the existing information literacy curriculum for students.
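
For illustration, a hedged sketch of the scoring and group-comparison analysis described above (the answer key, scores, and groups are made-up data, not the study's): each answer is coded 1 if correct and 0 otherwise, per-student totals are computed, and groups are compared with an independent-samples t-test and a one-way ANOVA.

```python
# Illustrative analysis pipeline: 1/0 coding of answers, per-student totals,
# then t-test (two groups) and one-way ANOVA (three or more groups).

from scipy import stats

def correctness_score(answers, answer_key):
    """Total of 1/0 codes: 1 for each answer that matches the key."""
    return sum(int(a == k) for a, k in zip(answers, answer_key))

# Example of the coding scheme for one student (hypothetical answer key).
key = ["infodemic", "misinformation", "disinformation", "malinformation"]
print(correctness_score(["infodemic", "satire", "disinformation", "rumour"], key))  # 2

# Hypothetical total scores for two gender groups and three study levels.
male, female = [5, 7, 6, 4], [6, 5, 8, 7]
year1, year2, year3 = [5, 6, 4], [7, 6, 8], [5, 7, 6]

print(stats.ttest_ind(male, female, equal_var=False))  # gender comparison
print(stats.f_oneway(year1, year2, year3))             # study-level comparison
```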

