Between protection and disclosure: applying the privacy calculus to investigate the intended use of privacy-protecting tools and self-disclosure on different websites

2021
Vol 10 (3)
pp. 283-306
Author(s):
Yannic Meier
Johanna Schäwel
Nicole C. Krämer

Using privacy-protecting tools and reducing self-disclosure can decrease the likelihood of experiencing privacy violations. Whereas previous studies found people’s online self-disclosure to result from perceptions of privacy risks and benefits, the present study extended this so-called privacy calculus approach by additionally examining privacy protection by means of a tool. Furthermore, it is important to understand contextual differences in privacy behaviors as well as the characteristics of privacy-protecting tools that may affect usage intention. Results of an online experiment (N = 511) supported the basic notion of the privacy calculus and revealed that perceived privacy risks were strongly related to participants’ desired privacy protection, which, in turn, was positively related to the willingness to use a privacy-protecting tool. Self-disclosure was found to be context dependent, whereas privacy protection was not. Moreover, participants would rather forgo using a tool that records their data, even though this was described as enhancing privacy protection.

2010
Vol 25 (2)
pp. 109-125
Author(s):
Hanna Krasnova
Sarah Spiekermann
Ksenia Koroleva
Thomas Hildebrand

On online social networks such as Facebook, massive self-disclosure by users has attracted the attention of industry players and policymakers worldwide. Despite the impressive scope of this phenomenon, very little is understood about what motivates users to disclose personal information. Integrating focus group results into a theoretical privacy calculus framework, we develop and empirically test a structural equation model of self-disclosure with 259 subjects. We find that users are primarily motivated to disclose information because of the convenience of maintaining and developing relationships and platform enjoyment. Countervailing these benefits, privacy risks represent a critical barrier to information disclosure. However, users’ perception of risk can be mitigated by their trust in the network provider and the availability of control options. Based on these findings, we offer recommendations for network providers.


2020
Vol 19 (3)
pp. 219-232
Author(s):
Wahyu Rahardjo
Nurul Qomariyah
Matrissya Hermita
Ruddy J. Suhatril
Mochammad Akbar Marwan
...  

Adolescents’ excessive online self-disclosure is now a social phenomenon arising from social media use, and adolescents also tend to share private information. This study aims to determine whether extraversion, perceived privacy risks, the convenience of maintaining relationships, and online self-presentation influence self-disclosure in adolescents. The study involved 619 adolescents (185 male and 434 female) aged 13-22 years (M = 19.39, SD = 1.83), all active social media users recruited from several areas in Indonesia. Multiple regression analysis was used to test the hypotheses. The results show that these variables jointly affect online self-disclosure in adolescents (R2 = .422; F (4, 614) = 111.944, p < .01). However, in detail, online self-presentation does not have a significant effect on online self-disclosure among adolescents. This result suggests that personality factors, adolescents’ perception of low privacy risk on social media, and the goal of maintaining social relations with other social media members encourage them to disclose more online.
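The reported fit statistics (R2 = .422; F (4, 614)) follow from ordinary least squares with four predictors and 619 participants. As a rough illustration only, the sketch below computes R2 and the overall F statistic on synthetic data; the predictors, coefficients, and resulting numbers are invented, not the study’s data:

```python
import numpy as np

def multiple_regression_fit(X, y):
    """OLS fit returning R^2 and the overall F statistic F(k, n-k-1)."""
    n, k = X.shape
    Xd = np.column_stack([np.ones(n), X])            # add intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)    # least-squares coefficients
    resid = y - Xd @ beta
    ss_res = resid @ resid                           # residual sum of squares
    ss_tot = ((y - y.mean()) ** 2).sum()             # total sum of squares
    r2 = 1 - ss_res / ss_tot
    f_stat = (r2 / k) / ((1 - r2) / (n - k - 1))     # overall model F test
    return r2, f_stat

# synthetic stand-ins for the four predictors and the self-disclosure score
rng = np.random.default_rng(0)
X = rng.normal(size=(619, 4))
y = X @ np.array([0.4, 0.3, 0.2, 0.0]) + rng.normal(size=619)
r2, f_stat = multiple_regression_fit(X, y)
```

With 4 predictors and n = 619 the error degrees of freedom are 619 - 4 - 1 = 614, matching the F (4, 614) reported above.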


Kybernetes
2021
Vol ahead-of-print (ahead-of-print)
Author(s):
Naurin Farooq Khan
Naveed Ikram
Hajra Murtaza
Muhammad Aslam Asadi

Purpose This study aims to investigate cybersecurity awareness, manifested as protective behavior, to explain self-disclosure in social networking sites. Disclosing information about oneself is associated with benefits as well as privacy risks: individuals self-disclose to gain social capital and display protective behaviors to evade privacy risks, based on a careful cost-benefit calculation of disclosing information. Design/methodology/approach This study explores the role of cyber protection behavior in predicting self-disclosure, along with demographic (age and gender) and digital divide (frequency of Internet access) variables, by conducting a face-to-face survey. Data were collected from 284 participants. The model is validated by using multiple hierarchical regression along with an artificial intelligence approach. Findings The results revealed that cyber protection behavior significantly explains the variance in self-disclosure behavior. The complementary use of five machine learning (ML) algorithms further validated the model. The ML algorithms predicted self-disclosure with an area under the curve of 0.74 and an F1 measure of 0.70. Practical implications The findings suggest that the costs associated with self-disclosure can be mitigated by educating individuals to heighten their cybersecurity awareness through cybersecurity training programs. Originality/value This study uses a hybrid approach to assess the influence of cyber protection behavior on self-disclosure using expectant valence theory (EVT).
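The area under the ROC curve and the F1 measure cited above are standard classification metrics. The following self-contained sketch, using invented toy labels and scores rather than the paper’s data, shows how each is computed from first principles:

```python
import numpy as np

def auc_score(y_true, scores):
    """Area under the ROC curve via the rank (Mann-Whitney U) formulation:
    the probability that a random positive outscores a random negative."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def f1(y_true, y_pred):
    """F1 = harmonic mean of precision and recall."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return 2 * tp / (2 * tp + fp + fn)

# toy example: 6 respondents, classifier scores, threshold at 0.5
y = np.array([1, 1, 0, 0, 1, 0])
scores = np.array([0.9, 0.7, 0.6, 0.2, 0.8, 0.4])
auc = auc_score(y, scores)                       # 1.0 (all positives outscore all negatives)
f1_val = f1(y, (scores > 0.5).astype(int))       # 6/7 ≈ 0.857 (tp=3, fp=1, fn=0)
```

An AUC of 0.74 and F1 of 0.70, as reported, would correspond to moderately better-than-chance prediction of self-disclosure from the survey variables.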


10.2196/13046
2020
Vol 8 (2)
pp. e13046
Author(s):
Mengchun Gong
Shuang Wang
Lezi Wang
Chao Liu
Jianyang Wang
...  

Background Patient privacy is a ubiquitous problem around the world. Many existing studies have demonstrated the potential privacy risks associated with sharing of biomedical data. Owing to the increasing need for data sharing and analysis, health care data privacy is drawing more attention. However, to better protect biomedical data privacy, it is essential to assess the privacy risk in the first place. Objective In China, there is no clear regulation for health systems to deidentify data. It is also not known whether a mechanism such as the Health Insurance Portability and Accountability Act (HIPAA) Safe Harbor policy would achieve sufficient protection. This study aimed to conduct a pilot study using patient data from Chinese hospitals to understand and quantify the privacy risks of Chinese patients. Methods We used g-distinct analysis to evaluate the reidentification risks of the HIPAA Safe Harbor approach when applied to Chinese patients’ data. More specifically, we estimated the risks under the HIPAA Safe Harbor and Limited Dataset policies by assuming an attacker has background knowledge of the patient from the public domain. Results The experiments were conducted on 0.83 million patients (with data fields of date of birth, gender, and surrogate ZIP codes generated from home addresses) across 33 provincial-level administrative divisions in China. Under the Limited Dataset policy, 19.58% (163,262/833,235) of the population could be uniquely identified under the g-distinct metric (ie, 1-distinct). In contrast, the Safe Harbor policy significantly reduces privacy risk: only 0.072% (601/833,235) of individuals are uniquely identifiable, and the majority of the population is 3000-indistinguishable (ie, each individual is expected to share common attributes with 3000 or fewer people).
Conclusions Through experiments based on real-world patient data, this work illustrates that the results of g-distinct analysis of Chinese patients’ privacy risk are similar to those of a previous US study, in that data from different organizations/regions may be vulnerable to different reidentification risks under different policies. This work provides a reference for Chinese health care entities estimating patients’ privacy risk during data sharing, and it lays the foundation for future studies of privacy risk in Chinese patients’ data.
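g-distinct analysis groups records by their quasi-identifier combination and measures how many records fall into groups of size at most g (g = 1 means the record is uniquely identifiable). A minimal sketch, with invented field names and toy records rather than the study’s data:

```python
from collections import Counter

def g_distinct_risk(records, quasi_identifiers, g=1):
    """Fraction of records whose quasi-identifier combination is shared
    by at most g individuals in the dataset (g=1: uniquely identifiable)."""
    keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
    counts = Counter(keys)
    at_risk = sum(1 for k in keys if counts[k] <= g)
    return at_risk / len(keys)

# hypothetical toy dataset mirroring the quasi-identifiers used above
patients = [
    {"dob": "1980-01-01", "gender": "F", "zip3": "100"},
    {"dob": "1980-01-01", "gender": "F", "zip3": "100"},
    {"dob": "1975-06-15", "gender": "M", "zip3": "200"},
]
risk = g_distinct_risk(patients, ["dob", "gender", "zip3"], g=1)  # 1/3: one unique record
```

Coarsening the quasi-identifiers (as Safe Harbor does, e.g. truncating ZIP codes and generalizing dates) enlarges the groups and so drives this fraction down, which is exactly the effect reported in the abstract.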


Author(s):  
Barbara Sandfuchs

To fight the risks caused by excessive self-disclosure, especially regarding sensitive data such as genetic data, it might be desirable to prevent certain disclosures. When doing so, regulators traditionally compel protection, for example by prohibiting the collection and/or use of genetic data even if citizens would like to share these data. This chapter provides an introduction to an alternative approach which has recently received increased scholarly attention: privacy protection through the use of nudges. Such nudges may in the future provide an alternative to compelled protection of genetic data or complement the traditional approach. The chapter first describes behavioral psychology’s finding that citizens sometimes act irrationally, and then explains that these irrationalities are often predictable. A solution might therefore be to correct them by the use of nudges.


2018
Vol 62 (10)
pp. 1392-1412
Author(s):  
Hsuan-Ting Chen

This study builds on the privacy calculus model to revisit the privacy paradox on social media. A two-wave panel data set from Hong Kong and a cross-sectional data set from the United States are used. This study extends the model by incorporating privacy self-efficacy as another privacy-related factor in addition to privacy concerns (i.e., costs) and examines how these factors interact with social capital (i.e., the expected benefit) in influencing different privacy management strategies, including limiting profile visibility, self-disclosure, and friending. This study proposed and found a two-step privacy management strategy in which privacy concerns and privacy self-efficacy prompt users to limit their profile visibility, which in turn enhances their self-disclosing and friending behaviors in both Hong Kong and the United States. Results from the moderated mediation analyses further demonstrate that social capital strengthens the positive direct effect of privacy self-efficacy on self-disclosure in both places, and it can mitigate the direct effect of privacy concerns on restricting self-disclosure in Hong Kong (the conditional direct effects). Social capital also enhances the indirect effect of privacy self-efficacy on both self-disclosure and friending through limiting profile visibility in Hong Kong (the conditional indirect effects). Implications of the findings are discussed.


2019
Vol 27 (3)
pp. 366-375
Author(s):
Luca Bonomi
Xiaoqian Jiang
Lucila Ohno-Machado

Abstract Objective Survival analysis is the cornerstone of many healthcare applications in which the “survival” probability (eg, time free from a certain disease, time to death) of a group of patients is computed to guide clinical decisions. It is widely used in biomedical research and healthcare applications. However, frequent sharing of exact survival curves may reveal information about the individual patients, as an adversary may infer the presence of a person of interest as a participant of a study or of a particular group. Therefore, it is imperative to develop methods to protect patient privacy in survival analysis. Materials and Methods We develop a framework based on the formal model of differential privacy, which provides provable privacy protection against a knowledgeable adversary. We show the performance of privacy-protecting solutions for the widely used Kaplan-Meier nonparametric survival model. Results We empirically evaluated the usefulness of our privacy-protecting framework and the reduced privacy risk for a popular epidemiology dataset and a synthetic dataset. Results show that our methods significantly reduce the privacy risk when compared with their nonprivate counterparts, while retaining the utility of the survival curves. Discussion The proposed framework demonstrates the feasibility of conducting privacy-protecting survival analyses. We discuss future research directions to further enhance the usefulness of our proposed solutions in biomedical research applications. Conclusion The results suggest that our proposed privacy-protection methods provide strong privacy protections while preserving the usefulness of survival analyses.
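As an illustration only, and not the authors’ mechanism, differential privacy can be grafted onto a Kaplan-Meier estimate by perturbing the per-interval death counts with Laplace noise. The sketch below deliberately simplifies sensitivity and privacy-budget accounting:

```python
import numpy as np

def dp_kaplan_meier(event_times, censored, epsilon, rng=None):
    """Toy differentially private Kaplan-Meier curve: adds Laplace noise to
    each interval's death count, then clamps it to a valid range so the
    survival estimate stays in [0, 1] and non-increasing. This is a
    simplification; the paper's actual framework is more refined."""
    rng = rng or np.random.default_rng()
    times = np.sort(np.unique(event_times[~censored]))  # distinct event times
    curve, s = [], 1.0
    for t in times:
        at_risk = np.sum(event_times >= t)                       # n_i
        deaths = np.sum((event_times == t) & (~censored))        # d_i
        noisy = deaths + rng.laplace(scale=1.0 / epsilon)        # Laplace(1/eps)
        noisy = min(max(noisy, 0.0), at_risk)                    # clamp to [0, n_i]
        s *= 1.0 - noisy / at_risk                               # KM product step
        curve.append((t, s))
    return curve

# toy cohort: 5 subjects, no censoring
times = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
censored = np.array([False] * 5)
curve = dp_kaplan_meier(times, censored, epsilon=1.0,
                        rng=np.random.default_rng(0))
```

The clamping step is what preserves the structural properties of a survival curve after noise injection; the privacy-utility tradeoff is then governed by epsilon, with smaller values giving noisier (more protective) curves.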


2020
Vol 37 (4)
pp. 457-472
Author(s):
Alisa Frik
Alexia Gaudeul

Purpose Many online transactions and digital services depend on consumers’ willingness to take privacy risks, such as when shopping online, joining social networks, using online banking or interacting with e-health platforms. Their decisions depend on not only how much they would suffer if their data were revealed but also how uncomfortable they feel about taking such a risk. Such an aversion to risk is a neglected factor when evaluating the value of privacy. The aim of this paper is to propose an empirical method to measure both privacy risk aversion and privacy worth and how those affect privacy decisions. Design/methodology/approach The authors let individuals play privacy lotteries and derive a measure of the value of privacy under risk (VPR) and empirically test the validity of this measure in a laboratory experiment with 148 participants. Individuals were asked to make a series of incentivized decisions on whether to incur the risk of revealing private information to other participants. Findings The results confirm that the willingness to incur a privacy risk is driven by a complex array of factors, including risk aversion, self-reported value for private information and general attitudes to privacy (derived from surveys). The VPR does not depend on whether there is a preexisting threat to privacy. The authors find qualified support for the existence of an order effect, whereby presenting financial choices prior to privacy ones leads to less concern for privacy. Practical implications Attitude to risk in the domain of privacy decisions is largely understudied. In this paper, the authors take a first step toward closing this empirical and methodological gap by offering (and validating) a method for the incentivized elicitation of the implicit VPR and proposing a robust and meaningful monetary measure of the level of aversion to privacy risks. 
This measure is a crucial step in designing and implementing practical strategies for evaluating privacy as a competitive advantage and designing markets for privacy risk regulation (e.g. through cyber insurance). Social implications The present study advances research on the economics of consumer privacy, one of the most controversial topics in the digital age. In light of the proliferation of privacy regulations, this method for measuring the VPR provides an important instrument for policymakers’ informed decisions regarding what tradeoffs consumers consider beneficial and fair and where to draw the line for violations of consumers’ expectations, preferences and welfare. Originality/value The authors present a novel method to measure the VPR that takes account of both the value of private information to consumers and their tolerance for privacy risks. The authors explain how this method can be used more generally to elicit attitudes to a wide range of privacy risks involving exposure of various types of private information.


2017
Vol 15 (3)
pp. 328-335
Author(s):  
Robin Wilton

Purpose This paper aims to provide a non-academic perspective on the research reports of the JICES “Post-Snowden” special edition, from the viewpoint of a privacy advocate with an IT background. Design/methodology/approach This paper was written after reviewing the country reports for Japan, New Zealand, PRC and Taiwan, Spain and Sweden, as well as the Introduction paper. The author has also drawn on online sources such as news articles to substantiate his analysis of attitudes to technical privacy protection post-Snowden. Findings Post-Snowden, the general perception of threats to online privacy has shifted from a predominant focus on commercial threats to a recognition that government activities, in the sphere of intelligence and national security, also give rise to significant privacy risk. Snowden’s disclosures have challenged many of our assumptions about effective oversight of interception capabilities. Citizens’ expectations in this regard depend partly on national experience of the relationship between citizen and government, and can evolve rapidly. The tension between legitimate law enforcement access and personal privacy remains challenging to resolve. Originality/value As a “viewpoint” paper, this submission draws heavily on the author’s experience as a privacy and technology subject-matter expert. Although it therefore contains a higher proportion of opinion than the academic papers in this issue, his hope is that it will stimulate debate and further research.

