Towards Designing E-Services that Protect Privacy

2010 ◽  
Vol 1 (2) ◽  
pp. 18-34 ◽  
Author(s):  
George O. M. Yee

The growth of electronic services (e-services) has resulted in large amounts of personal information in the hands of service organizations like banks, insurance companies, and online retailers. This has led to the realization that such information must be protected, not only to comply with privacy regulations but also, and more importantly, to attract clients. One important dimension of this goal is to design e-services that protect privacy. In this paper, the author proposes a design approach that incorporates privacy risk analysis of UML diagrams to minimize privacy risks in the final design. The approach iterates between risk analysis and design modification to eliminate risks until a design is obtained that is close to being risk free.
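
To make the iteration concrete, the loop the author describes (analyze the design for privacy risks, modify it, repeat) can be sketched in a few lines of Python. The helper names and the iteration cap below are illustrative assumptions, not the paper's notation.

```python
# Minimal sketch of the iterative design approach, assuming two
# hypothetical helpers: analyze_privacy_risks() walks the UML-based
# design and returns the risks found; apply_mitigation() modifies
# the design to address one risk. Neither name comes from the paper.

def refine_design(design, analyze_privacy_risks, apply_mitigation,
                  max_iterations=10):
    """Iterate between privacy risk analysis and design modification."""
    for _ in range(max_iterations):
        risks = analyze_privacy_risks(design)
        if not risks:
            break  # close to risk free: accept the design
        for risk in risks:
            design = apply_mitigation(design, risk)
    return design
```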


Author(s):  
Devjani Sen ◽  
Rukhsana Ahmed

With a growing number of health and wellness applications (apps), there is a need to explore exactly what third parties can legally do with personal data. Following a review of the online privacy policies of a select set of mobile health and fitness apps, this chapter assessed the privacy policies of four popular health and fitness apps, using a checklist that comprised five privacy risk categories. Privacy risks were based on two questions: a) is important information missing to make informed decisions about the use of personal data? and b) is information being shared that might compromise the end-user's right to privacy of that information? The online privacy policies of each selected app were further examined to identify important privacy risks. From this, a separate checklist was completed and compared to reach agreement on the presence or absence of each privacy risk category. This chapter concludes with a set of recommendations for designing privacy policies for the sharing of personal information collected from health and fitness apps.
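
As an illustration of the dual-checklist step, the sketch below marks each risk category as present or absent and compares two independently completed checklists. The category names and data layout are placeholders, since the chapter's checklist is not reproduced here.

```python
# Hypothetical sketch of the checklist comparison: two independently
# completed checklists record the presence/absence of each privacy
# risk category, and the results are compared to reach agreement.
# The category names are placeholders, not the chapter's categories.

RISK_CATEGORIES = ["category_1", "category_2", "category_3",
                   "category_4", "category_5"]

def compare_checklists(checklist_a, checklist_b):
    """Map each risk category to True where the two checklists agree."""
    return {c: checklist_a[c] == checklist_b[c] for c in RISK_CATEGORIES}

a = {c: False for c in RISK_CATEGORIES}
a["category_1"] = True   # first reviewer flags category_1 only
b = dict(a)
b["category_2"] = True   # second reviewer also flags category_2
print(compare_checklists(a, b))  # category_2 -> False: discuss and resolve
```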



2010 ◽  
Vol 25 (2) ◽  
pp. 109-125 ◽  
Author(s):  
Hanna Krasnova ◽  
Sarah Spiekermann ◽  
Ksenia Koroleva ◽  
Thomas Hildebrand

On online social networks such as Facebook, massive self-disclosure by users has attracted the attention of industry players and policymakers worldwide. Despite the impressive scope of this phenomenon, very little is understood about what motivates users to disclose personal information. Integrating focus group results into a theoretical privacy calculus framework, we develop and empirically test a Structural Equation Model of self-disclosure with 259 subjects. We find that users are primarily motivated to disclose information because of the convenience of maintaining and developing relationships and platform enjoyment. Countervailing these benefits, privacy risks represent a critical barrier to information disclosure. However, users’ perception of risk can be mitigated by their trust in the network provider and availability of control options. Based on these findings, we offer recommendations for network providers.
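
The privacy calculus the study formalizes as a structural equation model can be caricatured numerically: benefits push disclosure up, perceived risk pushes it down, and trust and control dampen that risk. The linear form and all coefficients below are illustrative assumptions, not the paper's estimates.

```python
# Toy illustration of the privacy calculus tested in the study:
# disclosure intention grows with relationship convenience and
# enjoyment and shrinks with perceived risk, which trust in the
# provider and perceived control mitigate. All weights are made
# up for illustration; they are not the paper's SEM estimates.

def disclosure_intention(convenience, enjoyment, risk, trust, control):
    effective_risk = max(risk - 0.5 * trust - 0.3 * control, 0.0)
    return 0.6 * convenience + 0.4 * enjoyment - 0.5 * effective_risk

print(disclosure_intention(4, 3, risk=5, trust=1, control=1))  # 1.5
print(disclosure_intention(4, 3, risk=5, trust=4, control=3))  # 2.55
```

Holding benefits and raw risk fixed, raising trust and control lowers the effective risk and raises the predicted willingness to disclose, mirroring the mitigation effect the study reports.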


2021 ◽  
Author(s):  
R. Jason Cronk ◽  
Stuart S. Shapiro

2021 ◽  
Vol 10 (3) ◽  
pp. 283-306 ◽  
Author(s):  
Yannic Meier ◽  
Johanna Schäwel ◽  
Nicole C. Krämer

Using privacy-protecting tools and reducing self-disclosure can decrease the likelihood of experiencing privacy violations. Whereas previous studies found people’s online self-disclosure to be the result of privacy risk and benefit perceptions, the present study extended this so-called privacy calculus approach by additionally focusing on privacy protection by means of a tool. Furthermore, it is important to understand contextual differences in privacy behaviors as well as characteristics of privacy-protecting tools that may affect usage intention. Results of an online experiment (N = 511) supported the basic notion of the privacy calculus and revealed that perceived privacy risks were strongly related to participants’ desired privacy protection, which, in turn, was positively related to the willingness to use a privacy-protecting tool. Self-disclosure was found to be context dependent, whereas privacy protection was not. Moreover, participants were reluctant to use a tool that records their data, even though this was described as enhancing privacy protection.
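
A rough sketch of the extended model's chain of effects (risk feeding desired protection, desired protection feeding tool-use willingness, with a penalty for tools that record data) follows; the functional form and weights are assumptions for illustration only.

```python
# Sketch of the extended privacy calculus reported above: perceived
# privacy risk raises desired protection, which raises willingness
# to use a privacy-protecting tool; a tool described as recording
# the user's data is penalized, matching the reported reluctance.
# The chain's weights are illustrative, not the study's estimates.

def tool_use_willingness(perceived_risk, tool_records_data=False):
    desired_protection = 0.5 * perceived_risk   # risk -> desired protection
    willingness = 0.5 * desired_protection      # protection -> tool use
    if tool_records_data:
        willingness *= 0.5  # users tend to forgo data-recording tools
    return willingness

print(tool_use_willingness(5))                          # 1.25
print(tool_use_willingness(5, tool_records_data=True))  # 0.625
```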


10.2196/13046 ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. e13046 ◽  
Author(s):  
Mengchun Gong ◽  
Shuang Wang ◽  
Lezi Wang ◽  
Chao Liu ◽  
Jianyang Wang ◽  
...  

Background: Patient privacy is a ubiquitous problem around the world. Many existing studies have demonstrated the potential privacy risks associated with sharing biomedical data. Owing to the increasing need for data sharing and analysis, health care data privacy is drawing more attention. However, to better protect biomedical data privacy, it is essential to assess the privacy risk in the first place.

Objective: In China, there is no clear regulation for health systems to deidentify data. It is also not known whether a mechanism such as the Health Insurance Portability and Accountability Act (HIPAA) Safe Harbor policy would achieve sufficient protection. This study aimed to conduct a pilot study using patient data from Chinese hospitals to understand and quantify the privacy risks of Chinese patients.

Methods: We used g-distinct analysis to evaluate the reidentification risks of the HIPAA Safe Harbor approach when applied to Chinese patients’ data. More specifically, we estimated the risks under the HIPAA Safe Harbor and Limited Dataset policies by assuming an attacker has background knowledge of the patient from the public domain.

Results: The experiments were conducted on 0.83 million patients (with data fields of date of birth, gender, and surrogate ZIP codes generated based on home address) across 33 provincial-level administrative divisions in China. Under the Limited Dataset policy, 19.58% (163,262/833,235) of the population could be uniquely identified under the g-distinct metric (ie, 1-distinct). In contrast, the Safe Harbor policy significantly reduces privacy risk: only 0.072% (601/833,235) of individuals are uniquely identifiable, and the majority of the population is 3000-indistinguishable (ie, expected to share common attributes with 3000 or fewer people).

Conclusions: Through experiments based on real-world patient data, this work shows that the results of g-distinct analysis of Chinese patient privacy risk are similar to those of a previous US study, in which data from different organizations/regions might be vulnerable to different reidentification risks under different policies. This work provides a reference for Chinese health care entities estimating patients’ privacy risk during data sharing, and it lays the foundation for future studies of the privacy risks in Chinese patients’ data.
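
The g-distinct metric at the heart of the study is simple to compute: group records by the quasi-identifiers, then count how many people fall into groups of size at most g (g = 1 meaning uniquely identifiable). A minimal sketch, with field names assumed for illustration rather than taken from the study's dataset:

```python
# Minimal sketch of a g-distinct analysis: group records by
# quasi-identifiers (date of birth, gender, ZIP code), then compute
# the fraction of people in groups of size <= g. A person in a group
# of size 1 is "1-distinct", i.e., uniquely re-identifiable by an
# attacker who knows those attributes. Field names are assumptions.

from collections import Counter

def g_distinct_fraction(records, g):
    """Fraction of records whose quasi-identifier group has size <= g."""
    group_sizes = Counter((r["dob"], r["gender"], r["zip"]) for r in records)
    at_risk = sum(size for size in group_sizes.values() if size <= g)
    return at_risk / len(records)

records = [
    {"dob": "1980-01-02", "gender": "F", "zip": "100000"},
    {"dob": "1980-01-02", "gender": "F", "zip": "100000"},
    {"dob": "1975-06-30", "gender": "M", "zip": "200000"},
]
print(g_distinct_fraction(records, g=1))  # 0.333...: one of three is unique
```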


Author(s):  
Georgios Michaelides ◽  
Gábor Hosszú

The rapid development of online social networks has brought the privacy and security problems of virtual communities into prominence. This article presents the multiple threats currently plaguing the virtual world, Internet privacy risks, and recommendations and countermeasures to avoid such problems. New generations of users feel comfortable publishing their personal information and narrating their lives, yet they are often unaware of how vulnerable the data in their public profiles are to the large audience that accesses them daily. A so-called digital friendship is built among them. Such commercial and social pressures have led to a number of privacy and security risks for social network members. The article presents the most important vulnerabilities and suggests protection methods and solutions that can be applied according to the threat. Lastly, the authors introduce the concept of a privacy-friendly virtual community site, named CWIW, where privacy methods have been implemented for better user protection.


Author(s):  
Siani Pearson ◽  
Tomas Sander

Regulatory compliance in areas such as privacy has become a major challenge for organizations. In large organizations there can be hundreds or thousands of projects that involve personal information. Ensuring that all of those projects properly take privacy considerations into account is a complex challenge for accountable privacy management, which requires that an organization ensure all relevant projects are in compliance and that there is evidence and assurance that this is actually the case. To date, there has been no suitable automated, scalable support for accountable privacy management; it is such a tool that the authors describe in this chapter. Specifically, they describe a privacy risk assessment and compliance tool, called HP Privacy Advisor (HP PA), which they are developing and rolling out within a large, global company, along with its generalisation and extension. The authors also highlight the security, privacy, risk, and trust-related aspects of their research related to this work.
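
The chapter does not publish HP PA's internals, but the general shape of such a tool (a questionnaire whose answers are screened by compliance rules) can be sketched as below; the questions, rules, and findings are hypothetical, not HP PA's actual rule base.

```python
# Hypothetical sketch of the kind of rule-based assessment a tool
# like HP Privacy Advisor performs: project owners answer a
# questionnaire, and rules flag answers indicating privacy risk or
# non-compliance. Questions, rules, and messages are assumptions.

RULES = [
    ("collects_sensitive_data", True, "Sensitive data requires a privacy review."),
    ("has_retention_policy", False, "No data retention policy is defined."),
    ("transfers_cross_border", True, "Cross-border transfer needs legal review."),
]

def assess_project(answers):
    """Return the compliance findings for one project's questionnaire."""
    return [message for key, risky_value, message in RULES
            if answers.get(key) == risky_value]

findings = assess_project({"collects_sensitive_data": True,
                           "has_retention_policy": False})
print(findings)  # two findings -> this project needs follow-up
```

Run across every project in an organization's portfolio, screening of this kind is what makes the compliance evidence the chapter calls for scalable.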

