CAP-A: A Suite of Tools for Data Privacy Evaluation of Mobile Applications

Author(s):  
Ioannis Chrysakis ◽  
Giorgos Flouris ◽  
George Ioannidis ◽  
Maria Makridaki ◽  
Theodore Patkos ◽  
...  

The utilisation of personal data by mobile apps is often hidden behind vague Privacy Policy documents, which are typically lengthy, difficult to read (containing legal terms and definitions), and frequently changing. This paper discusses a suite of tools developed in the context of the CAP-A project, aiming to harness the collective power of users to improve their privacy awareness and to promote privacy-friendly behaviour by mobile apps. Through crowdsourcing techniques, users can evaluate the privacy friendliness of apps, annotate and understand Privacy Policy documents, and help other users become aware of privacy-related aspects of mobile apps and their implications, whereas developers and policy makers can identify trends and the general stance of the public on privacy-related matters. The tools are available for public use at https://cap-a.eu/tools/.

Author(s):  
Ioannis Chrysakis ◽  
Giorgos Flouris ◽  
George Ioannidis ◽  
Maria Makridaki ◽  
Theodore Patkos ◽  
...  

Consumers are largely unaware of how the data they generate through smart devices is used, or whether that use is GDPR-compliant, since such information is typically hidden behind vague privacy policy documents, which are often lengthy, difficult to read (containing legal terms and definitions), and frequently changing. This paper describes the activities of the CAP-A project, whose aim is to apply crowdsourcing techniques to evaluate the privacy friendliness of apps and to allow users to better understand the content of Privacy Policy documents and, consequently, the privacy implications of using any given mobile app. To achieve this, we developed a set of tools that assist users in expressing their own privacy concerns and expectations and in assessing mobile apps' privacy properties through collective intelligence.


Author(s):  
Zerin Mahzabin Khan ◽  
Rukhsana Ahmed ◽  
Devjani Sen

No previous research on cancer mobile applications (apps) has investigated issues associated with the data privacy of its consumers. The current chapter addressed this gap in the literature by assessing the content of online privacy policies of selected cancer mobile apps through applying a checklist and performing an in-depth critical analysis to determine how the apps communicated their privacy practices to end users. The results revealed that the privacy policies were mostly ambiguous, with content often presented in a complex manner and inadequate information on the ownership, use, disclosure, retention, and collection of end users' personal data. These results highlight the importance of improving the transparency of privacy practices in health and fitness cancer mobile apps to clearly and effectively communicate how end users' personal data are collected, stored, and shared. The chapter concludes with recommendations and discussion on practical implications for stakeholders like cancer app users, developers, policymakers, and clinicians.




2022 ◽  
Vol 6 (GROUP) ◽  
pp. 1-14
Author(s):  
Lindah Kotut ◽  
D. Scott McCrickard

Privacy policy and term agreement documents are considered the gateway for software adoption and use. The documents provide a means for the provider to outline expectations of the software's use, and also provide an often-separate document outlining how user data is collected, stored, and used--including whether it is shared with other parties. A user who agrees to the terms is assumed to have a full understanding of the agreement and to have provided consent. Often, however, users do not read the documents because they are long and full of legalistic and inconsistent language, are regularly amended, and may not disclose all the details of what is done with the user's data. Enforcing compliance and ensuring user consent have been persistent challenges for policy makers and privacy researchers. This design fiction puts forward an alternate reality and presents a policy-based approach to fording the consent gap with the TL;DR Charter: an agreement governing the parties involved by harnessing the power of formal governments, industry, and other stakeholders, and taking users' expectations of privacy into account. The Charter allows us as researchers to examine the implications for trust, decision-making, consent, and accountability, and the impact of future technologies.


2019 ◽  
Author(s):  
José Javier Flors-Sidro ◽  
Mowafa Househ ◽  
Alaa Abd-Alrazaq ◽  
Josep Vidal-Alaball ◽  
Luis Fernandez-Luque ◽  
...  

BACKGROUND: Mobile health has become a major channel for supporting people living with diabetes, and the availability of diabetes mobile apps has been steadily increasing. Most previous reviews of diabetes apps have focused on the apps' features and their alignment with clinical guidelines. However, there is a lack of knowledge about the actual compliance of diabetes apps with privacy and data-security requirements.
OBJECTIVE: The aim of this study was to assess the level of privacy of diabetes mobile applications, in order to raise the awareness of end users, developers, and governmental data-protection regulators about privacy issues.
METHODS: A web scraper capable of retrieving Android apps' privacy-related information, particularly the dangerous permissions required by the apps, was developed to analyze privacy aspects of diabetes apps. Following the selection criteria, the original 882 apps were narrowed down to 497 apps, which were included in the analysis.
RESULTS: 60% of diabetes apps may request dangerous permissions, which poses a significant risk to users' data privacy. In addition, 30% of the apps do not provide a privacy policy website. Moreover, 40% of the apps contain advertising, and some apps that declared themselves ad-free actually had ads. 95.4% of the apps were free of cost, and those in the Medical and Health & Fitness categories were the most popular. However, end users do not always realize that the free-app business model is largely based on advertising, and consequently on sharing or selling their private data, directly or indirectly, to unknown third parties.
CONCLUSIONS: These findings confirm the necessity of educating users and raising their awareness of diabetes apps' privacy aspects. To this end, this study recommends properly and comprehensively training users, ensuring that governments and regulatory bodies enforce strict data protection laws, devising much tougher security policies and protocols in Android and in the Google Play Store, and involving and supervising all stakeholders in the app-development process.
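The kind of permission audit described in this abstract can be sketched minimally as follows. The permission names are real Android identifiers from the "dangerous" protection level, but the `audit_apps` function and the example app data are illustrative assumptions, not the study's actual scraper.

```python
# Illustrative sketch (not the study's actual scraper): flag the permissions
# each app requests that belong to Android's "dangerous" protection level.
DANGEROUS_PERMISSIONS = {
    "android.permission.READ_CONTACTS",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.CAMERA",
    "android.permission.RECORD_AUDIO",
    "android.permission.READ_SMS",
}

def audit_apps(apps):
    """Map each app name to the sorted list of dangerous permissions it requests.

    `apps` is a dict of app name -> list of requested permission strings,
    as would be extracted from each app's manifest or store listing.
    """
    return {
        name: sorted(set(perms) & DANGEROUS_PERMISSIONS)
        for name, perms in apps.items()
    }
```

For example, a hypothetical glucose-logging app requesting `CAMERA` and `INTERNET` would be flagged only for `CAMERA`, since `INTERNET` is not in the dangerous set.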


2020 ◽  
Vol 25 (5) ◽  
pp. 4057-4076 ◽  
Author(s):  
Sheshadri Chatterjee ◽  
Dipasree Majumdar ◽  
Sanjay Misra ◽  
Robertas Damaševičius

The purpose of this study is to identify the factors that can impact the adoption of mobile apps for the teaching-learning process, focusing on girls' schools in rural India. Hypotheses were proposed and a conceptual model was developed. A survey was conducted to collect data from different respondents using a convenience sampling method. The model was validated statistically through PLS-SEM analysis covering the feedback of 271 effective respondents. The study highlights the impact of different antecedents on students' behavioural intention to use mobile applications for the teaching-learning process. The results also show that, among other findings, price value has an insignificant influence on the intention of girl students in rural India. Feedback was obtained from 271 respondents, which is small compared to the vastness of the population and schools of rural India. Only a few predictors were considered, leaving open the possibility of including other boundary conditions to raise the explanatory power beyond the 81% achieved by the proposed model. The model provides valuable input to educational policy makers, technology enablers, and administrators for understanding the impact of mobile applications on rural girls' schools in India and for facilitating the development of m-learning. Very few studies have been conducted to explore the impact of mobile applications on school education in rural India, especially in girls' schools.


2019 ◽  
Vol 6 (1) ◽  
pp. 205395171984878
Author(s):  
Luke Munn ◽  
Tsvetelina Hristova ◽  
Liam Magee

Personal data is highly vulnerable to security exploits, spurring moves to lock it down through encryption, to cryptographically ‘cloud’ it. But personal data is also highly valuable to corporations and states, triggering moves to unlock its insights by relocating it in the cloud. We characterise this twinned condition as ‘clouded data’. Clouded data constructs a political and technological notion of privacy that operates through the intersection of corporate power, computational resources and the ability to obfuscate, gain insights from and valorise a dependency between public and private. First, we survey prominent clouded data approaches (blockchain, multiparty computation, differential privacy, and homomorphic encryption), suggesting their particular affordances produce distinctive versions of privacy. Next, we perform two notional code-based experiments using synthetic datasets. In the field of health, we submit a patient’s blood pressure to a notional cloud-based diagnostics service; in education, we construct a student survey that enables aggregate reporting without individual identification. We argue that these technical affordances legitimate new political claims to capture and commodify personal data. The final section broadens the discussion to consider the political force of clouded data and its reconstitution of traditional notions such as the public and the private.
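The education experiment described above (aggregate survey reporting without individual identification) is in the spirit of differential privacy, one of the clouded-data approaches the authors survey. A minimal sketch follows; the counting query, the epsilon value, and the noise mechanism are textbook illustrations, not the authors' actual experimental code.

```python
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise as the difference of two
    i.i.d. exponential variables with mean `scale`."""
    lam = 1.0 / scale
    return random.expovariate(lam) - random.expovariate(lam)

def dp_count(responses, epsilon):
    """Epsilon-differentially-private count of 'yes' (1) answers in a survey.

    A counting query changes by at most 1 when one respondent is added or
    removed (sensitivity 1), so Laplace noise with scale 1/epsilon suffices
    to report the aggregate without identifying any individual.
    """
    return sum(responses) + laplace_noise(1.0 / epsilon)
```

Smaller epsilon means more noise and stronger privacy; the aggregate stays useful while no single response can be inferred from the output.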


Author(s):  
Sema Bulat Demir ◽  
Ayten Övür

Nowadays, social media platforms are frequently used on the Internet. When users create an account on these platforms, they are required to accept the data privacy policy. With the approval of the data policy, major problems may arise, such as the observation of users' every activity on the platform, violations of the security and protection of personal data, and the sharing of user data with third parties for commercial purposes. In this regard, it is important to examine the privacy policies of social media platforms in detail. In this research, we examined the privacy policies of the five most popular free applications in the communication section of the Google Play Store on January 30th, 2021. The privacy policies of these applications were analyzed with the content analysis method; the research aims to reveal how the data that users provide, with or without their permission, is utilized.


Author(s):  
Ilaria Liccardi ◽  
Joseph Pato ◽  
Daniel J. Weitzner

Our personal information, habits, likes, and dislikes can all be deduced from our mobile devices. Safeguarding mobile privacy is therefore of great concern. Transparency and individual control are bedrock principles of privacy, but making informed choices about which mobile apps to use has been shown to be difficult. In order to understand the dynamics of information collection in mobile apps, and to demonstrate the value of transparent access to the details of mobile applications' information-access permissions, we gathered information about 528,433 apps on Google Play and analyzed the permissions requested by each app. We develop a quantitative measure of the risk posed by apps by devising a ‘sensitivity score’ that represents the number of permissions that read personal information about users in apps where network communication is possible. We found that 54% of apps do not access any personal data. The remaining 46% request between 1 and 20 sensitive permissions and have the ability to transmit the data outside the phone. The sensitivity of apps differs greatly between free and paid apps, as well as between categories and content ratings. Sensitive permissions are often mixed with a large number of low-risk permissions and are hence difficult to identify. Easily available sensitivity scores could help users make more informed decisions by choosing apps that pose less risk of collecting personal information. Even if an app is “self-described” as suitable for a certain subset of users (e.g., children), it might carry content ratings and permission requests that are not appropriate or expected. Our experience in doing this research shows that it is difficult to obtain information about how personal data collected by apps is used or analyzed; in fact, only 0.37% (1,991) of the collected apps were found to have declared a “privacy policy”.
Therefore, in order to give mobile users real control, app distribution platforms should provide more detailed information about how user data, if accessed, is used. To achieve greater transparency and individual control, app distribution platforms that do not currently make raw permission descriptions accessible for analysis could change their design and operating policies to make this data available prior to installation.
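The ‘sensitivity score’ described in this abstract — counting personal-information permissions only when network communication makes exfiltration possible — can be sketched as follows. The specific permission sets below are short illustrative assumptions; the paper's actual permission lists are more extensive.

```python
# Illustrative sketch of a 'sensitivity score' in the spirit of the paper:
# count permissions that read personal information, but only when the app
# also has network access (i.e., the data could actually leave the phone).
PERSONAL_INFO = {
    "android.permission.READ_CONTACTS",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.READ_SMS",
    "android.permission.READ_CALENDAR",
}
NETWORK = {"android.permission.INTERNET"}

def sensitivity_score(permissions):
    """Return the number of personal-data permissions an app requests,
    or 0 if the app has no network access at all."""
    perms = set(permissions)
    if not perms & NETWORK:
        return 0  # no network access: personal data cannot be transmitted
    return len(perms & PERSONAL_INFO)
```

Under this sketch, an app reading contacts and SMS with Internet access scores 2, while one reading contacts without any network permission scores 0.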

