‘IT’S MY FAULT FOR POSTING IN THE FIRST PLACE’: HOW INDIVIDUAL RESPONSIBILITY AND SELF-BLAME ARE SUSTAINED AND INTERNALIZED

Author(s): Tony Liao, Haley Fite

Data breaches and data misuse are frequent occurrences in today’s digital society and often spark debate over who should hold responsibility. While many hold platforms responsible when confronted with violations of data privacy, some users shift the blame inward, faulting themselves for trusting the platforms and posting on them. While a large body of research has been dedicated to issues of data privacy, discourses of individual responsibility and the internalization of user self-blame have received less attention. This study explores how users respond to the unknown use of their personal data through the case of CrystalKnows, a personality detection algorithm that generated profiles about individuals from unknown data sources, often without the individual’s knowledge. Founded in 2015, CrystalKnows claims to have the world’s largest personality database, providing and selling algorithmically generated user profiles, often without the express consent of those individuals. Interviews were conducted with individuals whose profiles appeared on the platform (N=37) to reveal users’ reactions to, and rationalizations of, the data collected about them. Rationales for self-blame vary but commonly center on issues of ambiguity concerning digital consent and the algorithm itself. Ultimately, these rationales contribute to feelings of resignation, often paired with the unrealistic alternative of total platform non-use. We argue that these complex discourses of self-blame, independence/choice, and resignation/non-use as the only options are intertwined with data privacy reform efforts. Understanding the sources of self-blame and how deep it runs is an important step toward interrogating and refuting some of these assumptions, if broader reforms hope to garner support and implementation.

2019, Vol 2019
Author(s): Brady Robards, Benjamin Lyall, Claire Moran, Jean Burgess, Kath Albury, et al.

A considerable amount of personal data is now collected on and by individuals: footsteps on Fitbits, screen time in Apple’s iOS, conversations on dating apps, sleeping patterns in baby tracker apps, and viewing habits on Netflix and YouTube. What value do these data have, for individuals but also for corporations, governments, and researchers? When these data are provided back to users, how do people make sense of them? What ‘truth claims’ do quantified personal data make? How do we navigate anxieties around datafied selves, and in what ways are bodies rendered visible or invisible through processes of datafication in digital society? In this panel we explore these questions through four papers centered on the notion of the “data-selfie.” Data-selfies take different forms, including but not limited to:

- Visuals that reference the “status” or “progress” of a user’s physical body, as in 3-D scans, or charts generated by self-monitoring apps for health and fitness.
- Visuals that reference the remapping of photographic self-expression to biometric, corporate, and state surveillance, such as airport facial recognition checkpoints that ask flyers to pose for a selfie, or sex offender databases that now contain images first posted to hook-up apps by consenting teenagers.
- Representations of the embodied or commoditized self, produced not as stand-alone expression but as conversational prompts that encourage qualitative, “story-driven” data, in the interests of pedagogy, therapy, activism, etc.
- Profiles that reference users as “targets” whose chief value is the metadata they generate. Using proprietary algorithms, platforms mine this metadata (which can include information about a user’s device, physical location, and online activities), categorizing it for internal use and selling it to third parties interested in influencing the consumer, social, and/or political preferences of the “targets” in question.

In Paper 1, Authors 1, 2, and 3 develop a new conceptualisation for understanding how individuals reveal themselves through their own quantified personal data. They call this the ‘confessional data selfie’. Drawing on a sample of 59 examples from the top posts in the subreddit r/DataIsBeautiful, they argue that the confessional data selfie represents an aspect of one’s self through visualisations of personal data, inviting analysis, eliciting responses and personal story-telling, and opening one’s life up to others.

In Paper 2, Authors 4, 5, and 6 take a political economy of communication approach to analyse the data markets of dating apps. They consider three cases: Grindr, Match Group (parent company of Tinder), and Bumble. Drawing on trade press reportage, financial reports, and other materials associated with the apps and publishers in question, they point to the increased global concentration in ownership of dating app services and raise questions about the ways in which dating apps are now in the ‘data business’, using personal data to profile users and monetise private interactions.

In Paper 3, Author 7 reports on experiences of ‘data anxiety’ among older people in Australia. Author 7 draws on data literacy workshops, home-based interviews, and focus groups with older internet users, which led to discussions of control over personal data, control over social interactions, and the resulting implications for exposure, openness, and visibility.
Also key to this study was the taking and sharing of selfies in a closed Facebook group, which served as the starting point for reflections on these various experiences of control. Many of these older participants questioned whether ongoing participation in social media and broader data structures was ‘worthwhile’. This raises broader questions about the extent to which users are willing to sacrifice control over personal data, or the feeling of control, in order to participate and be visible.

Finally, in Paper 4, Author 8 asks: when is the face data? Moving from examples of ‘deepfake’ video exhibitions to Google Art as a repository of ‘face-data’ as cultural and social capital, Author 8 goes on to examine how notions of face-as-data apply to individuals living with the neurological condition of autism. Can facial recognition apps help people with autism to read and decode human expressions?

Taken together, these four papers each engage with questions about the relationship between personal data and broader structures of power and representation: from corporations like Grindr and Tinder using dating app data to profile users, to Google using uploaded selfies to train facial recognition algorithms, through to re-purposing and narrativising personal data as part of practices of self-representation, and the feelings of anxiety, unease, or creepiness that accompany the increased datafication of personal identity. Self-representation is also a key recurrent thread in these papers, from confessional data selfies as acts of revelation through quantified personal data, through to the photographic selfie as a research exercise that prompts discussions of control and data privacy.


A data breach is a reported incident in which private, sensitive, or protected records have been compromised and/or released unlawfully, most often as a result of cyber attacks or theft. Breached data can include personal health records, personal information, travel information, trade secrets, intellectual property, or any information provided to or stored on a platform. Data exposed in breaches poses a security and privacy risk to users around the world. Despite this, guidelines on how organizations should react to breaches, or how to manage information securely once it has leaked, have yet to be established. More than 3 billion people have been victims of data breaches and cyber attacks in the last two decades, suffering losses of personal data as well as monetary losses. This research paper conducts first-hand research on awareness of data privacy, the kinds of data/information that need to be protected, basic protocols for staying safe online, and some of the biggest corporate data breaches of this century. We recruited people from different cities of India through a survey and use the data provided by these 150 participants to examine their understanding of data privacy, their concern about their online data, and the practices they follow in daily life to keep their online data safe in this age of computers and the internet.
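The paper does not specify which "basic protocols for staying safe online" it covers; as one plausible, minimal illustration (our assumption, not a method from the paper), the sketch below checks whether a password has appeared in known breach corpora using the public Pwned Passwords range API, whose k-anonymity scheme means only the first five characters of the password's SHA-1 hash ever leave the machine.

```python
# Illustrative sketch: breach-exposure check via the Pwned Passwords
# range API (k-anonymity: only a 5-character hash prefix is sent).
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times `password` appears in known breach corpora."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "<HASH-SUFFIX>:<COUNT>"; match ours locally.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(pwned_count("password123"))  # large count: widely breached
```

A hit does not mean one's own account was breached, only that the password is no longer safe to use anywhere.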


Author(s): Daniel Amo, David Fonseca, Marc Alier, Francisco José García-Peñalvo, María José Casañ, et al.

2021, Vol 4
Author(s): Vibhushinie Bentotahewa, Chaminda Hewage, Jason Williams

The growing dependency on digital technologies is becoming a way of life, and at the same time, the collection of data through these technologies for surveillance operations has raised concerns. Notably, some countries use digital surveillance technologies to track and monitor individuals and populations in order to prevent the transmission of the new coronavirus. The technology has the capacity to contribute to tackling the pandemic effectively, but that success comes at the expense of privacy rights. The crucial point is that, regardless of who uses the technology and which mechanism is employed, personal privacy will be infringed in one way or another. Therefore, when considering the use of technologies to combat the pandemic, the focus should also be on the impact of facial recognition cameras, police surveillance drones, and other digital surveillance devices on the privacy rights of those under surveillance. The GDPR was established to ensure that information could be shared without infringing on personal data rights; therefore, in generating Big Data, it is important to ensure that the information is securely collected, processed, transmitted, stored, and accessed in accordance with established rules. This paper focuses on the Big Data challenges associated with surveillance methods used in the context of COVID-19. The aim of this research is to propose practical solutions to the Big Data challenges associated with COVID-19 pandemic surveillance approaches. To that end, the researchers identify the surveillance measures being used by countries in different regions, the sensitivity of the generated data, and the issues associated with the collection of large volumes of data, and finally propose feasible solutions to protect the privacy rights of people during the post-COVID-19 era.
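One widely used safeguard consistent with the secure-handling requirements cited above is pseudonymization combined with data minimization. The following is a minimal sketch, assuming a hypothetical contact-tracing record layout and HMAC-SHA256 tokenization; it is an illustration of the general technique, not a method proposed in the paper.

```python
# Illustrative sketch: pseudonymize a surveillance record before storage
# and keep only purpose-relevant fields (data minimization).
import hmac
import hashlib

SECRET_KEY = b"example-key-kept-in-a-vault"  # hypothetical key management

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

def minimize(record: dict) -> dict:
    """Keep only purpose-relevant fields, at reduced precision."""
    lat, lon = (round(float(x), 2) for x in record["location"].split(","))
    return {
        "subject": pseudonymize(record["national_id"]),
        "coarse_location": f"{lat},{lon}",  # ~1 km, not exact coordinates
        "date": record["date"],             # day-level, no timestamps
    }

raw = {"national_id": "AB123456", "name": "Jane Doe",
       "location": "51.5007,-0.1246", "date": "2021-06-01"}
print(minimize(raw))  # the name never reaches storage
```

Because the token is keyed, re-identification requires access to the key, which can be held by a separate controller and rotated or destroyed after the retention period.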


2019, Vol 22 (1)
Author(s): Miguel Ehecatl Morales-Trujillo, Gabriel Alberto García-Mireles, Erick Orlando Matla-Cruz, Mario Piattini

Protecting personal data in current software systems is a complex issue that requires both legal regulations and constraints to manage personal data, and methodological support to develop software systems that safeguard the data privacy of their users. The Privacy by Design (PbD) approach has been proposed to address this issue and has been applied to systems development in a variety of application domains. The aim of this work is to determine the presence of PbD, and its extent, in software development efforts. A systematic mapping study was conducted to identify relevant literature that collects PbD principles and goals in software development, as well as methods and/or practices that support privacy-aware software development. The 53 selected papers address PbD mostly from a theoretical perspective, with proposal validation based primarily on experiences or examples. The findings suggest a need to develop privacy-aware methods that can be integrated at all stages of the software development life cycle and to validate them in industrial settings.
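To ground one PbD principle that recurs in this literature, privacy as the default setting, here is a minimal sketch; the types and field names are hypothetical and not drawn from the 53 mapped papers.

```python
# Illustrative sketch of "privacy by default": the type system makes
# the most private configuration the zero-effort one.
from dataclasses import dataclass, field

@dataclass
class PrivacySettings:
    profile_public: bool = False   # sharing is opt-in, never opt-out
    share_location: bool = False
    analytics_consent: bool = False
    retention_days: int = 30       # shortest sensible retention period

@dataclass
class UserProfile:
    username: str
    email: str
    privacy: PrivacySettings = field(default_factory=PrivacySettings)

# A profile created with no explicit choices is maximally private:
u = UserProfile(username="alice", email="alice@example.org")
assert not u.privacy.profile_public
```

The design choice is that a developer who forgets to configure privacy gets the protective behavior for free, rather than an accidental disclosure.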


2021
Author(s): Naveen Kunnathuvalappil Hariharan

As organizations' desire for data grows, so does their search for data sources that are both usable and reliable.Businesses can obtain and collect big data in a variety of locations, both inside and outside their own walls.This study aims to investigate the various data sources for business intelligence. For business intelligence,there are three types of data: internal data, external data, and personal data. Internal data is mostly kept indatabases, which serve as the backbone of an enterprise information system and are known as transactionalsystems or operational systems. This information, however, is not always sufficient. If the company wants toanswer market and industry questions or better understand future customers, the analytics team may need to look beyond the company's own data sources. Organizations must have access to a variety of data sources in order to answer the key questions that guide their initiatives. Internal sources, external public sources, andcollaboration with a big data expert could all be beneficial. Companies who are able to extract relevant datafrom their mountain of data acquire new perspectives on their business, allowing them to become morecompetitive


Author(s): Anastasia Kozyreva, Philipp Lorenz-Spreen, Ralph Hertwig, Stephan Lewandowsky, Stefan M. Herzog

People rely on data-driven AI technologies nearly every time they go online, whether they are shopping, scrolling through news feeds, or looking for entertainment. Yet despite their ubiquity, personalization algorithms and the associated large-scale collection of personal data have largely escaped public scrutiny. Policy makers who wish to introduce regulations that respect people’s attitudes towards privacy and algorithmic personalization on the Internet would greatly benefit from knowing how people perceive personalization and personal data collection. To contribute to an empirical foundation for this knowledge, we surveyed public attitudes towards key aspects of algorithmic personalization and people’s data privacy concerns and behavior using representative online samples in Germany (N = 1065), Great Britain (N = 1092), and the United States (N = 1059). Our findings show that people object to the collection and use of sensitive personal information and to the personalization of political campaigning and, in Germany and Great Britain, to the personalization of news sources. Encouragingly, attitudes are independent of political preferences: People across the political spectrum share the same concerns about their data privacy and show similar levels of acceptance regarding personalized digital services and the use of private data for personalization. We also found an acceptability gap: People are more accepting of personalized services than of the collection of personal data and information required for these services. A large majority of respondents rated, on average, personalized services as more acceptable than the collection of personal information or data. The acceptability gap can be observed at both the aggregate and the individual level. Across countries, between 64% and 75% of respondents showed an acceptability gap. Our findings suggest a need for transparent algorithmic personalization that minimizes use of personal data, respects people’s preferences on personalization, is easy to adjust, and does not extend to political advertising.
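The individual-level acceptability gap lends itself to a simple computation: for each respondent, subtract the rated acceptability of data collection from the rated acceptability of the corresponding personalized service, then count respondents with a positive difference. The sketch below uses invented ratings; the 1-5 scale and variable names are our assumptions, not the study's instrument.

```python
# Illustrative sketch: computing an "acceptability gap" per respondent.
# Ratings are invented; assume a 1-5 acceptability scale for a
# personalized service and for the data collection it requires.
respondents = [
    {"service": 4, "collection": 2},
    {"service": 3, "collection": 3},
    {"service": 5, "collection": 2},
    {"service": 2, "collection": 4},
]

gaps = [r["service"] - r["collection"] for r in respondents]
share_with_gap = sum(g > 0 for g in gaps) / len(gaps)
print(f"mean gap: {sum(gaps) / len(gaps):+.2f}")
print(f"respondents with an acceptability gap: {share_with_gap:.0%}")
```

On these toy numbers, half the respondents show a gap; the study reports 64% to 75% across its three national samples.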


2021, Vol 00 (00), pp. 1-19
Author(s): Diah Yuniarti, Sri Ariyanti

This study aims to provide recommendations to the government on regulating licensing, content, and data privacy and protection for integrated broadcast-broadband (IBB) operations in Indonesia, referencing Singapore, Japan, and Malaysia as case studies, given the need for umbrella regulations for IBB implementation. Singapore and Japan were chosen as countries that have deployed IBB, since they have been using the hybrid broadcast broadband television (HbbTV) and Hybridcast standards, respectively. Malaysia was chosen because it is a neighbouring country that has conducted trials of the IBB service, bundled with its digital terrestrial television (DTT) service. The qualitative data are analysed using a comparative method. The results show that Indonesia needs to immediately revise its existing Broadcasting Law to accommodate DTT implementation, which is the basis for IBB and for the expansion of broadcasters’ TV business. Learning from Singapore, Indonesia could include over-the-top (OTT) content in its ‘Broadcast Behaviour Guidelines’ and ‘Broadcast Programme Standards’. Data privacy and protection requirements for each entity involved in the IBB ecosystem are necessary because IBB services are vulnerable to user data leakage. In light of this, ratification of the personal data protection law, as a legal umbrella, needs to be accelerated.

