Big Data, Cloud and Bring Your Own Device: How the Data Protection Law Addresses the Impact of “Datafication”

2015 ◽  
Vol 21 (10) ◽  
pp. 3346-3350 ◽  
Author(s):  
Sonny Zulhuda ◽  
Ida Madieha Abdul Ghani Azmi ◽  
Nashrul Hakiem

Author(s):  
Fabiana Accardo

The purpose of this article is to explain the impact on the European scenario of the landmark decision Schrems v. Data Protection Commissioner [Ireland], delivered by the Court of Justice on 6 October 2015 (Case C-362/14). Starting from a brief analysis of the major outcomes of the Court's judgment, it then examines the critical issues that the Safe Harbor Agreement and the subsequent Commission adequacy Decision 2000/520/EC – invalidated by the Schrems judgment – had raised, before that ruling, with regard to safeguarding the personal privacy of European citizens when their personal data are transferred outside the European Union, with particular reference to the US context. It then focuses on the most important aspects of the new EU-US agreement, the Privacy Shield: can it really be considered a safer solution for data sharing in light of the forthcoming application of Regulation (EU) 2016/679, which will replace Directive 95/46/EC as EU data protection law?



2016 ◽  
Vol 24 (7) ◽  
pp. 1096-1096 ◽  
Author(s):  
Menno Mostert ◽  
Annelien L Bredenoord ◽  
Monique CIH Biesaart ◽  
Johannes JM van Delden


Author(s):  
Sandra Wachter ◽  
Brent Mittelstadt

Big Data analytics and artificial intelligence (AI) draw non-intuitive and unverifiable inferences and predictions about the behaviors, preferences, and private lives of individuals. These inferences draw on highly diverse and feature-rich data of unpredictable value, and create new opportunities for discriminatory, biased, and invasive decision-making. Concerns about algorithmic accountability are often actually concerns about the way in which these technologies draw privacy-invasive and non-verifiable inferences about us that we cannot predict, understand, or refute.

Data protection law is meant to protect people’s privacy, identity, reputation, and autonomy, but is currently failing to protect data subjects from the novel risks of inferential analytics. The broad concept of personal data in Europe could be interpreted to include inferences, predictions, and assumptions that refer to or impact on an individual. If seen as personal data, individuals are granted numerous rights under data protection law. However, the legal status of inferences is heavily disputed in legal scholarship, and marked by inconsistencies and contradictions within and between the views of the Article 29 Working Party and the European Court of Justice.

As we show in this paper, individuals are granted little control and oversight over how their personal data is used to draw inferences about them. Compared to other types of personal data, inferences are effectively ‘economy class’ personal data in the General Data Protection Regulation (GDPR). Data subjects’ rights to know about (Art 13-15), rectify (Art 16), delete (Art 17), object to (Art 21), or port (Art 20) personal data are significantly curtailed when it comes to inferences, often requiring a greater balance with the controller’s interests (e.g. trade secrets, intellectual property) than would otherwise be the case. Similarly, the GDPR provides insufficient protection against sensitive inferences (Art 9) or remedies to challenge inferences or important decisions based on them (Art 22(3)).

This situation is not accidental. In standing jurisprudence the European Court of Justice (ECJ; Bavarian Lager, YS. and M. and S., and Nowak) and the Advocate General (AG; YS. and M. and S. and Nowak) have consistently restricted the remit of data protection law to assessing the legitimacy of input personal data undergoing processing, and to rectifying, blocking, or erasing it. Critically, the ECJ has likewise made clear that data protection law is not intended to ensure the accuracy of decisions and decision-making processes involving personal data, or to make these processes fully transparent.

Conflict looms on the horizon in Europe that will further weaken the protection afforded to data subjects against inferences. Current policy proposals addressing privacy protection (the ePrivacy Regulation and the EU Digital Content Directive) fail to close the GDPR’s accountability gaps concerning inferences. At the same time, the GDPR and Europe’s new Copyright Directive aim to facilitate data mining, knowledge discovery, and Big Data analytics by limiting data subjects’ rights over personal data. And lastly, the new Trade Secrets Directive provides extensive protection of commercial interests attached to the outputs of these processes (e.g. models, algorithms, and inferences).

In this paper we argue that a new data protection right, the ‘right to reasonable inferences’, is needed to help close the accountability gap currently posed by ‘high risk inferences’, meaning inferences that are privacy-invasive or reputation-damaging and have low verifiability in the sense of being predictive or opinion-based. In cases where algorithms draw ‘high risk inferences’ about individuals, this right would require an ex-ante justification to be given by the data controller to establish whether an inference is reasonable. This disclosure would address (1) why certain data are a relevant basis to draw inferences; (2) why these inferences are relevant for the chosen processing purpose or type of automated decision; and (3) whether the data and methods used to draw the inferences are accurate and statistically reliable. The ex-ante justification is bolstered by an additional ex-post mechanism enabling unreasonable inferences to be challenged. A right to reasonable inferences must, however, be reconciled with EU jurisprudence and counterbalanced with IP and trade secrets law as well as freedom of expression and Article 16 of the EU Charter of Fundamental Rights: the freedom to conduct a business.
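To make the proposed disclosure concrete, here is a minimal sketch, assuming a Python representation, of how the three ex-ante justification elements and the ex-post challenge mechanism described above could be recorded by a controller; the class and field names (e.g. InferenceJustification, reliability_evidence) are hypothetical illustrations, not terms defined in the GDPR or in the paper.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the ex-ante justification a controller might disclose
# for a 'high risk inference' under the proposed right to reasonable inferences.
# All names below are illustrative assumptions, not GDPR or paper terminology.

@dataclass
class InferenceJustification:
    inference: str                 # the inference drawn about the data subject
    source_data: List[str]         # categories of input data relied upon
    data_relevance: str            # (1) why this data is a relevant basis for the inference
    purpose_relevance: str         # (2) why the inference is relevant to the processing purpose or decision
    reliability_evidence: str      # (3) accuracy and statistical reliability of the data and methods
    objections: List[str] = field(default_factory=list)  # ex-post challenges lodged by the data subject

    def is_complete(self) -> bool:
        """True only if all three ex-ante disclosure elements have been provided."""
        return all([self.data_relevance, self.purpose_relevance, self.reliability_evidence])

    def challenge(self, reason: str) -> None:
        """Record an ex-post objection that the inference is unreasonable."""
        self.objections.append(reason)
```

On this sketch, a controller would populate such a record before drawing, say, a creditworthiness inference from browsing data; an empty reliability_evidence field would flag the justification as incomplete and the inference as open to ex-post challenge.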



2019 ◽  
Vol 20 (1) ◽  
pp. 257-290 ◽  
Author(s):  
Michael Birnhack

Data protection law has a linear logic, in that it purports to trace the lifecycle of personal data from creation to collection, processing, transfer, and ultimately its demise, and to regulate each step so as to promote the data subject’s control thereof. Big data defies this linear logic, in that it decontextualizes data from its original environment and conducts an algorithmic nonlinear mix, match, and mine analysis. Applying data protection law to the processing of big data does not work well, to say the least. This Article examines the case of big medical data. A survey of emerging research practices indicates that studies either ignore data protection law altogether or assume an ex post position, namely that because they are conducted after the data has already been created in the course of providing medical care, and because they use de-identified data, they go under the radar of data protection law. These studies focus on the end-point of the lifecycle of big data: if sufficiently anonymous at publication, the previous steps are overlooked, on the claim that they enjoy immunity. I argue that this answer is too crude. To portray data protection law in its best light, we should view it as a process-based attempt to equip data subjects with some power to control personal data about them, in all phases of data processing. Such control reflects the underlying justification of data protection law as an implementation of human dignity. The process-based approach fits current legal practices and is justified by reflecting dignitarian conceptions of informational privacy.





2019 ◽  
Author(s):  
Nela Grothe

The continuously increasing relevance of online platforms has made data extremely important for competition. In the age of big data, data can enable effective competition in terms of quality and innovation. At the same time, data power poses risks to competition. As a result, numerous new legal issues have arisen concerning the treatment of such data power in the regulation of antitrust abuse. This study examines the effects of data power on the regulation of antitrust abuse by reference to the German Federal Cartel Office’s (Bundeskartellamt) latest case against Facebook. It examines whether and how aspects of data protection can or should be considered with regard to the monitoring of abuse. It also provides an overview of the risks of data power in relation to antitrust issues and clarifies the relationship between antitrust law and data protection law.



Author(s):  
Claudio Roberto Pessoa ◽  
Bruna Cardoso Nunes ◽  
Camila de Oliveira ◽  
Marco Elísio Marques

The world scenario is changing when it comes to personal data protection. Not long ago, it was common to find companies that sold databases and other companies that worked with the information contained in those databases to build profiles and generate solutions, using technologies such as big data and artificial intelligence, in order to become more attractive and win more customers. To protect the privacy of citizens across the world, laws have been created and/or expanded to reinforce this protection. In Brazil, specifically, the Lei Geral de Proteção de Dados Pessoais – LGPD [General Data Protection Law] was enacted. This research aims to analyze this law, as well as other laws that orbit around it. The goal is to understand the impact of the law’s enforcement on business routine and, as a specific objective, what the role of the DPO (Data Protection Officer) in organizations will be.


