Erratum: Big Data in medical research and EU data protection law: challenges to the consent or anonymise approach

2016 ◽  
Vol 24 (7) ◽  
pp. 1096-1096 ◽  
Author(s):  
Menno Mostert ◽  
Annelien L Bredenoord ◽  
Monique CIH Biesaart ◽  
Johannes JM van Delden

Big Data in medical research and EU data protection law: challenges to the consent or anonymise approach

2015 ◽  
Vol 24 (7) ◽  
pp. 956-960 ◽  
Author(s):  
Menno Mostert ◽  
Annelien L Bredenoord ◽  
Monique CIH Biesaart ◽  
Johannes JM van Delden


Author(s):  
G. T. Laurie ◽  
S. H. E. Harmon ◽  
E. S. Dove

This chapter discusses ethical and legal aspects of medical confidentiality. It covers the relationship between confidentiality and data protection law; the possible exceptions to the confidentiality rule; confidentiality and the legal process; confidentiality for the purposes of medical research; patient access to medical records; remedies for breach of confidentiality; and confidentiality and death.



2015 ◽  
Vol 21 (10) ◽  
pp. 3346-3350 ◽  
Author(s):  
Sonny Zulhuda ◽  
Ida Madieha Abdul Ghani Azmi ◽  
Nashrul Hakiem


Author(s):  
Sandra Wachter ◽  
Brent Mittelstadt

Big Data analytics and artificial intelligence (AI) draw non-intuitive and unverifiable inferences and predictions about the behaviors, preferences, and private lives of individuals. These inferences draw on highly diverse and feature-rich data of unpredictable value, and create new opportunities for discriminatory, biased, and invasive decision-making. Concerns about algorithmic accountability are often actually concerns about the way in which these technologies draw privacy-invasive and non-verifiable inferences about us that we cannot predict, understand, or refute.

Data protection law is meant to protect people’s privacy, identity, reputation, and autonomy, but it is currently failing to protect data subjects from the novel risks of inferential analytics. The broad concept of personal data in Europe could be interpreted to include inferences, predictions, and assumptions that refer to or impact on an individual. If inferences are treated as personal data, individuals are granted numerous rights over them under data protection law. However, the legal status of inferences is heavily disputed in legal scholarship, and marked by inconsistencies and contradictions within and between the views of the Article 29 Working Party and the European Court of Justice.

As we show in this paper, individuals are granted little control and oversight over how their personal data is used to draw inferences about them. Compared to other types of personal data, inferences are effectively ‘economy class’ personal data under the General Data Protection Regulation (GDPR). Data subjects’ rights to know about (Arts 13-15), rectify (Art 16), delete (Art 17), object to (Art 21), or port (Art 20) personal data are significantly curtailed when it comes to inferences, often requiring a greater balance with the controller’s interests (e.g. trade secrets and intellectual property) than would otherwise be the case. Similarly, the GDPR provides insufficient protection against sensitive inferences (Art 9) and insufficient remedies to challenge inferences or important decisions based on them (Art 22(3)).

This situation is not accidental. In standing jurisprudence, the European Court of Justice (ECJ; Bavarian Lager, YS and M and S, and Nowak) and the Advocate General (AG; YS and M and S and Nowak) have consistently restricted the remit of data protection law to assessing the legitimacy of the input personal data undergoing processing, and to rectifying, blocking, or erasing it. Critically, the ECJ has likewise made clear that data protection law is not intended to ensure the accuracy of decisions and decision-making processes involving personal data, or to make these processes fully transparent.

Conflict looms on the horizon in Europe that will further weaken the protection afforded to data subjects against inferences. Current policy proposals addressing privacy protection (the ePrivacy Regulation and the EU Digital Content Directive) fail to close the GDPR’s accountability gaps concerning inferences. At the same time, the GDPR and Europe’s new Copyright Directive aim to facilitate data mining, knowledge discovery, and Big Data analytics by limiting data subjects’ rights over personal data. Lastly, the new Trade Secrets Directive provides extensive protection for the commercial interests attached to the outputs of these processes (e.g. models, algorithms, and inferences).

In this paper we argue that a new data protection right, the ‘right to reasonable inferences’, is needed to help close the accountability gap currently posed by ‘high risk inferences’, meaning inferences that are privacy-invasive or reputation-damaging and have low verifiability in the sense of being predictive or opinion-based. In cases where algorithms draw high risk inferences about individuals, this right would require the data controller to provide an ex-ante justification establishing whether an inference is reasonable. This disclosure would address (1) why certain data form a relevant basis for drawing inferences; (2) why these inferences are relevant for the chosen processing purpose or type of automated decision; and (3) whether the data and methods used to draw the inferences are accurate and statistically reliable. The ex-ante justification is bolstered by an additional ex-post mechanism enabling unreasonable inferences to be challenged. A right to reasonable inferences must, however, be reconciled with EU jurisprudence and counterbalanced against IP and trade secrets law, as well as freedom of expression and Article 16 of the EU Charter of Fundamental Rights: the freedom to conduct a business.



2019 ◽  
Vol 20 (1) ◽  
pp. 257-290 ◽  
Author(s):  
Michael Birnhack

Data protection law has a linear logic, in that it purports to trace the lifecycle of personal data from creation to collection, processing, transfer, and ultimately its demise, and to regulate each step so as to promote the data subject’s control thereof. Big data defies this linear logic, in that it decontextualizes data from its original environment and conducts an algorithmic, nonlinear mix, match, and mine analysis. Applying data protection law to the processing of big data does not work well, to say the least.

This Article examines the case of big medical data. A survey of emerging research practices indicates that studies either ignore data protection law altogether or assume an ex post position, namely that because they are conducted after the data has already been created in the course of providing medical care, and because they use de-identified data, they go under the radar of data protection law. These studies focus on the end point of the lifecycle of big data: if the data is sufficiently anonymous at publication, the previous steps are overlooked, on the claim that they enjoy immunity.

I argue that this answer is too crude. To portray data protection law in its best light, we should view it as a process-based attempt to equip data subjects with some power to control personal data about them, in all phases of data processing. Such control reflects the underlying justification of data protection law as an implementation of human dignity. The process-based approach fits current legal practices and is justified by its reflection of dignitarian conceptions of informational privacy.


