Big Tech platforms in health research: Re-purposing big data governance in light of the General Data Protection Regulation’s research exemption

2021 ◽  
Vol 8 (1) ◽  
pp. 205395172110187
Author(s):  
Luca Marelli ◽  
Giuseppe Testa ◽  
Ine van Hoyweghen

The emergence of a global industry of digital health platforms operated by Big Tech corporations, and its growing entanglements with academic and pharmaceutical research networks, raise pressing questions about the capacity of current data governance models and regulatory and legal frameworks to safeguard the sustainability of the health research ecosystem. In this article, we direct our attention toward the challenges faced by the European General Data Protection Regulation in regulating the potentially disruptive engagement of Big Tech platforms in health research. The General Data Protection Regulation upholds a rather flexible regime for scientific research through a number of derogations from otherwise stricter data protection requirements, while providing a very broad interpretation of the notion of “scientific research”. It is precisely the breadth of these exemptions, combined with the ample scope of this notion, that could provide unintended leeway to the health data processing activities of Big Tech platforms, which have not been immune from carrying out privacy-infringing and socially disruptive practices in the health domain. We thus discuss finer-grained demarcations to be traced within the broadly construed notion of scientific research, geared toward implementing use-based data governance frameworks that distinguish health research activities that should benefit from a facilitated data protection regime from those that should not. We conclude that a “re-purposing” of big data governance approaches in health research is needed if European nations are to promote research activities within a framework of high safeguards for both individual citizens and society.

2019 ◽  
Vol 6 (2) ◽  
pp. 205395171986259 ◽  
Author(s):  
Johannes Starkbaum ◽  
Ulrike Felt

Before the EU General Data Protection Regulation entered into force in May 2018, we witnessed an intense struggle by actors associated with data-dependent fields of science, in particular health-related academia and biobanks, striving for legal derogations for data reuse in research. These actors engaged in a similar line of argument and formed issue alliances to pool their collective power. Using descriptive coding followed by an interpretive analysis, this article investigates the argumentative repertoire of these actors and embeds the analysis in ethical debates on data sharing and biobank-related data governance. We observe efforts to perform a paradigmatic shift of the discourse around General Data Protection Regulation implementation away from ‘protecting data’ as the key concern toward ‘protecting health’ of individuals and societies at large. Instead of data protection, the key risks stressed by health researchers became potential obstacles to research. In line with this, the exchange of information with data subjects is not a key concern in the arguments of biobank-related actors, and it is assumed that patients want ‘their’ data to be used. We interpret these narratives as a reaction to potential restrictions on data reuse and as part of a broader trend towards Big Data science, since the very idea of biobanking is conceptualized around long-term use of readily prepared data. We conclude that a sustainable implementation of biobanks must not only comply with the General Data Protection Regulation, but also proactively re-imagine its relation to citizens and data subjects in order to account for the various ways that science gets entangled with society.


10.2196/19279 ◽  
2020 ◽  
Vol 6 (2) ◽  
pp. e19279 ◽  
Author(s):  
Stuart McLennan ◽  
Leo Anthony Celi ◽  
Alena Buyx

The coronavirus disease (COVID-19) pandemic is very much a global health issue and requires collaborative, international health research efforts to address it. A valuable source of information for researchers is the large amount of digital health data that are continuously collected by electronic health record systems at health care organizations. The European Union’s General Data Protection Regulation (GDPR) will be the key legal framework with regard to using and sharing European digital health data for research purposes. However, concerns persist that the GDPR has made many organizations very risk-averse in terms of data sharing, even if the regulation permits such sharing. Health care organizations focusing on individual risk minimization threaten to undermine COVID-19 research efforts. In our opinion, there is an ethical obligation to use the research exemption clause of the GDPR during the COVID-19 pandemic to support global collaborative health research efforts. Solidarity is a European value, and here is a chance to exemplify it by using the GDPR regulatory framework in a way that does not hinder but actually fosters solidarity during the COVID-19 pandemic.


2021 ◽  
Vol 37 (S1) ◽  
pp. 10-11
Author(s):  
Amanda Cole ◽  
Adrian Towse

Introduction: The expansion of health data offers exciting opportunities to support better and more efficient drug discovery, development and implementation. Data protection and governance provide the legal framework to balance safeguarding patients’ privacy with the benefits to society of medical research. Our aim is to highlight current legal barriers to the better use of health data and propose ways to address them.
Methods: Analysis of the relevant legislative texts was supplemented by interviews with external experts in data protection, health research, informatics and cyber security, and a workshop with pharmaceutical industry members. We investigated the legal issues arising for six key activities along the pharmaceutical lifecycle, from identifying unmet need through to health technology assessment and pharmacovigilance.
Results: The General Data Protection Regulation (GDPR) was introduced in May 2018 to harmonise data protection across Europe. However, considerable ambiguity remains, particularly around the appropriate legal bases for data processing in the absence of consent: scientific research, public interest, or provision of health or social care. Other key themes included data subject rights, anonymization, compatibility of primary and secondary (re-)use of data, heterogeneity arising from divergent interpretation, the need for guidance on digital health, and the importance of trust.
Conclusions: We speculate on which legal bases are most appropriate for the six pharmaceutical activities studied, but clear guidance and consensus are required. The GDPR was not designed to hamper scientific research, and the issues identified arose from uncertainties rather than barriers per se. Industry and academic researchers should therefore deal proactively with the prevailing uncertainties, share good practice, and engender trust by co-creating a code of conduct and outlining principles of responsible use. Engagement with patients will be critical in encouraging a shared understanding of the value to society of health data for research.


2021 ◽  
Vol 12 ◽  
Author(s):  
Michael J. S. Beauvais ◽  
Bartha Maria Knoppers

The COVID-19 pandemic has underscored the need for new ways of thinking about data protection. This is especially so in the case of health research with children. The responsible use of children’s data plays a key role in promoting children’s well-being and securing their right to health and to privacy. In this article, we contend that a contextual approach that appropriately balances children’s legal and moral rights and interests is needed when thinking about data protection issues with children. We examine three issues in health research through a child-focused lens: consent to data processing, data retention, and data protection impact assessments. We show that these issues present distinctive concerns for children and that the General Data Protection Regulation provides few bright-line rules. We contend that there is an opportunity for creative approaches to children’s data protection when child-specific principles, such as the best interests of the child and the child’s right to be heard, are put into dialogue with the structure and logic of data protection law.


2020 ◽  
Vol 48 (S1) ◽  
pp. 187-195
Author(s):  
Edward S. Dove ◽  
Jiahong Chen

In this article, we consider the possible application of the European General Data Protection Regulation (GDPR) to “citizen scientist”-led health research with mobile devices. We argue that the GDPR likely does cover this activity, depending on the specific context and the territorial scope. Remaining open questions that result from our analysis lead us to call for lex specialis that would provide greater clarity and certainty regarding the processing of health data for research purposes, including by these non-traditional researchers.


2020 ◽  
Vol 27 (3) ◽  
pp. 195-212
Author(s):  
Jean Herveg ◽  
Annagrazia Altavilla

Abstract This article aims to open discussions and promote future research on key elements that should be taken into account when considering new ways to organise access to personal data for scientific research, with a view to developing innovative medicines. It provides an overview of these key elements: the different ways of accessing data, the theory of essential facilities, the Regulation on the Free Flow of Non-personal Data, the Directive on Open Data and the re-use of public sector information, and the General Data Protection Regulation (GDPR) rules on accessing personal data for scientific research. With the aim of fostering research, promoting innovative medicines, and centralising raw data in large databases located in Europe, we suggest further investigating the possibility of finding acceptable and balanced solutions that fully respect fundamental rights, including private life and data protection.


2020 ◽  
Vol 19 (3) ◽  
pp. 705-725
Author(s):  
Marcelo Guerra Martins ◽  
Leonardo Felipe de Melo Ribeiro Gomes Jorgetto ◽  
Alessandra Cristina Arantes Sutti

This article analyzes the normative framework protecting the constitutional right to privacy and its possible violation through the use of Big Data, describing the characteristics of that technology. The topic is examined on the basis of European law (the General Data Protection Regulation, adopted by the European Parliament in April 2016) and Brazilian law, with excursions into the Marco Civil da Internet (Law 12,965/2014) and the recent Lei Geral de Proteção de Dados (Law 13,709/2018), which, among other measures, created the Autoridade Nacional de Proteção de Dados, imposed various obligations on companies that collect personal data in virtual environments, established rights for the holders of such data, and implemented a system of administrative and judicial liability specific to the area.


2020 ◽  
Vol 12 (1) ◽  
pp. 225-245
Author(s):  
Célia Zolynski

Objective: The article contrasts the problem of Big Data with the possibilities and limits of personal data protection. It is an original contribution to the academic discussion about the regulation of the Internet and the management of algorithms, focusing on Big Data. Methodology/approach/design: The article provides bibliographic research on the opposition between Big Data and personal data protection, focusing on European Union law and French law. From this research it is possible to identify regulatory alternatives to Big Data, whether of a legal-administrative or a technological nature. Findings: The article shows that, in addition to the traditional regulatory options based on the law, there are technological options for regulating Big Data and algorithms. The article analyzes administrative enforcement, such as that of France’s CNIL (Commission nationale de l’informatique et des libertés), to show that it has limits. Thus, the article concludes that there is a need to build a new type of regulation, one that is open to the inputs of regulated parties and civil society, in the form of new co-regulatory arrangements. Practical implications: The article has clear practical application, since producing legal solutions for Internet regulation requires combining them with technological solutions. Brazil and several Latin American countries are experiencing this agenda, as they are building institutions and solutions to address the dilemma of personal data protection. Originality/value: The article clarifies several parts of the General Data Protection Regulation (EU Regulation 2016/679) and its applicability to Big Data. These new types of data processing impose several legal and regulatory challenges, whose solutions cannot be trivial and will rely on new theories and practices.


2020 ◽  
Vol 89 (4) ◽  
pp. 55-72
Author(s):  
Nermin Varmaz

Summary: This article addresses the compliance of the use of Big Data and Artificial Intelligence (AI) by FinTechs with European data protection principles. FinTechs are increasingly replacing traditional credit institutions and are becoming more important in the provision of financial services, especially by using AI and Big Data. The ability to analyze a large amount of different personal data at high speed can provide insights into customer spending patterns, enable a better understanding of customers, or help predict investments and market changes. However, once personal data is involved, a collision with all basic data protection principles stipulated in the European General Data Protection Regulation (GDPR) arises, mostly due to the fact that Big Data and AI meet their overall objectives by processing vast data that lies beyond their initial processing purposes. The author shows that, in this relationship, pseudonymization can prove to be a privacy-compliant and thus preferable alternative for the use of AI and Big Data while still enabling FinTechs to identify customer needs.
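The pseudonymization approach discussed in the abstract above can be illustrated with a minimal sketch: direct identifiers are replaced by stable keyed pseudonyms, so records remain linkable for analysis (e.g., per-customer spending patterns) while re-identification requires a separately held key. This is not the article's method; the key, field names, and sample data are hypothetical, and keyed hashing (HMAC) is just one common way to generate pseudonyms.

```python
# Illustrative sketch only: keyed pseudonymization of customer identifiers.
# The key and data below are hypothetical, not taken from the article.
import hashlib
import hmac

# In practice the key would be stored separately under strict access control;
# holding it is what makes this pseudonymization rather than anonymization.
SECRET_KEY = b"hypothetical-key-held-by-the-controller"

def pseudonymize(customer_id: str) -> str:
    """Replace a direct identifier with a stable keyed pseudonym."""
    return hmac.new(SECRET_KEY, customer_id.encode(), hashlib.sha256).hexdigest()

transactions = [
    {"customer_id": "DE-1001", "amount": 49.90},
    {"customer_id": "DE-1001", "amount": 12.50},
    {"customer_id": "DE-2002", "amount": 230.00},
]

# Analysts see only pseudonyms, yet can still aggregate per customer.
totals: dict[str, float] = {}
for t in transactions:
    pseudonym = pseudonymize(t["customer_id"])
    totals[pseudonym] = totals.get(pseudonym, 0.0) + t["amount"]

print(totals)
```

Because the same identifier always maps to the same pseudonym, spending patterns per customer remain analyzable; without the key, the pseudonyms cannot be traced back to the original identifiers.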

