OP117 Digital Real-World Evidence In Times Of General Data Protection Regulation

Author(s): Rhodri Saunders, Rafael Torrejon Torres, Maximilian Blüher

Introduction: Real-world evidence (RWE) is a useful supplement to a product's evidence base, especially for medical devices, which are often unsuitable for randomized controlled trials. Generally, RWE is analyzed retrospectively (for example, from healthcare records), and such sources often lack the granularity needed for health-economic analysis. Prospective collection of RWE in hospitals can enable device-specific endpoint assessment. The advent of the General Data Protection Regulation (GDPR) requires a privacy-by-design approach. This work describes a workflow for GDPR-compliant, device-specific RWE collection as part of quality improvement initiatives (QIIs).

Methods: A literature review identifies relevant clinical and quality markers as endpoints for the investigated technology. A panel of experts grades these endpoints on their clinical significance, privacy sensitivity, analytic value, and feasibility of collection. Endpoints meeting a predefined cut-off are adopted as quality markers for the QII. Finally, an RWE data collection app is designed to collect the quality markers using either longitudinal, pseudonymized data or single time-point anonymized data, ensuring data protection by design.

Results: Using this approach, relevant clinical markers were identified in a GDPR-compliant manner. The data collection app design ensured that patient data were protected while maintaining minimum requirements on patient information and consent. The pilot QII collected data on over 5,000 procedures, the largest single data set available for the tested technology. Due to its prospective nature, this programme was the first to collect patient outcomes in sufficient quantity for analysis; previous studies recorded only adverse events.

Conclusions: GDPR and RWE can co-exist in harmony. A design approach that has data protection in mind from the start can combine high-quality RWE collection of efficacy and safety data with maximum patient privacy.
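The methods above distinguish longitudinal, pseudonymized collection from single time-point, anonymized collection. A minimal Python sketch of that distinction follows; the function names, fields, and dummy record are illustrative assumptions, not the authors' implementation:

```python
import hashlib
import secrets

def pseudonymize(patient_id: str, secret_salt: str) -> str:
    """Replace a direct identifier with a keyed hash. Re-identification
    remains possible, but only for whoever holds the salt (pseudonymization
    in the GDPR sense), which supports longitudinal follow-up."""
    return hashlib.sha256((secret_salt + patient_id).encode()).hexdigest()[:16]

def anonymize(record: dict) -> dict:
    """Single time-point collection: drop direct identifiers entirely,
    so the record can no longer be linked back to the patient."""
    return {k: v for k, v in record.items() if k not in {"patient_id", "name", "dob"}}

# Dummy record for illustration only.
record = {
    "patient_id": "P-1234",
    "name": "Jane Doe",
    "dob": "1980-01-01",
    "procedure": "catheter insertion",
    "outcome": "no adverse event",
}

site_secret = secrets.token_hex(16)  # held only by the data controller
longitudinal_record = {**anonymize(record),
                       "pseudo_id": pseudonymize(record["patient_id"], site_secret)}
single_timepoint_record = anonymize(record)
```

Either path strips direct identifiers; only the longitudinal path keeps a linkage key, and that key is useless without the controller-held salt.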

2018, Vol 39 (4), pp. 543-564
Author(s): Ece Özlem Atikcan, Adam William Chalmers

Abstract: Despite the impressive amount of empirical research on lobbying, a fundamental question remains overlooked. How do interest groups choose to lobby different sides of an issue? We argue that how groups choose sides is a function of firm-level economic activity. By studying a highly salient regulatory issue, the European Union’s General Data Protection Regulation (GDPR), and using a novel data set of lobbying activities, we reveal that a group’s main economic sector matters most. Firms operating in finance and retail face unique costs and are incentivised to lobby against the GDPR. However, these groups are outgunned by a large, heterogeneous group of firms with superior lobbying firepower on the other side of the issue.


2020, Vol 2 (1-2), pp. 47-55
Author(s): Annalisa Landi, Mark Thompson, Viviana Giannuzzi, Fedele Bonifazi, Ignasi Labastida, et al.

In order to provide responsible access to health data by reconciling the benefits of data sharing with privacy rights and ethical and regulatory requirements, Findable, Accessible, Interoperable and Reusable (FAIR) metadata should be developed. According to the H2020 Program Guidelines on FAIR Data, data should be “as open as possible and as closed as necessary”: “open” in order to foster reusability and to accelerate research, but at the same time “closed” to safeguard the privacy of the subjects. Additional provisions on the protection of natural persons with regard to the processing of personal data have been endorsed by the European General Data Protection Regulation (GDPR), Reg (EU) 2016/679, which has applied since May 2018. This work aims to solve accessibility problems related to the protection of personal data in the digital era and to achieve responsible access to, and responsible use of, health data. We strongly suggest associating each data set with FAIR metadata describing both the type of data collected and the accessibility conditions, taking into account data protection obligations and ethical and regulatory requirements. Finally, an existing FAIR infrastructure component is used as an example to explain how FAIR metadata could facilitate data sharing while ensuring protection of individuals.
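The suggestion to pair each data set with FAIR metadata describing both the data and their accessibility conditions can be sketched as a simple machine-readable record. All field names and values below are illustrative assumptions, not a formal FAIR or DCAT schema:

```python
# The metadata record stays open (Findable, Accessible) even though the
# data themselves stay "as closed as necessary". Identifier is a dummy.
fair_metadata = {
    "identifier": "doi:10.0000/example",   # Findable: persistent identifier (dummy)
    "title": "Paediatric health data set",
    "data_types": ["diagnosis codes", "laboratory results"],
    "access_conditions": {                 # Accessible: conditions are machine-readable
        "personal_data": True,
        "legal_basis": "GDPR Art. 9(2)(j), scientific research",
        "procedure": "request via data access committee",
    },
    "format": "CSV",                       # Interoperable: common, documented format
    "licence": "restricted; reuse subject to approval",  # Reusable: terms stated up front
}
```

A researcher (or a machine agent) can inspect `access_conditions` to learn whether and how the underlying data may be requested, without the data themselves ever being exposed.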


2019, Vol 34 (s1), pp. s138-s138
Author(s): Annelies Scholliers, Dimitri De Fré, Inge D’haese, Stefan Gogaert

Introduction: As of May 2018, a new European privacy law, the General Data Protection Regulation (GDPR), is in force. Under this law, every organization operating in the European Union (EU) must adhere to a strict set of rules concerning the collection and processing of personal data.

Aim: To explore the consequences of the GDPR for data collection at mass gatherings in the European Union.

Methods: Since the law was published on April 27, 2016, a thorough reading of the law was conducted by four persons with a background in mass-gathering health. The GDPR consists of 99 articles organized into 11 chapters, with 173 recitals to further explain certain ambiguities. Key articles and recitals relating to healthcare and scientific research were identified, and possible pitfalls and opportunities for data collection and processing at mass gatherings were noted.

Discussion: Article 4 sets out key definitions, including a clear definition of “data concerning health”. Under the GDPR, health data is a special category of personal data whose processing is prohibited by Article 9(1). However, there is an exception for scientific research (Article 9(2)(j)), subject to the safeguards laid out in Article 89. One interesting point is that, according to Article 89(2), certain derogations are possible where data-subject rights would otherwise seriously impair scientific research. The GDPR has major consequences for data collection and processing in the EU. However, with the use of certain safeguards (e.g., pseudonymization), there are still ample opportunities for scientific research. It is important to review one’s method of data collection to make sure it complies with the GDPR.
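Article 89-style safeguards such as pseudonymization and data minimisation might look like the following sketch when applied to an on-site patient encounter record; the field names and the 10-year age banding are assumptions for illustration, not anything prescribed by the Regulation:

```python
# Drop direct identifiers and coarsen quasi-identifiers before the record
# leaves the treatment post for research use.
def apply_safeguards(record: dict, event_year: int) -> dict:
    out = {k: v for k, v in record.items() if k not in {"name", "national_id"}}
    birth_year = out.pop("birth_year", None)
    if birth_year is not None:
        decade = ((event_year - birth_year) // 10) * 10
        out["age_band"] = f"{decade}-{decade + 9}"   # e.g. age 23 -> "20-29"
    return out

# Dummy encounter record for illustration only.
patient = {"name": "A. Attendee", "national_id": "XX-0000",
           "birth_year": 1995, "complaint": "dehydration"}
safe = apply_safeguards(patient, 2018)
```

The clinically relevant fields survive for analysis while the fields that could single out an attendee are removed or coarsened.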


2019
Author(s): Peter Kieseberg, Lukas Daniel Klausner, Andreas Holzinger

In discussions of the General Data Protection Regulation (GDPR), anonymisation and deletion are frequently mentioned as suitable technical and organisational measures (TOMs) for privacy protection. The major problem of distortion in machine-learning environments, as well as related privacy issues, is rarely mentioned. The Big Data Analytics project addresses these issues.
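The distortion problem noted above can be illustrated with a toy example (not drawn from the project): generalising a quasi-identifier for anonymisation shifts the statistics a model would learn from the data.

```python
# Replace exact ages with 10-year age bands and compare the mean age a
# model would estimate from the anonymised data against the true mean.
ages = [21, 22, 23, 58, 59, 60]
true_mean = sum(ages) / len(ages)              # 40.5

def band_midpoint(age: int) -> float:
    lo = (age // 10) * 10                      # 21 -> band 20-29
    return lo + 4.5                            # midpoint of the 10-year band

anonymised_mean = sum(band_midpoint(a) for a in ages) / len(ages)
distortion = abs(anonymised_mean - true_mean)  # non-zero: information was lost
```

Deletion behaves similarly: removing records changes the distribution the learner sees, so the privacy measure itself becomes a source of model error.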


2021, pp. 414-414
Author(s): Eleonora Rosati

This chapter looks at the provisions of Article 28 of Directive 2019/790, the EU directive on copyright in the Digital Single Market. It explains that processing of personal data carried out within the framework of Directive 2019/790 must comply with Directive 2002/58/EC and Regulation (EU) 2016/679, and must respect fundamental rights, including the rights to private and family life and to the protection of personal data set out in Articles 7 and 8 of the EU Charter of Fundamental Rights. The chapter then discusses the protection of personal data pursuant to the ePrivacy Directive 2002/58 and the General Data Protection Regulation 2016/679.


2017, Vol 34 (2), pp. 96-100
Author(s): Keith Dewar

Data is highly valuable to an organisation, and in a data-driven economy a company’s ability to analyse and extract insights from data, finding patterns, trends, and new revenue streams, dictates strategy and competitive advantage. Running parallel to this is growing resistance from individuals towards the organisations that hold their data. Mistrust of organisations is increasing, partly because many have displayed a severe lack of transparency about how they use and store individuals’ data; a few have even misused individuals’ data to the point where all trust is destroyed. This has led a growing number of individuals to reduce the information they share and, in some cases, to block its use entirely. In May 2018, the EU General Data Protection Regulation (GDPR) will require all organisations to become compliant with new regulations on data collection and use. This article examines the commoditised nature of data and how organisations need to build greater trust and consent into their data protection policies.

