Dynamic Privacy-Preserving Recommendations on Academic Graph Data

Computers ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 107
Author(s):  
Erasmo Purificato ◽  
Sabine Wehnert ◽  
Ernesto William De Luca

In the age of digital information, where the internet, social networks and personalised systems have become an integral part of everyone’s life, it is often challenging to be aware of the amount of data produced daily and, unfortunately, of the potential risks caused by the indiscriminate sharing of personal data. Recently, attention to privacy has grown thanks to the introduction of specific regulations such as the European GDPR. In some fields, including recommender systems, this has inevitably led to a decrease in the amount of usable data and, occasionally, to a significant degradation in performance, mainly because information can no longer be attributed to specific individuals. In this article, we present a dynamic privacy-preserving approach for recommendations in an academic context. We aim to implement a personalised system capable of protecting personal data while at the same time allowing sensible and meaningful use of the available data. The proposed approach introduces several pseudonymisation procedures based on the design goals described by the European Union Agency for Cybersecurity in their guidelines, in order to dynamically transform entities (e.g., persons) and attributes (e.g., authored papers and research interests) in such a way that any user processing the data is not able to identify individuals. We present a case study using data from researchers of the Georg Eckert Institute for International Textbook Research (Brunswick, Germany). Building a knowledge graph and exploiting a Neo4j database for data management, we first generate several pseudoN-graphs, i.e., graphs with different rates of pseudonymised persons. Then, we evaluate our approach by leveraging the graph embedding algorithm node2vec to produce recommendations through node relatedness.
The recommendations provided by the graphs in the different privacy-preserving scenarios are compared with those provided by the fully non-pseudonymised graph, which serves as the baseline of our evaluation. The experimental results show that, despite the modifications to the knowledge graph structure caused by the de-identification processes, the proposed approach preserves significant performance values in terms of precision.
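The pipeline described above, pseudonymising a fraction of the person nodes and then recommending via node relatedness, can be sketched in a few lines. Everything below is an illustrative stand-in: the toy graph, the keyed-hash pseudonymisation, and the Jaccard overlap used in place of node2vec embedding similarity are all invented for this sketch and do not reproduce the article's Neo4j knowledge graph or its actual procedures.

```python
import hashlib
import random

def pseudonymise(name, salt="demo-salt"):
    """Keyed hash so that users processing the data cannot identify the person."""
    return "person_" + hashlib.sha256((salt + name).encode()).hexdigest()[:10]

# Toy academic knowledge graph: person -> set of authored papers / interests.
graph = {
    "Alice": {"paper_1", "interest_nlp"},
    "Bob":   {"paper_1", "paper_2", "interest_graphs"},
    "Carol": {"paper_2", "interest_nlp"},
}

def pseudo_n_graph(g, rate, seed=0):
    """Build a pseudoN-graph: pseudonymise a given fraction of person nodes."""
    rng = random.Random(seed)
    chosen = set(rng.sample(sorted(g), int(rate * len(g))))
    return {(pseudonymise(p) if p in chosen else p): attrs
            for p, attrs in g.items()}

def jaccard(a, b):
    """Node relatedness via attribute overlap (stand-in for embedding similarity)."""
    return len(a & b) / len(a | b)

def recommend(g, node):
    """Recommend the most related other person node."""
    return max((jaccard(g[node], g[o]), o) for o in g if o != node)[1]

pg = pseudo_n_graph(graph, rate=0.5)  # half of the persons pseudonymised
```

The evaluation idea in the abstract then amounts to comparing `recommend` outputs on `pg` (at various rates) against those on the untouched `graph` as a baseline.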

2020 ◽  
pp. 29-39
Author(s):  
Ineta Breskienė

This article analyses the current situation in the European Union regarding the free movement of data, the relationship between personal and non-personal data, and their use in artificial intelligence technology. Despite the European Union’s efforts to facilitate the free movement of data, some relevant obstacles are currently being observed. Artificial intelligence technology faces difficulties in using data. Although large amounts of data are now increasingly accessible to such technology, its ability to de-anonymize data risks turning simple data into personal data and makes its use a challenge for artificial intelligence developers. The issues raised are sensitive, and some regulatory changes should be made in the near future for the European Union to remain a leader in emerging technologies.


2021 ◽  
Vol 13 (3) ◽  
pp. 66
Author(s):  
Dimitra Georgiou ◽  
Costas Lambrinoudakis

The General Data Protection Regulation (GDPR) harmonizes personal data protection laws across the European Union, affecting all sectors including the healthcare industry. For processing operations that pose a high risk to data subjects, a Data Protection Impact Assessment (DPIA) has been mandatory since May 2018. Taking into account the criticality of the process and the importance of its results for the protection of patients’ health data, as well as the complexity involved and the lack of past experience in applying such methodologies in healthcare environments, this paper presents the main steps of a DPIA study and provides guidelines on how to carry them out effectively. To this end, the Privacy Impact Assessment, Commission Nationale de l’Informatique et des Libertés (PIA-CNIL) methodology has been employed, which is also compliant with the privacy impact assessment tasks described in ISO/IEC 29134:2017. The work presented in this paper focuses on the first two steps of the DPIA methodology, and more specifically on the identification of the Purposes of Processing and of the data categories involved in each of them, as well as on the evaluation of the organization’s GDPR compliance level and of the gaps (Gap Analysis) that must be filled in. The main contribution of this work is the identification of the main organizational and legal requirements that must be fulfilled by the healthcare organization. This research sets the legal grounds for data processing according to the GDPR and is highly relevant to any processing of personal data, as it helps to structure the process and raises awareness of data protection issues and the relevant legislation.


2018 ◽  
Vol 26 (2) ◽  
pp. 1-26 ◽  
Author(s):  
Frederico Cruz-Jesus ◽  
Tiago Oliveira ◽  
Fernando Bacao

This article presents an analysis of the global digital divide based on data collected from 45 countries, including those belonging to the European Union and the OECD, as well as Brazil, Russia, India, and China (BRIC). The analysis shows that a single factor can explain a large part of the variation in the seven ICT variables used to measure the digital development of countries. This measure is then combined with additional variables, hypothesised as drivers of the divide, in a regression analysis using data from 2015, 2013, and 2011, which reveals economic and educational imbalances between countries, along with some aspects of geography, as drivers of the digital divide. Contrary to the authors' expectations, the English language is not a driver.
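The one-factor extraction step described above can be illustrated with a small sketch: synthetic data stand in for the seven ICT indicators across 45 countries, the variables are standardised, and the leading principal component (found by power iteration on the covariance matrix) serves as the composite digital-development index, ready to be used as the dependent variable in a regression on driver variables. The data, dimensions, and latent structure here are all invented for illustration; the article's actual dataset and estimation details are not reproduced.

```python
import math
import random

# Synthetic stand-in: 45 countries, seven ICT indicators driven largely by
# one hypothetical latent "digital development" factor plus noise.
random.seed(1)
N, K = 45, 7
latent = [random.gauss(0, 1) for _ in range(N)]
X = [[latent[i] + random.gauss(0, 0.3) for _ in range(K)] for i in range(N)]

def standardise(M):
    """Mean-centre and scale each column to unit variance."""
    cols = []
    for j in range(len(M[0])):
        col = [row[j] for row in M]
        mu = sum(col) / len(col)
        sd = math.sqrt(sum((x - mu) ** 2 for x in col) / len(col))
        cols.append([(x - mu) / sd for x in col])
    return [list(r) for r in zip(*cols)]  # back to rows = countries

def first_principal_component(M, iters=200):
    """Power iteration on the covariance matrix -> leading eigenvector."""
    k = len(M[0])
    cov = [[sum(row[a] * row[b] for row in M) / len(M) for b in range(k)]
           for a in range(k)]
    v = [1.0] * k
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(k)) for a in range(k)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

def pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a) *
                           sum((y - mb) ** 2 for y in b))

Z = standardise(X)
v = first_principal_component(Z)
scores = [sum(z * w for z, w in zip(row, v)) for row in Z]  # composite index
```

On data with a strong one-factor structure like this, `scores` correlates almost perfectly with the latent factor, which is why a single component can summarise the seven indicators.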


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 115717-115727
Author(s):  
Bin Yu ◽  
Chenyu Zhou ◽  
Chen Zhang ◽  
Guodong Wang ◽  
Yiming Fan

Hypertension ◽  
2021 ◽  
Vol 77 (4) ◽  
pp. 1029-1035
Author(s):  
Antonia Vlahou ◽  
Dara Hallinan ◽  
Rolf Apweiler ◽  
Angel Argiles ◽  
Joachim Beige ◽  
...  

The General Data Protection Regulation (GDPR) became binding law in the European Union Member States in 2018, as a step toward harmonizing personal data protection legislation in the European Union. The Regulation governs almost all types of personal data processing, hence also those pertaining to biomedical research. The purpose of this article is to highlight the main practical issues related to data and biological sample sharing that biomedical researchers face regularly, and to specify how these are addressed in the context of the GDPR, after consulting with ethics/legal experts. We identify areas in which clarifications of the GDPR are needed, particularly those related to consent requirements by study participants. Amendments should target the following: (1) restricting exceptions based on national laws and increasing harmonization, (2) confirming the concept of broad consent, and (3) defining a roadmap for secondary use of data. These changes will be achieved by acknowledged learned societies in the field taking the lead in preparing a document giving guidance for the optimal interpretation of the GDPR, which will be finalized following a period of commenting by a broad multistakeholder audience. In parallel, promoting engagement and education of the public in the relevant issues (such as different consent types or residual risk for re-identification), on both local/national and international levels, is considered critical for advancement. We hope that this article will open this broad discussion involving all major stakeholders, toward optimizing the GDPR and allowing a harmonized transnational research approach.

