Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns

2019 ◽  
Vol 6 (1) ◽  
pp. 205395171986054 ◽  
Author(s):  
Heike Felzmann ◽  
Eduard Fosch Villaronga ◽  
Christoph Lutz ◽  
Aurelia Tamò-Larrieux

Transparency is now a fundamental principle for data processing under the General Data Protection Regulation. We explore what this requirement entails for artificial intelligence and automated decision-making systems. We address the topic of transparency in artificial intelligence by integrating legal, social, and ethical aspects. We first investigate the ratio legis of the transparency requirement in the General Data Protection Regulation and its ethical underpinnings, showing its focus on the provision of information and explanation. We then discuss the pitfalls with respect to this requirement by focusing on the significance of contextual and performative factors in the implementation of transparency. We show that the human–computer interaction and human–robot interaction literature does not provide clear results with respect to the benefits of transparency for users of artificial intelligence technologies, owing to the impact of a wide range of contextual factors, including performative aspects. We conclude by integrating the information- and explanation-based approach to transparency with the critical contextual approach, proposing that transparency as required by the General Data Protection Regulation may in itself be insufficient to achieve the positive goals associated with transparency. Instead, we propose to understand transparency relationally, where information provision is conceptualized as communication between technology providers and users, and where assessments of trustworthiness based on contextual factors mediate the value of transparency communications. This relational concept of transparency points to future research directions for the study of transparency in artificial intelligence systems and should be taken into account in policymaking.

2020 ◽  
Vol 27 (3) ◽  
pp. 195-212
Author(s):  
Jean Herveg ◽  
Annagrazia Altavilla

Abstract: This article aims at opening discussions and promoting future research about key elements that should be taken into account when considering new ways to organise access to personal data for scientific research, with a view to developing innovative medicines. It provides an overview of these key elements: the different ways of accessing data, the theory of essential facilities, the Regulation on the Free Flow of Non-personal Data, the Directive on Open Data and the re-use of public sector information, and the General Data Protection Regulation (GDPR) rules on accessing personal data for scientific research. With a view to fostering research, promoting innovative medicines, and having all the raw data centralised in big databases localised in Europe, we suggest further investigating the possibility of finding acceptable and balanced solutions that fully respect fundamental rights, including private life and data protection.


2020 ◽  
Vol 89 (4) ◽  
pp. 55-72
Author(s):  
Nermin Varmaz

Summary: This article addresses the compliance of the use of Big Data and Artificial Intelligence (AI) by FinTechs with European data protection principles. FinTechs are increasingly replacing traditional credit institutions and are becoming more important in the provision of financial services, especially through their use of AI and Big Data. The ability to analyze a large amount of different personal data at high speed can provide insights into customer spending patterns, enable a better understanding of customers, or help predict investments and market changes. However, once personal data is involved, a collision arises with all of the basic data protection principles stipulated in the European General Data Protection Regulation (GDPR), mostly because Big Data and AI meet their overall objectives by processing vast amounts of data that lie beyond their initial processing purposes. The author shows that, in this context, pseudonymization can prove to be a privacy-compliant and thus preferable alternative for the use of AI and Big Data while still enabling FinTechs to identify customer needs.
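The pseudonymization approach advocated above replaces direct identifiers with stable tokens, so records belonging to the same customer can still be linked for analytics, but re-identification requires separately held information (Art. 4(5) GDPR). A minimal sketch using a keyed HMAC; the key, field names, and sample records are hypothetical, not from the article:

```python
import hmac
import hashlib

# Hypothetical secret key; under Art. 4(5) GDPR it must be stored
# separately from the pseudonymized data, with access controls.
SECRET_KEY = b"replace-with-a-separately-managed-secret"

def pseudonymize(customer_id: str) -> str:
    """Derive a stable pseudonym via keyed HMAC-SHA256.

    The same customer always maps to the same token, so spending
    patterns remain linkable across records, while reversing the
    mapping requires the separately stored key.
    """
    return hmac.new(SECRET_KEY, customer_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Hypothetical transaction records with a direct identifier.
records = [
    {"customer": "alice@example.com", "amount": 120.50},
    {"customer": "bob@example.com", "amount": 80.00},
    {"customer": "alice@example.com", "amount": 35.25},
]

# Strip the direct identifier before the data reaches analytics.
pseudonymized = [
    {"customer": pseudonymize(r["customer"]), "amount": r["amount"]}
    for r in records
]
```

Because the mapping is deterministic, both of Alice's transactions carry the same token and can still be aggregated into a spending profile without exposing her identity to the analytics pipeline.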


2019 ◽  
Vol 21 (5) ◽  
pp. 510-524 ◽  
Author(s):  
Nazar Poritskiy ◽  
Flávio Oliveira ◽  
Fernando Almeida

Purpose: The implementation of European data protection is a challenge for businesses and has imposed legal, technical and organizational changes on companies. This study aims to explore the benefits and challenges that companies operating in the information technology (IT) sector have experienced in applying European data protection. Additionally, this study aims to explore whether the benefits and challenges faced by these companies differed according to their size and the state of implementation of the regulation.

Design/methodology/approach: This study adopts a quantitative methodology, based on a survey conducted with Portuguese IT companies. The survey comprises 30 questions divided into three sections: control data; assessment; and benefits and challenges. The survey was created on Google Drive and distributed among Portuguese IT companies between March and April 2019. The data were analyzed in Stata using descriptive statistics and inferential techniques, including the one-way ANOVA test.

Findings: A total of 286 responses were received. The main benefits identified from the application of European data protection include increased confidence and legal clarification. On the other hand, the main challenges include the execution of audits of systems and processes and the application of the right to erasure. The findings allow us to conclude that the state of implementation of the general data protection regulation (GDPR) and the type of company are discriminating factors in the perception of benefits and challenges.

Research limitations/implications: This study has essentially practical implications. Based on the synthesis of the benefits and challenges posed by the adoption of European data protection, it is possible to assess the relative importance and impact of the benefits and challenges faced by companies in the IT sector. However, this study does not explore the challenges that arise at each stage of the adoption of European data protection, nor does it take into account the specificities of the activities carried out by each of these companies.

Originality/value: The implementation of the GDPR is still at an initial stage. This study is pioneering in synthesizing the main benefits and challenges of its adoption for companies operating in the IT sector. Furthermore, this study explores the impact of company size and GDPR implementation status on the perception of the established benefits and challenges.
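The study's inferential step, a one-way ANOVA run in Stata to test whether group means differ, can be sketched in plain Python. The computation below is the standard F statistic; the group labels and scores are hypothetical stand-ins (e.g., perceived-benefit ratings grouped by GDPR implementation status), not the survey's actual data:

```python
def one_way_anova_f(groups):
    """Compute the one-way ANOVA F statistic for k independent samples.

    F = (SS_between / (k - 1)) / (SS_within / (N - k)), where a large F
    suggests the group means differ more than within-group noise explains.
    """
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Hypothetical perceived-benefit scores (1-5 Likert scale) grouped by
# GDPR implementation status.
scores_by_status = [
    [2, 3, 2, 3],  # not implemented
    [3, 4, 3, 4],  # implementation in progress
    [4, 5, 4, 5],  # fully implemented
]

f_stat = one_way_anova_f(scores_by_status)
```

In practice one would use a statistics package (e.g., `scipy.stats.f_oneway`) to obtain the p-value as well; the sketch only illustrates what the test compares.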


2019 ◽  
Vol 25 (4) ◽  
pp. 465-481 ◽  
Author(s):  
Adrián Todolí-Signes

Big data, algorithms and artificial intelligence now allow employers to process information on their employees and potential employees in a far more efficient manner and at a much lower cost than in the past. This makes it possible to profile workers automatically and even allows technology itself to replace human resources personnel in making decisions that have legal effects on employees (recruitment, promotion, dismissals, etc.). This entails great risks of worker discrimination and defencelessness, with workers unaware of the reasons underlying any such decision. This article analyses the protections established in the EU General Data Protection Regulation (GDPR) for safeguarding employees against discrimination. One of the main conclusions that can be drawn is that, in the face of the inadequacy of the GDPR in the field of labour relations, there is a need for the collective governance of workplace data protection, requiring the participation of workers’ representatives in establishing safeguards.


2020 ◽  
Author(s):  
Rossana Ducato

This paper aims to assess the information duties set out in the General Data Protection Regulation (GDPR) and national adaptations when the purpose of processing is scientific research. Information about the processing plays a critical role for data subjects in general. However, it becomes even more central in the research context, due to the peculiarities of the legal regime applicable to it. The analysis critically points out that the GDPR's information obligations are not entirely satisfying and present some flaws. Furthermore, the GDPR information duties risk suffering from the same shortcomings usually addressed in the literature on mandated disclosures. The paper argues that the principle of transparency, developed as a "user-centric" concept, can support the adoption of solutions that embed behavioural insights to better serve the rationale of the information provision.


2012 ◽  
Vol 13 (2) ◽  
Author(s):  
Peter Traung

Abstract: Among other things, the proposed General Data Protection Regulation aims at substantially reducing fragmentation, administrative burden and cost, and at providing clear rules that simplify the legal environment. This article argues that considerable work is needed to achieve those goals and that the proposal provides neither substantial legal certainty nor simplification, while adding administrative burden and leaving ample risk of fragmentation. In particular, the proposal misses the opportunity to strengthen data protection while achieving substantial simplification by abolishing the controller/processor distinction and allowing transfers with no reduction of the controller's liability. Large parts of the proposal depend entirely on clarification through delegated acts issued by the Commission, and the prospects for those being adopted look dire. Failing either delegated acts or substantial redrafting, those parts may become a dead letter or worse. There is a highly problematic obligation to "demonstrate compliance" with the law. A proportionate alternative to a number of other obligations on controllers, such as maintaining various documentation or appointing data protection officers, would be to include such obligations as possible behavioural sanctions in case of a proven breach of the law. The proposal also appears to raise issues regarding freedom of movement. The impact assessment largely fails to demonstrate a need for, and net benefit from, the proposed additional obligations. It also appears to severely underestimate the costs of the proposals, partly due to what appear to be arithmetic errors. Interestingly, the proposal does rudimentarily put a value on personal data, but the approach could be extended.

