Artificial Intelligence and the Right to Data Protection

2021 ◽  
Author(s):  
Ralf Poscher


2020 ◽  
Author(s):  
Frederik Zuiderveen Borgesius

Algorithmic decision-making and other types of artificial intelligence (AI) can be used to predict who will commit a crime, who will be a good employee, who will default on a loan, and so on. However, algorithmic decision-making can also threaten human rights, such as the right to non-discrimination. This paper evaluates current legal protection in Europe against discriminatory algorithmic decisions. It shows that non-discrimination law, in particular through the concept of indirect discrimination, prohibits many types of algorithmic discrimination, and that data protection law could also help to defend people against discrimination. Proper enforcement of both instruments could therefore protect people, yet both have severe weaknesses when applied to artificial intelligence. The paper suggests how enforcement of the current rules can be improved, explores whether additional rules are needed, and argues for sector-specific rather than general rules, outlining an approach to regulating algorithmic decision-making.
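As a purely illustrative aside (not drawn from the paper), the short Python sketch below shows how a facially neutral proxy feature, here a hypothetical postcode, can reproduce group disparities in automated loan approvals, and how a simple disparate-impact ratio can make such indirect discrimination visible; all names, numbers and the approval rule are invented for the example.

```python
# Illustrative sketch only: synthetic data in which a neutral-looking proxy
# feature (postcode) correlates with a protected attribute, so a rule that
# never looks at the attribute still produces unequal outcomes.
import random

random.seed(0)

def synthetic_applicant():
    """One synthetic applicant; group membership correlates with postcode."""
    group = random.choice(["A", "B"])          # protected attribute (never used by the rule)
    lives_in_1000 = (random.random() < 0.8) if group == "A" else (random.random() < 0.2)
    return {"group": group, "postcode": "1000" if lives_in_1000 else "2000"}

applicants = [synthetic_applicant() for _ in range(10_000)]

def approve(applicant):
    """A facially neutral rule: approve only applicants from postcode 1000."""
    return applicant["postcode"] == "1000"

def approval_rate(group):
    members = [a for a in applicants if a["group"] == group]
    return sum(approve(a) for a in members) / len(members)

rate_a, rate_b = approval_rate("A"), approval_rate("B")
print(f"approval rate A: {rate_a:.2f}  B: {rate_b:.2f}")
print(f"disparate-impact ratio (B/A): {rate_b / rate_a:.2f}")   # well below 1.0: indirect discrimination
```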


2021 ◽  
Vol 13 (13) ◽  
pp. 107-124
Author(s):  
Eduardo Biacchi Gomes ◽  
Andréa Arruda Vaz ◽  
Sandra Mara de Oliveira Dias

This research analyzes how artificial intelligence has been applied by the Judiciary in Brazil and what ethical limits should be established and observed in its implementation, in light of CNJ Resolution No. 331, which established the National Database of the Judiciary (DataJud); CNJ Resolution No. 332, which provides for ethics, transparency and governance in the production and use of artificial intelligence in the Judiciary; and Law No. 13,709 of 2018, which regulates data protection in Brazil. It concludes that the CEPEJ Ethical Charter on the use of artificial intelligence in judicial systems, Article 5 (items XXXVII and LIII) and Article 93 (item IX) of the CF/88, Article 20 of Law No. 13,709/2018 (LGPD), and Resolutions 331 and 332/2020 of the CNJ all point to the need for human supervision of judicial decisions that use artificial intelligence, in observance of the right to explanation and review. Ethical limits must be observed in the production and use of artificial intelligence to avoid bias and data opacity that could taint judicial decisions with absolute nullity. The article was produced using the deductive method and bibliographic research.
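Purely as a hypothetical sketch of how such human supervision with a right to explanation and review might be operationalised (it is not described in the paper), the Python snippet below models an AI-drafted decision that must carry its supporting reasons and cannot become final without a named human reviewer's approval; all class and field names are illustrative.

```python
# Hypothetical human-in-the-loop gate for AI-assisted judicial decisions:
# a draft must include an explanation and be approved by a named human
# reviewer before it becomes final. Names are illustrative, not from the paper.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftDecision:
    case_id: str
    outcome: str
    explanation: str                      # reasons supporting the draft (right to explanation)
    reviewed_by: Optional[str] = None     # human reviewer who approved it
    final: bool = False

def finalize(draft: DraftDecision, reviewer: str, approved: bool) -> DraftDecision:
    """Only explicit human approval, with a recorded reviewer, finalizes the decision."""
    if not draft.explanation.strip():
        raise ValueError("draft lacks an explanation and cannot be reviewed")
    if approved:
        draft.reviewed_by = reviewer
        draft.final = True
    return draft

draft = DraftDecision("0001-2021", "grant the request", "grounds: precedent X; statute Y")
finalize(draft, reviewer="Judge A", approved=True)
print(draft.final, draft.reviewed_by)     # True Judge A
```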


2021 ◽  
pp. 10-19
Author(s):  
Greta Angjeli ◽  
Besmir Premalaj

One of the fundamental human rights protected by various international conventions is the right to the protection of privacy, or, as defined in the European Convention on Human Rights, the right to respect for private and family life. Affiliated with this right is the right to data protection, which various authors describe as a modern derivation of the right to privacy. The protection of personal data, as part of the protection of privacy, has been jeopardized by the rapid spread of information technology, automated data processing, and the risk of unauthorized access to this data over the network. Regulating the processing of personal data by automated systems so that it does not violate the right to respect for private life has been a challenge for many states, which have had to allow the use of artificial intelligence for the benefit of further economic and social development while at the same time ensuring the protection of their citizens' personal data. In this context, the EU issued a new regulation on personal data protection, the General Data Protection Regulation (EU) 2016/679. The purpose of this paper is to highlight the impact of artificial intelligence on the right to respect for private life and the legal protection of personal data from misuse through artificial intelligence.


Author(s):  
Miguel Ángel CABELLOS ESPIÉRREZ

The use of video surveillance in the workplace has major implications for fundamental rights such as privacy and the protection of personal data. Nonetheless, we still lack detailed and specific regulation of this control technique in the employment context, which obliges the courts to work out how a plural and generic normative framework applies to it, given also the vagueness of Article 18.4 of the Spanish Constitution. The Constitutional Court, in analyzing the fundamental right to data protection, had established the central role within that right of the data subject's consent and of the information that must be provided to the data subject, from which it follows that any limitation of either must be duly justified. In its most recent case law, however, the Constitutional Court has changed its doctrine, a shift that, in the employment context, amounts to a clear devaluation of the worker's right to be informed about which of his or her data are being obtained.


2017 ◽  
Vol 2 (Suppl. 1) ◽  
pp. 1-10
Author(s):  
Denis Horgan

In the fast-moving arena of modern healthcare, with its cutting-edge science, it is already vital, and will become more so, that stakeholders collaborate openly and effectively. Transparency, especially on drug pricing, is of paramount importance. There is also a need to ensure that regulations and legislation, covering, for example, the new, smaller clinical trials required to make personalised medicine work effectively and the huge practical and ethical issues surrounding Big Data and data protection, are common, understood and enforced across the EU. With more integration, collaboration, dialogue and increased trust among everyone in the field, stakeholders can help mould the right frameworks, in the right place, at the right time. Once achieved, this will allow us all to work more quickly and more effectively towards creating a healthier, and thus wealthier, European Union.


2017 ◽  
Vol 26 (3) ◽  
pp. 433-437
Author(s):  
Mark Dougherty

Forgetting is an oft-forgotten art. Many artificial intelligence (AI) systems deliver good performance when first implemented; however, as the contextual environment changes, they become out of date and their performance degrades. Learning new knowledge is part of the solution, but forgetting outdated facts and information is a vital part of the process of renewal. However, forgetting proves to be a surprisingly difficult concept to either understand or implement. Much of AI is based on analogies with natural systems, and although all of us have plenty of experience of having forgotten something, as yet we have only an incomplete picture of how this process occurs in the brain. A recent judgment by the Court of Justice of the European Union concerns the “right to be forgotten” by web index services such as Google. This has made debate and research into the concept of forgetting very urgent. Given the rapid growth in requests for pages to be forgotten, it is clear that the process will have to be automated and that intelligent systems of forgetting are required in order to meet this challenge.
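As a loose, hypothetical illustration of the kind of automated, intelligent forgetting the author argues for (not an implementation from the article), the Python sketch below keeps a time-decayed knowledge store: entries that are not refreshed lose weight exponentially and are dropped once they fall below a threshold; the half-life, threshold and all names are invented for the example.

```python
# Hypothetical time-decayed knowledge store: unrefreshed entries lose weight
# exponentially and are forgotten once they fall below a threshold.
# Half-life, threshold and all names are illustrative, not from the article.
import time

class ForgettingStore:
    def __init__(self, half_life_s=3600.0, threshold=0.1):
        self.half_life_s = half_life_s    # time for an unrefreshed entry's weight to halve
        self.threshold = threshold        # entries below this decayed weight are purged
        self._items = {}                  # key -> (value, weight, last_update)

    def _decayed(self, weight, last_update, now):
        return weight * 0.5 ** ((now - last_update) / self.half_life_s)

    def remember(self, key, value, boost=1.0):
        """Insert or refresh an entry, adding `boost` to its decayed weight."""
        now = time.time()
        _, weight, last = self._items.get(key, (None, 0.0, now))
        self._items[key] = (value, self._decayed(weight, last, now) + boost, now)

    def recall(self, key):
        """Return the stored value if it is still remembered, else None."""
        now = time.time()
        item = self._items.get(key)
        if item is None:
            return None
        value, weight, last = item
        if self._decayed(weight, last, now) < self.threshold:
            del self._items[key]          # lazily forget stale, low-weight entries
            return None
        return value

store = ForgettingStore(half_life_s=1.0, threshold=0.5)
store.remember("fact", "outdated detail")
time.sleep(2)                             # two half-lives pass without a refresh
print(store.recall("fact"))               # None: the entry has been forgotten
```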

