The regulatory intersections between artificial intelligence, data protection and cyber security: challenges and opportunities for the EU legal framework

AI & Society ◽  
2021 ◽  
Author(s):  
Jozef Andraško ◽  
Matúš Mesarčík ◽  
Ondrej Hamuľák
2014 ◽  
Vol 2 (2) ◽  
pp. 55 ◽  
Author(s):  
Christopher Kuner

The European Union (EU) has supported the growing calls for the creation of an international legal framework to safeguard data protection rights. At the same time, it has worked to spread its data protection law to other regions, and recent judgments of the Court of Justice of the European Union (CJEU) have reaffirmed the autonomous nature of EU law and the primacy of EU fundamental rights law. The tension between initiatives to create a global data protection framework and the assertion of EU data protection law raises questions about how the EU can best promote data protection on a global level, and about the EU’s responsibilities to third countries that have adopted its system of data protection.


2019 ◽  
Vol 5 (2) ◽  
pp. 75-91
Author(s):  
Alexandre Veronese ◽  
Alessandra Silveira ◽  
Amanda Nunes Lopes Espiñeira Lemos

The article discusses the ethical and technical consequences of Artificial Intelligence (AI) applications and how the European Union data protection legal framework enables citizens to defend themselves against them. This goal falls within the broader European Union Digital Single Market policy, which is concerned with how this subject correlates with personal data protection. The article has four sections. The first introduces the main issue by describing the importance of AI applications in the contemporary world. The second describes some fundamental concepts of AI. The third analyses the ongoing policies for AI in the European Union and the Council of Europe proposal on ethics applicable to AI in judicial systems. The fourth section is the conclusion, which debates the current legal mechanisms for protecting citizens against fully automated decisions, based on European Union law and in particular the General Data Protection Regulation. The conclusion is that European Union law is still under construction when it comes to providing effective protection to its citizens against automated inferences that are unfair or unreasonable.


Law and World ◽  
2021 ◽  
Vol 7 (5) ◽  
pp. 8-13

In the digital era, technological advances have brought innovative opportunities. Artificial intelligence (AI) is a real instrument for automating routine tasks in different fields (healthcare, education, the justice system, foreign and security policies, etc.), and it is evolving very fast. Robots, more precisely, are re-programmable multi-purpose devices designed for the handling of materials and tools, for the processing of parts, or as specialized devices utilizing varying programmed movements to complete a variety of tasks.1 Regardless of these opportunities, artificial intelligence may pose risks and challenges. Because of the nature of AI, ethical and legal questions arise, especially in terms of protecting human rights. The power of artificial intelligence lies in analyzing big data more effectively than a human being can. On the one hand, this causes the loss of traditional jobs; on the other, it promotes the creation of digital equivalents of workers with automatic routine-task capabilities. “Artificial intelligence must serve people, and therefore artificial intelligence must always comply with people’s rights,” said Ursula von der Leyen, President of the European Commission.2 The EU has a clear vision of the development of the legal framework for AI. In light of the above, the article aims to explore the legal aspects of artificial intelligence based on the European experience. This is also essential in the context of Georgia’s European integration: analyzing the legal approaches of the EU will promote the approximation of Georgian legislation to EU standards in this field and will help define AI’s role in the effective digital transformation of the public and private sectors in Georgia.


2020 ◽  
Vol 74 ◽  
pp. 03006
Author(s):  
Irena Nesterova

The growing use of facial recognition technologies has put them under the regulatory spotlight around the world. The EU is considering regulating facial recognition technologies as part of its initiative to create an ethical and legal framework for trustworthy artificial intelligence. These technologies are also attracting the attention of EU data protection authorities, e.g. in Sweden and the UK. In May, San Francisco became the first city in the US to ban police and other government agencies from using facial recognition technology, soon followed by other US cities. The paper aims to analyze the impact of facial recognition technology on fundamental rights and values, as well as the development of its regulation in Europe and the US. The paper reveals how these technologies may significantly undermine fundamental rights, in particular the right to privacy, and may lead to prejudice and discrimination. Moreover, alongside the risks to fundamental rights, the wider impact of these surveillance technologies on democracy and the rule of law needs to be assessed. Although existing laws, in particular the EU General Data Protection Regulation, already impose significant requirements, further guidance and a clear regulatory framework are needed to ensure the trustworthy use of facial recognition technology.


2021 ◽  
Vol 2 (1) ◽  
Author(s):  
Domenico Orlando ◽  
Wim Vandevelde

The article briefly describes smart meter technology in the electricity field and its potential, as well as the risks it poses in terms of privacy and data protection, which could undermine the trust of customers. It then delineates the EU legal framework that applies to the technology, followed by a critical assessment identifying some of its flaws. The focus subsequently shifts to the national level of legislation, where the Flemish laws on the matter are analysed. A separate part is dedicated to the role that certain technologies could play in reducing these risks and enhancing privacy. In conclusion, some recommendations are proposed to make the law better suited to enhancing customer trust.


2019 ◽  
Vol 15 (2) ◽  
pp. 162-176 ◽  
Author(s):  
Orla Lynskey

Abstract This paper examines the application of the latest iterations of EU data protection law – in the General Data Protection Regulation, the Law Enforcement Directive and the jurisprudence of the Court of Justice of the EU – to the use of predictive policing technologies. It suggests that the protection offered by this legal framework to those impacted by predictive policing technologies is, at best, precarious. Whether predictive policing technologies fall within the scope of the data protection rules is uncertain, even in light of the expansive interpretation of these rules by the Court of Justice of the EU. Such a determination would require a context-specific assessment that individuals will be ill-placed to conduct. Moreover, even should the rules apply, the substantive protection offered by the prohibition against automated decision-making can be easily sidestepped and is subject to significant caveats. Again, this points to the conclusion that the protection offered by this framework may be more illusory than real. This being so, there are some fundamental questions to be answered – including the question of whether we should be building predictive policing technologies at all.


2018 ◽  
Vol 60 (1) ◽  
pp. 173-201
Author(s):  
Stefan A. Kaiser

With an increasing influence of computers and software, automation is affecting many areas of daily life. Autonomous systems have become a central notion, but many systems have reached only a lower level of automation and not yet full autonomy. Information technology and software have a strong impact and their industries are introducing their own business cultures. Even though autonomy will enable systems to act independently from direct human input and control in complex scenarios, the factors of responsibility, control, and attribution are of crucial importance for a legal framework. Legal responsibility has to serve as a safeguard of fundamental rights. Responsibility can be attributed by a special legal regime, and mandatory human override and fallback modes can assure human intervention and control. It is proposed to establish a precautionary regulatory regime for automated and autonomous systems to include general principles on responsibility, transparency, training, human override and fallback modes, design parameters for algorithms and artificial intelligence, and cyber security. States need to take a positivist approach, maintain their regulatory prerogative, and, in support of their exercise of legislative and executive functions, establish an expertise independent of industry in automation, autonomy, algorithms, and artificial intelligence.


2021 ◽  
Vol 23 (4) ◽  
pp. 457-484
Author(s):  
Niovi Vavoula

Abstract Over the past three decades, an elaborate legal framework on the operation of EU-Schengen information systems has been developed, whereby in the near future a series of personal data concerning almost all third-country nationals (TCNs) with an administrative or criminal law link to the EU/Schengen area will be monitored through at least one information system. This article provides a legal analysis of the embedding of Artificial Intelligence (AI) tools in EU-level information systems for TCNs and critically examines the fundamental rights concerns that ensue from the use of AI to manage and control migration. It discusses automated risk assessment and algorithmic profiling used to examine applications for travel authorisations and Schengen visas, the shift towards the processing of facial images of TCNs, and the creation of future-proof information systems that anticipate the use of facial recognition technology. The contribution understands information systems as enabling the datafication of mobility and as security tools in an era in which a foreigner is risky by default. It is argued that a violation of the right to respect for private life is merely the gateway to impacts on a series of other fundamental rights, such as non-discrimination and the right to effective remedies.


2019 ◽  
Vol 25 (4) ◽  
pp. 465-481 ◽  
Author(s):  
Adrián Todolí-Signes

Big data, algorithms and artificial intelligence now allow employers to process information on their employees and potential employees in a far more efficient manner and at a much lower cost than in the past. This makes it possible to profile workers automatically and even allows technology itself to replace human resources personnel in making decisions that have legal effects on employees (recruitment, promotion, dismissals, etc.). This entails great risks of worker discrimination and defencelessness, with workers unaware of the reasons underlying any such decision. This article analyses the protections established in the EU General Data Protection Regulation (GDPR) for safeguarding employees against discrimination. One of the main conclusions that can be drawn is that, in the face of the inadequacy of the GDPR in the field of labour relations, there is a need for the collective governance of workplace data protection, requiring the participation of workers’ representatives in establishing safeguards.

