Legal Challenges of Automated and Autonomous Systems

2018 ◽  
Vol 60 (1) ◽  
pp. 173-201
Author(s):  
Stefan A. Kaiser

With the increasing influence of computers and software, automation is affecting many areas of daily life. Autonomous systems have become a central notion, but many systems have reached only a lower level of automation and not yet full autonomy. Information technology and software have a strong impact, and their industries are introducing their own business cultures. Even though autonomy will enable systems to act independently of direct human input and control in complex scenarios, the factors of responsibility, control, and attribution are of crucial importance for a legal framework. Legal responsibility has to serve as a safeguard of fundamental rights. Responsibility can be attributed by a special legal regime, and mandatory human override and fallback modes can assure human intervention and control. It is proposed to establish a precautionary regulatory regime for automated and autonomous systems that includes general principles on responsibility, transparency, training, human override and fallback modes, design parameters for algorithms and artificial intelligence, and cyber security. States need to take a positivist approach, maintain their regulatory prerogative, and, in support of their exercise of legislative and executive functions, establish expertise in automation, autonomy, algorithms, and artificial intelligence that is independent of industry.
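The mandatory human override and fallback modes proposed here map naturally onto a guard in a system's control loop. The following is a minimal, hypothetical Python sketch of that pattern; the class, the confidence threshold, and the action names are invented for illustration and are not drawn from the article:

```python
from enum import Enum

class Mode(Enum):
    AUTONOMOUS = "autonomous"   # system acts on its own decisions
    FALLBACK = "fallback"       # reduced, pre-approved safe behaviour
    HUMAN = "human"             # direct human control

class SupervisedController:
    """Hypothetical control loop with a mandatory human override."""

    def __init__(self, confidence_floor: float = 0.9):
        self.mode = Mode.AUTONOMOUS
        self.confidence_floor = confidence_floor

    def step(self, proposed_action: str, confidence: float,
             human_override: bool) -> str:
        # A human command always wins: the override is unconditional.
        if human_override:
            self.mode = Mode.HUMAN
            return "await_human_command"
        # Below the confidence floor, drop to a safe fallback mode
        # instead of executing an uncertain autonomous decision.
        if confidence < self.confidence_floor:
            self.mode = Mode.FALLBACK
            return "safe_stop"
        self.mode = Mode.AUTONOMOUS
        return proposed_action

controller = SupervisedController()
print(controller.step("change_lane", confidence=0.97, human_override=False))
print(controller.step("change_lane", confidence=0.42, human_override=False))
print(controller.step("change_lane", confidence=0.99, human_override=True))
```

The design point the sketch illustrates is that the override branch comes first: no autonomous decision, however confident, can pre-empt a human command.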

Lentera Hukum ◽  
2019 ◽  
Vol 6 (2) ◽  
pp. 331
Author(s):  
Viony Kresna Sumantri

Modern technology is developing rapidly. One branch of industrial technology that is particularly popular at the moment is artificial intelligence (AI), which facilitates society's daily life. On smartphones, artificial intelligence can be found in map applications, personal assistants, shopping websites, and various other applications. Saudi Arabia granted an AI-based robot named Sophia citizenship, and the Shibuya Mirai robot was granted a residence permit by Japan. AI-based technology is used every day and has become commonplace in various parts of the world; in Indonesia, however, legal regulations regarding AI do not yet exist. As a result, a legal vacuum has emerged. When a loss occurs, responsibility can be borne by various parties, ranging from consumers, producers, and third parties (such as robot trainers or shipping couriers) to the robot itself. Which party is held responsible depends on how a country positions AI. If Indonesia follows in Saudi Arabia's footsteps, then responsibility will be borne by the AI robot as a citizen. The robot will have the right to sue and be sued and to equal standing before the law, along with the other rights and obligations enjoyed by human citizens. Making law on artificial intelligence is a very complicated process and will involve many parties. How Indonesia positions AI is crucial, particularly in the event of harm or danger caused by AI systems. Various frameworks and concepts can be used, ranging from equating artificial intelligence with living beings such as humans or pets, or with ordinary products, to creating entirely new concepts for a legal framework regulating AI-based systems.

Keywords: Artificial Intelligence, Responsibility, AI Law.


Author(s):  
Al'bina Slavovna Lolaeva ◽  
Kristina Ushangievna Sakaeva

Ethical norms and the law are inextricably linked in modern society. The adoption of major legal decisions is affected by various ethical rules. Artificial intelligence carries these problems into a new dimension. Systems that use artificial intelligence are becoming more autonomous in the complexity of the tasks they accomplish and in their potential effects on the external environment. This diminishes the human ability to comprehend, predict, and control their activity. People usually underestimate the actual level of autonomy of such systems. It is underlined that machines based on artificial intelligence can learn from their own experience and perform actions not intended by their developers. This leads to the ethical and legal difficulties discussed in this article. In view of the specificity of artificial intelligence, the authors make suggestions on the direct responsibility of particular systems. Following this logic, there are no fundamental reasons why autonomous systems should not be held legally accountable for their actions. However, the question of the need or advisability of imposing such responsibility (at the present stage specifically) remains open. This is partially due to the ethical issues listed above. It might be more effective to hold the programmers or users of autonomous systems accountable for the actions of these systems, but doing so may decelerate innovation. This is precisely why a proper balance needs to be found.


2021 ◽  
Vol 23 (4) ◽  
pp. 457-484
Author(s):  
Niovi Vavoula

Abstract Over the past three decades, an elaborate legal framework on the operation of EU-Schengen information systems has been developed, whereby in the near future personal data concerning almost all third-country nationals (TCNs) with an administrative or criminal law link to the EU/Schengen area will be monitored through at least one information system. This article provides a legal analysis of the embedding of Artificial Intelligence (AI) tools in EU-level information systems for TCNs and critically examines the fundamental rights concerns that ensue from the use of AI to manage and control migration. It discusses the automated risk assessment and algorithmic profiling used to examine applications for travel authorisations and Schengen visas, the shift towards the processing of facial images of TCNs, and the creation of future-proof information systems that anticipate the use of facial recognition technology. The contribution understands information systems as enabling the datafication of mobility and as security tools in an era in which a foreigner is deemed risky by default. It is argued that a violation of the right to respect for private life is merely the gateway to the impairment of a series of other fundamental rights, such as non-discrimination and the right to effective remedies.
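The automated risk assessment examined here can be pictured as a scoring function over applicant attributes. The following is a purely illustrative Python sketch; the rules, weights, and threshold are invented and do not reflect any actual EU screening system:

```python
# Purely illustrative risk-scoring sketch; rules, weights, and the
# threshold are invented and do not reflect any real EU screening rule.
RISK_RULES = [
    ("prior_overstay",      lambda a: a["prior_overstay"],        40),
    ("document_mismatch",   lambda a: a["document_mismatch"],     30),
    ("short_notice_travel", lambda a: a["days_until_travel"] < 3, 15),
]

FLAG_THRESHOLD = 50  # scores at or above this trigger manual review

def assess(applicant: dict) -> dict:
    """Return the triggered rules and whether the file is flagged."""
    hits = [name for name, test, _ in RISK_RULES if test(applicant)]
    score = sum(w for _, test, w in RISK_RULES if test(applicant))
    return {"score": score, "hits": hits, "flagged": score >= FLAG_THRESHOLD}

print(assess({"prior_overstay": True, "document_mismatch": False,
              "days_until_travel": 2}))
# {'score': 55, 'hits': ['prior_overstay', 'short_notice_travel'], 'flagged': True}
```

Opaque weights and thresholds of exactly this kind are what make such profiling hard for an applicant to contest, which is where the concerns about non-discrimination and effective remedies arise.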


2021 ◽  
Vol 9 (2) ◽  
pp. 1214-1219
Author(s):  
Sheha Kothari, et al.

Artificial intelligence (AI) has made incredible progress, resulting in highly sophisticated software and standalone systems. Meanwhile, the cyber domain has become a battleground for access, influence, security, and control. This paper discusses key AI technologies, with a focus on machine learning, to help explain their role in cyber security and the implications of this new technology, and highlights the different uses of machine learning in cyber security.
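One of the machine learning uses surveyed in work of this kind is supervised intrusion detection: a classifier trained on labelled traffic learns to separate benign from malicious behaviour. A minimal Python sketch of that idea follows; the features and data are synthetic stand-ins, not material from the paper:

```python
# Illustrative sketch: supervised machine learning for intrusion
# detection. Features and data are synthetic; a real system would
# train on labelled traffic such as a NetFlow or IDS dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic flow features: [bytes_sent, duration_s, failed_logins]
benign = rng.normal(loc=[500, 30, 0], scale=[200, 10, 0.5], size=(500, 3))
attack = rng.normal(loc=[5000, 2, 6], scale=[1500, 1, 2], size=(500, 3))
X = np.vstack([benign, attack])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"hold-out accuracy: {clf.score(X_te, y_te):.2f}")
```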


2020 ◽  
Vol 32 (3) ◽  
pp. 471-502
Author(s):  
Annabel Sharma

Abstract Diffuse nutrient pollution from agriculture has concerned policymakers for several decades, and yet it remains a persistent environmental issue. The current approach to mitigating the problem is predominantly command and control regulation under the Nitrates Directive and the Water Framework Directive. This article sets out how diffuse pollution can be considered a wicked policy problem, which helps explain why it has eluded the current regulatory regime. It further establishes that the traditional planning process overlooked the complexity of the problem. Finally, it illustrates the ineffectiveness of the current regulatory framework in mitigating the problem, exemplified through the legal framework of Northern Ireland.


Author(s):  
Svetlana Sergeevna Gorokhova

The subject of this research is certain theoretical aspects of the public legal responsibility that may arise in spheres and situations where artificial intelligence and robotic autonomous systems are used. Special attention is given to the interpretation of public legal responsibility as a legal category and to its role within the system of legal regulation of public relations in the country. The article explores the basic aspects of public responsibility in the sphere of potential use of systems equipped with technological solutions based on artificial intelligence. The author describes the possible risks entailed by the development and implementation of such technologies in line with the trends of scientific and technological progress. The conclusion is made that the Russian Federation currently does not have a liability system applicable specifically to damage or losses resulting from the use of new technologies such as artificial intelligence. The existing liability regime at least ensures basic protection for victims who have suffered harm from the use of artificial intelligence technologies. However, the peculiar characteristics of these technologies and the complexity of their application may hinder compensation for inflicted harm in all cases where it seems justified, and may fail to ensure a fair and effective allocation of responsibility in a number of cases, including violations of the non-property rights of citizens.


Author(s):  
Alexander P. Sukhodolov ◽  
Artur V. Bychkov, ◽  
Anna M. Bychkova

The aim of this work is to study criminal policy in relation to crimes committed using technologies based on artificial intelligence algorithms. The varieties of these crimes are described: phishing, the use of drones, the synthesis of fake information, and attacks through automated autonomous systems and bots. Given that artificial intelligence technologies are capable of self-learning and of independent action without direct human intervention and control, the key issue in criminal policy regarding crimes committed using artificial intelligence algorithms is the question of the subject of criminal liability. The concepts found in official documents and scientific literature on this issue are analyzed, and, in developing the scientific discussion, it is proposed to update the legal construction of "innocent harm". The prospects of criminal policy in this direction lie in the creation of a fundamentally new variety of blanket norms: from "law as text" to "law as code" and its implementation by technological platforms, as sketched below.
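One way to picture the proposed shift from "law as text" to "law as code" is a norm encoded as an executable rule that a technological platform enforces at transaction time. The following is a minimal, hypothetical Python sketch; the norm, the names, and the limit are invented purely for illustration:

```python
# Hypothetical "law as code": a blanket norm expressed as an executable
# rule that a platform enforces automatically. The norm and its limit
# are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Transfer:
    sender_verified: bool
    amount_eur: float

def norm_unverified_transfer_cap(t: Transfer) -> bool:
    """Illustrative norm: unverified senders may not move more than
    EUR 1000 in a single transfer."""
    return t.sender_verified or t.amount_eur <= 1000

def platform_execute(t: Transfer) -> str:
    # The platform refuses transactions that violate the encoded norm,
    # so compliance is enforced at run time rather than ex post.
    if not norm_unverified_transfer_cap(t):
        return "rejected: norm violated"
    return "executed"

print(platform_execute(Transfer(sender_verified=False, amount_eur=5000)))
print(platform_execute(Transfer(sender_verified=True, amount_eur=5000)))
```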


Author(s):  
A.H. Rustamzade ◽  
I.M. Aliyev

The article notes that the almost complete absence of normative legal regulation of the functioning and activities of artificial intelligence is today a global problem, and that standardization in this area should be implemented at the global level. The world community is only beginning to grasp the real and potential effects of fully automated systems on vital areas of social relations, and the growth of the ethical, social, and legal problems associated with this trend. The authors pose the question of who will be directly responsible for a wrong decision proposed by "artificial intelligence" and implemented in practice, and propose various possible answers. Only a conscious subject can be a subject of responsibility, and since weak systems lack autonomy, artificial intelligence cannot be blamed. Measures of legal liability are simply inapplicable to it; for example, artificial intelligence is fundamentally incapable of recognizing the consequences of its harmful actions. In conclusion, it is argued that for all its development, and despite a speed of information processing that many times exceeds even the potential capabilities of a person, artificial intelligence remains a program tied to its material and technical support. Only a person is responsible for the actions of such mechanisms. As for the direct responsibility of artificial intelligence, under current legal and social conditions the question of its hypothetical responsibility leads to a dead end, since measures of legal responsibility are simply inapplicable to it. Even if artificial intelligence can simulate human intelligence, it will not be self-aware, and therefore it can in no way claim any special fundamental rights.


2020 ◽  
Vol 2020 (3) ◽  
pp. 331-1-331-13
Author(s):  
Benjamin Yüksel ◽  
Klaus Schwarz ◽  
Reiner Creutzburg

Cyber security has become an increasingly important topic in recent years. The growing popularity of systems and devices such as computers, servers, smartphones, tablets, and smart home devices is creating a rapidly expanding attack surface. In addition, a variety of security vulnerabilities in software and hardware make the security situation more complex and unclear. Many of these systems and devices also process personal or secret data and control critical industrial processes, so the need for security is tremendously high. The owners and administrators of modern computer systems are often overwhelmed with the task of securing them, as the systems become more complex and the attack methods increasingly intelligent. A wide range of encryption and hiding techniques is available these days; they make the detection of malicious software with signature-based scanning methods very difficult. Novel methods for detecting such threats are therefore necessary. This paper examines whether cyber threats can be detected using modern artificial intelligence methods. We develop, describe, and test a prototype for Windows systems based on neural networks; in particular, anomaly detection based on autoencoders is used. As this approach has shown, it is possible to detect a wide range of threats using artificial intelligence. Based on the approach in this work, the research topic should be investigated further; cloud-based solutions built on this principle seem especially promising for protecting against modern threats in the world of cyber security.
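The anomaly detection principle used in such a prototype can be summarised as: train an autoencoder to reconstruct benign behaviour, then flag inputs whose reconstruction error exceeds a threshold learned from benign data. A minimal Python sketch of that principle follows, assuming synthetic feature vectors in place of the paper's Windows telemetry (which is not reproduced here) and an MLPRegressor trained to reproduce its input standing in for the autoencoder:

```python
# Minimal autoencoder-style anomaly detection sketch. Features are
# synthetic: benign samples lie near a low-dimensional subspace, which
# the bottleneck network learns to reconstruct; off-subspace samples
# yield high reconstruction error and are flagged as anomalous.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Benign "system behaviour" vectors with hidden 3-dimensional structure.
W = rng.normal(size=(3, 8))
benign = rng.normal(size=(2000, 3)) @ W + 0.1 * rng.normal(size=(2000, 8))

# Autoencoder objective: input == target, with a 3-unit bottleneck.
ae = MLPRegressor(hidden_layer_sizes=(3,), max_iter=2000, random_state=1)
ae.fit(benign, benign)

def reconstruction_error(model, X):
    return np.mean((model.predict(X) - X) ** 2, axis=1)

# Threshold: errors above the 99th percentile of benign errors are flagged.
threshold = np.percentile(reconstruction_error(ae, benign), 99)

suspicious = rng.normal(4.0, 1.0, size=(5, 8))  # shifted, off-subspace samples
flags = reconstruction_error(ae, suspicious) > threshold
print(f"flagged {flags.sum()} of {len(flags)} suspicious samples")
```

The key property is that the detector never needs examples of malware: it only has to model benign behaviour, which is what makes the approach attractive against novel and obfuscated threats.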

