Criminal Policy for Crimes Committed Using Artificial Intelligence Technologies: State, Problems, Prospects

Author(s):  
Alexander P. Sukhodolov ◽  
Artur V. Bychkov ◽
Anna M. Bychkova

The aim of the work is to study criminal policy in relation to crimes committed using technologies based on artificial intelligence algorithms. The varieties of these crimes are described: phishing, the use of drones, the synthesis of fake information, and attacks through automated autonomous systems and bots. Given that artificial intelligence technologies are capable of self-learning and of acting independently, without direct human intervention or control, the key issue in criminal policy regarding crimes committed using artificial intelligence algorithms is the question of the subject of criminal liability. The concepts on this issue found in official documents and the scientific literature are analyzed, and, developing the scientific discussion, it is proposed to update the legal construction of “innocent harm”. The prospects of criminal policy in this direction lie in the creation of a fundamentally new variety of blanket norms: from “law as a text” to “law as a code” implemented by technological platforms.
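
The closing idea of a shift from “law as a text” to “law as a code” can be illustrated with a minimal sketch: a blanket norm expressed as an executable predicate that a technological platform evaluates before permitting an action. Everything below (the Action and Norm structures, the no-fly rule) is a hypothetical illustration of the concept, not a construct from the article.

```python
# A minimal, hypothetical sketch of "law as code": a blanket norm expressed
# not as text but as an executable rule checked by a platform. All names
# here are illustrative assumptions, not an existing framework.
from dataclasses import dataclass

@dataclass
class Action:
    actor: str
    kind: str          # e.g. "drone_flight", "content_post"
    attributes: dict   # context the platform knows about the action

@dataclass
class Norm:
    norm_id: str
    predicate: callable  # Action -> bool; True means the norm is violated
    consequence: str     # e.g. "block", "report_to_regulator"

# Example blanket norm: drone flights inside a restricted zone are blocked.
no_fly_norm = Norm(
    norm_id="NO-FLY-01",
    predicate=lambda a: a.kind == "drone_flight"
                        and a.attributes.get("zone") == "restricted",
    consequence="block",
)

def enforce(norms: list[Norm], action: Action) -> list[str]:
    """Return the consequences of every norm the action violates."""
    return [n.consequence for n in norms if n.predicate(action)]

if __name__ == "__main__":
    flight = Action(actor="operator-7", kind="drone_flight",
                    attributes={"zone": "restricted"})
    print(enforce([no_fly_norm], flight))  # ['block']
```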

Author(s):  
Zarina Khisamova ◽  
Ildar Begishev

Humanity is now at the threshold of a new era, when the widening use of artificial intelligence (AI) will start a new industrial revolution. Its use inevitably leads to the problem of ethical choice and gives rise to new legal issues that require urgent action. The authors analyze the criminal law assessment of the actions of AI, primarily the still open issue of liability for the actions of an AI that is capable of self-learning and decides to act (or not to act) in a way that qualifies as a crime. As a result, there is a need to form a system of criminal law measures for counteracting crimes committed with the use of AI. It is shown that the application of AI could lead to four scenarios requiring criminal law regulation. It is stressed that there is a need for a clear, strict and effective definition of the ethical boundaries in the design, development, production, use and modification of AI. The authors argue that AI should be recognized as a source of high risk. They specifically state that although the Criminal Code of the Russian Federation contains norms that determine liability for cybercrimes, this does not eliminate the possibility of prosecution for infringements committed with the use of AI under the general norms of punishment for various crimes. The authors also consider it possible to establish a system for standardizing and certifying the activities of designing AI and putting it into operation. Meanwhile, an autonomous AI that is capable of self-learning differs considerably from other phenomena and objects, and the situation with the liability of an AI which independently decides to undertake an action qualified as a crime is much more complicated. The authors analyze the resolution of the European Parliament on the possibility of granting AI legal status and discuss its key principles and meaning. They pay special attention to the issue of recognizing AI as a legal personality. It is suggested that legal fiction be used as a technique whereby the special legal personality of AI can be perceived as an unusual legal situation that differs from reality. Such a solution could eliminate a number of existing legal limitations which prevent the active involvement of AI in the legal space.


Author(s):  
Al'bina Slavovna Lolaeva ◽  
Kristina Ushangievna Sakaeva

Ethical norms and the law are inextricably linked in modern society. The adoption of major legal decisions is affected by various ethical rules. Artificial intelligence shifts these problems into a new dimension. Systems that use artificial intelligence are becoming more autonomous in the complexity of the tasks they accomplish and in their potential implications for the external environment. This diminishes the human ability to comprehend, predict, and control their activity. People usually underestimate the actual level of autonomy of such systems. It is underlined that machines based on artificial intelligence can learn from their own experience and perform actions not intended by their developers. This leads to certain ethical and legal difficulties, which are discussed in this article. In view of the specificity of artificial intelligence, the authors make suggestions on the direct responsibility of particular systems. Following this logic, there are no fundamental reasons why autonomous systems should not be held legally accountable for their actions. However, the question of the need or advisability of imposing such responsibility (at the present stage specifically) remains open. This is partially due to the ethical issues listed above. It might be more effective to hold the programmers or users of autonomous systems accountable for the actions of these systems; however, that may decelerate innovation. This is precisely why a proper balance needs to be found.


2018 ◽  
Vol 60 (1) ◽  
pp. 173-201
Author(s):  
Stefan A. Kaiser

With the increasing influence of computers and software, automation is affecting many areas of daily life. Autonomous systems have become a central notion, but many systems have reached only a lower level of automation and not yet full autonomy. Information technology and software have a strong impact, and their industries are introducing their own business cultures. Even though autonomy will enable systems to act independently of direct human input and control in complex scenarios, the factors of responsibility, control, and attribution are of crucial importance for a legal framework. Legal responsibility has to serve as a safeguard of fundamental rights. Responsibility can be attributed by a special legal regime, and mandatory human override and fallback modes can assure human intervention and control. It is proposed to establish a precautionary regulatory regime for automated and autonomous systems that includes general principles on responsibility, transparency, training, human override and fallback modes, design parameters for algorithms and artificial intelligence, and cyber security. States need to take a positivist approach, maintain their regulatory prerogative, and, in support of their exercise of legislative and executive functions, establish expertise independent of industry in automation, autonomy, algorithms, and artificial intelligence.
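
The proposed mandatory human override and fallback modes can be made concrete with a small sketch: an autonomous control loop that always yields to a human override channel and drops into a safe fallback when its confidence degrades. The structure, class names and confidence threshold below are illustrative assumptions, not a regime prescribed in the article.

```python
# Illustrative sketch (assumed design, not from the article): an autonomous
# controller with a mandatory human-override channel and a safe fallback
# mode, two safeguards a precautionary regime could make compulsory.
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()   # system acts on its own decision
    OVERRIDE = auto()     # human command takes precedence
    FALLBACK = auto()     # safe default when autonomy cannot be trusted

class AutonomousController:
    def __init__(self, confidence_floor: float = 0.8):
        self.confidence_floor = confidence_floor
        self.human_command = None  # set by an operator at any time

    def override(self, command: str) -> None:
        """A human override always wins over the autonomous decision."""
        self.human_command = command

    def step(self, decision: str, confidence: float) -> tuple[Mode, str]:
        if self.human_command is not None:
            return Mode.OVERRIDE, self.human_command
        if confidence < self.confidence_floor:
            # Fallback mode: a predefined safe action, never a guess.
            return Mode.FALLBACK, "hold_safe_state"
        return Mode.AUTONOMOUS, decision

controller = AutonomousController()
print(controller.step("proceed", confidence=0.95))  # autonomous action
controller.override("stop")
print(controller.step("proceed", confidence=0.95))  # human command wins
```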


2020 ◽  
Vol 159 ◽  
pp. 04025
Author(s):  
Danila Kirpichnikov ◽  
Albert Pavlyuk ◽  
Yulia Grebneva ◽  
Hilary Okagbue

Today, artificial intelligence (hereinafter, AI) is becoming an integral part of almost all branches of science. AI's capacity for self-learning and self-development allows this new formation to compete with human intelligence and to perform actions that put it on a par with humans. In this regard, the authors aim to determine whether criminal liability can be applied to AI, since the latter is likely to be recognized as a subject of legal relations in the future. Based on a number of examinations and practical examples, the authors reach the following conclusion: AI is fundamentally capable of bearing criminal liability; in addition, it is capable of correcting its own behavior under the influence of coercive measures.


Author(s):  
Kai Yan ◽  
Xin Lin ◽  
Wenfeng Ma ◽  
Yuxiao Zhang

Artificial intelligence is predicted to play a big part in the self-learning industrial automation that will manage structural health and control systems. Industrial structural health and control systems based on discrete sensors provide insufficient spatial coverage of sensing information; moreover, distributed condition monitoring has been studied mainly at the sensor level, and relatively few studies have been conducted at the artificial intelligence level. This paper presents an innovative method for distributed structural health and control systems based on artificial intelligence. The structural condition is divided into regional and local features, and feature extraction and characterization are performed separately. A structural abnormality recognition and risk factor calculation method is proposed that considers the response values and the distribution patterns of both regional and local structural behaviour. The test results show that the method can effectively identify both full-scale and local damage to the structure. Subsequently, a structural safety assessment method for kilometre-scale long-span structures, based on full-length strain distributions measured by distributed fiber optic sensors, was developed. A series of load tests on the long-span structure were carried out. A finite element (FE) model was developed using the finite element code ABAQUS, and an extensive parametric study was conducted to explore the effect of load cases on the structural responses. The differences in the structural response results among the load tests, the structural safety assessment and the FE simulation were investigated. It is shown that an AI-based self-learning system can offer suitable speed of deployment, reliability of solutions and flexibility of adjustment in distributed structural health monitoring and control.
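
The two-level scheme described here (regional versus local features, with a risk factor scored from both the response values and their distribution pattern) can be sketched in a few lines. The windowing, the deviation measures and the weighting below are illustrative assumptions about what such a method could look like, not the authors' actual algorithm.

```python
# Illustrative sketch (assumed details, not the authors' algorithm): score a
# distributed strain profile by combining a regional feature (window-level
# drift of the mean response) with a local feature (sharpest in-window peak).
import numpy as np

def risk_factors(strain: np.ndarray, window: int = 50,
                 w_regional: float = 0.5, w_local: float = 0.5) -> np.ndarray:
    """Return one risk score per window of a full-length strain profile."""
    n_windows = len(strain) // window
    scores = np.empty(n_windows)
    global_mean, global_std = strain.mean(), strain.std() + 1e-9
    for i in range(n_windows):
        seg = strain[i * window:(i + 1) * window]
        # Regional feature: how far the window's mean response drifts
        # from the whole structure's mean (full-scale damage indicator).
        regional = abs(seg.mean() - global_mean) / global_std
        # Local feature: the sharpest deviation inside the window
        # relative to its own level (local damage indicator).
        local = np.max(np.abs(seg - seg.mean())) / (seg.std() + 1e-9)
        scores[i] = w_regional * regional + w_local * local
    return scores

# Synthetic example: a uniform strain profile with one local spike.
profile = np.ones(500) * 100.0
profile[230] += 40.0  # local anomaly
print(np.argmax(risk_factors(profile)))  # -> window 4, which holds the spike
```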


Author(s):  
Svetlana Sergeevna Gorokhova

The subject of this research is certain theoretical aspects of public legal responsibility that may arise in spheres and situations where artificial intelligence and robotic autonomous systems are used. Special attention is given to the interpretation of public legal responsibility as a legal category and to its role within the system of legal regulation of public relations in the country. The article explores the basic aspects of public responsibility in the sphere of potential use of systems equipped with technological solutions based on artificial intelligence. The author describes the possible risks arising from the development and implementation of such technologies in line with the trends of scientific and technological progress. The conclusion is made that the Russian Federation currently has no liability system applicable specifically to damage or losses resulting from the use of new technologies such as artificial intelligence. The existing liability regime at least ensures basic protection for victims of the use of artificial intelligence technologies. However, the peculiar characteristics of these technologies and the complexity of their application may hinder compensation for inflicted harm in all cases where it seems justified, and may fail to ensure a fair and effective allocation of responsibility in a number of cases, including violations of the non-property rights of citizens.


2019 ◽  
Vol 27 (2) ◽  
pp. 147
Author(s):  
Rofi Aulia Rahman ◽  
Rizki Habibulah

The pace of technological evolution is very fast. Technology has brought us into a limitless world and has become our ally in everyday life. It has created visionary autonomous agents that can surpass human capability with little or no human intervention, known as Artificial Intelligence (AI). AI is implemented in many areas, including industry, health, agriculture, and the arts. Consequently, AI can damage individual or collective interests that are protected by criminal law. The current Indonesian criminal system acknowledges only the natural person and the legal person (rechtspersoon) as subjects of law on whom criminal sanctions can be imposed. Now and in the foreseeable future AI will play a notable role in every aspect of life, which also raises criminal law issues because of the damage that may result, yet AI has no sufficient legal status within the Indonesian criminal system. In this paper, the authors assess whether the current Indonesian criminal system can address the criminal liability of artificial intelligence, and clarify to whom such criminal liability might be charged.


Author(s):  
Abeer Elsayed Fayed

This paper summarises the arguments and counterarguments within the scientific discussion on artificial intelligence (AI) in preparing a marketing plan for e-marketing organizations. The research aims to identify the extent of AI's contribution to preparing the marketing plan: how e-marketing companies can use AI techniques for situation analysis, analyzing competitors' strategies, setting strategic goals, preparing marketing strategies, preparing an estimated marketing budget, and controlling the marketing plan. Systematization of the scientific background and approaches to preparing a marketing plan for e-marketing organizations indicates that many companies, especially small companies marketing their products via the Internet, cannot develop a successful marketing plan; this could be addressed through the use of AI techniques. The study was conducted on a group of companies that market their products via the Internet in the Kingdom of Saudi Arabia. To reach the research goal, the study was carried out in the following logical sequence: 1) developing a stratified sample by collecting statistical information for 141 companies in a variety of fields; 2) analyzing the data using SPSS; 3) predicting how AI could be used in preparing the marketing plan; 4) identifying the order of the steps for preparing the marketing plan in terms of the capabilities of AI techniques. The methodological tools of the study were multiple regression analysis and the Friedman test. The study empirically confirms and theoretically proves that AI contributes significantly to developing marketing plans through its great contribution to environmental analysis, the analysis of competitors' strategies, and the setting of marketing goals. Besides, AI contributes to preparing the budget and to the evaluation and control of the marketing plan. The author notes that AI supports understanding and selecting target markets and sectors, targeting customers, and preparing appropriate marketing mix strategies for each market sector. The study therefore recommends that online organizations use AI in preparing their marketing plans because of its great ability to contribute to this.
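
The study's analysis was performed in SPSS; for readers without it, the two methodological tools named above (multiple regression and the Friedman test) can be reproduced with a minimal Python sketch. The variable names and synthetic data below are invented for illustration and do not reflect the study's dataset.

```python
# Minimal sketch of the two methodological tools used in the study, redone
# in Python instead of SPSS. All variables and data here are invented.
import numpy as np
from scipy.stats import friedmanchisquare
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 141  # sample size reported in the study

# Hypothetical predictors: ratings of AI use in three planning stages.
ai_situation = rng.normal(3.5, 0.8, n)
ai_competitors = rng.normal(3.2, 0.9, n)
ai_budget = rng.normal(3.0, 1.0, n)
plan_quality = (0.4 * ai_situation + 0.3 * ai_competitors
                + 0.2 * ai_budget + rng.normal(0, 0.5, n))

# Multiple regression: does AI use predict marketing-plan quality?
X = sm.add_constant(np.column_stack([ai_situation, ai_competitors, ai_budget]))
model = sm.OLS(plan_quality, X).fit()
print(model.summary())

# Friedman test: are the planning steps ranked differently by respondents?
stat, p = friedmanchisquare(ai_situation, ai_competitors, ai_budget)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```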


Author(s):  
Victor Shestak ◽  
Aleksander Volevodz ◽  
Vera Alizade

The authors examine the possibility of holding artificial intelligence (AI) criminally liable under current U.S. criminal legislation and study the opinions of Western lawyers who believe that this possibility may become reality for a machine controlled by AI in the near future. They analyze the requirements for criminal liability as determined by American legislators: a willful unlawful act or omission (actus reus) and criminal intent (mens rea), i.e. the person knowingly commits a criminal act or is negligent, as well as three basic models of AI's criminal liability. In the first model, a crime is committed through the actions of another person; these are cases where the subject of the crime does not have sufficient cognitive abilities to understand the criminal intent or, moreover, to be guided by it. This category of persons includes minors, persons with limited legal capacity and modern cybernetic systems, which cannot be viewed as capable of cognition that equals human cognition. The latter are considered innocent of a criminal act because their actions are controlled by an algorithm or by a person who exercises indirect program control. In the second model, a crime is committed by a being who is objectively guilty of it. A segment of the program code in intellectual systems allows some illegal act by default; for example, it includes a command to unconditionally destroy all objects that the system recognizes as dangerous to the purpose such AI is working to fulfill. According to this model, the person who gives the unlawful command should be held liable. If such a “collaborator” is not hidden, criminal liability should be imposed on the person who gives an unlawful command to the system, not on the performer, because the algorithmic system that determines the actions of the performer is itself unlawful. Thus, criminal liability in this case should be imposed on the persons who write or use the program, on the condition that they were aware of the unlawfulness of the orders that guide the actions of the performer. Such crimes include acts that are criminal but cannot be prevented by the performer, the AI system. In the third model, AI is directly liable for acts that combine a willful action with the unlawful intent of the machine. Such liability is possible if AI is recognized as a subject of criminal law, and also if it independently works out an algorithm to commit an act leading to publicly dangerous consequences, or if such consequences result from the system's omission to act according to its initial algorithm, i.e. if its actions are willful and guilty.

