Consideration of artificial intelligence through the prism of legal personality

Author(s):  
A.H. Rustamzade ◽  
I.M. Aliyev

The article notes that the almost complete absence of normative legal regulation of the functioning and activities of artificial intelligence is today a global problem, and that standardization in this area should be implemented at the global level. The world community is only beginning to grasp the real and potential influence of fully automated systems on vital areas of social relations and the growth of the ethical, social and legal problems associated with this trend. The authors pose the question of who will bear direct responsibility for a wrong decision proposed by “artificial intelligence” and implemented in practice, and offer various possible answers. Only a conscious subject can be a subject of responsibility, and since weak systems lack autonomy, artificial intelligence cannot be blamed: measures of legal liability are simply inapplicable to it, if only because artificial intelligence is incapable of recognizing the consequences of its harmful actions. In conclusion, it is argued that, for all its development and an information-processing speed many times exceeding even the potential capabilities of a human being, artificial intelligence remains a program tied to its material and technical support. Only a human being answers for the actions of mechanisms and is tested for strength. As for the direct responsibility of artificial intelligence, under current legal and social conditions the question of its hypothetical liability leads to a dead end, since measures of legal responsibility are simply inapplicable to it, again because artificial intelligence cannot comprehend the consequences of its harmful actions. Even if artificial intelligence can simulate human intelligence, it will not be self-aware, and therefore it cannot claim any special fundamental rights.

Author(s):  
Irina Viktorovna Ermakova

The subject of the research is the legal norms aimed at regulating relations in the field of concluding and executing smart contracts, including the protection of the rights of the parties to such contracts, consumers among them. The object of the research is the social relations arising in the process of creating, concluding and executing smart contracts. Particular attention is paid to the theoretical and practical aspects of the definition of the concept of “smart contract”, its essence and its legal status. In addition, the article considers approaches to defining the essence of institutions closely related to the category of “smart contract”, such as “cryptocurrency”, “digital ruble” and “mining”. The protection of the fundamental rights of the parties to the legal relationship under consideration, including consumers, is also analyzed, and examples of court decisions in the corresponding category of cases are given. The novelty of the research lies in identifying current approaches to the essence, concept and legal status of smart contracts, including the current position of law enforcement practice on this issue, as well as in examining the practical aspects of concluding and executing smart contracts, with examples of blockchain platforms on which smart contracts can function. Ultimately, the study led the author to a number of proposals for improving the relevant legislation; in particular, the author proposes to enshrine a legal definition of the concept of “smart contract” at the legislative level and offers the appropriate wording.
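The mechanism the abstract refers to can be pictured as self-executing conditional logic: the agreed terms are encoded, and performance follows automatically once the coded condition is met, without an intermediary. The Python sketch below illustrates only that idea under stated assumptions; the escrow scenario, the class and method names, and the parties are hypothetical, and a real smart contract would be deployed as code on a blockchain platform such as Ethereum rather than run as an ordinary program.

```python
# Minimal, hypothetical sketch of a smart contract as self-executing
# conditional logic. Names (EscrowContract, confirm_delivery, etc.) are
# invented for illustration; real smart contracts execute on-chain.
from dataclasses import dataclass


@dataclass
class EscrowContract:
    buyer: str
    seller: str
    amount: float
    delivery_confirmed: bool = False
    settled: bool = False

    def confirm_delivery(self) -> None:
        # In a real deployment this condition would be established by an
        # oracle or by on-chain data, not by a simple method call.
        self.delivery_confirmed = True

    def execute(self) -> str:
        # The contract enforces its own terms: payment is released only
        # when the coded condition is satisfied.
        if self.settled:
            return "already settled"
        if self.delivery_confirmed:
            self.settled = True
            return f"{self.amount} transferred from {self.buyer} to {self.seller}"
        return "condition not met: funds remain in escrow"


if __name__ == "__main__":
    contract = EscrowContract(buyer="Alice", seller="Bob", amount=100.0)
    print(contract.execute())   # condition not met: funds remain in escrow
    contract.confirm_delivery()
    print(contract.execute())   # 100.0 transferred from Alice to Bob
```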


Author(s):  
Svetlana Sergeevna Gorokhova

The subject of this research is certain theoretical aspects of public legal responsibility that may emerge in spheres and situations where artificial intelligence and robotic autonomous systems are used. Special attention is given to the interpretation of public legal responsibility as a legal category and to its role within the system of legal regulation of public relations in the country. The article explores the basic aspects of public responsibility in the sphere of potential use of systems equipped with technological solutions based on artificial intelligence. The author describes the possible risks arising from the development and implementation of such technologies in line with the trends of scientific and technological progress. The conclusion is made that the Russian Federation does not currently have a liability system applicable specifically to damage or losses resulting from the use of new technologies such as artificial intelligence. The existing liability regime does at least ensure basic protection for victims who have suffered harm from the use of artificial intelligence technologies. However, the peculiar characteristics of these technologies and the complexity of their application may hinder compensation for inflicted harm in all cases where it seems justified, and may fail to ensure a fair and effective allocation of responsibility in a number of cases, including the violation of non-property rights of citizens.


Lex Russica ◽  
2020 ◽  
pp. 78-85
Author(s):  
A. V. Nechkin

In the paper, the author uses general scientific and specific scientific methods of cognition to scrutinize the problems of constitutional and legal regulation of public relations in Russia related to the widespread introduction of artificial intelligence technology. Based on the results of the research, the author concludes that modern Russian constitutional legislation, even in its current form, makes it possible to regulate the nascent social relations associated with the widespread introduction of artificial intelligence technology. In particular, it is noted that the provisions of the Constitution of the Russian Federation allow for an expanded interpretation of the concept of “personality”, covering not only a human being but also highly developed artificial intelligence. According to the author, the constitutional and legal status of highly developed artificial intelligence should be built in the image and likeness of the constitutional and legal status of a person, with the following exceptions. First, legal personality, which by its legal nature should be very close to the legal personality of bodies and organizations and should arise from the moment the competent state authority takes the relevant decision. Second, rights, freedoms and obligations, which should comprise a limited set of personal rights and freedoms and the complete absence of political and socioeconomic rights. The last exception is the limited passive dispositive capacity of artificial intelligence. In addition, the main element in the structure of the constitutional and legal status of artificial intelligence in Russia should be universal restrictions on its rights and freedoms, which would serve as analogues of natural human physiological limitations and would not allow artificial intelligence to acquire evolutionary advantages over humans. Thus, the structure of the constitutional and legal status of artificial intelligence as a person can and should in the future look like this: legal personality; rights, freedoms and duties; guarantees ensuring the implementation of rights and freedoms; and universal restrictions on rights and freedoms.


Author(s):  
Svetlana Sergeevna Gorokhova

The subject of this article is the social relations established in the process of scientific and technological development in the IT sphere that support the work of artificial intelligence systems and relate to the scientific discussion on the role of artificial intelligence, robots and objects of robotics in the legal field. The author examines the relevant questions of identifying artificial intelligence systems as a subject, an object or some other legal phenomenon within the structure of legal relations. The research problem consists in the fact that scientific and technological progress has outstripped the legal regulation of interaction between the individual, society and artificial intelligence, which justifies the need to create a cyber-law theory. The opinions on the matter in foreign and national literature are analyzed. The article outlines the trends and prospects of implementing artificial intelligence in various social and economic spheres, and highlights the contrast of opinions regarding the problems of identification of artificial intelligence systems, as well as the incorporation of artificial intelligence into the established legal reality. The author presents and substantiates an original conceptual version of the inclusion of artificial intelligence in the legal field, based on the principle of assigning partial legal capacity to strong and super-strong artificial intelligence. Positions on legal responsibility in relations complicated by the presence of artificial intelligence are also defined.


2020 ◽  
Vol 16 (3) ◽  
pp. 23-33
Author(s):  
Svetlana Gorokhova

An urgent problem in transforming the Russian legal system at the present stage of its development is finding an optimal balance in determining the fundamental approaches to the legal regulation of public relations complicated by cyberphysical systems, artificial intelligence, and various types of robots and robotics objects, as well as considering the possibility of granting legal personality to weak and strong artificial intelligence in various branches of law and legislation. Purpose: analysis of the issues related to determining the legal status of artificial intelligence systems (AIS), taking into account modern requirements dictated by scientific and technological progress, the development of social relations, and rule-of-law principles aimed at ensuring respect for the rights and legitimate interests of the individual, society and the state. Methods: on the basis of the dialectical and metaphysical methods, general scientific (analysis, synthesis, comparative law, etc.) and specific scientific (legal-dogmatic, cybernetic, interpretation) methods of scientific knowledge are used. Results: at the present stage of technological development, we should speak of the existence of weak narrow-purpose AI (Narrow AI) and strong general-purpose AI (General AI). Super-strong intelligence (Super AI) does not yet exist, although its development is predicted for the future. Narrow AI, of course, cannot reach the level of natural intelligence, so, based on its internal properties, it cannot be considered a subject of relations under any circumstances. In contrast to Narrow AI, General AI (GAI) has a developed intelligence comparable to that of a human in certain characteristics. The theoretical discussion of granting artificial intelligence the status of a subject or a “quasi”-subject of law makes sense only for technological solutions at the level of General AI and Super AI. In the case of an AIS, it can only be a question of partial legal capacity. Partial legal capacity is a status that applies to subjects that have legal capacity only in accordance with specific legal norms, but are otherwise neither obligated nor entitled. Therefore, when choosing the concept of legislative assignment of partial legal capacity to an AIS, it is necessary to determine which specific rights or “rights-obligations” will be granted to General AI and Super AI.


Author(s):  
Denis Tikhomirov

The purpose of the article is to typologize the terminological definitions of security, to identify what they have in common, and to reveal the distinctiveness of their interpretations depending on the subject of legal regulation. The methodological basis of the study is the set of methods that made it possible to obtain valid conclusions: in particular, the method of comparison, through which different interpretations of the term "security" could be correlated; the method of hermeneutics, which allowed the texts of normative legal acts of Ukraine to be examined; and the method of typologization, which made it possible to form typological groups of variants of understanding of the term "security". Scientific novelty. The article analyzes the understanding of the term "security" in various regulatory acts in force in Ukraine and forms typological groups of its interpretations. Conclusions. The analysis of the legal material confirms that security issues fall within the scope of both legislative regulation and various specialized by-laws. However, today there is no single conception of how to interpret security terminology. This is due both to the wide range of social relations that are the subject of legal regulation and to the relativity of the notion of security itself, as well as to the lack of coherence of views on its definition in legal acts and in the scientific literature. The multiplicity of definitions is explained by combinations of material and procedural, static and dynamic understandings, and is conditioned by the peculiarities of a particular branch of legal regulation, the limited ability to use the methods of one or another branch, the inter-branch nature of some variations of security, and so on. Identifying what is common and what is different in the definitions of "security" can be used to further standardize the regulatory legal understanding of security and thereby implement legal regulation in the security field more effectively.


Author(s):  
Rinat Mikhailovich Karimov

In this article Karimov analyzes whether it is necessary to amend the available safety measures in relation to the judicial authorities of the Russian Federation. The aim of the research is to analyze the current procedure for issuing weapons to judges in the Russian Federation. The object of the research is the social relations arising in the process of implementing legal provisions on the procedure for issuing weapons to judges in the Russian Federation. The subject of the research is the legal acts that regulate this procedure. The researcher analyzes the kinds of weapons that can be issued to a judge upon his or her written request. The research is based on a comparative legal analysis of the previous provisions on the procedure for issuing weapons to judges and the provisions implemented only recently, and also employs such research methods as analysis and synthesis, generalisation and the logical method. The author substantiates the idea that a legal specification of the procedure for issuing weapons to judges in the Russian Federation will eliminate the possibility of attacks on judges or members of their families. The author focuses on gaps in the relevant legal regulations and suggests reviewing and amending the current law that regulates the procedure for issuing weapons to judges.


2020 ◽  
pp. 447-456
Author(s):  
H. V. Lutska

The article considers the problem of the application of artificial intelligence in the law of Ukraine in general and in the notarial and civil process in particular. The legal consequences of the legal regime of temporary occupation of some territories of Ukraine are indicated, and ways to remove obstacles to the protection and defense of the rights of citizens of Ukraine in these territories are determined. The legal construct of «artificial intelligence» is studied and its types are proposed. The conclusion about the expediency of using intelligent computer programs and intelligent information technologies, as types of artificial intelligence, in notarial and enforcement processes is substantiated. It is proposed to allow the use of artificial intelligence in notarial and civil proceedings for citizens of Ukraine living in the Autonomous Republic of Crimea or in the occupied territories of Donetsk and Luhansk regions, within the limits and in the manner prescribed by the law of Ukraine. It is proved that the introduction of artificial intelligence into the mechanism of protection and defense of human and civil rights and freedoms in the civil process must be adapted to the social relations that arise and exist, must not violate the constitutional rights and freedoms of man and citizen in Ukraine, and must have a legal basis. Based on a scientific and practical analysis of the Civil Procedure Code of Ukraine, it is proposed to establish that, for citizens of Ukraine living in the Autonomous Republic of Crimea or in the occupied territories of Donetsk and Luhansk regions, lawsuit, separate and injunctive proceedings be conducted entirely online. The procedure and features of such proceedings with the use of various types of artificial intelligence (such as chatbots and other intelligent information technologies) should be defined in the Civil Procedure Code of Ukraine. It is noted that the introduction of the above mechanism for protecting and defending the rights of citizens living in the Autonomous Republic of Crimea or in the occupied territories of Donetsk and Luhansk regions through intelligent computer programs will require proper maintenance and support of such programs to prevent leakage of information, leakage of personal data, etc. The conclusion is substantiated that e-litigation and remote notarial proceedings will increase the effectiveness of the notarial and judicial forms of protection and defense of rights and make these state forms of protection more flexible, able to anticipate the peculiarities of procedural actions involving residents of the temporarily occupied territories.


2020 ◽  
pp. 61-66
Author(s):  
K. Yefremova

Problem setting. Artificial intelligence is rapidly affecting the financial sector, with countless potential benefits in terms of improving financial services and compliance. In the financial sector, artificial intelligence algorithms are already trusted with transaction accounting, detection of fraudulent schemes, assessment of customer creditworthiness, resource planning and reporting. But the introduction of such technologies entails new risks. Analysis of recent research and publications. The following scientists have studied this question: D.W. Arner, J. Barberis, R.P. Buckley, Jon Truby, Rafael Brown, Andrew Dahdal, O. A. Baranov, O. V. Vinnyk, I. V. Yakovyuk, A. P. Voloshin, A. O. Shovkun, N.B. Patsuriia. Target of research. The aim of the article is to identify key strategic issues in developing mechanisms to ensure the effective implementation and use of artificial intelligence in the financial services market. Article’s main body. The paper investigates an important scientific and practical problem: the legal regulation of artificial intelligence used by financial services market participants. The legal risks associated with the use of artificial intelligence programs in this particular area are considered. The most pressing risks that targeted AI regulation should address concern fundamental rights, data confidentiality, security and effective performance, and accountability. The article argues that the best way to encourage a sustainable future for AI innovation in the financial sector is to support a proactive regulatory approach adopted before any financial harm occurs, and that it would be optimal for policymakers to intervene early with targeted, proactive but balanced regulatory approaches to AI technology in the financial sector that are consistent with emerging internationally accepted principles on AI governance. Conclusions and prospects for development. The adoption of rational regulations that encourage innovation whilst ensuring adherence to international principles will significantly reduce the likelihood that AI-related risks will develop into systemic problems. Leaving the financial sector only with voluntary codes of practice may encourage experimentation that in turn may result in innovative benefits, but it will certainly render customers vulnerable, institutions exposed and the entire financial system weakened.
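As a rough illustration of the kind of algorithm the abstract mentions for assessing customer creditworthiness, here is a minimal, purely hypothetical Python sketch using scikit-learn. The features, figures and decision threshold are invented; a production system would rest on far richer data, validation and the accountability safeguards the article calls for.

```python
# Hypothetical toy creditworthiness classifier. All data are invented and
# serve only to show the shape of the AI algorithms used for credit scoring.
import numpy as np
from sklearn.linear_model import LogisticRegression

# toy training set: [income (thousands), existing debt (thousands), years of credit history]
X = np.array([
    [80,  5, 10],
    [25, 30,  1],
    [60, 10,  7],
    [20, 40,  2],
    [95,  2, 15],
    [30, 25,  3],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = repaid past loans, 0 = defaulted

model = LogisticRegression().fit(X, y)

# Score a new applicant; an opaque automated threshold like the one below is
# precisely where questions of fundamental rights and accountability arise.
applicant = np.array([[55, 12, 5]])
probability_of_repayment = model.predict_proba(applicant)[0, 1]
print(f"estimated repayment probability: {probability_of_repayment:.2f}")
print("decision:", "approve" if probability_of_repayment >= 0.5 else "refer to manual review")
```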


Author(s):  
Ildar Begishev ◽  
Zarina Khisamova

The topics of artificial intelligence (AI) and the development of intelligent technologies are highly relevant and important in the modern digital world. Over its fifty years of history, AI has developed from a theoretical concept into an intelligent system capable of making independent decisions. The key advantages of using AI include, above all, the opportunity for mankind to get rid of routine work and to engage in creative activities that machines are not capable of. According to international consulting agencies, global business investments in digital transformation will reach 58 trillion USD by 2021, while global GDP will grow by 14 %, or 15.7 trillion USD, in connection with the active use of AI. However, its rapid evolution poses new threats connected with AI’s ability to self-develop, which the state and society have to counteract; specifically, they have to introduce normative regulation of AI activities and address the threats arising from its functioning. The authors present a thorough analysis of the opinions of leading researchers on the social aspects of AI’s functioning. They also state that the regulation of the status of AI as a legal personality, not to mention its ability to commit legally meaningful actions, remains an open question today. At present, the process of creating a criminological basis for the application of AI, connected with the development of new intelligent technologies, is underway; it requires actions and decisions at the state level aimed at preventing possible negative effects of AI’s use and reacting to them. The authors’ analysis of the history of AI’s emergence and development has allowed them to outline its key features that pose criminological risks, to determine the criminological risks of using AI and to present their own classification of such risks; in particular, they single out direct and indirect criminological risks of using AI. A detailed analysis has allowed the authors to identify an objective need for establishing special state agencies that will develop state policy in the sphere of normative legal regulation, control and supervision over the use of AI.

