Algorithmic law – a legal framework for Artificial Intelligence in Latin America

Author(s):  
Maximiliano Marzetti
2021 ◽  
pp. 34-41
Author(s):  
Mohamed Hamada ◽  
Daniya Temirkhanova ◽  
Diana Serikbay ◽  
Sanzhar Salybekov ◽  
Saltanat Omarbek

The main objective of the research is to identify the effectiveness of artificial intelligence in the business sphere of Kazakhstan. The urgency of this problem stems from the fact that the Kazakhstani market for artificial intelligence is at an early stage of development. The main obstacles to the introduction of artificial intelligence are the unpreparedness of managers of small and medium-sized businesses to apply artificial intelligence technologies and, of course, the high cost of implementation. The study proceeds from the key thesis that business in Kazakhstan is striving for digital transformation. Its goals are to determine the attitude and degree of readiness of Kazakhstani business for the implementation and practical application of artificial intelligence; to describe cases of the use of artificial intelligence by Kazakhstani business; to identify the main questions that arise for business at this stage; to study the legal aspects of using artificial intelligence in business and to present the overall picture of compliance or non-compliance of the existing legal framework with the goals and objectives of artificial intelligence development; and to provide recommendations for eliminating existing barriers and encouraging businesses to adopt the technology. Within the framework of this study, the concept of artificial intelligence is defined in its broadest sense, as a set of technologies for processing various types of data and information, in particular those capable of interpreting such data, extracting knowledge and using it to achieve certain goals.


2021 ◽  
Vol 34 (02) ◽  
pp. 964-972
Author(s):  
Olga Vladimirovna Markova ◽  
Ekaterina Yevgenievna Listopad ◽  
Aleksandr Vladimirovich Shelygov ◽  
Alexander Grigorievich Fedorov ◽  
Igor Valentinovich Kiselevich

The article deals with the economic and legal aspects of the innovative activity of enterprises in the context of the digital economy. The authors establish that the innovative activity of enterprises also includes the development of artificial intelligence and robotics, and that under current conditions, when artificial intelligence technologies are created and used, ensuring national security in the digital environment becomes extremely important. In this case, the strategic goal of ensuring information security is to protect the vital interests of the individual and society against internal and external threats associated with the use of information technologies for purposes contrary to civil law. It is argued that innovations will increase the investment attractiveness of business, maintain a balance between creative freedom and internal control measures, support self-regulation in the field of digital technologies, and foster the development of a unified legal framework in the economic space.


2019 ◽  
Vol 5 (2) ◽  
pp. 75-91
Author(s):  
Alexandre Veronese ◽  
Alessandra Silveira ◽  
Amanda Nunes Lopes Espiñeira Lemos

The article discusses the ethical and technical consequences of artificial intelligence (AI) applications and how citizens can use the European Union data protection legal framework to defend themselves against them. This question sits within the broader European Union Digital Single Market policy, which is concerned with how AI relates to personal data protection. The article has four sections. The first introduces the main issue by describing the importance of AI applications in the contemporary world. The second describes some fundamental concepts of AI. The third analyses the ongoing policies for AI in the European Union and the Council of Europe proposal on ethics applicable to AI in judicial systems. The fourth section is the conclusion, which discusses the current legal mechanisms for protecting citizens against fully automated decisions, based on European Union law and in particular the General Data Protection Regulation. The conclusion is that European Union law is still under construction when it comes to providing effective protection to citizens against automated inferences that are unfair or unreasonable.


2021 ◽  
Vol 3 (1) ◽  
pp. 35-47
Author(s):  
Lambrini Seremeti ◽  
Ioannis Kougias

Nowadays, artificial intelligence entities operate autonomously and actively participate in everyday social activities. From a macro-perspective, they play the role of mediator between people and their actions, as components of the fundamental structure of every social activity. From a micro-perspective, they can be considered as fixed critical points whose hypostasis is not subject to the established legal framework. A key point is that embedding artificial intelligence entities in everyday activities may maximise legal uncertainty at both the macro- and micro-level, as well as in the interim phase, i.e., the switch between the two levels. In this paper, we adapt a well-known concept from Category Theory, namely the pushout, in order to approximate the core legal interpretation framework of such activities, by considering each level as an open system. The purpose of using Systems Theory in combination with Category Theory is to introduce a mathematical approach for uniquely interpreting complex legal social activities and to show that this novel area of artificially enhanced activities is of prime practical importance and significance to law and computer science practitioners.
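For readers unfamiliar with the construction, the following is a minimal sketch of the standard categorical pushout, not the authors' specific model; the labels are illustrative assumptions, with Z standing for the shared interim phase and X, Y for the macro- and micro-level systems glued along it.

\[
\begin{array}{ccc}
Z & \xrightarrow{\;f\;} & X \\
\Big\downarrow{\scriptstyle g} & & \Big\downarrow{\scriptstyle i_X} \\
Y & \xrightarrow{\;i_Y\;} & P \;=\; X \sqcup_Z Y
\end{array}
\]

The square commutes, $i_X \circ f = i_Y \circ g$, and $P$ is universal: for any object $Q$ with morphisms $j_X : X \to Q$ and $j_Y : Y \to Q$ satisfying $j_X \circ f = j_Y \circ g$, there exists a unique $u : P \to Q$ with $u \circ i_X = j_X$ and $u \circ i_Y = j_Y$. Intuitively, $P$ is the smallest system containing both levels, identified along their common interim part.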


Law and World ◽  
2021 ◽  
Vol 7 (5) ◽  
pp. 8-13

In the digital era, technological advances have brought innovative opportunities. Artificial intelligence is a real instrument for automating routine tasks in different fields (healthcare, education, the justice system, foreign and security policies, etc.). AI is evolving very fast. Robots, for instance, are re-programmable multi-purpose devices designed for the handling of materials and tools, for the processing of parts, or as specialized devices using varying programmed movements to complete a variety of tasks.1 Regardless of these opportunities, artificial intelligence may pose risks and challenges. Because of the nature of AI, ethical and legal questions arise, especially in terms of protecting human rights. The power of artificial intelligence lies in analysing big data more effectively than a human being. On the one hand, this causes the loss of traditional jobs; on the other hand, it promotes the creation of digital equivalents of workers capable of performing automated routine tasks. “Artificial intelligence must serve people, and therefore artificial intelligence must always comply with people’s rights,” said Ursula von der Leyen, President of the European Commission.2 The EU has a clear vision of the development of the legal framework for AI. In light of the above, the article aims to explore the legal aspects of artificial intelligence based on the European experience. This is also essential in the context of Georgia’s European integration: analysing the legal approaches of the EU will promote the approximation of Georgian legislation to EU standards in this field and will help define AI’s role in the effective digital transformation of the public and private sectors in Georgia.


2021 ◽  
Vol 129 ◽  
pp. 05002
Author(s):  
Zanda Davida

Research background: The first notable early chatbots were created in the sixties, but the growing use of artificial intelligence (AI) has powered them significantly. Studies show that chatbots are created and used by government and business mostly for consumer service and marketing. The new proposal for an Artificial Intelligence Act aims to promote the uptake of AI and address the risks associated with certain uses of the technology. However, the act contains only a minimum transparency obligation for some specific AI systems, such as chatbots. Purpose of the article: In light of this issue, the article discusses how existing European Union (EU) consumer law is equipped to deal with situations in which the use of chatbots can pose risks of manipulation, aggressive commercial practices, intrusion into privacy, exploitation of a consumer’s vulnerabilities, and algorithmic decision-making based on biased or discriminatory results. Methods: The article analyses the legal framework, compares guidance documents and countries’ experiences, and studies the results of consumer behaviour research and scientific articles. Findings & Value added: The article reveals several gaps in current EU consumer law and discusses the flaws of the proposed legislation (particularly the proposal for an Artificial Intelligence Act) regarding relations between business and consumers.
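To make the minimum transparency obligation concrete, here is a hypothetical Python sketch, not drawn from the proposal's text or from the article, of a chatbot wrapper that discloses its artificial nature to the consumer before the first reply; the class name, the DISCLOSURE wording, and the respond callable are illustrative assumptions.

class DisclosedChatbot:
    """Wraps any response function and prepends a one-time AI disclosure."""

    DISCLOSURE = ("Please note: you are chatting with an automated AI assistant, "
                  "not a human agent.")

    def __init__(self, respond):
        # 'respond' is any callable mapping a user message to a reply string
        # (a stand-in for a real, hypothetical dialogue engine).
        self._respond = respond
        self._disclosed = False

    def reply(self, user_message: str) -> str:
        # Prepend the disclosure to the very first reply only.
        prefix = ""
        if not self._disclosed:
            prefix = self.DISCLOSURE + "\n"
            self._disclosed = True
        return prefix + self._respond(user_message)

# Illustrative use:
# bot = DisclosedChatbot(lambda msg: "Thank you, we received: " + msg)
# print(bot.reply("Where is my order?"))  # first reply carries the disclosure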


2020 ◽  
Vol 13 (3) ◽  
pp. 256
Author(s):  
Roman Dremliuga ◽  
Natalia Prisekina

This article focuses on the problems of the application of AI as a tool of crime from the perspective of the norms and principles of criminal law. It discusses how the legal framework for determining culpability could be applied to offenses committed with the use of AI. The article presents an analysis of the current state of criminal law for both intentional and negligent offenses, as well as a comparative analysis of these two forms of culpability. Part of the work is devoted to culpability in intentional crimes. The analysis demonstrates that law enforcers and legislators should reconsider the approach to determining culpability where artificial intelligence systems are used to commit intentional crimes. Because an artificial intelligence system, in some sense, has its own designed cognition and will, courts cannot rely on the traditional concept of culpability in intentional crimes, in which intent is determined from the actions of the criminal. Criminal negligence is reviewed from the perspective of the developer’s criminal liability, the developer being a person who may influence, and anticipate, the harm caused by the AI system he or she created. If product developers were free from any form of criminal liability for harm caused by their products, highly negative social consequences would follow; yet a situation in which a developer must take into account every potential harm caused by the product also has negative social consequences. The authors conclude that a balance between these two extremes must be found, and that the current legal framework does not meet the goal of determining culpability for crimes in which AI is a tool.


2020 ◽  
pp. 1-6
Author(s):  
Victor A. Beker

In line with the Fundamental Principles of Official Statistics, to produce valid and reliable statistics governments need to provide the legal framework and resources that allow statisticians to produce the required statistics, without interference, using the best available methodology and techniques from the most suitable sources of information. In Latin America and the Caribbean, the colonial past affected and still affects the production of statistics. During the colonial period statistics were of limited scope and use, mostly serving the interests of the colonial powers. After independence, statistics in Latin America became an instrument for development only after World War II, while in the Caribbean the newly independent nations had to adjust the colonial system to national sovereignty. Conflicts between statistical independence and administrative desire and convenience did occur, and occasionally statisticians came under pressure to modify results to serve administrative or political purposes. An extreme case of government interference with statistical activities is that of Argentina since 2007. The gross manipulation of the Consumer Price Index (CPI) that began at that time was aimed at concealing the rise in inflation at the beginning of that year. Statisticians in the National Statistical Office who refused to take part in the forgery were demoted or dismissed, while others resigned. The alteration of the CPI severely affected other statistical indices. Private consultants and researchers were subject to criminal prosecution and punished with hefty fines for the “crime” of publishing their own price estimates. Although in most cases the judicial system acquitted them, this happened only years later, and some researchers are still awaiting final judgement. In spite of the reaction of public opinion and the world statistical community, nothing has changed substantially to date. The paper concludes with some recommendations, inspired by this sad experience, to safeguard the integrity of statistics.

