The ethical dimension of human–artificial intelligence collaboration

European View ◽  
2021 ◽  
Author(s):  
Michał Boni

The development of artificial intelligence (AI) has accelerated the digital revolution and has had an enormous impact on all aspects of life. Work patterns are starting to change, and cooperation between humans and machines, today between humans and various forms of AI, is becoming crucial. These new forms of human–AI collaboration bring advantages as well as some threats. It is necessary to base this collaboration on ethical principles that ensure human autonomy over technology. This will create trust, which is indispensable for the fruitful use of AI. It requires an adequate regulatory framework: one that is future proof, anticipates how AI will develop, takes a risk-based approach, and uses ex ante assessment as a tool to avoid unintended consequences. Furthermore, we need human oversight of the development of AI, supported by inter-institutional partnerships. But first we need to create the conditions for the development of AI digital literacy.

2021 ◽  
pp. 1-20
Author(s):  
Mitja KOVAC

The issue of super-intelligent artificial intelligence (AI) has begun to attract ever more attention in economics, law, sociology and philosophy studies. A new industrial revolution is being unleashed, and it is vital that lawmakers address the systemic challenges it is bringing while regulating its economic and social consequences. This paper sets out recommendations to ensure informed regulatory intervention covering potential uncontemplated AI-related risks. If AI evolves in ways unintended by its designers, the judgment-proof problem of existing legal persons engaged with AI might undermine the deterrence and insurance goals of classic tort law, which consequently might fail to ensure optimal risk internalisation and precaution. This paper also argues that, due to identified shortcomings, the debate on the different approaches to controlling hazardous activities boils down to a question of efficient ex ante safety regulation. In addition, it is suggested that it is better to place AI in the existing legal categories and not to create a new electronic legal personality.


Author(s):  
Yu. S. Kharitonova ◽  
V. S. Savina ◽  
F. Pagnini ◽  
...  

Introduction: This paper focuses on the legal problems of applying artificial intelligence technology to solving socio-economic problems. The convergence of two disruptive technologies, Artificial Intelligence (AI) and Data Science, has fundamentally transformed social relations in various spheres of human life. A transformational role was played by classical areas of artificial intelligence such as algorithmic logic, planning, knowledge representation, modeling, autonomous systems, multiagent systems, expert systems (ES), decision support systems (DSS), simulation, pattern recognition, image processing, and natural language processing (NLP), as well as by special areas such as representation learning, machine learning, optimization, statistical modeling, mathematical modeling, data analytics, knowledge discovery, complexity science, computational intelligence, event analysis, behavior analysis, social network analysis, deep learning, and cognitive computing. These AI and Big Data technologies are used in various business spheres to simplify and accelerate decision-making of differing kinds and significance. At the same time, self-learning algorithms create or reproduce inequalities between market participants and lead to discrimination of various kinds through algorithmic bias. Purpose: to define the areas and directions of legal regulation of algorithmic bias in the application of artificial intelligence, based on an analysis of Russian and foreign scientific concepts. Methods: empirical methods of comparison, description, and interpretation; theoretical methods of formal and dialectical logic; and special scientific methods such as the legal-dogmatic method and the method of interpretation of legal norms. 
Results: Artificial intelligence has many advantages (it allows us to improve creativity, services, and lifestyle, enhances security, and helps in solving various problems), but at the same time it raises numerous concerns due to its harmful effects on individual autonomy, privacy, and fundamental human rights and freedoms. Algorithmic bias exists even when the algorithm’s developer has no intention to discriminate, and even when the recommendation system does not accept demographic information as input: in the absence of such information, a thorough analysis of the similarities between products and users may lead the algorithm to recommend a product to a very homogeneous set of users. The identified problems and risks of AI bias should be taken into consideration by lawyers and developers and mitigated to the fullest extent possible, both when developing ethical principles and requirements and in the field of legal policy and law at the national and supranational levels. The legal community sees an opportunity to address algorithmic bias through various kinds of declarations, policies, and standards to be followed in the development, testing, and operation of AI systems. Conclusions: If left unaddressed, biased algorithms could lead to decisions with a disparate collective impact on specific groups of people even without any intent on the programmer’s part to make such a distinction. Studying the anticipated and unintended consequences of applying AI algorithms is especially necessary today because current public policy may be insufficient to identify, mitigate, and remedy the effects of such non-obvious bias on participants in legal relations. Solving the issues of algorithmic bias by technical means alone will not lead to the desired results. The world community recognizes the need to introduce standardization and develop ethical principles that would ensure proper decision-making with the application of artificial intelligence. 
It is necessary to create special rules that would restrict algorithmic bias. Regardless of the areas in which such violations are revealed, they share the standard features of unfair behavior by participants in social relations and can be qualified as violations of human rights or of fair competition. Algorithmic bias can be minimized by requiring that data enter circulation in a form that does not permit explicit or implicit segregation of different groups of society; that is, it should become possible to analyze only data without explicit group attributes, while keeping the data in their full diversity. As a result, the AI model would be built on an analysis of data from all socio-legal groups of society.


2020 ◽  
Vol 2 ◽  
pp. 53-73
Author(s):  
Sebastian Gałecki

Although the “frame problem” in philosophy was raised in the context of artificial intelligence, it is only an exemplification of a broader problem. It seems that contemporary ethical debates are not so much about conclusions, decisions, and norms as about what we might call a “frame”. Metaethics has always been the bridge between purely ethical principles (“this is good and it should be done”, “this is wrong and it should be avoided”) and broader (ontological, epistemic, anthropological, etc.) assumptions. One of the most interesting metaethical debates concerns the “frame problem”: is the ethical frame objective and self-evident, or objective but not self-evident? In classical philosophy, this problem takes the form of a debate on first principles: unprovable but necessary starting points for any practical reasoning. They constitute the invisible but essential frame of every moral judgment, decision, and action. The role of philosophy is not only to expose these principles but also to understand the nature of the moral frame.


2018 ◽  
Vol 20 (3) ◽  
pp. 67-72
Author(s):  
Colin Andrew Ford

This article reports on an issue of confidentiality faced by a community youth agency that provides access to digital technology for homeless or street-involved youth. Social media is the prevalent form of communication in displaced communities and presents certain ethical challenges as a result of creating and sharing media with potentially unintended audiences. Ensuring ethical practice is a key aspect of the ongoing process of developing digital literacy, which changes as technology evolves. It requires the facilitator’s focused attention to guide the youth in considering their digital footprint and the potential unintended consequences of their practices.


Author(s):  
Chris Reed

Using artificial intelligence (AI) technology to replace human decision-making will inevitably create new risks whose consequences are unforeseeable. This naturally leads to calls for regulation, but I argue that it is too early to attempt a general system of AI regulation. Instead, we should work incrementally within the existing legal and regulatory schemes which allocate responsibility, and therefore liability, to persons. Where AI clearly creates risks which current law and regulation cannot deal with adequately, then new regulation will be needed. But in most cases, the current system can work effectively if the producers of AI technology can provide sufficient transparency in explaining how AI decisions are made. Transparency ex post can often be achieved through retrospective analysis of the technology's operations, and will be sufficient if the main goal is to compensate victims of incorrect decisions. Ex ante transparency is more challenging, and can limit the use of some AI technologies such as neural networks. It should only be demanded by regulation where the AI presents risks to fundamental rights, or where society needs reassuring that the technology can safely be used. Masterly inactivity in regulation is likely to achieve a better long-term solution than a rush to regulate in ignorance. This article is part of a discussion meeting issue ‘The growing ubiquity of algorithms in society: implications, impacts and innovations'.


Author(s):  
Andriyana Andreeva ◽  
Galina Yolova

The study analyzes the influence of artificial intelligence on labor relations and the related need to adapt the legal institute of liability in labor law to the new social realities. Sources at the European level are studied, and the current aspects of liability in labor law at the national level are analyzed. On this basis, the challenges are outlined and the trends are identified for the doctrine, the European community, and legislation introducing a regulatory framework.

