Artificial Intelligence and Algorithmic Bias: Source, Detection, Mitigation, and Implications

2020 ◽  
pp. 39-63
Author(s):  
Runshan Fu ◽  
Yan Huang ◽  
Param Vir Singh
Artnodes ◽  
2020 ◽  
Author(s):  
Ruth West ◽  
Andrés Burbano

Explorations of the relationship between Artificial Intelligence (AI), the arts, and design have existed throughout the historical development of AI. We are currently witnessing exponential growth in the application of Machine Learning (ML) and AI across all domains of art (visual, sonic, performing, spatial, transmedia, audiovisual, and narrative), in parallel with activity in the field so rapid that publication cannot keep pace. In dialogue with our contemplation of this development in the arts, the authors in this issue answer with questions of their own. By questioning authorship and ethics, autonomy and automation, and by exploring the contribution of art to AI, algorithmic bias, control structures, machine intelligence in public art, the formalization of aesthetics, the production of culture, socio-technical dimensions, relationships to games and aesthetics, and the democratization of machine-based creative tools, the contributors provide a multifaceted view into crucial dimensions of the present and future of creative AI. In this Artnodes special issue, we pose the question: Does generative and machine creativity in the arts and design represent an evolution of “artistic intelligence,” or is it a metamorphosis of creative practice yielding fundamentally distinct forms and modes of authorship?


JAMIA Open ◽  
2020 ◽  
Vol 3 (1) ◽  
pp. 9-15 ◽  
Author(s):  
Colin G Walsh ◽  
Beenish Chaudhry ◽  
Prerna Dua ◽  
Kenneth W Goodman ◽  
Bonnie Kaplan ◽  
...  

Abstract: Effective implementation of artificial intelligence in behavioral healthcare delivery depends on overcoming challenges that are pronounced in this domain. Self and social stigma contribute to under-reported symptoms, and under-coding worsens ascertainment. Health disparities contribute to algorithmic bias. Lack of reliable biological and clinical markers hinders model development, and model explainability challenges impede trust among users. In this perspective, we describe these challenges and discuss design and implementation recommendations to overcome them in intelligent systems for behavioral and mental health.


Author(s):  
Tobias Röhl

The introduction of artificial intelligence (AI) and other tools based on algorithmic decision-making in education not only provides opportunities but can also lead to ethical problems, such as algorithmic bias and the deskilling of teachers. In this essay I show how these risks can be mitigated.


Author(s):  
Yu. S. Kharitonova ◽  
V. S. Savina ◽  
F. Pagnini ◽  
...  

Introduction: this paper focuses on the legal problems of applying artificial intelligence technology when solving socio-economic problems. The convergence of two disruptive technologies – Artificial Intelligence (AI) and Data Science – has created a fundamental transformation of social relations in various spheres of human life. A transformational role was played by classical areas of artificial intelligence such as algorithmic logic, planning, knowledge representation, modeling, autonomous systems, multiagent systems, expert systems (ES), decision support systems (DSS), simulation, pattern recognition, image processing, and natural language processing (NLP), as well as by special areas such as representation learning, machine learning, optimization, statistical modeling, mathematical modeling, data analytics, knowledge discovery, complexity science, computational intelligence, event analysis, behavior analysis, social network analysis, and also deep learning and cognitive computing. The mentioned AI and Big Data technologies are used in various business spheres to simplify and accelerate decision-making of different kinds and significance. At the same time, self-learning algorithms create or reproduce inequalities between participants in commerce and lead to various kinds of discrimination owing to algorithmic bias. Purpose: to define the areas and directions of legal regulation of algorithmic bias in the application of artificial intelligence from the legal perspective, based on an analysis of Russian and foreign scientific concepts. Methods: empirical methods of comparison, description, and interpretation; theoretical methods of formal and dialectical logic; special scientific methods such as the legal-dogmatic method and the method of interpretation of legal norms.
Results: artificial intelligence has many advantages (it allows us to improve creativity, services, and lifestyles, enhances security, and helps in solving various problems), but at the same time it causes numerous concerns due to harmful effects on individual autonomy, privacy, and fundamental human rights and freedoms. Algorithmic bias exists even when the algorithm developer has no intention to discriminate, and even when the recommendation system does not accept demographic information as input: even in the absence of this information, through thorough analysis of the similarities between products and users, the algorithm may recommend a product to a very homogeneous set of users. The identified problems and risks of AI bias should be taken into consideration by lawyers and developers and should be mitigated to the fullest extent possible, both when developing ethical principles and requirements and in the field of legal policy and law at the national and supranational levels. The legal community sees an opportunity to solve the problem of algorithmic bias through various kinds of declarations, policies, and standards to be followed in the development, testing, and operation of AI systems. Conclusions: if left unaddressed, biased algorithms could lead to decisions that would have a disparate collective impact on specific groups of people even without the programmer’s intent to make a distinction. The study of the anticipated and unintended consequences of applying AI algorithms is especially necessary today because current public policy may be insufficient to identify, mitigate, and remedy the effects of such non-obvious bias on participants in legal relations. Solving the issues of algorithmic bias by technical means alone will not lead to the desired results. The world community recognizes the need to introduce standardization and develop ethical principles, which would ensure proper decision-making with the application of artificial intelligence.
It is necessary to create special rules that would restrict algorithmic bias. Regardless of the areas where such violations are revealed, they share the standard features of unfair behavior by participants in social relations and can be qualified as violations of human rights or of fair competition. Minimization of algorithmic bias is possible by mandating that data enter circulation in a form that does not allow explicit or implicit segregation of various groups of society; that is, it should become possible to analyze only data stripped of explicit group attributes, in their full diversity. As a result, the AI model would be built on an analysis of data from all socio-legal groups of society.
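The paper's point that bias can survive the removal of explicit demographic inputs can be sketched with synthetic data. The example below is illustrative only and not from the paper: a decision rule that never sees the protected attribute (`group`) still produces sharply different approval rates between groups, because a correlated proxy feature (`zip_code`) encodes group membership. All names and probabilities here are invented for the demonstration.

```python
import random

random.seed(0)

# Synthetic population: 'group' is the protected attribute; it is
# generated for evaluation only and never shown to the decision rule.
# 'zip' is a proxy feature strongly correlated with group membership
# (a stand-in for residential segregation).
people = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    if group == "A":
        zip_code = "Z1" if random.random() < 0.9 else "Z2"
    else:
        zip_code = "Z2" if random.random() < 0.9 else "Z1"
    people.append({"group": group, "zip": zip_code})

# "Fairness through unawareness": the rule sees only the zip code,
# never the group attribute.
def approve(person):
    return person["zip"] == "Z1"

# Evaluate approval rates per (hidden) group.
rate = {}
for g in ("A", "B"):
    members = [p for p in people if p["group"] == g]
    rate[g] = sum(approve(p) for p in members) / len(members)

print(rate)  # approval rates diverge sharply despite the "blind" rule
```

Running this shows group A approved at roughly nine times the rate of group B, even though the rule is formally blind to group, which is why the paper argues that merely withholding explicit group attributes is not sufficient mitigation.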


2021 ◽  
Vol 10 (1) ◽  
pp. 49-70
Author(s):  
Michael Reskiantio Pabubung

We are in the age of artificial intelligence (AI). AI is everywhere. We know that it has great impacts on human progress, especially in healthcare, education, economics, and the environment. Our tasks become easier with the help of AI. Unfortunately, besides its enormous benefits, AI can also be a threat to humanity. What kind of threat, and how should theology contribute? This question is analyzed and answered from the point of view of moral theology, using the method of contextual theology. This essay finds that algorithmic bias in AI systems is a threat to humanity, especially to human dignity. Pope John Paul II, in Evangelium Vitae (1995), no. 3, says, “Every threat to human dignity and life must necessarily be felt in the Church’s very heart”. It is important and urgent to build a theology vis-à-vis AI. Theology cannot escape from AI, especially where it encounters human dignity. Looking at today’s impacts of AI, an analysis of John Paul II’s thoughts on human dignity leads to a meaningful point in Fides et Ratio (1998) about cooperation between theologians, philosophers, and scientists, which could be realized through dialogue.

