Taming Algorithms

Author(s):  
Tobias Röhl

The introduction of artificial intelligence (AI) and other tools based on algorithmic decision-making in education not only provides opportunities but can also lead to ethical problems, such as algorithmic bias and the deskilling of teachers. In this essay I will show how these risks can be mitigated.

Author(s):  
Yu. S. Kharitonova ◽  
V. S. Savina ◽  
F. Pagnini ◽  
...  

Introduction: this paper focuses on the legal problems of applying artificial intelligence technology to solving socio-economic problems. The convergence of two disruptive technologies, Artificial Intelligence (AI) and Data Science, has fundamentally transformed social relations in various spheres of human life. A transformational role was played by classical areas of artificial intelligence such as algorithmic logic, planning, knowledge representation, modeling, autonomous systems, multi-agent systems, expert systems (ES), decision support systems (DSS), simulation, pattern recognition, image processing, and natural language processing (NLP), as well as by special areas such as representation learning, machine learning, optimization, statistical modeling, mathematical modeling, data analytics, knowledge discovery, complexity science, computational intelligence, event analysis, behavior analysis, social network analysis, deep learning, and cognitive computing. These AI and Big Data technologies are used in various business spheres to simplify and accelerate decision-making of different kinds and significance. At the same time, self-learning algorithms create or reproduce inequalities between participants in circulation and lead to discrimination of all kinds due to algorithmic bias. Purpose: to define the areas and directions of legal regulation of algorithmic bias in the application of artificial intelligence from the legal perspective, based on an analysis of Russian and foreign scientific concepts. Methods: empirical methods of comparison, description, and interpretation; theoretical methods of formal and dialectical logic; special scientific methods such as the legal-dogmatic method and the method of interpretation of legal norms.
Results: artificial intelligence has many advantages (it enhances creativity, services, lifestyle, and security, and helps in solving various problems), but at the same time it causes numerous concerns due to harmful effects on individual autonomy, privacy, and fundamental human rights and freedoms. Algorithmic bias exists even when the algorithm developer has no intention to discriminate, and even when the recommendation system does not accept demographic information as input: even in the absence of this information, through thorough analysis of the similarities between products and users, the algorithm may recommend a product to a very homogeneous set of users. The identified problems and risks of AI bias should be taken into consideration by lawyers and developers and should be mitigated to the fullest extent possible, both when developing ethical principles and requirements and in the field of legal policy and law at the national and supranational levels. The legal community sees an opportunity to solve the problem of algorithmic bias through various kinds of declarations, policies, and standards to be followed in the development, testing, and operation of AI systems. Conclusions: if left unaddressed, biased algorithms could lead to decisions with a disparate collective impact on specific groups of people, even without the programmer's intent to make such a distinction. The study of the anticipated and unintended consequences of applying AI algorithms is especially necessary today because current public policy may be insufficient to identify, mitigate, and remedy the effects of such non-obvious bias on participants in legal relations. Solving the issues of algorithmic bias by technical means alone will not lead to the desired results. The world community recognizes the need to introduce standardization and develop ethical principles that would ensure proper decision-making with the application of artificial intelligence.
It is necessary to create special rules that would restrict algorithmic bias. Regardless of the areas where such violations are revealed, they share the standard features of unfair behavior by participants in social relations and can be qualified as violations of human rights or of fair competition. Algorithmic bias can be minimized by requiring that data be introduced into circulation in a form that does not allow explicit or implicit segregation of various groups of society; that is, it should become possible to analyze only data without any explicit group attributes, data in their full diversity. As a result, the AI model would be built on the analysis of data from all socio-legal groups of society.
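The requirement described above, circulating data stripped of explicit group attributes, can be sketched in a few lines. The record and field names below are entirely hypothetical, and, as the abstract itself notes, removing explicit attributes does not by itself eliminate proxy-based bias:

```python
# Hypothetical record with explicit group attributes (all names illustrative).
record = {
    "income": 52_000,
    "tenure": 8,
    "gender": "f",       # explicit group attribute
    "ethnicity": "a",    # explicit group attribute
}

PROTECTED = {"gender", "ethnicity"}

def strip_group_attributes(record):
    """Return the record without explicit group attributes,
    so a downstream AI model cannot condition on them directly."""
    return {k: v for k, v in record.items() if k not in PROTECTED}

features = strip_group_attributes(record)
print(features)  # {'income': 52000, 'tenure': 8}
```

Any legal rule of this kind would also have to address implicit proxies (e.g. postal codes correlated with group membership), which simple attribute removal leaves untouched.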


2020 ◽  
Author(s):  
Avishek Choudhury

Objective: The potential benefits of artificial intelligence-based decision support systems (AI-DSS) are well documented from a theoretical perspective and perceived by researchers, but there is a lack of evidence showing their influence on routine clinical practice and how they are perceived by care providers, since the effectiveness of AI systems depends on data quality, implementation, and interpretation. The purpose of this literature review is to analyze the effectiveness of AI-DSS in clinical settings and to understand their influence on clinicians' decision-making outcomes. Materials and Methods: This review protocol follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses reporting guidelines. Literature will be identified using a multi-database search strategy developed in consultation with a librarian. The proposed screening process consists of a title and abstract scan, followed by a full-text review by two reviewers to determine the eligibility of articles. Studies outlining the application of an AI-based decision support system in a clinical setting and its impact on clinicians' decision making will be included. A tabular synthesis of the general study details will be provided, as well as a narrative synthesis of the extracted data, organised into themes. Studies solely reporting AI accuracy, but not implemented in a clinical setting to measure influence on clinical decision making, were excluded from further review. Results: We identified 8 eligible studies that implemented AI-DSS in a clinical setting to facilitate decisions concerning prostate cancer, post-traumatic stress disorder, cardiac ailments, back pain, and others. Five (62.5%) of the 8 studies reported a positive outcome of AI-DSS. Conclusion: The systematic review indicated that AI-enabled decision support systems, when implemented in a clinical setting and used by clinicians, might not ensure enhanced decision making. 
However, very few studies confirm the claim that AI-based decision support systems can improve clinicians' decision-making abilities.


2020 ◽  
Vol 34 (10) ◽  
pp. 13849-13850
Author(s):  
Donghyeon Lee ◽  
Man-Je Kim ◽  
Chang Wook Ahn

In the real-time strategy (RTS) game StarCraft II, players need to know the consequences before making a decision in combat. We propose a combat outcome predictor which utilizes terrain information as well as squad information. To train the model, we generated a StarCraft II combat dataset by simulating diverse and large-scale combat situations. The overall accuracy of our model was 89.7%. Our predictor can be integrated into an artificial intelligence agent for RTS games as a short-term decision-making module.
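The core idea, combining squad statistics with terrain information into one feature vector and predicting a win probability, can be sketched as follows. The feature names, weights, and logistic form are illustrative assumptions, not the paper's actual architecture:

```python
import math

# Illustrative feature encoding for one combat situation (hypothetical fields).
def encode(squad_a, squad_b, terrain):
    """Combine squad statistics with terrain information into one vector."""
    return [
        squad_a["units"] - squad_b["units"],  # numeric advantage
        squad_a["dps"] - squad_b["dps"],      # damage-output advantage
        terrain["high_ground"],               # 1 if squad A holds high ground
        terrain["choke_width"],               # narrow chokes favor defenders
    ]

def predict_win_probability(features, weights, bias=0.0):
    """Logistic model: P(squad A wins) = sigmoid(w . x + b)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

x = encode({"units": 12, "dps": 90.0}, {"units": 10, "dps": 75.0},
           {"high_ground": 1, "choke_width": 2})
p = predict_win_probability(x, weights=[0.3, 0.02, 0.8, -0.1])
print(round(p, 3))  # ≈ 0.818: squad A is favored
```

In an agent, such a module would be queried before engaging: attack if the predicted win probability clears a threshold, otherwise retreat or reposition.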


2020 ◽  
Vol 114 ◽  
pp. 242-245
Author(s):  
Jootaek Lee

The term Artificial Intelligence (AI) has changed since it was first coined by John McCarthy in 1956. AI, whose origins some trace to Kurt Gödel's unprovable computational statements of 1931, is now called deep learning or machine learning. AI is defined as a computer machine with the ability to make predictions about the future and solve complex tasks using algorithms. AI algorithms are enhanced and made effective by big data capturing the present and the past, while still inevitably carrying human biases into models and equations. AI is also capable of making choices like humans, mirroring human reasoning. AI can help robots to efficiently repeat the same labor-intensive procedures in factories and can analyze historic and present data efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum of augmented intelligence relating to prediction, autonomous intelligence relating to decision making, automated intelligence for labor robots, and assisted intelligence for data analysis.


2021 ◽  
Vol 13 (11) ◽  
pp. 6038
Author(s):  
Sergio Alonso ◽  
Rosana Montes ◽  
Daniel Molina ◽  
Iván Palomares ◽  
Eugenio Martínez-Cámara ◽  
...  

The United Nations Agenda 2030 established 17 Sustainable Development Goals (SDGs) as a guideline to guarantee sustainable worldwide development. Recent advances in artificial intelligence and other digital technologies have already changed several areas of modern society, and they could be very useful in reaching these sustainable goals. In this paper we propose a novel decision-making model based on surveys that ranks recommendations on the use of different artificial intelligence and related technologies to achieve the SDGs. According to the surveys, our decision-making method is able to determine which of these technologies are worth investing in to lead new research to successfully tackle sustainability challenges.
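A minimal sketch of the survey-driven ranking idea: aggregate expert scores per technology and sort. The technology names, scores, and the plain-mean aggregation are illustrative assumptions; the paper's model is more elaborate:

```python
# Hypothetical survey responses scoring each technology's expected
# contribution to the SDGs on a 1-10 scale (names and values illustrative).
surveys = {
    "machine learning":            [8, 9, 7, 8],
    "natural language processing": [6, 7, 7, 5],
    "computer vision":             [7, 6, 8, 7],
}

def rank_technologies(surveys):
    """Rank technologies by mean survey score, highest first."""
    means = {tech: sum(s) / len(s) for tech, s in surveys.items()}
    return sorted(means.items(), key=lambda kv: kv[1], reverse=True)

for tech, score in rank_technologies(surveys):
    print(f"{tech}: {score:.2f}")
```

Real survey aggregation would typically also weight respondents' expertise and handle disagreement, which a plain mean ignores.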


Philosophies ◽  
2019 ◽  
Vol 4 (2) ◽  
pp. 24
Author(s):  
Steven Umbrello ◽  
Stefan Lorenz Sorgner

Strong arguments have been formulated that the computational limits of disembodied artificial intelligence (AI) will, sooner or later, become a problem that needs to be addressed. Similarly, convincing cases for how embodied forms of AI can exceed these limits make for worthwhile research avenues. This paper discusses how embodied cognition brings with it other forms of information integration and decision-making consequences that typically involve discussions of machine cognition and, similarly, machine consciousness. N. Katherine Hayles's novel conception of nonconscious cognition in her analysis of the human cognition-consciousness connection is discussed in relation to how nonconscious cognition can be envisioned and exacerbated in embodied AI. Similarly, this paper offers a way of understanding the concept of suffering that differs from the conventional sense of attributing it to either a purely physical state or a conscious state, instead grounding at least one type of suffering in this form of cognition.

