Responsible Machine Learning for Ethical Artificial Intelligence in Business and Industry

Author(s):  
Deepak Saxena ◽  
Markus Lamest ◽  
Veena Bansal

Artificial intelligence (AI) systems have become a new reality of modern life. They have become ubiquitous in virtually all socio-economic activities in business and industry. Given the extent of AI's influence on our lives, it is imperative to focus our attention on the ethics of AI. While humans develop their moral and ethical frameworks through self-awareness and reflection, the current generation of AI lacks these abilities. Drawing from the concept of the human-AI hybrid, this chapter offers managerial and developer actions towards responsible machine learning for ethical artificial intelligence. The actions consist of privacy by design, development of explainable AI, identification and removal of inherent biases, and, most importantly, use of AI as a moral enabler. Applying these actions would not only help achieve ethical AI; it would also support the moral development of the human-AI hybrid.

2020 ◽  
Vol 26 (5) ◽  
pp. 2867-2891 ◽  
Author(s):  
Dylan Cawthorne ◽  
Aimee Robbins-van Wynsberghe

The use of drones in public healthcare is suggested as a means to improve efficiency under constrained resources and personnel. This paper begins by framing drones in healthcare as a social experiment where ethical guidelines are needed to protect those impacted while fully realizing the benefits the technology offers. Then we propose an ethical framework to facilitate the design, development, implementation, and assessment of drones used in public healthcare. Given the healthcare context, we structure the framework according to the four bioethics principles: beneficence, non-maleficence, autonomy, and justice, plus a fifth principle from artificial intelligence ethics: explicability. These principles are abstract, which makes operationalization a challenge; therefore, we suggest an approach of translation according to a values hierarchy whereby the top-level ethical principles are translated into relevant human values within the domain. The resulting framework is an applied ethics tool that facilitates awareness of relevant ethical issues during the design, development, implementation, and assessment of drones in public healthcare.


2018 ◽  
Vol 62 ◽  
pp. 729-754 ◽  
Author(s):  
Katja Grace ◽  
John Salvatier ◽  
Allan Dafoe ◽  
Baobao Zhang ◽  
Owain Evans

Advances in artificial intelligence (AI) will transform modern life by reshaping transportation, health, science, finance, and the military. To adapt public policy, we need to better anticipate these advances. Here we report the results from a large survey of machine learning researchers on their beliefs about progress in AI. Researchers predict AI will outperform humans in many activities in the next ten years, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans. These results will inform discussion amongst researchers and policymakers about anticipating and managing trends in AI. This article is part of the special track on AI and Society.


2021 ◽  
Vol 4 ◽  
Author(s):  
Lindsay Wells ◽  
Tomasz Bednarz

Research into Explainable Artificial Intelligence (XAI) has been increasing in recent years as a response to the need for increased transparency and trust in AI. This is particularly important as AI is used in sensitive domains with societal, ethical, and safety implications. Work in XAI has primarily focused on Machine Learning (ML) for classification, decision, or action, with detailed systematic reviews already undertaken. This review explores current approaches and limitations of XAI in the area of Reinforcement Learning (RL). From 520 search results, 25 studies (including 5 snowball sampled) are reviewed, highlighting visualization, query-based explanations, policy summarization, human-in-the-loop collaboration, and verification as trends in this area. Limitations in the studies are presented, particularly a lack of user studies, the prevalence of toy examples, and difficulties in providing understandable explanations. Areas for future study are identified, including immersive visualization and symbolic representation.
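Of the trends listed above, policy summarization is perhaps the easiest to illustrate. The following is a minimal, hypothetical sketch, not drawn from any of the reviewed studies: states are ranked by an importance score (here, the gap between the best and worst Q-values), and the agent's greedy action is shown for the most critical states. The Q-table and state names are invented for this example.

```python
# Hypothetical sketch of policy summarization for RL explainability:
# show the agent's action only in the states where acting matters most.

def summarize(q_table, k=2):
    """Return the k most important states paired with the greedy action."""
    def importance(qs):
        # A simple importance score: spread between best and worst action.
        return max(qs.values()) - min(qs.values())
    ranked = sorted(q_table, key=lambda s: importance(q_table[s]), reverse=True)
    return [(s, max(q_table[s], key=q_table[s].get)) for s in ranked[:k]]

# Toy Q-table: state -> {action: estimated value} (illustrative numbers).
q = {
    "near_cliff": {"left": 1.0, "right": -9.0},  # high-stakes state
    "open_field": {"left": 0.4, "right": 0.5},   # low-stakes state
    "near_goal":  {"left": 0.0, "right": 5.0},
}
print(summarize(q, k=2))  # [('near_cliff', 'left'), ('near_goal', 'right')]
```

A summary of a handful of such high-stakes states is far easier for a user to inspect than the full policy.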


2020 ◽  
Vol 8 ◽  
pp. 61-72
Author(s):  
Kara Combs ◽  
Mary Fendley ◽  
Trevor Bihl

Artificial Intelligence and Machine Learning (AI/ML) models are increasingly criticized for their “black-box” nature. Therefore, eXplainable AI (XAI) approaches to extract human-interpretable decision processes from algorithms have been explored. However, XAI research lacks an understanding of algorithmic explainability from a human factors perspective. This paper presents a repeatable human factors heuristic analysis for XAI with a demonstration on four decision tree classifier algorithms.
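As a rough illustration of why decision trees are a natural test bed for human-interpretable explanations, the sketch below walks a tiny hand-coded tree and records the comparisons that led to a prediction. The tree, feature names, and thresholds are invented for this example; they are not the four classifiers analyzed in the paper.

```python
# Illustrative only: extracting a human-readable decision path from a
# hard-coded decision tree (hypothetical features and thresholds).

# Internal node: (feature_name, threshold, left_subtree, right_subtree);
# leaf: a class label string.
TREE = ("petal_length", 2.45,
        "setosa",
        ("petal_width", 1.75,
         "versicolor",
         "virginica"))

def explain(tree, sample):
    """Walk the tree, collecting the comparisons that led to the prediction."""
    path = []
    node = tree
    while isinstance(node, tuple):
        feat, thresh, left, right = node
        value = sample[feat]
        if value <= thresh:
            path.append(f"{feat} = {value} <= {thresh}")
            node = left
        else:
            path.append(f"{feat} = {value} > {thresh}")
            node = right
    return node, path

label, path = explain(TREE, {"petal_length": 5.1, "petal_width": 2.0})
print(label)                 # virginica
print(" AND ".join(path))
```

The returned path is exactly the kind of artifact a human factors analysis can evaluate: a short conjunction of feature comparisons rather than an opaque score.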


Author(s):  
Paula C. Arias

Artificial Intelligence and Machine Learning are a result not only of technological advances but also of the exploitation of information and data, which has led to their expansion into almost all aspects of modern life, including law and its practice. Owing to the benefits of these technologies, such as efficiency, objectivity, and transparency, the trend is towards the integration of Artificial Intelligence and Machine Learning into the judicial system. This integration is advocated at all levels and, to date, has been achieved mostly through tools that assist the work of the judiciary. The "success" of this integration has led to the futuristic proposal of an automated court or an artificially intelligent judge.


Author(s):  
Lyudmila Vasilyevna Massel

The article analyzes a number of publications on this topic and summarizes the results of discussions at the conference "Knowledge, Ontologies, Theories" (Novosibirsk, November 8-12, 2021) and the round table "Artificial Intelligence in the Energy Sector" at ISEM SB RAS (December 22, 2021). The concepts of strong and weak AI, explainable AI, and trusted AI are considered. The reasons for the "hype" around machine learning, as well as its shortcomings, are analyzed. Cloud computing and edge computing technologies are compared. The concept of a "smart" digital twin, which integrates mathematical, informational, and ontological models with AI technologies, is defined. The ethical risks of AI and the prospects for applying AI methods and technologies in the energy sector are considered.


2021 ◽  
Vol 8 (2) ◽  
pp. 1-2
Author(s):  
Julkar Nine

Vision-based systems have become an integral part of autonomous driving. The autonomous driving industry has made large progress in environment perception as a result of improvements to vision-based systems. As the industry moves up the ladder of automation, safety features come more and more into focus. Different safety measures have to be taken into consideration in different driving situations. One of the major concerns at the highest level of autonomy is the ability to understand both internal and external situations. Most research on vision-based systems focuses on image processing and artificial intelligence techniques such as machine learning and deep learning. Because the current generation of technology is the generation of the "Connected World", there is no longer a lack of data. With the introduction of the Internet of Things, most of these connected devices are able to share and transfer data. Vision-based techniques depend heavily on such data.


Author(s):  
Alexey Ignatiev

Explainable artificial intelligence (XAI) represents arguably one of the most crucial challenges faced by the field of AI these days. Although the majority of approaches to XAI are heuristic in nature, recent work proposed the use of abductive reasoning to compute provably correct explanations for machine learning (ML) predictions. The proposed rigorous approach was shown to be useful not only for computing trustable explanations but also for validating explanations computed heuristically. It was also applied to uncover a close relationship between XAI and verification of ML models. This paper overviews the advances of the rigorous logic-based approach to XAI and argues that it is indispensable if trustable XAI is of concern.
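The core idea of an abductive explanation can be sketched in a few lines: a subset of feature assignments explains a prediction if every completion of the remaining features yields the same prediction. The toy classifier and brute-force entailment check below are illustrative assumptions; the actual logic-based approach encodes the model for a SAT/SMT reasoner rather than enumerating inputs.

```python
# Toy sketch of abductive (provably correct) explanations for a black-box
# classifier over binary features: find a subset-minimal set of fixed
# features that entails the prediction for every completion of the rest.
from itertools import product

def entails(clf, fixed, n, target):
    """True iff every input agreeing with `fixed` is classified as `target`."""
    free = [i for i in range(n) if i not in fixed]
    for bits in product([0, 1], repeat=len(free)):
        x = dict(fixed)
        x.update(zip(free, bits))
        if clf([x[i] for i in range(n)]) != target:
            return False
    return True

def abductive_explanation(clf, x):
    """Greedily drop features while the remainder still entails clf(x)."""
    n, target = len(x), clf(x)
    fixed = {i: x[i] for i in range(n)}
    for i in range(n):
        trial = {j: v for j, v in fixed.items() if j != i}
        if entails(clf, trial, n, target):
            fixed = trial
    # One deletion pass suffices for subset-minimality: entailment only
    # gets harder as the fixed set shrinks.
    return fixed

# Example: classifier fires iff feature 0 AND feature 2 are set.
clf = lambda x: int(x[0] and x[2])
print(abductive_explanation(clf, [1, 1, 1]))  # {0: 1, 2: 1}
```

Unlike heuristic attributions, the returned set carries a guarantee: no assignment to the dropped features can flip the prediction.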


Author(s):  
L. Kuladeep Kumar

Since the outbreak of the novel SARS-CoV-2, machine learning and artificial intelligence (ML/AI) have become powerful marketing tools for mitigating the impact on economic activities during the COVID-19 pandemic. The goal of ML/AI technology is to provide data and insights so that brands can understand what is working and what is not. This helps marketers understand and anticipate what sort of communications work and how to deliver them. These are therefore promising methods employed by various marketing providers. AI uses machine learning to adapt and make changes that impact marketing in real time. The exact impact of events such as the COVID-19 pandemic is hard to predict, but AI can help track and anticipate these circumstances, as well as provide the data needed to proceed. This chapter reviews recent studies that use such advanced technology from different perspectives and that address problems and challenges by applying such algorithms to assist marketing experts with real-world issues. The chapter also offers suggestions for researchers on ML/AI-based model design, and for marketing experts and policymakers on errors encountered while tackling the current pandemic.


2021 ◽  
Vol 6 (22) ◽  
pp. 36-50
Author(s):  
Ali Hassan ◽  
Riza Sulaiman ◽  
Mansoor Abdullateef Abdulgabber ◽  
Hasan Kahtan

Recent advances in artificial intelligence, particularly in the field of machine learning (ML), have shown that these models can be incredibly successful, producing encouraging results and leading to diverse applications. Despite the promise of artificial intelligence, without transparency of machine learning models, it is difficult for stakeholders to trust the results of such models, which can hinder successful adoption. This concern has sparked scientific interest and led to the development of transparency-supporting algorithms. Although studies have raised awareness of the need for explainable AI, the question of how to meet real users' needs for understanding AI remains unresolved. This study provides a review of the literature on human-centric Machine Learning and new approaches to user-centric explanations for deep learning models. We highlight the challenges and opportunities facing this area of research. The goal is for this review to serve as a resource for both researchers and practitioners. The study found that one of the most difficult aspects of implementing machine learning models is gaining the trust of end-users.
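One family of transparency-supporting algorithms surveyed in this area is perturbation-based attribution, which scores each input feature by how much the model's output changes when that feature is ablated. The sketch below is a minimal, hypothetical illustration using a fixed linear scorer as a stand-in for a trained network; it is not any specific method from the review.

```python
# Minimal sketch of perturbation-based feature attribution: the importance
# of feature i is the drop in model output when feature i is ablated.

def attribute(model, x, baseline=0):
    """Return per-feature importance as the output change under ablation."""
    ref = model(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline  # ablate feature i
        scores.append(ref - model(perturbed))
    return scores

# Toy "model": a fixed linear scorer standing in for a trained network.
weights = [3, -1, 0, 2]
model = lambda x: sum(w * v for w, v in zip(weights, x))

print(attribute(model, [1, 1, 1, 1]))  # [3, -1, 0, 2]
```

For a linear model the scores recover the weights exactly; for a deep model they give a local, user-facing account of which inputs drove a single prediction, which is precisely the kind of explanation end-users need before they can trust the model.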

