MODERN STAGE OF ARTIFICIAL INTELLIGENCE (AI) DEVELOPMENT AND APPLICATION OF AI METHODS AND SYSTEMS IN POWER ENGINEERING

Author(s):  
Lyudmila Vasilyevna Massel (Людмила Васильевна Массель)

The article analyzes a number of publications on this topic and summarizes the results of discussions at the conference "Knowledge, Ontologies, Theories" (Novosibirsk, November 8-12, 2021) and the Round Table "Artificial Intelligence in Energy" at ISEM SB RAS (December 22, 2021). The concepts of strong (general) and weak (narrow) AI, explainable AI, and trustworthy AI are considered. The reasons for the "hype" around machine learning, as well as its shortcomings, are analyzed. Cloud computing is compared with edge computing. The concept of a "smart" digital twin, integrating mathematical, informational, and ontological models with AI technologies, is defined. The ethical risks of AI and the prospects for applying AI methods and technologies in the energy sector are considered.
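
As an illustration of the "smart" digital twin idea, the following is a minimal sketch in which a simple physical model of a power asset is combined with a learned corrector. The class, the simplified thermal formula, and all parameter values are our own hypothetical stand-ins, not taken from the article.

```python
# Hypothetical sketch of a "smart" digital twin: a physical (mathematical)
# model of a transformer coupled with an ML-based corrector. The class, the
# simplified thermal formula, and all values are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TransformerTwin:
    rated_load_kw: float
    ml_corrector: Callable[[float, float], float]  # learned residual model

    def physics_temperature(self, load_kw: float, ambient_c: float) -> float:
        # Simplified thermal model: hot-spot rise grows with the load ratio.
        return ambient_c + 55.0 * (load_kw / self.rated_load_kw) ** 2

    def predict_temperature(self, load_kw: float, ambient_c: float) -> float:
        # "Smart" twin = physics-based estimate + data-driven correction.
        base = self.physics_temperature(load_kw, ambient_c)
        return base + self.ml_corrector(load_kw, ambient_c)

# A trained regressor would replace this zero correction in practice.
twin = TransformerTwin(rated_load_kw=1000.0, ml_corrector=lambda load, amb: 0.0)
print(twin.predict_temperature(load_kw=800.0, ambient_c=25.0))  # 60.2
```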

Author(s):  
Joseph Nyangon

The Paris Agreement on climate change requires nations to keep global warming within the 2°C carbon budget. Achieving this temperature target means stranding more than 80% of all proven fossil energy reserves, turning investments in such resources into stranded assets. At the implementation level, governments are experiencing technical, economic, and legal challenges in transitioning their economies to meet the 2°C commitment through their nationally determined contributions (NDCs), let alone striving for the 1.5°C carbon budget, which translates into a greenhouse gas (GHG) emissions gap. This chapter focuses on tackling the risks of stranded electricity assets using machine learning and artificial intelligence technologies. Stranded assets are not new in the energy sector; the physical impacts of climate change and the transition to a low-carbon economy have already rendered some electricity generation and storage assets redundant or obsolete. Low-carbon electricity systems, which come in variable and controllable forms, are essential to mitigating climate change. These systems present distinct opportunities for machine learning and artificial intelligence-powered techniques. This chapter considers the background to these issues. It discusses the asset-stranding discourse and its implications for the energy sector and related infrastructure. The chapter concludes by outlining an interdisciplinary research agenda for mitigating the risks of stranded assets in electricity investments.
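
As a hint of what the proposed research agenda could look like in practice, here is a minimal, purely hypothetical sketch of scoring assets for stranding risk with an off-the-shelf classifier. The features, data, and labels are invented for illustration; the chapter proposes the direction, not this model.

```python
# Hypothetical sketch: scoring electricity assets for stranding risk with a
# simple classifier. Features, data, and labels are invented for
# illustration; the chapter proposes the research direction, not this model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy features per asset: [age_years, carbon_intensity_tCO2_per_MWh,
#                          remaining_lifetime_years, marginal_cost_usd_MWh]
X = np.array([
    [35, 0.95, 10, 48],   # old coal plant
    [ 5, 0.35, 30, 40],   # newer gas plant
    [ 2, 0.00, 28, 12],   # wind farm
    [40, 1.00,  5, 55],   # very old coal plant
])
y = np.array([1, 0, 0, 1])  # 1 = stranded under a 2C-consistent policy path

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
candidate = np.array([[20, 0.80, 15, 50]])  # asset to evaluate
print("stranding risk:", clf.predict_proba(candidate)[0, 1])
```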


2021, Vol 4
Author(s):
Lindsay Wells, Tomasz Bednarz

Research into Explainable Artificial Intelligence (XAI) has been increasing in recent years in response to the need for greater transparency and trust in AI. This is particularly important as AI is used in sensitive domains with societal, ethical, and safety implications. Work in XAI has primarily focused on Machine Learning (ML) for classification, decision, or action, with detailed systematic reviews already undertaken. This review explores current approaches and limitations of XAI in the area of Reinforcement Learning (RL). From 520 search results, 25 studies (including 5 identified through snowball sampling) are reviewed, highlighting visualization, query-based explanations, policy summarization, human-in-the-loop collaboration, and verification as trends in this area. Limitations in the studies are presented, particularly the lack of user studies, the prevalence of toy examples, and the difficulty of providing understandable explanations. Areas for future study are identified, including immersive visualization and symbolic representation.
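
To make one of these trends concrete, below is a toy sketch of policy summarization: states where the action choice matters most (largest gap between the best and second-best action values) are selected, and the greedy action there is reported. The tabular Q-function, state names, and importance measure are illustrative assumptions, not drawn from any specific reviewed study.

```python
# Toy policy-summarization sketch: pick the states where the action choice
# matters most (largest Q-value gap) and report the greedy action there.
# The tabular Q-function and state names are invented for illustration.
import numpy as np

states = ["low_battery", "obstacle_ahead", "open_road", "at_goal"]
actions = ["stop", "turn", "go"]
Q = np.array([
    [0.9, 0.1, 0.0],   # low_battery: stopping clearly best
    [0.2, 0.8, 0.1],   # obstacle_ahead
    [0.1, 0.2, 0.9],   # open_road
    [0.5, 0.4, 0.45],  # at_goal: actions nearly equivalent
])

# Importance of a state = gap between best and second-best action values.
gaps = np.sort(Q, axis=1)[:, -1] - np.sort(Q, axis=1)[:, -2]
for idx in np.argsort(gaps)[::-1][:3]:  # three most "decisive" states
    print(f"in '{states[idx]}' the policy chooses '{actions[Q[idx].argmax()]}'"
          f" (importance {gaps[idx]:.2f})")
```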


2020, Vol 8, pp. 61-72
Author(s):
Kara Combs, Mary Fendley, Trevor Bihl

Artificial Intelligence and Machine Learning (AI/ML) models are increasingly criticized for their "black-box" nature. Therefore, eXplainable AI (XAI) approaches that extract human-interpretable decision processes from algorithms have been explored. However, XAI research lacks an understanding of algorithmic explainability from a human factors perspective. This paper presents a repeatable human factors heuristic analysis for XAI, with a demonstration on four decision tree classifier algorithms.
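
For readers unfamiliar with the model family involved, a minimal sketch of pulling a human-readable decision process out of a decision tree classifier follows; the dataset choice and parameters are ours, not the paper's.

```python
# Minimal sketch of extracting a human-readable decision process from a
# decision tree classifier, the model family the paper's heuristic analysis
# targets. Dataset choice (iris) and depth limit are ours, not the paper's.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# export_text renders the learned splits as nested if/else rules that a
# human factors evaluation can then assess for interpretability.
print(export_text(tree, feature_names=list(iris.feature_names)))
```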


Information, 2018, Vol 9 (12), 332
Author(s):
Paul Walton

Artificial intelligence (AI) and machine learning promise to make major changes to the relationship of people and organizations with technology and information. However, as with any form of information processing, they are subject to the limitations of information linked to the way information evolves in information ecosystems. These limitations are caused by the combinatorial challenges associated with information processing and by the tradeoffs driven by selection pressures. Analyzing these limitations explains some current difficulties with AI and machine learning and identifies the principles required to resolve them when implementing AI and machine learning in organizations. Applying the same type of analysis to artificial general intelligence (AGI) highlights some key theoretical difficulties and gives some indications of the challenges involved in resolving them.
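
The combinatorial challenge the paper points to can be made concrete with a back-of-the-envelope count (our illustration, not the paper's example): the number of distinct Boolean functions over n binary inputs is 2^(2^n), so any learner must prune the hypothesis space rather than enumerate it.

```python
# The combinatorial challenge in concrete terms (our illustration, not the
# paper's example): the number of distinct Boolean functions over n binary
# inputs is 2**(2**n), so exhaustive search over hypotheses collapses fast.
for n in range(1, 6):
    print(f"n={n}: {2 ** (2 ** n):,} possible Boolean functions")
# n=5 already yields 4,294,967,296 functions; selection pressures force
# learners to trade coverage for tractability, as the paper argues.
```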


Author(s):  
Alexey Ignatiev

Explainable artificial intelligence (XAI) represents arguably one of the most crucial challenges facing AI today. Although the majority of approaches to XAI are heuristic in nature, recent work has proposed the use of abductive reasoning for computing provably correct explanations of machine learning (ML) predictions. This rigorous approach was shown to be useful not only for computing trustable explanations but also for validating explanations computed heuristically. It was also applied to uncover a close relationship between XAI and the verification of ML models. This paper overviews the advances of the rigorous logic-based approach to XAI and argues that it is indispensable wherever trustable XAI is of concern.
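
A minimal sketch of the underlying idea, on a toy discrete model of our own invention: an abductive explanation is a subset-minimal set of feature assignments that provably entails the prediction. Here the entailment check is brute force; the rigorous approaches the paper surveys encode the ML model and delegate this check to SAT/SMT/MILP reasoners.

```python
# Sketch of the abductive-explanation idea on a tiny discrete model: find a
# subset-minimal set of feature assignments that provably entails the
# prediction, by exhaustively checking all completions of the free features.
# The model and domains are toy stand-ins of our own; rigorous XAI work
# replaces the brute-force check with formal reasoning over an encoded model.
from itertools import product

def model(x):
    # Toy "ML model": loan approved if income high, or employed with low debt.
    income, employed, low_debt = x
    return income == 1 or (employed == 1 and low_debt == 1)

DOMAINS = [(0, 1), (0, 1), (0, 1)]

def entails(fixed, instance, target):
    """True if every completion of the non-fixed features keeps the prediction."""
    free = [i for i in range(len(instance)) if i not in fixed]
    for values in product(*(DOMAINS[i] for i in free)):
        x = list(instance)
        for i, v in zip(free, values):
            x[i] = v
        if model(x) != target:
            return False
    return True

instance = (1, 0, 0)          # high income, unemployed, high debt
target = model(instance)      # True
expl = set(range(len(instance)))
for i in sorted(expl):        # greedily drop features that are not needed
    if entails(expl - {i}, instance, target):
        expl.remove(i)
print("minimal explanation fixes features:", sorted(expl))  # -> [0] (income)
```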


2019
Author(s):
Sergey Shumskiy (Сергей Шумский)

This book is about the nature of mind, both human and artificial, from the standpoint of the theory of machine learning. It addresses the problem of creating artificial general intelligence. The author shows how the basic mechanisms of our brain can be used to create the artificial brains of future robots. How will this ever-stronger artificial intelligence fit into our lives? What awaits us in the next 10-15 years? And how can someone who wants to take part in this new scientific revolution participate in developing a new science of mind?


Author(s):  
Deepak Saxena, Markus Lamest, Veena Bansal

Artificial intelligence (AI) systems have become a new reality of modern life. They have become ubiquitous in virtually all socio-economic activities in business and industry. Given the extent of AI's influence on our lives, it is imperative that we focus our attention on the ethics of AI. While humans develop their moral and ethical framework through self-awareness and reflection, the current generation of AI lacks these abilities. Drawing on the concept of the human-AI hybrid, this chapter offers managerial and developer actions toward responsible machine learning for ethical artificial intelligence. These actions consist of privacy by design, the development of explainable AI, the identification and removal of inherent biases, and, most importantly, the use of AI as a moral enabler. Applying these actions would not only further ethical AI; it would also support the moral development of the human-AI hybrid.
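
As one concrete, hypothetical illustration of the bias-identification action the chapter lists, the sketch below runs a simple demographic-parity check over model decisions; the data and tolerance are invented.

```python
# Hypothetical sketch of one action the chapter lists, bias identification:
# a demographic-parity check on model decisions. Data and tolerance invented.
import numpy as np

group = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])
approved = np.array([1, 1, 0, 0, 0, 1, 0, 1])  # model decisions per person

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
print(f"approval rate A={rate_a:.2f}, B={rate_b:.2f}, gap={abs(rate_a - rate_b):.2f}")
if abs(rate_a - rate_b) > 0.1:  # illustrative fairness tolerance
    print("demographic parity violated: review features and training data")
```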


2021, Vol 6 (22), pp. 36-50
Author(s):
Ali Hassan, Riza Sulaiman, Mansoor Abdullateef Abdulgabber, Hasan Kahtan

Recent advances in artificial intelligence, particularly in the field of machine learning (ML), have shown that ML models can be remarkably successful, producing encouraging results and leading to diverse applications. Despite this promise, however, without transparency in machine learning models it is difficult for stakeholders to trust their results, which can hinder successful adoption. This concern has sparked scientific interest and led to the development of transparency-supporting algorithms. Although studies have raised awareness of the need for explainable AI, the question of how to meet real users' needs for understanding AI remains unresolved. This study reviews the literature on human-centric machine learning and new approaches to user-centric explanations for deep learning models. We highlight the challenges and opportunities facing this area of research. The goal is for this review to serve as a resource for both researchers and practitioners. The study found that one of the most difficult aspects of implementing machine learning models is gaining the trust of end users.
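
As a small, self-contained example of the kind of explanation technique such reviews cover, here is a sketch of input-gradient saliency for a deep model; the network and input are placeholders of our own, not from the study.

```python
# Illustrative sketch: input-gradient saliency for a deep model. The network
# and input are random placeholders; real use would load a trained model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
x = torch.randn(1, 10, requires_grad=True)

logits = model(x)
predicted = logits.argmax(dim=1).item()
logits[0, predicted].backward()    # gradient of the winning logit w.r.t. the input

saliency = x.grad.abs().squeeze()  # larger magnitude = more locally influential
print("most influential input feature:", int(saliency.argmax()))
```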

