Explainable AI
Recently Published Documents


TOTAL DOCUMENTS: 397 (last five years: 378)

H-INDEX: 10 (last five years: 8)

Author(s):  
Людмила Васильевна Массель

The article analyzes a number of publications on this topic and summarizes the results of discussions at the conference "Knowledge, Ontology, Theory" (Novosibirsk, November 8-12, 2021) and at the Round Table "Artificial Intelligence in Energy" held at ISEM SB RAS (December 22, 2021). The following concepts are considered: artificial general intelligence (AGI), strong and narrow (weak) AI, explainable AI, and trustworthy AI. The reasons for the "hype" around machine learning and its shortcomings are analyzed, and cloud computing technologies are compared with edge computing technologies. The concept of a "smart" digital twin, which integrates mathematical, informational, and ontological models with AI technologies, is defined. Finally, the ethical risks of AI and the prospects for applying AI methods and technologies in the energy sector are considered.


2022 ◽  
Author(s):  
Tahmina Zebin ◽  
Shahadate Rezvy ◽  
Yuan Luo

Over the past few years, the Domain Name System (DNS) has remained a prime target for hackers, as it enables them to gain first entry into networks and to access data for exfiltration. Although the DNS over HTTPS (DoH) protocol has desirable properties for internet users, such as privacy and security, it also creates a problem: network administrators are prevented from detecting suspicious network traffic generated by malware and malicious tools. To support their efforts in maintaining a secure network, in this paper we have implemented an explainable AI solution using a novel machine learning framework. We have used the publicly available CIRA-CIC-DoHBrw-2020 dataset to develop an accurate solution for detecting and classifying DNS over HTTPS attacks. Our proposed balanced and stacked Random Forest achieved very high precision (99.91%), recall (99.92%), and F1 score (99.91%) for the classification task at hand. Using explainable AI methods, we have additionally highlighted the underlying feature contributions in an attempt to provide transparent and explainable results from the model.
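
The abstract reports only the headline metrics, so the following is a minimal sketch of the general pattern it describes: a class-balanced, stacked Random Forest followed by a post-hoc feature-contribution step. It is not the authors' implementation; the synthetic stand-in data, the particular two-forest stack, and the use of permutation importance in place of whatever explainability method the paper applies are all assumptions.

```python
# Illustrative sketch only: a class-balanced, stacked Random Forest with a
# post-hoc feature-contribution explanation. The cited paper works on
# CIRA-CIC-DoHBrw-2020 flow features; a synthetic dataset stands in here.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.inspection import permutation_importance

# Placeholder for flow-level features extracted from (malicious vs. benign) DoH traffic.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1],
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.2, random_state=42)

# Two class-balanced forests stacked under a logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[
        ("rf_deep", RandomForestClassifier(n_estimators=200,
                                           class_weight="balanced_subsample",
                                           random_state=0)),
        ("rf_shallow", RandomForestClassifier(n_estimators=200, max_depth=8,
                                              class_weight="balanced_subsample",
                                              random_state=1)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_train, y_train)
print(classification_report(y_test, stack.predict(X_test), digits=4))

# Post-hoc explanation: rank features by how much shuffling each one
# degrades the stacked model's test-set performance.
imp = permutation_importance(stack, X_test, y_test, n_repeats=10, random_state=0)
for i in imp.importances_mean.argsort()[::-1][:5]:
    print(f"feature_{i}: {imp.importances_mean[i]:.4f}")
```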


Author(s):  
Joaquín Borrego-Díaz ◽  
Juan Galán Páez

Alongside the particular need to explain the behavior of black-box artificial intelligence (AI) systems, there is a general need to explain the behavior of any type of AI-based system (explainable AI, XAI), or of any complex system that integrates this type of technology, because of the importance of its economic, political, or industrial impact on rights. The unstoppable development of AI-based applications in sensitive areas has led to what could be seen, from a formal and philosophical point of view, as a sort of crisis in the foundations, which requires both providing models of the fundamentals of explainability and discussing the advantages and disadvantages of different proposals. The need for foundations is also linked to the permanent challenge that the notion of explainability represents in the Philosophy of Science. The paper aims to elaborate a general theoretical framework for discussing the foundational characteristics of explaining, as well as how solutions (events) would be justified (explained). The approach, epistemological in nature, is based on a phenomenological approach to the reconstruction of complex systems (which encompasses complex AI-based systems). The formalized perspective is close to ideas from argumentation and induction (as learning). The soundness and limitations of the approach are addressed from the knowledge representation and reasoning paradigm and, in particular, from the Computational Logic point of view. With regard to the latter, the proposal is intertwined with several related notions of explanation coming from the Philosophy of Science.
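
The formal notion of justification sketched in this abstract is close to the classical logic-based reading of explanation, on which a hypothesis explains an observation when, together with the background theory, it entails that observation. The toy sketch below only illustrates that general idea with invented propositional rules and facts; it is not the authors' framework.

```python
# Toy rendition of the logic-based reading of "explanation": given background
# Horn rules T and an observation O, a candidate hypothesis H explains O if
# T together with H entails O. The rules and facts below are invented.

def entails(rules, facts, goal):
    """Forward chaining over propositional Horn rules (body_set, head)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return goal in known

# Background theory T: a deep net without a surrogate model is opaque;
# opaque decisions in a sensitive domain require an explanation.
T = [
    ({"deep_net", "no_surrogate"}, "opaque"),
    ({"opaque", "sensitive_domain"}, "needs_explanation"),
]
observation = "needs_explanation"
candidate_hypotheses = [
    {"deep_net", "no_surrogate", "sensitive_domain"},  # explains the observation
    {"deep_net"},                                      # too weak to explain it
]

for H in candidate_hypotheses:
    print(f"{sorted(H)} explains '{observation}': {entails(T, H, observation)}")
```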


2022 ◽  
Vol 9 (1) ◽  
pp. 205395172110696
Author(s):  
Pascal D König ◽  
Stefan Wurster ◽  
Markus B Siewert

A major challenge with the increasing use of Artificial Intelligence (AI) applications is to manage the long-term societal impacts of this technology. Two central concerns that have emerged in this respect are that the optimized goals behind the data processing of AI applications usually remain opaque and that the energy footprint of their data processing is growing quickly. This study therefore explores how much people value the transparency and environmental sustainability of AI, using the example of personal AI assistants. The results from a choice-based conjoint analysis with a sample of more than 1,000 respondents from Germany indicate that people hardly care about the energy efficiency of AI; and while they do value transparency through explainable AI, this added value of an application is offset already by minor costs. The findings shed light on what kinds of AI people are likely to demand and have important implications for policy and regulation.
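
For readers unfamiliar with the method, the sketch below simulates a choice-based conjoint task and recovers attribute part-worths with a conditional logit, which for paired choices reduces to a logistic regression on attribute differences. The attributes, utilities, and data are invented for illustration and are not the study's; they only qualitatively echo the reported pattern (price and transparency matter, energy efficiency barely does).

```python
# Minimal sketch of part-worth estimation in a choice-based conjoint design.
# Respondents repeatedly pick one of two hypothetical AI-assistant profiles
# differing in price, transparency, and energy efficiency. All data simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_choices = 20000
# Attribute columns: [price level, transparent (0/1), energy-efficient (0/1)]
profile_a = np.column_stack([rng.integers(0, 5, n_choices),
                             rng.integers(0, 2, n_choices),
                             rng.integers(0, 2, n_choices)])
profile_b = np.column_stack([rng.integers(0, 5, n_choices),
                             rng.integers(0, 2, n_choices),
                             rng.integers(0, 2, n_choices)])

# "True" part-worths used only to simulate choices: price hurts, transparency
# helps moderately, efficiency barely matters.
true_beta = np.array([-0.8, 0.5, 0.05])
diff = profile_a - profile_b
p_choose_a = 1.0 / (1.0 + np.exp(-diff @ true_beta))
chose_a = rng.random(n_choices) < p_choose_a

# Conditional logit for paired choices: logistic regression on attribute
# differences, with no intercept.
model = LogisticRegression(fit_intercept=False).fit(diff, chose_a)
for name, b in zip(["price", "transparency", "efficiency"], model.coef_[0]):
    print(f"estimated part-worth for {name}: {b:+.2f}")
```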

