Explainable Artificial Intelligence For Crypto Asset Allocation

2021
Author(s):
Golnoosh Babaei
Paolo Giudici

2021
pp. 155005942110636
Author(s):
Francesco Carlo Morabito
Cosimo Ieracitano
Nadia Mammone

An explainable Artificial Intelligence (xAI) approach is proposed to longitudinally monitor subjects affected by Mild Cognitive Impairment (MCI) using high-density electroencephalography (HD-EEG). To this end, a group of MCI patients was enrolled at IRCCS Centro Neurolesi Bonino Pulejo of Messina (Italy) within a follow-up protocol that included two evaluation steps: T0 (first evaluation) and T1 (three months later). At T1, four MCI patients had converted to Alzheimer’s Disease (AD) and were included in the analysis, as the goal of this work was to use xAI to detect individual changes in EEGs possibly related to the degeneration from MCI to AD. The proposed methodology consists in mapping segments of HD-EEG into channel-frequency maps by means of the power spectral density. These maps are used as input to a Convolutional Neural Network (CNN), trained to label each map as “T0” (MCI state) or “T1” (AD state). Experimental results showed high intra-subject classification performance (accuracy up to 98.97%; 95% confidence interval: 98.68–99.26). Subsequently, the explainability of the proposed CNN is explored via a Grad-CAM approach. The procedure made it possible to detect which EEG channels (i.e., head regions) and frequency ranges (i.e., sub-bands) were most active in the progression to AD. The xAI analysis showed that the main information is contained in the delta sub-band and that, limited to the analyzed dataset, the most relevant areas are the left-temporal and central-frontal lobe for Sb01, the parietal lobe for Sb02, the left-frontal lobe for Sb03, and the left-frontotemporal region for Sb04.
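As a rough illustration of the pipeline this abstract describes, the sketch below maps a (synthetic) multichannel EEG segment to a channel-frequency map via Welch’s power spectral density estimate, classifies it with a small CNN, and produces a Grad-CAM heat map over the input. All specifics here (the 512 Hz sampling rate, 64 electrodes, the network depth, and helper names such as channel_frequency_map and grad_cam) are illustrative assumptions, not the authors’ implementation.

```python
# Hypothetical sketch of the abstract's PSD-map -> CNN -> Grad-CAM pipeline.
# Sampling rate, channel count, and architecture are assumptions for illustration.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.signal import welch

FS = 512          # assumed HD-EEG sampling rate (Hz)
N_CHANNELS = 64   # assumed number of electrodes
N_FREQS = 32      # assumed number of retained low-frequency bins

def channel_frequency_map(segment, fs=FS, n_freqs=N_FREQS):
    """Map an EEG segment (channels x samples) to a channel-frequency
    image via Welch's power spectral density estimate."""
    freqs, psd = welch(segment, fs=fs, nperseg=fs, axis=-1)
    psd = psd[:, :n_freqs]                       # keep low-frequency bins
    psd = np.log10(psd + 1e-12)                  # compress dynamic range
    psd = (psd - psd.mean()) / (psd.std() + 1e-12)
    return psd.astype(np.float32)                # (channels, freq bins)

class SmallCNN(nn.Module):
    """Minimal CNN labelling a PSD map as T0 (MCI) or T1 (AD)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * (N_CHANNELS // 4) * (N_FREQS // 4), 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def grad_cam(model, x, target_class):
    """Grad-CAM over the last conv layer: which channel-frequency
    regions drive the chosen class decision."""
    acts, grads = [], []
    layer = model.features[-3]  # last Conv2d in the feature stack
    h1 = layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    logits = model(x)
    model.zero_grad()
    logits[0, target_class].backward()
    h1.remove(); h2.remove()
    a, g = acts[0], grads[0]
    weights = g.mean(dim=(2, 3), keepdim=True)    # global-average the gradients
    cam = F.relu((weights * a).sum(dim=1))        # weighted sum of feature maps
    cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                        mode="bilinear", align_corners=False)
    return cam.squeeze().detach().numpy()         # (channels, freq bins)

# Usage: one synthetic segment stands in for a real HD-EEG recording.
segment = np.random.randn(N_CHANNELS, 4 * FS)     # 4 s of fake EEG
x = torch.from_numpy(channel_frequency_map(segment))[None, None]
model = SmallCNN()
heatmap = grad_cam(model, x, target_class=1)      # explain the "T1" label
print(heatmap.shape)                              # -> (64, 32)
```

Averaging such heat maps over many correctly classified segments, then aggregating rows (electrodes) by scalp region and columns by EEG sub-band, would yield region/sub-band relevance summaries of the kind reported above.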


2021
Author(s):  
J. Eric T. Taylor ◽  
Graham Taylor

Artificial intelligence powered by deep neural networks has reached a level of complexity where it can be difficult or impossible to express how a model makes its decisions. This black-box problem is especially concerning when the model makes decisions with consequences for human well-being. In response, an emerging field called explainable artificial intelligence (XAI) aims to increase the interpretability, fairness, and transparency of machine learning. In this paper, we describe how cognitive psychologists can make contributions to XAI. The human mind is also a black box, and cognitive psychologists have over one hundred and fifty years of experience modeling it through experimentation. We ought to translate the methods and rigour of cognitive psychology to the study of artificial black boxes in the service of explainability. We provide a review of XAI for psychologists, arguing that current methods possess a blind spot that can be complemented by the experimental cognitive tradition. We also provide a framework for research in XAI, highlight exemplary cases of experimentation within XAI inspired by psychological science, and provide a tutorial on experimenting with machines. We end by noting the advantages of an experimental approach and invite other psychologists to conduct research in this exciting new field.

