Understanding Machine Learning for Diversified Portfolio Construction by Explainable AI

2020 ◽  
Author(s):  
Markus Jaeger ◽  
Stephan Krügel ◽  
Dimitri Marinelli ◽  
Jochen Papenbrock ◽  
Peter Schwendner

2021 ◽  
pp. jfds.2021.1.066
Author(s):  
Markus Jaeger ◽  
Stephan Krügel ◽  
Dimitri Marinelli ◽  
Jochen Papenbrock ◽  
Peter Schwendner

2021 ◽  
Author(s):  
Jean-Jacques Ohana ◽  
Steve Ohana ◽  
Eric Benhamou ◽  
David Saltiel ◽  
Beatrice Guez

Author(s):  
Gaël Aglin ◽  
Siegfried Nijssen ◽  
Pierre Schaus

Decision Trees (DTs) are widely used Machine Learning (ML) models with a broad range of applications. Interest in these models has increased further in the context of Explainable AI (XAI), as decision trees of limited depth are highly interpretable models. However, traditional algorithms for learning DTs are heuristic in nature and may produce trees of suboptimal quality under depth constraints. We introduce PyDL8.5, a Python library to infer depth-constrained Optimal Decision Trees (ODTs). PyDL8.5 provides an interface to DL8.5, an efficient algorithm for inferring depth-constrained ODTs, through an easy-to-use, scikit-learn compatible API. It can be used not only for classification tasks but also for regression, clustering, and other tasks, and we introduce an interface that allows users to implement such learning tasks easily. We provide a number of examples of how to use the library.
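
To illustrate the scikit-learn compatible interface the abstract describes, here is a minimal sketch of fitting a depth-constrained optimal tree. The import path pydl85, the class name DL85Classifier, and the max_depth parameter are assumptions taken from the project's documentation; the synthetic binary data is purely illustrative.

import numpy as np
from sklearn.model_selection import train_test_split
from pydl85 import DL85Classifier  # assumed import path for PyDL8.5

# Toy binary dataset; DL8.5 operates on binary (0/1) features.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 10))
y = rng.integers(0, 2, size=200)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Depth constraint makes the resulting tree small and hence interpretable.
clf = DL85Classifier(max_depth=3)
clf.fit(X_train, y_train)          # standard scikit-learn fit/predict contract
print("test accuracy:", clf.score(X_test, y_test))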


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Katharina Weitz

Human-Centered AI is a widely requested goal for AI applications. To reach this goal, explainable AI (XAI) promises to help humans understand the inner workings and decisions of AI systems. While different XAI techniques have been developed to shed light on AI systems, it is still unclear how end-users with no experience in machine learning perceive them. Psychological concepts like trust, mental models, and self-efficacy can serve as instruments to evaluate XAI approaches in empirical studies with end-users. First results in applications for education, healthcare, and industry suggest that one XAI approach does not fit all. Instead, the design of XAI has to consider user needs, personal background, and the specific task of the AI system.


Author(s):  
Mehmet Fatih Bayramoglu ◽  
Cagatay Basarir

Investing in developed markets offers investors the opportunity to diversify internationally by investing in foreign firms. In other words, it provides the possibility of reducing systematic risk. For this reason, investors are very interested in developed markets. However, developed markets are more efficient than emerging markets, so both risk and return tend to be lower in these markets. Developed-market investors therefore often use machine learning techniques to increase their gains while reducing their risks. In this chapter, artificial neural networks (ANNs), one of the machine learning techniques, are tested for their ability to improve the performance of an internationally diversified portfolio, and the ANN results are compared with the performances of traditional portfolios and the benchmark portfolio. The portfolios are derived by ANNs from the data of 16 foreign companies quoted on the NYSE and are held for 30 trading days. According to the results, the portfolio derived by ANNs gained a 10.30% return, while the traditional portfolios gained a 5.98% return.
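
The abstract reports results without implementation detail, so the following is a hedged sketch rather than the authors' actual model: a small feed-forward network (scikit-learn's MLPRegressor) forecasts next-day returns for 16 stocks from a rolling window of past returns, and the forecasts are mapped to long-only weights over a 30-day holdout. The window length, network architecture, and synthetic return data are all illustrative assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_stocks, n_days, lookback = 16, 500, 20   # 16 stocks echoes the chapter's setup

# Placeholder daily returns; real inputs would be NYSE price data.
returns = rng.normal(0.0005, 0.01, size=(n_days, n_stocks))

# Features: the previous `lookback` days of returns; target: next-day returns.
X = np.stack([returns[t - lookback:t].ravel() for t in range(lookback, n_days - 1)])
y = np.stack([returns[t + 1] for t in range(lookback, n_days - 1)])

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X[:-30], y[:-30])               # hold out the last 30 days for testing

pred = model.predict(X[-30:])             # predicted next-day returns per stock
weights = np.clip(pred, 0, None)          # long-only: negative forecasts -> 0
weights /= weights.sum(axis=1, keepdims=True) + 1e-12
realized = (weights * y[-30:]).sum(axis=1)
print("cumulative 30-day return:", (1 + realized).prod() - 1)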


Significance
It required arguably the single largest computational effort for a machine learning model to date, and it is capable of producing text at times indistinguishable from the work of a human author. This has generated considerable excitement about potentially transformative business applications, as well as concerns about the system's weaknesses and possible misuse.
Impacts
Stereotypes and biases in machine learning models will become increasingly problematic as they are adopted by businesses and governments. The use of flawed AI tools that result in embarrassing failures risks cuts to public funding for AI research. Academia and industry face pressure to advance research into explainable AI, but progress is slow.


2021 ◽  
pp. 323-335
Author(s):  
James Hinns ◽  
Xiuyi Fan ◽  
Siyuan Liu ◽  
Veera Raghava Reddy Kovvuri ◽  
Mehmet Orcun Yalcin ◽  
...  
