Interpretable Machine Learning

Queue ◽  
2021 ◽  
Vol 19 (6) ◽  
pp. 28-56
Author(s):  
Valerie Chen ◽  
Jeffrey Li ◽  
Joon Sik Kim ◽  
Gregory Plumb ◽  
Ameet Talwalkar

The emergence of machine learning as a society-changing technology in the past decade has triggered concerns about people's inability to understand the reasoning of increasingly complex models. The field of IML (interpretable machine learning) grew out of these concerns, with the goal of empowering various stakeholders to tackle use cases, such as building trust in models, performing model debugging, and generally informing real human decision-making.

2020 ◽  
Vol 114 ◽  
pp. 242-245
Author(s):  
Jootaek Lee

The term Artificial Intelligence (AI) has changed since it was first coined by John McCarthy in 1956. AI, believed to have originated with Kurt Gödel's unprovable computational statements in 1931, is now called deep learning or machine learning. AI is defined as a computer machine with the ability to make predictions about the future and solve complex tasks using algorithms. AI algorithms are enhanced and become effective with big data capturing the present and the past, while still necessarily reflecting human biases in models and equations. AI is also capable of making choices like humans, mirroring human reasoning. AI can help robots efficiently repeat the same labor-intensive procedures in factories and can analyze historic and present data efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum of augmented intelligence relating to prediction, autonomous intelligence relating to decision making, automated intelligence for labor robots, and assisted intelligence for data analysis.


2021 ◽  
Vol 14 (10) ◽  
pp. 1797-1804
Author(s):  
Dimitrios Koutsoukos ◽  
Supun Nakandala ◽  
Konstantinos Karanasos ◽  
Karla Saur ◽  
Gustavo Alonso ◽  
...  

Deep Learning (DL) has created a growing demand for simpler ways to develop complex models and efficient ways to execute them. Thus, a significant effort has gone into frameworks like PyTorch or TensorFlow to support a variety of DL models and run efficiently and seamlessly over heterogeneous and distributed hardware. Since these frameworks will continue improving given the predominance of DL workloads, it is natural to ask what else can be done with them. This is not a trivial question since these frameworks are based on the efficient implementation of tensors, which are well adapted to DL but, in principle, to nothing else. In this paper we explore to what extent Tensor Computation Runtimes (TCRs) can support non-ML data processing applications, so that other use cases can take advantage of the investments made in TCRs. In particular, we are interested in graph processing and relational operators, two use cases that are very different from ML, are in high demand, and complement quite well what TCRs can do today. Building on HUMMINGBIRD, a recent platform converting traditional machine learning algorithms to tensor computations, we explore how to map selected graph processing and relational operator algorithms into tensor computations. Our vision is supported by the results: our code often outperforms custom-built C++ and CUDA kernels, while massively reducing the development effort and taking advantage of the cross-platform compilation capabilities of TCRs.
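
To make the idea of expressing non-ML workloads as tensor programs concrete, here is a minimal sketch, assuming PyTorch as the tensor runtime, of a graph algorithm (PageRank) written purely as tensor operations. The function and the toy graph are illustrative and are not taken from the HUMMINGBIRD-based implementation described in the paper.

```python
# A minimal sketch (not the HUMMINGBIRD implementation) of expressing a graph
# algorithm -- PageRank -- purely as tensor operations in PyTorch. Because the
# whole computation is ordinary tensor algebra, a tensor runtime can execute
# it unchanged on CPU or GPU.
import torch

def pagerank(adj: torch.Tensor, damping: float = 0.85, iters: int = 50) -> torch.Tensor:
    """adj[i, j] = 1.0 if there is an edge i -> j, else 0.0."""
    n = adj.shape[0]
    out_degree = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    transition = adj / out_degree              # row-stochastic transition matrix
    rank = torch.full((n,), 1.0 / n, device=adj.device)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * (transition.t() @ rank)
    return rank

# Toy 4-node graph; moving the computation to a GPU is a single .to("cuda") call.
edges = torch.tensor([[0, 1, 1, 0],
                      [0, 0, 1, 0],
                      [1, 0, 0, 1],
                      [0, 0, 1, 0]], dtype=torch.float32)
print(pagerank(edges))
```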


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Pooya Tabesh

Purpose: While it is evident that the introduction of machine learning and the availability of big data have revolutionized various organizational operations and processes, existing academic and practitioner research within the decision process literature has mostly ignored the nuances of these influences on human decision-making. Building on existing research in this area, this paper aims to define these concepts from a decision-making perspective and elaborates on the influences of these emerging technologies on human analytical and intuitive decision-making processes.

Design/methodology/approach: The authors first provide a holistic understanding of important drivers of digital transformation. The authors then conceptualize the impact that analytics tools built on artificial intelligence (AI) and big data have on intuitive and analytical human decision processes in organizations.

Findings: The authors discuss similarities and differences between machine learning and two human decision processes, namely, analysis and intuition. While it is difficult to jump to any conclusions about the future of machine learning, human decision-makers seem to continue to monopolize the majority of intuitive decision tasks, which will help them keep the upper hand (vis-à-vis machines), at least in the near future.

Research limitations/implications: The work contributes to research on rational (analytical) and intuitive processes of decision-making at the individual, group and organization levels by theorizing about the way these processes are influenced by advanced AI algorithms such as machine learning.

Practical implications: Decisions are building blocks of organizational success. Therefore, a better understanding of the way human decision processes can be impacted by advanced technologies will prepare managers to better use these technologies and make better decisions. By clarifying the boundaries/overlaps among concepts such as AI, machine learning and big data, the authors contribute to their successful adoption by business practitioners.

Social implications: The work suggests that human decision-makers will not be replaced by machines if they continue to invest in what they do best: critical thinking, intuitive analysis and creative problem-solving.

Originality/value: The work elaborates on important drivers of digital transformation from a decision-making perspective and discusses their practical implications for managers.


Author(s):  
Jernej Vičič ◽  
Aleksandar Tošić

Blockchain-based currencies, or cryptocurrencies, have become a global phenomenon known to most people as a disruptive technology and a new investment vehicle. However, due to their decentralized nature, regulating these markets has presented regulators with difficulties in finding a balance between nurturing innovation and protecting consumers. The growing concerns about illicit activity have forced regulators to seek new ways of detecting, analyzing, and ultimately policing public blockchain transactions. Extensive research on machine learning and transaction graph analysis algorithms has been done to track suspicious behaviour. However, having a macro view of a public ledger is equally important before pursuing a more fine-grained analysis. Benford’s law, the law of the first digit, has been extensively used as a tool to discover accounting fraud (among many other use cases). The basic motivation that drove the research presented in this paper was to test the applicability of this well-established method in a new domain: identifying anomalous behaviour in cryptocurrencies through Benford’s law conformity tests. The research focused on transaction values in all major cryptocurrencies. A suitable time period was identified that was long enough to supply a sufficiently large number of observations for Benford’s law conformity tests and was also situated far enough in the past that the anomalies had been identified and well documented. The results show that most of the cryptocurrencies that did not conform to Benford’s law had well-documented anomalous incidents, while the first digits of aggregated transaction values of all well-known cryptocurrency projects conformed to Benford’s law. Thus the proposed method is applicable to the new domain.
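
For illustration, below is a minimal sketch of the kind of first-digit conformity check described above, applied to a list of transaction values. The Pearson chi-square statistic and all names in the sketch are assumptions for illustration, not the authors' exact test procedure.

```python
# A minimal sketch of a Benford's-law conformity check on transaction values.
# The chi-square statistic used here is one common choice; it is not claimed
# to be the exact test used in the paper.
import math
from collections import Counter

# Expected first-digit frequencies under Benford's law: P(d) = log10(1 + 1/d)
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(x: float) -> int:
    """Leading significant digit of a nonzero value."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def benford_chi_square(values):
    """Pearson chi-square statistic of observed first digits vs. Benford's law."""
    digits = [first_digit(v) for v in values if v != 0]
    n = len(digits)
    counts = Counter(digits)
    return sum((counts.get(d, 0) - n * p) ** 2 / (n * p) for d, p in BENFORD.items())

# Illustrative use: a large statistic (relative to the chi-square distribution
# with 8 degrees of freedom) suggests the values do not conform to Benford's law.
values = [21.5, 1.02, 3_400, 120, 18, 950, 2.7, 610, 1.9, 47]
print(benford_chi_square(values))
```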


Machine learning in recent years has become an integral part of our day-to-day life, and its ease of use has improved greatly over the past decade. There are various ways to make a model work on smaller devices. A modest method for adapting any machine learning algorithm to work on smaller devices is to provide the output of large, complex models as input to smaller models that can easily be deployed to mobile phones. We provide a framework in which the large model can also learn domain knowledge, integrated as first-order logic rules, and explicitly transfer that knowledge to the smaller model by training both models simultaneously. This can be achieved by transfer learning, where the knowledge learned by one model is used to teach the other model. Domain knowledge integration is the most critical part here, and it can be done using constraint principles, where the scope of the data is reduced based on the constraints specified. One of the best representations of domain knowledge is logic rules, where the knowledge is encoded as predicates. This framework provides a way to integrate human knowledge into deep neural networks that can easily be deployed on any device.
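
As a rough illustration of the teacher-student training described above, the sketch below distils a large model into a small one in PyTorch. The network sizes, temperature, and loss weighting are illustrative assumptions, and the logic-rule encoding itself is omitted; the teacher's soft predictions simply stand in for the knowledge being transferred.

```python
# A minimal sketch of teacher-student distillation: a small "student" network
# is trained on a mix of ground-truth labels and the soft predictions of a
# larger "teacher". Model sizes, the temperature, and the loss weighting are
# illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 3))  # large model
student = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 3))    # deployable model

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend ordinary cross-entropy with a KL term that pulls the student
    toward the teacher's temperature-softened output distribution."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    return alpha * hard + (1 - alpha) * soft

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
x = torch.randn(32, 20)                      # a toy batch of 32 examples
y = torch.randint(0, 3, (32,))
with torch.no_grad():
    teacher_logits = teacher(x)              # teacher predictions as soft targets
optimizer.zero_grad()
loss = distillation_loss(student(x), teacher_logits, y)
loss.backward()
optimizer.step()
```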


2019 ◽  
Author(s):  
Ryan Kirkpatrick ◽  
Brandon Turner ◽  
Per B. Sederberg

The dynamics of decision-making have been widely studied over the past several decades through the lens of an overarching theory called sequential sampling theory (SST). Within SST, choices are represented as accumulators, each of which races toward a decision boundary by drawing stochastic samples of evidence through time. Although progress has been made in understanding how decisions are made within the SST framework, considerable debate centers on whether the accumulators exhibit dependency during the evidence accumulation process; namely, whether accumulators are independent, fully dependent, or partially dependent. To evaluate which type of dependency is the most plausible representation of human decision-making, we applied a novel twist on two classic perceptual tasks; namely, in addition to the classic paradigm (i.e., the unequal-evidence conditions), we used stimuli that provided different magnitudes of equal evidence (i.e., the equal-evidence conditions). In equal-evidence conditions, response times systematically decreased with increases in the magnitude of evidence, whereas in unequal-evidence conditions, response times systematically increased as the difference in evidence between the two alternatives decreased. We designed a spectrum of models that ranged from independent accumulation to fully dependent accumulation, while also examining the effects of within-trial and between-trial variability. We then fit the set of models to our two experiments and found that models instantiating the principles of partial dependency provided the best fit to the data. Our results further suggest that mechanisms inducing partial dependency, such as lateral inhibition, are beneficial for understanding complex decision-making dynamics, even when the task is relatively simple.
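
A minimal sketch of the kind of partially dependent accumulation mechanism referred to above: two accumulators race to a threshold while laterally inhibiting each other. All parameter values are illustrative and are not the authors' fitted estimates.

```python
# A toy two-alternative accumulator race with lateral inhibition, i.e. each
# accumulator's growth is reduced in proportion to its rival's current
# evidence. Drift rates, inhibition strength, noise, and threshold are
# illustrative assumptions only.
import numpy as np

def simulate_trial(drift, inhibition=0.2, noise=0.1, threshold=1.0,
                   dt=0.01, max_steps=5000, rng=None):
    """Return (choice, response_time) for one trial of a two-accumulator race."""
    rng = rng or np.random.default_rng()
    x = np.zeros(2)                                   # accumulated evidence
    for step in range(1, max_steps + 1):
        # each accumulator gains its own evidence, is inhibited by its rival,
        # and receives Gaussian within-trial noise
        dx = (drift - inhibition * x[::-1]) * dt + noise * np.sqrt(dt) * rng.standard_normal(2)
        x = np.maximum(x + dx, 0.0)                   # evidence cannot go negative
        if x.max() >= threshold:
            return int(x.argmax()), step * dt
    return None, max_steps * dt                       # no decision reached

choice, rt = simulate_trial(drift=np.array([1.2, 1.0]))
print(choice, rt)
```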


Science ◽  
2021 ◽  
Vol 372 (6547) ◽  
pp. 1209-1214
Author(s):  
Joshua C. Peterson ◽  
David D. Bourgin ◽  
Mayank Agrawal ◽  
Daniel Reichman ◽  
Thomas L. Griffiths

Predicting and understanding how people make decisions has been a long-standing goal in many fields, with quantitative models of human decision-making informing research in both the social sciences and engineering. We show how progress toward this goal can be accelerated by using large datasets to power machine-learning algorithms that are constrained to produce interpretable psychological theories. Conducting the largest experiment on risky choice to date and analyzing the results using gradient-based optimization of differentiable decision theories implemented through artificial neural networks, we were able to recapitulate historical discoveries, establish that there is room to improve on existing theories, and discover a new, more accurate model of human decision-making in a form that preserves the insights from centuries of research.
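
A minimal sketch of what gradient-based optimization of a differentiable decision theory can look like: a small neural network stands in for the utility function, and choice probabilities between pairs of gambles are fit to observed decisions by gradient descent. The architecture and toy data are assumptions for illustration, not the model reported in the paper.

```python
# A tiny "differentiable decision theory": a neural network replaces the
# utility function u(x), and the probability of choosing gamble A over B is a
# logistic function of the difference in expected subjective value. Network
# size and data are illustrative assumptions.
import torch
import torch.nn as nn

utility = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))  # learned u(x)

def subjective_value(outcomes, probs):
    """Expected subjective value: sum_i p_i * u(x_i), differentiable in u."""
    return (probs * utility(outcomes.unsqueeze(-1)).squeeze(-1)).sum(dim=-1)

# Toy dataset: each trial is two gambles (outcomes with probabilities) and a choice.
a_out = torch.tensor([[10.0, 0.0], [5.0, -5.0]]);   a_p = torch.tensor([[0.5, 0.5], [0.8, 0.2]])
b_out = torch.tensor([[4.0, 4.0],  [20.0, -10.0]]); b_p = torch.tensor([[1.0, 0.0], [0.5, 0.5]])
chose_a = torch.tensor([1.0, 0.0])

opt = torch.optim.Adam(utility.parameters(), lr=1e-2)
for _ in range(200):
    logits = subjective_value(a_out, a_p) - subjective_value(b_out, b_p)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, chose_a)
    opt.zero_grad(); loss.backward(); opt.step()
```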


2021 ◽  
Author(s):  
Jeroen Minderman ◽  
A. Bradley Duthie ◽  
Isabel L. Jones ◽  
Laura Thomas-Walters ◽  
Adrian Bach ◽  
...  

Models have become indispensable tools in conservation science in the face of increasingly rapid loss of biodiversity through anthropogenic habitat loss and natural resource exploitation. In addition to their ecological components, accurately representing human decision-making processes in such models is vital to maximise their utility. This can be problematic as model complexity increases, making models challenging to communicate and parameterise. Games have a long history of being used as science communication tools, but are less widely used as data collection tools, particularly in videogame form. We propose a novel approach to (1) aid communication of complex social-ecological models, and (2) "gamesource" human decision-making data, by explicitly casting an existing modelling framework as an interactive videogame. We present players with a natural resource management game as a front-end to a social-ecological modelling framework (Generalised Management Strategy Evaluation, GMSE). Players' actions replace a model algorithm making management decisions about a population of wild animals, which graze on crops and can thus lower agricultural yield. A number of non-player agents (farmers) respond through modelled algorithms to the player's management, taking actions that may affect their crop yield as well as the animal population. Players are asked to set their own management goal (e.g. maintain the animal population at a certain level or improve yield) and make decisions accordingly. Trial players were also asked to provide feedback on both gameplay and purpose. We demonstrate the utility of this approach by collecting and analysing gameplay data from a sample of trial plays in which we systematically varied two model parameters and allowed trial players to interact with the model through the game interface. As an illustration, we show how variations in land ownership and the number of farmers in the system affect decision-making patterns as well as population trajectories (extinction probabilities). We discuss the potential and limitations of this model-game approach in the light of the trial player feedback received. In particular, we highlight how a common concern about the game framework (a perceived lack of "realism" or relevance to a specific context) is actually a criticism of the underlying model, as opposed to the game itself. This further highlights both the parallels between games and models and the utility of model-games for communicating complex models. We conclude that videogames may be an effective tool for conservation and natural resource management, and that although they provide a promising means to collect data on human decision-making, it is vital to carefully consider both external validity and potential biases when doing so.

