Making medical AI trustworthy: Researchers are trying to crack open the black box of AI so it can be deployed in health care [News]

IEEE Spectrum ◽  
2018 ◽  
Vol 55 (8) ◽  
pp. 8-9
Author(s):  
Eliza Strickland

2018 ◽
Vol 118 (1) ◽  
pp. 12-13
Author(s):  
Jacob Molyneux

2017 ◽  
Vol 40 (4) ◽  
pp. 497-513 ◽  
Author(s):  
Nicolas Bencherki ◽  
Alaric Bourgoin

Property is pervasive, and yet we organization scholars rarely discuss it. When we do, we think of it as a black-boxed concept to explain other phenomena, rather than studying it in its own right. This may be because organization scholars tend to limit their understanding of property to its legal definition, and emphasize control and exclusion as its defining criteria. This essay wishes to crack open the black box of property and explore the many ways in which possessive relations are established. They are achieved through work, take place as we make sense of signs, are invoked into existence in our speech acts, and travel along sociomaterial networks. Through a fictionalized account of a photographic exhibition, we show that property overflows its usual legal-economic definition. Building on the case of the photographic exhibit, we show that recognizing the diversity of property changes our rapport with organization studies as a field, by unifying its approaches to the individual-vs.-collective dilemma. We conclude by noting that if theories can make a difference, then whoever controls the assignment of property – including academics who ascribe properties to their objects of study – decides not only who has or who owns what, but also who or what that person or thing can be.


Author(s):  
Abraham Rudnick

Artificial intelligence (AI) and its correlates, such as machine learning and deep learning, are changing health care, where complex matters such as comorbidity call for dynamic decision-making. Yet some people argue for extreme caution, referring to AI and its correlates as a black box. This brief article uses philosophy and science to examine the black box argument about knowledge, concluding that the argument is a misleading myth: it ignores a fundamental tenet of science, namely that no empirical knowledge is certain and that scientific facts, as well as methods, often change. Instead, control of AI technology and its correlates has to be addressed to mitigate unexpected negative consequences.


2018 ◽  
Vol 1 (1) ◽  
pp. 181-205 ◽  
Author(s):  
Pierre Baldi

Since the 1980s, deep learning and biomedical data have been coevolving and feeding each other. The breadth, complexity, and rapidly expanding size of biomedical data have stimulated the development of novel deep learning methods, and the application of these methods to biomedical data has led to scientific discoveries and practical solutions. This overview provides technical and historical pointers to the field and surveys current applications of deep learning to biomedical data, organized around five subareas of roughly increasing spatial scale: chemoinformatics, proteomics, genomics and transcriptomics, biomedical imaging, and health care. The black box problem of deep learning methods is also briefly discussed.


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4392
Author(s):  
Belisario Panay ◽  
Nelson Baloian ◽  
José A. Pino ◽  
Sergio Peñafiel ◽  
Horacio Sanson ◽  
...  

Although many authors have highlighted the importance of predicting people's health costs to improve healthcare budget management, most do not address the frequent need to know the reasons behind a prediction, i.e., the factors that influence it. Such knowledge helps avoid arbitrariness and discrimination. However, black box methods (those that do not permit this analysis, e.g., methods based on deep learning techniques) are often more accurate than those that allow an interpretation of the results. In this work, we therefore develop a method for predicting health costs that achieves accuracy similar to that of black box methods while still allowing the results to be interpreted. This interpretable regression method is based on Dempster-Shafer theory, using Evidential Regression (EVREG) and a discount function based on the contribution of each dimension. The method “learns” the optimal weight for each feature using gradient descent and uses the k-nearest neighbor algorithm to accelerate calculations. Thanks to the transparency of the Evidential Regression model, this approach makes it possible to select the most relevant features for predicting a patient’s health care costs, and the k-NN component supplies a reason for each prediction. We tested our method on Japanese health records from Tsuyama Chuo Hospital, comprising medical examinations, test results, and billing information from 2013 to 2018, and compared it with methods based on an Artificial Neural Network, Gradient Boosting, a Regression Tree, and Weighted k-Nearest Neighbors. Our transparent model performed on par with the Artificial Neural Network and Gradient Boosting, with an R2 of 0.44.
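The combination this abstract describes, a k-nearest-neighbor regressor whose per-feature weights are learned by gradient descent so that the weights themselves reveal which features drive a prediction, can be sketched in a few lines. The sketch below is a hypothetical Python illustration, assuming a weighted Euclidean distance and an exponential, evidence-style discounting of each neighbor; the names predict and fit_weights are invented for the example, and this is not the authors' EVREG implementation.

import numpy as np

def predict(X_train, y_train, x, w, k=5):
    # Weighted Euclidean distance: a larger w[j] makes feature j matter more.
    # (Hypothetical sketch, not the published EVREG code.)
    d = np.sqrt(((X_train - x) ** 2 * w).sum(axis=1))
    idx = np.argsort(d)[:k]            # the k nearest neighbors
    sim = np.exp(-d[idx])              # evidence-style discounting by distance
    return (sim * y_train[idx]).sum() / (sim.sum() + 1e-12)

def fit_weights(X, y, k=5, lr=0.05, epochs=20):
    # Learn non-negative feature weights by numerical gradient descent
    # on the leave-one-out squared prediction error.
    n, p = X.shape
    w = np.ones(p)
    for _ in range(epochs):
        grad = np.zeros(p)
        for i in range(n):
            mask = np.arange(n) != i
            err = predict(X[mask], y[mask], X[i], w, k) - y[i]
            for j in range(p):
                w2 = w.copy()
                w2[j] += 1e-4
                err2 = predict(X[mask], y[mask], X[i], w2, k) - y[i]
                grad[j] += (err2 ** 2 - err ** 2) / 1e-4
        w = np.clip(w - lr * grad / n, 0.0, None)
    return w

# Toy usage: only the first feature carries signal, so its learned
# weight should dominate after training.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=40)
print(fit_weights(X, y))

On such a sketch, the learned weight vector exposes the most relevant features, and the k retained neighbors supply the case-based reason for each individual prediction that the abstract calls for.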


2014 ◽  
Vol 20 (2) ◽  
pp. 123-133 ◽  
Author(s):  
Sei-Hill Kim ◽  
Andrea H. Tanner ◽  
Caroline B. Foster ◽  
Soo Yun Kim

2010 ◽  
Vol 6 (1) ◽  
pp. 78-84 ◽  
Author(s):  
Saurabh Nagar ◽  
Sandhya Mehta ◽  
Vinod Bhatara ◽  
Rajender Aparasu
