Predictive Risk Modelling to Prevent Child Maltreatment and Other Adverse Outcomes for Service Users: Inside the ‘Black Box’ of Machine Learning

2015 ◽  
Vol 46 (4) ◽  
pp. 1044-1058 ◽  
Author(s):  
Philip Gillingham


2021 ◽  
Author(s):  
Junjie Shi ◽  
Jiang Bian ◽  
Jakob Richter ◽  
Kuan-Hsun Chen ◽  
Jörg Rahnenführer ◽  
...  

Abstract: The predictive performance of a machine learning model depends heavily on the corresponding hyper-parameter setting, so hyper-parameter tuning is often indispensable. Normally such tuning requires the dedicated machine learning model to be trained and evaluated on centralized data to obtain a performance estimate. However, in a distributed machine learning scenario, it is not always possible to collect all the data from all nodes due to privacy concerns or storage limitations. Moreover, if data has to be transferred through low-bandwidth connections, the time available for tuning is reduced. Model-Based Optimization (MBO) is one state-of-the-art method for tuning hyper-parameters, but its application to distributed machine learning models or federated learning has received little research attention. This work proposes a framework, MODES, that allows MBO to be deployed on resource-constrained distributed embedded systems. Each node trains an individual model based on its local data, and the goal is to optimize the combined prediction accuracy. The presented framework offers two optimization modes: (1) MODES-B considers the whole ensemble as a single black box and optimizes the hyper-parameters of each individual model jointly, and (2) MODES-I considers all models as clones of the same black box, which allows the optimization to be efficiently parallelized in a distributed setting. We evaluate MODES by conducting experiments on the optimization of the hyper-parameters of a random forest and a multi-layer perceptron. The experimental results demonstrate that MODES outperforms the baseline, i.e., carrying out tuning with MBO on each node individually with its local sub-data set, with an improvement in terms of mean accuracy (MODES-B), run-time efficiency (MODES-I), and statistical stability for both modes.
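The core loop such a framework builds on is sequential model-based optimization. Below is a minimal, illustrative sketch of MBO for tuning a random forest on a single node; the search space, the Gaussian-process surrogate, and the upper-confidence-bound acquisition rule are assumptions chosen for illustration, not the exact configuration used by MODES.

```python
# Minimal sketch of Model-Based Optimization (MBO) for hyper-parameter tuning.
# The search space, surrogate, and acquisition rule are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)

def objective(params):
    """Cross-validated accuracy of a random forest for one hyper-parameter setting."""
    n_estimators, max_depth = int(params[0]), int(params[1])
    clf = RandomForestClassifier(n_estimators=n_estimators, max_depth=max_depth, random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

def sample_configs(n):
    """Draw random configurations from the (assumed) search space."""
    return np.column_stack([rng.integers(10, 200, n),   # n_estimators
                            rng.integers(2, 20, n)])    # max_depth

# Initial design: evaluate a few random configurations.
configs = sample_configs(5)
scores = np.array([objective(c) for c in configs])

for _ in range(10):                      # sequential MBO iterations
    surrogate = GaussianProcessRegressor(normalize_y=True).fit(configs, scores)
    candidates = sample_configs(200)
    mean, std = surrogate.predict(candidates, return_std=True)
    # Simple acquisition: upper confidence bound (exploration via the predictive std).
    best = candidates[np.argmax(mean + 1.0 * std)]
    configs = np.vstack([configs, best])
    scores = np.append(scores, objective(best))

print("best accuracy:", scores.max(), "with config:", configs[np.argmax(scores)])
```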


Author(s):  
Charles-Henry Bertrand Van Ouytsel ◽  
Olivier Bronchain ◽  
Gaëtan Cassiers ◽  
François-Xavier Standaert

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Joël L. Lavanchy ◽  
Joel Zindel ◽  
Kadir Kirtac ◽  
Isabell Twick ◽  
Enes Hosgor ◽  
...  

Abstract: Surgical skills are associated with clinical outcomes. To improve surgical skills and thereby reduce adverse outcomes, continuous surgical training and feedback are required. Currently, assessment of surgical skills is a manual and time-consuming process which is prone to subjective interpretation. This study aims to automate surgical skill assessment in laparoscopic cholecystectomy videos using machine learning algorithms. To address this, a three-stage machine learning method is proposed: first, a Convolutional Neural Network was trained to identify and localize surgical instruments. Second, motion features were extracted from the detected instrument localizations over time. Third, a linear regression model was trained on the extracted motion features to predict surgical skill. This three-stage modeling approach achieved an accuracy of 87 ± 0.2% in distinguishing good versus poor surgical skill. While the technique cannot yet reliably quantify the degree of surgical skill, it represents an important advance towards the automation of surgical skill assessment.
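As a rough illustration of stages two and three of the described pipeline, the sketch below turns per-frame instrument positions (assumed to come from a stage-one detector) into simple motion features and fits a linear model for good-versus-poor skill. The feature set, the synthetic trajectories, and the 0.5 decision threshold are illustrative assumptions, not the study's actual setup.

```python
# Hedged sketch of stages 2 and 3: motion features from instrument tracks,
# then a linear model thresholded for good-vs-poor skill classification.
import numpy as np
from sklearn.linear_model import LinearRegression

def motion_features(track):
    """Summarise one instrument trajectory (N x 2 array of x, y centres over time)."""
    steps = np.diff(track, axis=0)
    speed = np.linalg.norm(steps, axis=1)
    return np.array([speed.sum(),                          # total path length
                     speed.mean(),                         # mean speed
                     speed.std(),                          # smoothness proxy
                     np.linalg.norm(track[-1] - track[0])])  # net displacement

rng = np.random.default_rng(0)
# Synthetic stand-in data: 40 videos, each a 500-frame trajectory; label 1 = good skill.
tracks = [np.cumsum(rng.normal(scale=s, size=(500, 2)), axis=0)
          for s in np.r_[np.full(20, 1.0), np.full(20, 3.0)]]
labels = np.r_[np.ones(20), np.zeros(20)]

X = np.stack([motion_features(t) for t in tracks])
model = LinearRegression().fit(X, labels)
pred = (model.predict(X) >= 0.5).astype(int)   # threshold the linear score (assumed cut-off)
print("training accuracy:", (pred == labels).mean())
```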


Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 18
Author(s):  
Pantelis Linardatos ◽  
Vasilis Papastefanopoulos ◽  
Sotiris Kotsiantis

Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption, with machine learning systems demonstrating superhuman performance in a significant number of tasks. However, this surge in performance has often been achieved through increased model complexity, turning such systems into “black box” approaches and causing uncertainty regarding the way they operate and, ultimately, the way they come to decisions. This ambiguity has made it problematic for machine learning systems to be adopted in sensitive yet critical domains where their value could be immense, such as healthcare. As a result, scientific interest in Explainable Artificial Intelligence (XAI), a field concerned with the development of new methods that explain and interpret machine learning models, has been strongly reignited over recent years. This study focuses on machine learning interpretability methods; more specifically, a literature review and taxonomy of these methods are presented, together with links to their programming implementations, in the hope that this survey can serve as a reference point for both theorists and practitioners.
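As one concrete example from the family of model-agnostic interpretability methods such a survey covers, the sketch below estimates permutation feature importance, i.e., how much a trained model's test accuracy drops when each input feature is shuffled. The dataset and model are placeholders chosen for illustration.

```python
# Example of a model-agnostic interpretability method: permutation feature importance.
# Importance is estimated purely from input-output behaviour, treating the model as a black box.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])[:5]:
    print(f"{name}: {score:.3f}")
```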


2021 ◽  
Vol 36 (Supplement_1) ◽  
Author(s):  
Enrico Favaro ◽  
Roberta Lazzarin ◽  
Daniela Cremasco ◽  
Erika Pierobon ◽  
Marta Guizzo ◽  
...  

Abstract Background and Aims: The modern development of the black box approach in clinical nephrology is inconceivable without a logical theory of renal function and a comprehension of the anatomical architecture of the kidney, in health and disease: this is the undisputed contribution offered by Malpighi, Oliver and Trueta starting from the seventeenth century. The machine learning model for the prediction of acute kidney injury, progression of renal failure and tubulointerstitial nephritis is a good example of how different kinds of knowledge about the kidney are an indispensable tool for the interpretation of the model itself. Method: Historical data were collected from the literature, textbooks, encyclopedias, scientific periodicals and laboratory experimental data concerning these three authors. Results: The Italian Marcello Malpighi (1628-1694), born in Crevalcore near Bologna, was Professor of anatomy at Bologna, Pisa and Messina. The historic description of the pulmonary capillaries was made in his second epistle to Borelli, published in 1661 and entitled De pulmonibus, using the frog as “the microscope of nature” (Fig. 1). It is the first description of capillaries in any circulation. William Harvey, in De motu cordis in 1628 (the year of publication is the same as the Italian anatomist's year of birth!), could not see the capillary vessels. This triumphant discovery would serve the later recognition of the characteristic renal rete mirabile in the corpuscle of Malpighi, lying within the capsule of Bowman. Jean Redman Oliver (1889-1976), a pathologist born and raised in Northern California, was able to bridge the gap between the nephron and the collecting system through meticulous dissections, hand-drawn illustrations and experiments which underpin our current understanding of renal anatomy and physiology. In the skillful lecture “When is the kidney not a kidney?” (1949), Oliver summarizes his far-sighted vision of renal physiology and disease in the following sentence: the Kidney in health, if you will, but the Nephrons in disease. This is because the “nephron”, like the “kidney”, is an abstraction that must be qualified in terms of its various parts, its cellular components and the molecular mechanisms involved in each discrete activity (Fig. 2). The Catalan surgeon Josep Trueta i Raspall (1897-1977) was born in the Poblenou neighborhood of Barcelona. His pioneering and visionary contribution on the changes in renal circulation in the pathogenesis of acute kidney injury was pivotal in the history of renal physiology. “The kidney has two potential circulations. Blood may pass either almost exclusively through one or other of two pathways, or to a varying degree through both” (Studies of the Renal Circulation, published in 1947). This diversion of blood from the cortex to the less resistant medullary circulation is now known by the eponym Trueta shunt. Conclusion: The black box approach to kidney diseases should be considered by practitioners as a further tool to help inform model updating in many clinical settings. The number of machine learning clinical prediction models being published is rising, as new fields of application are being explored in medicine (Fig. 3). A challenge in clinical nephrology is to explore the “kidney machine” during each diagnostic and therapeutic procedure.
As always, the intriguing relationship between the set of nephrological syndromes and kidney diseases cannot disregard the precious notions of the specific organization of the kidney microcirculation, the fruit of the many scientific contributions of Malpighi, Oliver and Trueta (Fig. 3).


Diagnostics ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 40
Author(s):  
Meike Nauta ◽  
Ricky Walsh ◽  
Adam Dubowski ◽  
Christin Seifert

Machine learning models have been successfully applied for analysis of skin images. However, due to the black box nature of such deep learning models, it is difficult to understand their underlying reasoning. This prevents a human from validating whether the model is right for the right reasons. Spurious correlations and other biases in data can cause a model to base its predictions on such artefacts rather than on the true relevant information. These learned shortcuts can in turn cause incorrect performance estimates and can result in unexpected outcomes when the model is applied in clinical practice. This study presents a method to detect and quantify this shortcut learning in trained classifiers for skin cancer diagnosis, since it is known that dermoscopy images can contain artefacts. Specifically, we train a standard VGG16-based skin cancer classifier on the public ISIC dataset, for which colour calibration charts (elliptical, coloured patches) occur only in benign images and not in malignant ones. Our methodology artificially inserts those patches and uses inpainting to automatically remove patches from images to assess the changes in predictions. We find that our standard classifier partly bases its predictions of benign images on the presence of such a coloured patch. More importantly, by artificially inserting coloured patches into malignant images, we show that shortcut learning results in a significant increase in misdiagnoses, making the classifier unreliable when used in clinical practice. With our results, we, therefore, want to increase awareness of the risks of using black box machine learning models trained on potentially biased datasets. Finally, we present a model-agnostic method to neutralise shortcut learning by removing the bias in the training dataset by exchanging coloured patches with benign skin tissue using image inpainting and re-training the classifier on this de-biased dataset.
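A minimal sketch of the patch-insertion probe described above is given below: it pastes a coloured elliptical patch (mimicking a calibration chart) into an image and compares the classifier's output before and after. The patch geometry, colour, and the stand-in predict_malignant function are assumptions for illustration; in the study, the probed model is the trained VGG16 skin-cancer classifier and patch removal relies on inpainting.

```python
# Hedged sketch of the patch-insertion probe for detecting shortcut learning.
# The stand-in classifier and placeholder image are illustrative assumptions only.
import numpy as np

def insert_colour_patch(image, centre=(40, 40), radii=(18, 12), colour=(30, 90, 200)):
    """Return a copy of `image` (H x W x 3, uint8) with an elliptical colour patch drawn in."""
    out = image.copy()
    h, w = image.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    mask = ((yy - centre[0]) / radii[0]) ** 2 + ((xx - centre[1]) / radii[1]) ** 2 <= 1.0
    out[mask] = colour
    return out

def predict_malignant(image):
    """Stand-in for the trained classifier: a dummy score from mean intensity."""
    return float(image.mean() / 255.0)

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)   # placeholder dermoscopy image

p_original = predict_malignant(image)
p_patched = predict_malignant(insert_colour_patch(image))
print(f"malignancy score without patch: {p_original:.3f}, with patch: {p_patched:.3f}")
# A large systematic shift after inserting the patch would indicate shortcut learning.
```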

