Machine learning techniques for computer-based decision systems in the operating theatre: application to analgesia delivery

2020 ◽  
Author(s):  
Jose M Gonzalez-Cava ◽  
Rafael Arnay ◽  
Juan Albino Mendez-Perez ◽  
Ana León ◽  
María Martín ◽  
...  

Abstract: This work focuses on the application of machine learning techniques to assist clinicians in the administration of analgesic drugs during general anaesthesia. Specifically, the main objective is to propose the basis of an intelligent system capable of making decisions to guide opioid dose changes based on a new nociception monitor, the Analgesia Nociception Index (ANI). Clinical data were obtained from 15 patients undergoing cholecystectomy surgery. By means of an off-line study, machine learning techniques were applied to analyse the possible relationship between the analgesic dose changes performed by the physician in response to the patients' hemodynamic activity and the evolution of the ANI. After training different classifiers and testing the results under cross-validation, a preliminary relationship between the evolution of the ANI and the dosage of remifentanil was found. These results evidence the potential of the ANI as a promising index to guide the infusion of analgesia.
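As a rough illustration of the kind of off-line analysis described (not the authors' exact pipeline), the sketch below trains a classifier under cross-validation to relate ANI-derived features to dose-change decisions; the feature names, the choice of classifier, and the synthetic data are assumptions.

```python
# Hypothetical sketch: relating ANI-derived features to opioid dose-change
# decisions with a cross-validated classifier. Features, classifier, and data
# are illustrative assumptions, not the authors' exact pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
# Stand-in features: mean ANI, ANI slope, and instantaneous ANI per decision point.
X = rng.normal(size=(150, 3))
# Stand-in label: 1 if the clinician increased the remifentanil dose, 0 otherwise.
y = (X[:, 1] + 0.5 * rng.normal(size=150) < 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="balanced_accuracy")
print(f"cross-validated balanced accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```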

2018 ◽  
Author(s):  
Michiel Stock ◽  
Tapio Pahikkala ◽  
Antti Airola ◽  
Willem Waegeman ◽  
Bernard De Baets

Abstract
Motivation: Supervised machine learning techniques have traditionally been very successful at reconstructing biological networks, such as protein-ligand interaction, protein-protein interaction and gene regulatory networks. Recently, much emphasis has been placed on the correct evaluation of such supervised models. It is vital to distinguish between using the model to either predict new interactions in a given network or to predict interactions for a new vertex not present in the original network. Specific cross-validation schemes need to be used to assess the performance in such different prediction settings.
Results: We present a series of leave-one-out cross-validation shortcuts to rapidly estimate the performance of state-of-the-art kernel-based network inference techniques.
Availability: The machine learning techniques with the algebraic shortcuts are implemented in the RLScore software package.
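The abstract does not spell out the shortcuts themselves. As a simpler illustration of the same idea, the sketch below uses the classical closed-form leave-one-out shortcut for kernel ridge regression (the hat-matrix trick), which recovers all n leave-one-out estimates from a single model fit; it is not the RLScore implementation, and the RBF kernel, regularization value, and synthetic data are assumptions.

```python
# Illustrative algebraic leave-one-out (LOO) shortcut for kernel ridge regression.
# With hat matrix H = K (K + lambda*I)^(-1), the LOO prediction for sample i is
#   yhat_loo_i = (yhat_i - H_ii * y_i) / (1 - H_ii),
# so no model needs to be refit.
import numpy as np

def rbf_kernel(X, gamma=0.5):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=60)

lam = 1.0
K = rbf_kernel(X)
H = K @ np.linalg.inv(K + lam * np.eye(len(y)))      # hat matrix
yhat = H @ y                                         # in-sample predictions
loo = (yhat - np.diag(H) * y) / (1.0 - np.diag(H))   # LOO predictions, no refits
print("LOO mean squared error:", np.mean((y - loo) ** 2))
```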


2016 ◽  
Author(s):  
Michael Powell ◽  
Mahan Hosseini ◽  
John Collins ◽  
Chloe Callahan-Flintoft ◽  
William Jones ◽  
...  

Abstract: Machine learning is a powerful set of techniques that has enhanced the abilities of neuroscientists to interpret information collected through EEG, fMRI, and MEG data. With these powerful techniques comes the danger of overfitting of hyper-parameters, which can render results invalid and cause a failure to generalize beyond the dataset. We refer to this problem as 'over-hyping' and show that it is pernicious despite commonly used precautions. In particular, over-hyping occurs when an analysis is run repeatedly with slightly different analysis parameters and one set of results is selected based on those analyses. When this is done, the resulting method is unlikely to generalize to a new dataset, rendering it a partially, or perhaps even completely, spurious result that will not be valid outside of the data used in the original analysis. While it is commonly assumed that cross-validation is an effective protection against such spurious results generated through overfitting or over-hyping, this is not actually true. In this article, we show that both one-shot and iterative optimization of an analysis are prone to over-hyping, despite the use of cross-validation. We demonstrate that non-generalizable results can be obtained even on non-informative (i.e., random) data by modifying hyper-parameters in seemingly innocuous ways. We recommend a number of techniques for limiting over-hyping, such as lock-boxes, blind analyses, pre-registrations, and nested cross-validation. These techniques are common in other fields that use machine learning, including computer science and physics. Adopting similar safeguards is critical for ensuring the robustness of machine-learning techniques in the neurosciences.
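A minimal sketch of one of the recommended safeguards, nested cross-validation, is given below using scikit-learn; the classifier, parameter grid, and random data are illustrative assumptions. Because hyper-parameters are tuned only on inner folds, the outer-fold score on non-informative data should stay near chance rather than drifting upward with repeated tuning.

```python
# Minimal nested cross-validation sketch: hyper-parameters are tuned on inner
# folds only, so outer-fold scores are never used to pick a configuration.
# Data and parameter grid are illustrative placeholders.
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))        # non-informative (random) features
y = rng.integers(0, 2, size=100)      # random labels

inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1]}, cv=inner)
scores = cross_val_score(grid, X, y, cv=outer)   # tuning happens inside each outer fold
print("nested CV accuracy:", scores.mean())      # should hover near chance (~0.5)
```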


2021 ◽  
Author(s):  
Andrew M V Dadario ◽  
Christian Espinoza ◽  
Wellington Araujo Nogueira

Objective: Anticipating fetal risk is a major factor in reducing child and maternal mortality and suffering. In this context, cardiotocography (CTG) is a low-cost, well-established procedure that has been in use for decades, despite a lack of consensus regarding its impact on outcomes. Machine learning has emerged as an option for automatic classification of CTG records, with previous studies showing expert-level results, but often at the price of reduced generalization potential. With that in mind, the present study sought to improve the statistical rigor of evaluation towards real-world application.
Materials and Methods: A dataset of 2126 CTG recordings labeled as normal, suspect or pathological by the consensus of three expert obstetricians was used to create a baseline random forest model. This was followed by a LightGBM model tuned using Gaussian process regression and post-processed using cross-validation ensembling. Performance was assessed using the area under the precision-recall curve (AUPRC) over 100 experiment executions, each using a testing set comprising 30% of the data, stratified by the class label.
Results: The best model was a cross-validation ensemble of LightGBM models that yielded an AUPRC of 95.82%.
Conclusions: The model is shown to produce consistent expert-level performance at negligible cost. At an estimated 0.78 USD per million predictions, the model can generate value in settings with CTG-qualified personnel, and all the more in their absence.
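A rough sketch of the evaluation protocol (baseline random forest, stratified 30% hold-out, macro-averaged AUPRC over repeated runs) is shown below; the CTG feature matrix is a synthetic placeholder, and the LightGBM tuning and ensembling stages are omitted.

```python
# Sketch of the evaluation loop described above, with synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(0)
X = rng.normal(size=(2126, 21))                              # stand-in for CTG features
y = rng.choice([0, 1, 2], size=2126, p=[0.78, 0.14, 0.08])   # normal/suspect/pathological

auprcs = []
for seed in range(10):                                       # 100 runs in the paper
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    model = RandomForestClassifier(n_estimators=300, random_state=seed).fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)
    auprcs.append(average_precision_score(
        label_binarize(y_te, classes=[0, 1, 2]), proba, average="macro"))
print(f"mean AUPRC over runs: {np.mean(auprcs):.3f}")
```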


2019 ◽  
Vol 19 (5) ◽  
pp. 1936-1942 ◽  
Author(s):  
Paul D. Rosero-Montalvo ◽  
Diego Hernán Peluffo-Ordóñez ◽  
Vivian Felix Lopez Batista ◽  
Jorge Serrano ◽  
Edwin A. Rosero

Sensors ◽  
2019 ◽  
Vol 19 (24) ◽  
pp. 5438 ◽  
Author(s):  
Valentín Barral ◽  
Carlos J. Escudero ◽  
José A. García-Naya ◽  
Pedro Suárez-Casal

Indoor positioning systems based on radio frequency are inherently subject to multipath-related phenomena. These cause ranging systems such as ultra-wideband (UWB) to lose accuracy when secondary propagation paths between two devices are detected. If a positioning algorithm uses ranging measurements without considering these phenomena, it will incur critical errors when estimating position. This work analyzes the performance obtained in a localization system when combining location algorithms with machine learning techniques applied beforehand to classify and mitigate these propagation effects. For this purpose, real-world cross-scenario experiments are considered, where the data extracted from low-cost UWB devices for training the algorithms come from a scenario different from that used for the test. The experimental results reveal that machine learning (ML) techniques are suitable for detecting non-line-of-sight (NLOS) ranging values in this situation.
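A hedged sketch of the cross-scenario NLOS detection step is shown below: a classifier is trained on UWB channel features from one scenario and tested on another. The feature set (range error, received power, first-path power), the classifier choice, and the synthetic data are assumptions, not the authors' setup.

```python
# Hypothetical cross-scenario NLOS/LOS classification with synthetic UWB features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

def make_scenario(n, rng, bias):
    los = rng.integers(0, 2, size=n)                  # 1 = LOS, 0 = NLOS
    rng_err = np.where(los == 1, 0.1, 0.8) * rng.normal(size=n) + bias * (1 - los)
    rx_power = -80 + 5 * los + rng.normal(size=n)
    fp_power = rx_power - np.where(los == 1, 1.0, 6.0) + rng.normal(size=n)
    return np.column_stack([rng_err, rx_power, fp_power]), los

rng = np.random.default_rng(0)
X_train, y_train = make_scenario(2000, rng, bias=0.5)   # training scenario
X_test, y_test = make_scenario(500, rng, bias=0.7)      # different test scenario

clf = GradientBoostingClassifier().fit(X_train, y_train)
print("cross-scenario NLOS/LOS accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```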


2020 ◽  
Vol 8 (6) ◽  
pp. 2526-2530

Global organizations assess aircraft aviation system modules for eco-friendly service estimation. Aircraft emissions and aerodrome infrastructure affect the environment and the residential areas surrounding aerodromes, and authorities have shown growing interest in identifying the most significant environmental impact factors in order to protect ecosystems. In this paper, machine learning techniques are applied to various training datasets related to aircraft aviation systems to extract patterns describing the environmental effects of aircraft. Probabilistic prediction algorithms are applied to support decision systems in generating guidelines for more eco-friendly aerodrome and aircraft architectures. Factor identification and deviations in territorial environmental precautions are observed in order to locate zones in need of ecosystem regulation. The classifications performed over aircraft systems yield measures for classifying environmentally scalable aircraft with better eco-friendly technology in the future. Rule miners identify associations among zone attributes across various countries. The work presented in this paper helps aircraft organizations accurately estimate environmental effect scores for aviation systems.
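As a speculative sketch of the two method families mentioned (probabilistic prediction and rule mining), the example below pairs a Gaussian naive Bayes classifier with mlxtend association-rule mining over toy zone attributes; all feature names, thresholds, and data are invented for illustration and do not reproduce the paper's datasets.

```python
# Speculative sketch: probabilistic prediction of environmental impact plus
# association-rule mining over zone attributes. All names and data are invented.
import numpy as np
import pandas as pd
from sklearn.naive_bayes import GaussianNB
from mlxtend.frequent_patterns import apriori, association_rules

rng = np.random.default_rng(0)
# Probabilistic prediction: does an aerodrome zone exceed an emissions threshold?
X = rng.normal(size=(200, 3))        # e.g. traffic volume, fleet age, runway load
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=200) > 0).astype(int)
proba = GaussianNB().fit(X, y).predict_proba(X)[:, 1]
print("mean predicted exceedance probability:", proba.mean().round(2))

# Rule mining over binary zone attributes (one-hot encoded observations).
zones = pd.DataFrame({
    "high_emissions": rng.integers(0, 2, 50).astype(bool),
    "near_residential": rng.integers(0, 2, 50).astype(bool),
    "old_fleet": rng.integers(0, 2, 50).astype(bool),
})
itemsets = apriori(zones, min_support=0.1, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]].head())
```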


Materials ◽  
2021 ◽  
Vol 14 (22) ◽  
pp. 7034
Author(s):  
Yue Xu ◽  
Waqas Ahmad ◽  
Ayaz Ahmad ◽  
Krzysztof Adam Ostrowski ◽  
Marta Dudek ◽  
...  

The current trend in modern research revolves around novel techniques that can predict the characteristics of materials without consuming time, effort, and experimental costs. The adaptation of machine learning techniques to compute the various properties of materials is gaining more attention. This study aims to use both standalone and ensemble machine learning techniques to forecast the 28-day compressive strength of high-performance concrete. One standalone technique (support vector regression (SVR)) and two ensemble techniques (AdaBoost and random forest) were applied for this purpose. To validate the performance of each technique, coefficient of determination (R²), statistical, and k-fold cross-validation checks were used. Additionally, the contribution of input parameters towards the prediction of results was determined by applying sensitivity analysis. All the techniques employed performed well in predicting the outcomes. The random forest model was the most accurate, with an R² value of 0.93, compared to the support vector regression and AdaBoost models, with R² values of 0.83 and 0.90, respectively. In addition, statistical and k-fold cross-validation checks validated the random forest model as the best performer based on lower error values. However, the prediction performance of the support vector regression and AdaBoost models was also within an acceptable range. This shows that novel machine learning techniques can be used to predict the mechanical properties of high-performance concrete.
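An illustrative comparison of the three model families named above, scored with k-fold cross-validated R², might look like the following; the concrete mix features and target are synthetic stand-ins for the paper's high-performance-concrete dataset.

```python
# Comparison of SVR, AdaBoost, and random forest with k-fold cross-validated R².
# Data are synthetic placeholders for concrete mix features and 28-day strength.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor, RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(size=(400, 8))     # e.g. cement, water, aggregates, admixture, age
y = 30 + 40 * X[:, 0] - 20 * X[:, 1] + 5 * rng.normal(size=400)   # strength (MPa)

models = {
    "SVR": make_pipeline(StandardScaler(), SVR(C=10.0)),
    "AdaBoost": AdaBoostRegressor(n_estimators=200, random_state=0),
    "Random forest": RandomForestRegressor(n_estimators=300, random_state=0),
}
cv = KFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=cv, scoring="r2")
    print(f"{name}: mean R2 = {r2.mean():.2f}")
```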

