Bayesian Inference: An Introduction to Principles and Practice in Machine Learning

Author(s):  
Michael E. Tipping


2021 ◽ 
Author(s):  
Rammohan Shukla ◽  
Nicholas D Henkel ◽  
Marissa A Smail ◽  
Xiaojun Wu ◽  
Heather A Enright ◽  
...  

We probed a transcriptomic dataset of pilocarpine-induced TLE using various ontological, machine-learning, and systems-biology approaches. We showed that, underneath the complex and penetrant changes, moderate-to-subtle upregulated homeostatic and downregulated synaptic changes associated with the dentate gyrus and hippocampal subfields could predict not only TLE but also various other forms of epilepsy. At the cellular level, pyramidal neurons and interneurons showed disparate changes, whereas the proportion of non-neuronal cells increased steadily. A probabilistic Bayesian network demonstrated an aberrant and oscillating physiological interaction between oligodendrocytes and interneurons in driving seizures. Validating the Bayesian inference, we showed that the cell types driving the seizures were associated with known antiepileptic and epileptic drugs. These findings provide predictive biomarkers of epilepsy, insights into the cellular connections and causal changes associated with TLE, and a drug-discovery method focusing on these events.
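No code accompanies the abstract; as a hedged illustration of the kind of discrete Bayesian-network reasoning it describes (cell-type states influencing seizure occurrence), the following Python sketch estimates a conditional probability table from simulated data. The variable names and probabilities are hypothetical and are not taken from the study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5000

# Hypothetical binary states for two cell populations (0 = baseline, 1 = aberrant).
oligo = rng.integers(0, 2, n)
inter = rng.integers(0, 2, n)

# Simulated seizure probability that depends on the joint cell-type state
# (illustrative numbers only, not estimates from the paper).
p_seizure = 0.05 + 0.30 * oligo + 0.40 * inter + 0.20 * oligo * inter
seizure = rng.random(n) < p_seizure

df = pd.DataFrame({"oligo": oligo, "inter": inter, "seizure": seizure})

# Conditional probability table P(seizure | oligo, inter): the building block
# of a discrete Bayesian network with two parent nodes and one child node.
cpt = df.groupby(["oligo", "inter"])["seizure"].mean()
print(cpt)
```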


Proceedings ◽  
2019 ◽  
Vol 33 (1) ◽  
pp. 16
Author(s):  
Ali Mohammad-Djafari

Signal and image processing have always been the main tools in many areas, and in particular in medical and biomedical applications. Nowadays there is a great number of toolboxes, general-purpose and very specialized, in which classical techniques are implemented and can be used: all the transform-based methods (Fourier, wavelets, ...) as well as model-based and iterative regularization methods. Statistical methods have also shown their success in some areas when parametric models are available. Bayesian inference based methods have had great success, in particular when the data are noisy, uncertain, incomplete (missing values) or contain outliers, and where there is a need to quantify uncertainties. In some applications we nowadays have more and more data. To use these “Big Data” to extract more knowledge, machine learning and artificial intelligence tools have shown success and have become mandatory. However, even if these methods have shown success in many domains of machine learning such as classification and clustering, their use in real scientific problems is limited. The main reasons are twofold: first, the users of these tools cannot explain the reasons when they are successful and when they are not; second, in general, these tools cannot quantify the remaining uncertainties. Model-based and Bayesian inference approaches have been very successful in linear inverse problems. However, adjusting the hyperparameters is complex and the cost of the computation is high. Convolutional Neural Network (CNN) and Deep Learning (DL) tools can be useful for pushing these limits farther. On the other side, model-based methods can be helpful for selecting the structure of CNNs and DL, which is crucial to ML success. In this work, I first provide an overview and then a survey of the aforementioned methods and explore the possible interactions between them.
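As a worked illustration of the Bayesian treatment of a linear inverse problem discussed above, the sketch below computes the closed-form Gaussian posterior for y = Hx + noise under a Gaussian prior on x. The dimensions, noise level, and prior scale are arbitrary assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

m, d = 40, 20                     # observations, unknowns (hypothetical sizes)
H = rng.standard_normal((m, d))   # known forward operator
x_true = rng.standard_normal(d)
sigma = 0.1                       # noise standard deviation
y = H @ x_true + sigma * rng.standard_normal(m)

tau = 1.0                         # prior scale: x ~ N(0, tau^2 I)

# Gaussian posterior: covariance = (H^T H / sigma^2 + I / tau^2)^(-1),
# mean = covariance @ H^T y / sigma^2.  The posterior covariance quantifies
# the remaining uncertainty that point-estimate methods do not provide.
precision = H.T @ H / sigma**2 + np.eye(d) / tau**2
cov = np.linalg.inv(precision)
mean = cov @ (H.T @ y) / sigma**2

print("posterior mean error:", np.linalg.norm(mean - x_true))
print("posterior std (first 3 components):", np.sqrt(np.diag(cov))[:3])
```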


Information ◽  
2019 ◽  
Vol 10 (8) ◽  
pp. 261 ◽  
Author(s):  
Lu

An important problem in machine learning is that, when using more than two labels, it is very difficult to construct and optimize a group of learning functions that are still useful when the prior distribution of instances is changed. To resolve this problem, semantic information G theory, Logical Bayesian Inference (LBI), and a group of Channel Matching (CM) algorithms are combined to form a systematic solution. A semantic channel in G theory consists of a group of truth functions or membership functions. In comparison with the likelihood functions, Bayesian posteriors, and logistic functions that are typically used in popular methods, membership functions are more convenient to use, providing learning functions that do not suffer from the above problem. In Logical Bayesian Inference (LBI), every label is learned independently. For multilabel learning, we can directly obtain a group of optimized membership functions from a large enough sample with labels, without preparing different samples for different labels. Furthermore, a group of Channel Matching (CM) algorithms are developed for machine learning. For the Maximum Mutual Information (MMI) classification of three classes with Gaussian distributions in a two-dimensional feature space, only 2–3 iterations are required for the mutual information between three classes and three labels to surpass 99% of the MMI for most initial partitions. For mixture models, the Expectation-Maximization (EM) algorithm is improved to form the CM-EM algorithm, which can outperform the EM algorithm when the mixture ratios are imbalanced, or when local convergence exists. The CM iteration algorithm needs to be combined with neural networks for MMI classification in high-dimensional feature spaces. LBI needs further investigation for the unification of statistics and logic.
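For context, the sketch below implements the standard Expectation-Maximization algorithm for a two-component Gaussian mixture in NumPy. It is the baseline that the CM-EM algorithm described above improves upon, not CM-EM itself, and the imbalanced mixture ratio is a made-up example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated 1-D data from an imbalanced two-component mixture (hypothetical ratios).
x = np.concatenate([rng.normal(-2.0, 1.0, 900), rng.normal(3.0, 1.0, 100)])

# Initial guesses for weights, means, and variances.
pi, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

for _ in range(200):
    # E-step: responsibility of each component for each data point.
    dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    resp = pi * dens
    resp /= resp.sum(axis=1, keepdims=True)

    # M-step: re-estimate mixture weights, means, and variances.
    nk = resp.sum(axis=0)
    pi = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print("weights:", pi, "means:", mu, "variances:", var)
```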


Author(s):  
Mayank Pandey ◽  

Machine learning is a branch of AI (artificial intelligence) in which a computational system extends its knowledge of set methodical behaviors from the data fed to it, essentially developing analytical skills that help it identify patterns and make decisions with little to no participation by a human being. Computer algorithms help the system gain experience and improve over time, for use by both consumers and corporations. In today’s technologically advanced world, machine learning has given us self-driving cars, speech recognition software, and AI agents such as Siri and Google Assistant. This project examines how the Beta function came to be and how Stirling’s formula is used to calculate the magnitude of this function for large input values. The Beta function can then be used to produce a Beta distribution of probabilities to estimate whether people will actually watch a video they come across in their recommendations or search feed, and then, using Bayesian inference, to update the prior predictions.
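A minimal sketch of the machinery described above, assuming hypothetical watch counts: a Beta prior over the watch probability, a conjugate Bayesian update, and the Beta function evaluated through log-gamma values (for which Stirling’s formula provides the large-argument approximation).

```python
import math

def log_beta(a, b):
    # log B(a, b) = log Gamma(a) + log Gamma(b) - log Gamma(a + b).
    # Working in log space keeps the result stable for very large a and b,
    # the regime where Stirling-type approximations of Gamma apply.
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

# Hypothetical prior belief about the chance a recommended video is watched.
a_prior, b_prior = 2.0, 8.0            # Beta(2, 8): roughly a 20% expected watch rate

# Hypothetical observed data: 30 watches out of 100 impressions.
watched, shown = 30, 100

# Conjugate Bayesian update: posterior is Beta(a + successes, b + failures).
a_post = a_prior + watched
b_post = b_prior + (shown - watched)

print("posterior mean watch probability:", a_post / (a_post + b_post))
print("log B(a_post, b_post):", log_beta(a_post, b_post))
```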


2019 ◽  
Author(s):  
Harshad M Paranjape ◽  
Kenneth I. Aycock ◽  
Craig Bonsignore ◽  
Jason D. Weaver ◽  
Brent A. Craven ◽  
...  

We implement an approach using Bayesian inference and machine learning to calibrate the material parameters of a constitutive model for the superelastic deformation of NiTi shape memory alloy. We use a diamond-shaped specimen geometry that is suited to calibrate both tensile and compressive material parameters from a single test. We adopt the Bayesian inference calibration scheme to take full-field surface strain measurements obtained using digital image correlation together with global load data as an input for calibration. The calibration is performed by comparing these two experimental quantities of interest with the corresponding results from a simulation library built with the superelastic forward finite element model. We present a machine learning based approach to enrich the simulation library using a surrogate model. This improves the calibration accuracy to the extent permitted by the accuracy of the underlying material model and also improves the computational efficiency. We demonstrate, verify, and partially validate the calibration results through various examples. We also demonstrate how the uncertainty in the calibrated superelastic material parameters can propagate to a subsequent simulation of fatigue loading. This approach is versatile and can be used to calibrate other models of superelastic deformation from data obtained using various modalities. This probabilistic calibration approach can become an integral part of a framework to assess and communicate the credibility of simulations performed in the design of superelastic NiTi articles such as medical devices. The knowledge obtained from this calibration approach is most effective when the limitations of the underlying model and the suitability of the training data used to calibrate the model are understood and communicated.
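The calibration pipeline itself is not reproduced here; as a heavily simplified sketch of the idea, the code below calibrates a single hypothetical material parameter by fitting a cheap polynomial surrogate to a handful of "expensive" simulation runs and then computing a grid-based Bayesian posterior against one noisy measurement. The forward model, noise level, and parameter range are all assumptions for illustration, not the finite element model or data of the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for the expensive simulation library: predicted load as a function
# of one material parameter theta (purely illustrative forward model).
def expensive_simulation(theta):
    return 50.0 + 12.0 * theta - 0.8 * theta**2

train_theta = np.linspace(1.0, 9.0, 6)
train_load = expensive_simulation(train_theta)

# Cheap surrogate enriching the simulation library: a quadratic fit to the sparse runs.
surrogate = np.polynomial.Polynomial.fit(train_theta, train_load, deg=2)

# Hypothetical experimental load measurement with a known noise level.
theta_true, noise_sd = 4.2, 1.5
y_obs = expensive_simulation(theta_true) + rng.normal(0.0, noise_sd)

# Grid-based Bayesian calibration: uniform prior, Gaussian likelihood via the surrogate.
grid = np.linspace(1.0, 9.0, 401)
log_like = -0.5 * ((y_obs - surrogate(grid)) / noise_sd) ** 2
post = np.exp(log_like - log_like.max())
post /= post.sum()

print("posterior mean of theta:", float((grid * post).sum()))
```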


2021 ◽  
Author(s):  
Alexander Kanonirov ◽  
Ksenia Balabaeva ◽  
Sergey Kovalchuk

The relevance of this study lies in improving the understanding of machine learning models. We present a method for interpreting clustering results and apply it to the case of clinical pathway modeling. The method is based on statistical inference and allows one to obtain a description of the clusters, determining the influence of a particular feature on the difference between them. Based on the proposed approach, it is possible to determine the characteristic features of each cluster. Finally, we compare the method with the Bayesian inference explanation and with the interpretation of medical experts [1].
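The paper’s exact inference procedure is not detailed in the abstract; one common way to realize this kind of cluster description is a per-feature statistical test comparing a cluster against the remaining data, as in the hedged sketch below. The feature names and simulated data are hypothetical.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(4)

# Simulated patient features and cluster labels (hypothetical clinical pathway data).
features = ["length_of_stay", "num_procedures", "lab_score"]
n = 300
X = rng.normal(size=(n, len(features)))
labels = rng.integers(0, 3, n)
X[labels == 1, 0] += 2.0           # make feature 0 characteristic of cluster 1

cluster = 1
for j, name in enumerate(features):
    inside, outside = X[labels == cluster, j], X[labels != cluster, j]
    # Nonparametric test of whether the feature differs between the cluster and the rest;
    # features with small p-values are treated as characteristic of the cluster.
    stat, p = mannwhitneyu(inside, outside)
    print(f"{name}: p = {p:.3g}")
```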


2019 ◽  
Vol 28 (01) ◽  
pp. 055-055

Albers DJ, Levine ME, Stuart A, Mamykina L, Gluckman B, Hripcsak G. Mechanistic machine learning: how data assimilation leverages physiological knowledge using Bayesian inference to forecast the future, infer the present, and phenotype. J Am Med Inform Assoc 2018;25(10):1392-401. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6188514/

Oktay O, Ferrante E, Kamnitsas K, Heinrich M, Bai W, Caballero J, Cook SA, de Marvao A, Dawes T, O'Regan DP, Kainz B, Glocker B, Rueckert D. Anatomically Constrained Neural Networks (ACNNs): application to cardiac image enhancement and segmentation. IEEE Trans Med Imaging 2018;37(2):384-95. https://spiral.imperial.ac.uk:8443/handle/10044/1/50440

Lee J, Sun J, Wang F, Wang S, Jun CH, Jiang X. Privacy-preserving patient similarity learning in a federated environment: development and analysis. JMIR Med Inform 2018;6(2):e20. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5924379/


2021 ◽  
Vol 57 (15) ◽  
pp. 1855-1870
Author(s):  
Luke Gundry ◽  
Si-Xuan Guo ◽  
Gareth Kennedy ◽  
Jonathan Keith ◽  
Martin Robinson ◽  
...  

Advanced data analysis tools such as mathematical optimisation, Bayesian inference and machine learning have the capability to revolutionise the field of quantitative voltammetry.


10.2196/24478 ◽  
2020 ◽  
Vol 22 (12) ◽  
pp. e24478
Author(s):  
Christopher D'Ambrosia ◽  
Henrik Christensen ◽  
Eliah Aronoff-Spencer

Background Assigning meaningful probabilities of SARS-CoV-2 infection risk presents a diagnostic challenge across the continuum of care. Objective The aim of this study was to develop and clinically validate an adaptable, personalized diagnostic model to assist clinicians in ruling in and ruling out COVID-19 in potential patients. We compared the diagnostic performance of probabilistic, graphical, and machine learning models against a previously published benchmark model. Methods We integrated patient symptoms and test data using machine learning and Bayesian inference to quantify individual patient risk of SARS-CoV-2 infection. We trained models with 100,000 simulated patient profiles based on 13 symptoms and estimated local prevalence, imaging, and molecular diagnostic performance from published reports. We tested these models with consecutive patients who presented with a COVID-19–compatible illness at the University of California San Diego Medical Center over the course of 14 days starting in March 2020. Results We included 55 consecutive patients with fever (n=43, 78%) or cough (n=42, 77%) presenting for ambulatory (n=11, 20%) or hospital care (n=44, 80%). In total, 51% (n=28) were female and 49% (n=27) were aged <60 years. Common comorbidities included diabetes (n=12, 22%), hypertension (n=15, 27%), cancer (n=9, 16%), and cardiovascular disease (n=7, 13%). Of these, 69% (n=38) were confirmed via reverse transcription-polymerase chain reaction (RT-PCR) to be positive for SARS-CoV-2 infection, and 20% (n=11) had repeated negative nucleic acid testing and an alternate diagnosis. Bayesian inference network, distance metric learning, and ensemble models discriminated between patients with SARS-CoV-2 infection and alternate diagnoses with sensitivities of 81.6%-84.2%, specificities of 58.8%-70.6%, and accuracies of 61.4%-71.8%. After integrating imaging and laboratory test statistics with the predictions of the Bayesian inference network, changes in diagnostic uncertainty at each step in the simulated clinical evaluation process were highly sensitive to location, symptom, and diagnostic test choices. Conclusions Decision support models that incorporate symptoms and available test results can help providers diagnose SARS-CoV-2 infection in real-world settings.
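As a minimal sketch of the sequential Bayesian updating underlying such a model, the code below applies Bayes' rule once per piece of evidence, assuming conditional independence between observations; the prevalence, sensitivity, and specificity values are placeholders, not the figures used in the study.

```python
def bayes_update(prior, sensitivity, specificity, positive):
    """Posterior probability of infection after one symptom or test observation."""
    if positive:
        num = sensitivity * prior
        den = num + (1.0 - specificity) * (1.0 - prior)
    else:
        num = (1.0 - sensitivity) * prior
        den = num + specificity * (1.0 - prior)
    return num / den

# Hypothetical local prevalence and operating characteristics (placeholders only).
p = 0.10                                   # prior from estimated local prevalence
p = bayes_update(p, 0.70, 0.60, True)      # fever present
p = bayes_update(p, 0.90, 0.85, True)      # chest imaging suggestive
p = bayes_update(p, 0.75, 0.99, False)     # first RT-PCR negative

print(f"posterior probability of SARS-CoV-2 infection: {p:.2f}")
```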

