Bayesian Logical Neural Networks for Human-Centered Applications in Medicine

Author(s):  
Juan G. Diaz Ochoa ◽  
Lukas Maier ◽  
Orsolya Csiszar

Medicine is characterized by its inherent ambiguity, i.e., the difficulty of identifying and obtaining exact outcomes from the available data. Regarding this problem, Electronic Health Records (EHRs) aim to avoid imprecision in data recording, for instance by recording data automatically or by integrating data that is readable by both humans and machines. However, the underlying biology and physiological processes introduce a constant epistemic uncertainty, which has deep implications for the way a patient's condition is estimated. For instance, for some patients it is not possible to speak of an exact diagnosis but only of the suspicion of a disease, which reveals that medical practice is often ambiguous. In this work, we report a novel modeling methodology combining explainable models, defined on Logic Neural Networks (LONNs), and Bayesian Networks (BNs) that delivers potentially ambiguous outcomes, for instance recommended medical procedures (Therapy Keys (TK)), depending on the uncertainty of the observed data. If epistemic uncertainty is generated by the underlying physiology, the model delivers exact or ambiguous results depending on the individual parameters of each patient. Thus, our model does not aim to assist the user by providing exact results; it is a user-centered solution that informs the user when a given recommendation, in this case a therapy, is uncertain and must be evaluated carefully, implying that the end user must be a professional who will not rely fully on automatic recommendations. This novel methodology has been tested on a database of patients with heart failure.
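
As a rough illustration of how such a hybrid could flag ambiguity, the following is a minimal sketch, assuming a Łukasiewicz-style weighted logic neuron and Monte Carlo dropout as a stand-in Bayesian approximation; all function names, thresholds, and inputs are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def logic_and(x, w):
    """Lukasiewicz-style weighted soft AND: clip(w.x - (sum(w) - 1), 0, 1).
    Inputs and output are fuzzy truth values in [0, 1]."""
    return np.clip(np.dot(w, x) - (np.sum(w) - 1.0), 0.0, 1.0)

def mc_recommend(x, weights, n_samples=200, drop_p=0.2, band=(0.4, 0.6)):
    """Monte Carlo dropout over the antecedent weights (an illustrative
    Bayesian approximation, not the paper's method): repeated stochastic
    passes give a distribution over the recommendation's truth value, and
    a mean near 0.5 or a wide spread is reported as 'ambiguous'."""
    outs = []
    for _ in range(n_samples):
        mask = rng.random(weights.shape) > drop_p     # drop some antecedents
        outs.append(logic_and(x, weights * mask / (1 - drop_p)))
    outs = np.array(outs)
    mean, std = outs.mean(), outs.std()
    if band[0] < mean < band[1] or std > 0.15:
        label = "ambiguous: professional review required"
    else:
        label = "recommend" if mean >= band[1] else "reject"
    return mean, std, label

x = np.array([0.9, 0.7, 0.4])   # hypothetical fuzzy findings for one patient
w = np.array([1.0, 0.8, 0.6])   # hypothetical learned antecedent weights
print(mc_recommend(x, w))
```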

2021 ◽  
Vol 99 (Supplement_2) ◽  
pp. 25-26
Author(s):  
Sterling H Fahey ◽  
Sarah West ◽  
John M Long ◽  
Carey Satterfield ◽  
Rodolfo C Cardoso

Gestational nutrient restriction causes epigenetic and phenotypic changes that affect multiple physiological processes in the offspring. Gonadotropes, the cells in the anterior pituitary that secrete luteinizing hormone (LH) and follicle-stimulating hormone (FSH), are particularly sensitive to nutritional changes during fetal development. Our objective herein was to investigate the effects of gestational nutrient restriction on LH protein content and the number of gonadotropes in the fetal bovine pituitary. We hypothesized that moderate nutrient restriction during mid to late gestation decreases pituitary LH production and that this decrease is associated with a reduced number of gonadotropes. Embryos were produced in vitro with X-bearing semen from a single sire and then split to generate monozygotic twins. Each twin was transferred to a virgin dam, yielding four sets of female twins. At gestational d 158, the dams were randomly assigned to two groups, one fed 100% of NRC requirements (control) and the other fed 70% of NRC requirements (restricted) during the last trimester of gestation, ensuring each pair of twins had one twin in each group. At gestational d 265, the fetuses (n = 4/group) were euthanized by barbiturate overdose, and the pituitaries were collected. Western blots were performed using an ovine LH-specific antibody (Dr. A.F. Parlow, NIDDK). Total LH protein content in the pituitary tended to be decreased in the restricted fetuses compared with controls (P < 0.10). However, immunohistochemical analysis of the pituitary did not reveal any significant change in the total number of LH-positive cells (control = 460±23 cells/0.5 mm2; restricted = 496±45 cells/0.5 mm2, P = 0.58). In conclusion, while maternal nutrient restriction during gestation resulted in a trend toward reduced LH content in the fetal pituitary, the immunohistological findings suggest that these changes are likely related to the individual potential of each gonadotrope to produce LH rather than to alterations in cell differentiation during fetal development.


1980 ◽  
Vol 1 (8) ◽  
pp. 3-6
Author(s):  
George J. Annas

In an extraordinary and highly controversial 5-4 decision, the United States Supreme Court decided on June 30, 1980, that the United States Constitution does not require either the federal government or the individual states to fund medically necessary abortions for poor women who qualify for Medicaid. At issue in this case is the constitutionality of the Hyde Amendment. The applicable 1980 version provides: "[N]one of the funds provided by this joint resolution shall be used to perform abortions except where the life of the mother would be endangered if the fetus were carried to term; or except for such medical procedures necessary for the victims of rape or incest when such rape or incest has been reported promptly to a law enforcement agency or public health service." (emphasis supplied)


2004 ◽  
Vol 34 (1) ◽  
pp. 37-52
Author(s):  
Wiktor Jassem ◽  
Waldemar Grygiel

The mid-frequencies and bandwidths of formants 1–5 were measured at the targets, at +0.01 s, and at −0.01 s relative to the targets of vowels in a 100-word list read by five male and five female speakers, for a total of 3390 10-variable spectrum specifications. Each of the six Polish vowel phonemes was represented approximately the same number of times. The 3390 × 10 original-data matrix was processed by probabilistic neural networks to produce a classification of the spectra with respect to (a) vowel phoneme, (b) identity of the speaker, and (c) speaker gender. For (a) and (b), networks with added input information from another independent variable were also used, as were matrices of the numerical data appropriately normalized. Mean scores for classification with respect to phonemes in a multi-speaker design in the testing sets were around 95%, and mean speaker-dependent scores for the phonemes varied between 86% and 100%, with two speakers scoring 100% correct. The individual voices were identified between 95% and 96% of the time, and classifications of the spectra for speaker gender were practically 100% correct.
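
The abstract does not specify the network configuration; in Specht's classical formulation, a probabilistic neural network reduces to a Parzen-window classifier with one Gaussian kernel per training pattern and one summation unit per class. A minimal sketch of that idea, on illustrative random stand-ins for the 10-variable spectrum vectors:

```python
import numpy as np

class PNN:
    """Minimal probabilistic neural network (Parzen-window classifier):
    one Gaussian kernel per stored training pattern, class score equal to
    the mean kernel activation over that class's patterns."""
    def __init__(self, sigma=0.5):
        self.sigma = sigma

    def fit(self, X, y):
        self.X, self.y = X, y
        self.classes = np.unique(y)
        return self

    def predict(self, X):
        preds = []
        for x in X:
            d2 = np.sum((self.X - x) ** 2, axis=1)      # squared distances
            k = np.exp(-d2 / (2 * self.sigma ** 2))     # kernel activations
            scores = [k[self.y == c].mean() for c in self.classes]
            preds.append(self.classes[np.argmax(scores)])
        return np.array(preds)

# illustrative data only: 10-variable vectors, six "phoneme" classes
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 10))
y = rng.integers(0, 6, size=60)
model = PNN().fit(X[:50], y[:50])
print((model.predict(X[50:]) == y[50:]).mean())   # test accuracy
```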


2020 ◽  
Vol 34 (04) ◽  
pp. 5620-5627
Author(s):  
Murat Sensoy ◽  
Lance Kaplan ◽  
Federico Cerutti ◽  
Maryam Saleki

Deep neural networks are often ignorant about what they do not know and overconfident when they make uninformed predictions. Some recent approaches quantify classification uncertainty directly by training the model to output high uncertainty for data samples close to class boundaries or from outside the training distribution. These approaches use an auxiliary data set during training to represent out-of-distribution samples. However, selecting or creating such an auxiliary data set is non-trivial, especially for high-dimensional data such as images. In this work, we develop a novel neural network model that is able to express both aleatoric and epistemic uncertainty and thereby distinguish decision-boundary and out-of-distribution regions of the feature space. To this end, variational autoencoders and generative adversarial networks are incorporated to automatically generate out-of-distribution exemplars for training. Through extensive analysis, we demonstrate that on well-known data sets the proposed approach provides better estimates of uncertainty for in-distribution samples, out-of-distribution samples, and adversarial examples than state-of-the-art approaches, including recent Bayesian approaches for neural networks and anomaly detection methods.
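
The uncertainty estimate in this line of work is typically read off a Dirichlet output head: the network emits non-negative evidence per class, and a lack of evidence signals epistemic uncertainty. A minimal sketch of that computation, assuming the standard evidential formulation and omitting the VAE/GAN exemplar generation:

```python
import numpy as np

def dirichlet_uncertainty(evidence):
    """Given non-negative per-class evidence e_k from the network head,
    the Dirichlet parameters are alpha_k = e_k + 1. The expected class
    probabilities are alpha / S with S = sum(alpha), and the uncertainty
    mass (vacuity) is u = K / S: u -> 1 when evidence vanishes (e.g. far
    out-of-distribution), u -> 0 when evidence is abundant."""
    alpha = np.asarray(evidence, dtype=float) + 1.0
    S = alpha.sum()
    prob = alpha / S          # expected class probabilities
    u = len(alpha) / S        # epistemic uncertainty mass
    return prob, u

print(dirichlet_uncertainty([0.0, 0.1, 0.0]))    # near-vacuous: high u
print(dirichlet_uncertainty([45.0, 2.0, 1.0]))   # confident: low u
```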


Author(s):  
EMILIO CORCHADO ◽  
COLIN FYFE

We consider the difficult problem of identifying independent causes from a mixture in which these causes interfere with one another in a particular manner: the causes considered are visual inputs to a neural network system, created by independent underlying causes that may occlude each other. The prototypical problem in this area is a mixture of horizontal and vertical bars in which each horizontal bar interferes with the representation of each vertical bar and vice versa. Previous researchers have developed artificial neural networks that can identify the individual causes; we seek to go further by creating artificial neural networks that identify all the horizontal bars from only such a mixture. This task is a necessary precursor to the development of the concept of "horizontal" or "vertical".
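
For readers unfamiliar with the benchmark, here is a minimal sketch of the bars-mixture generator: each bar switches on independently, and overlapping pixels occlude via a binary OR, which is exactly what makes the causes interfere. The image size and bar probability are illustrative, not taken from the paper.

```python
import numpy as np

def bars_mixture(n=8, p=0.125, rng=np.random.default_rng(2)):
    """Standard bars problem: each of the n horizontal and n vertical bars
    is switched on independently with probability p; overlapping pixels
    occlude (binary OR), so the mixture is non-linear in the causes."""
    img = np.zeros((n, n), dtype=int)
    for r in range(n):
        if rng.random() < p:
            img[r, :] = 1     # horizontal bar occludes whatever it crosses
    for c in range(n):
        if rng.random() < p:
            img[:, c] = 1     # vertical bar
    return img

print(bars_mixture())
```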


Geophysics ◽  
2021 ◽  
pp. 1-45
Author(s):  
Runhai Feng ◽  
Dario Grana ◽  
Niels Balling

Segmentation of faults from seismic images is an important step in reservoir characterization. With the recent development of deep-learning methods and the availability of massive computing power, automatic interpretation of seismic faults has become possible. The likelihood of occurrence of a fault can be quantified using a sigmoid function. Our goal is to quantify the fault-model uncertainty that is generally not captured by deep-learning tools. We propose to use dropout, a regularization technique for preventing overfitting and co-adaptation in hidden units, to approximate Bayesian inference and estimate a principled uncertainty over functions. In particular, the variance of the learned model is decomposed into aleatoric and epistemic parts. The proposed method is applied to a real dataset from the Netherlands F3 block with two different dropout ratios in the convolutional neural networks. The aleatoric uncertainty is irreducible, since it relates to the stochastic dependency within the input observations. As the number of Monte Carlo realizations increases, the epistemic uncertainty asymptotically converges and the model standard deviation decreases, because the variability of the model parameters is better simulated or explained with a larger sample size. This analysis can quantify the confidence with which fault predictions of lower uncertainty can be used; additionally, it suggests where more training data are needed to reduce the uncertainty in low-confidence regions.
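
A minimal sketch of the decomposition step, assuming the common Monte Carlo dropout estimator in which the aleatoric part is the mean of p(1 − p) over stochastic forward passes and the epistemic part is the variance of p across passes; the paper's exact estimator may differ.

```python
import numpy as np

def mc_dropout_uncertainty(prob_samples):
    """prob_samples: array (T, H, W) of sigmoid fault probabilities from T
    stochastic forward passes with dropout kept active at test time.
    Returns the mean probability map plus a decomposition of predictive
    uncertainty: aleatoric = E[p(1-p)] (data noise, irreducible) and
    epistemic = Var[p] across passes (model uncertainty, which shrinks as
    the model is better constrained by training data)."""
    p = np.asarray(prob_samples)
    aleatoric = np.mean(p * (1.0 - p), axis=0)
    epistemic = np.var(p, axis=0)
    return p.mean(axis=0), aleatoric, epistemic

# illustrative: 50 stochastic passes over a 4x4 image patch
rng = np.random.default_rng(3)
samples = rng.beta(2, 5, size=(50, 4, 4))
mean_p, alea, epis = mc_dropout_uncertainty(samples)
print(mean_p.round(2), alea.round(3), epis.round(3), sep="\n")
```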


2018 ◽  
Vol 12 (04) ◽  
pp. 481-500
Author(s):  
Naifan Zhuang ◽  
The Duc Kieu ◽  
Jun Ye ◽  
Kien A. Hua

With the growth of crowd phenomena in the real world, crowd scene understanding is becoming an important task in anomaly detection and public security. Visual ambiguities and occlusions, high density, low mobility, and scene semantics, however, make this problem a great challenge. In this paper, we propose an end-to-end deep architecture, convolutional nonlinear differential recurrent neural networks (CNDRNNs), for crowd scene understanding. CNDRNNs consist of GoogLeNet Inception V3 convolutional neural networks (CNNs) and nonlinear differential recurrent neural networks (RNNs). Unlike traditional non-end-to-end solutions, which separate the steps of feature extraction and parameter learning, CNDRNN uses a unified deep model to optimize the parameters of the CNN and RNN jointly, and thus has the potential to yield a more harmonious model. The proposed architecture takes sequential raw image data as input and does not rely on tracklet or trajectory detection; it therefore has clear advantages over traditional flow-based and trajectory-based methods, especially in challenging crowd scenarios of high density and low mobility. Taking advantage of both components, CNDRNN can effectively analyze crowd semantics: the CNN models the semantic crowd scene information, while the nonlinear differential RNN models the motion information. In the differential RNN, the individual and increasing orders of the derivative of states (DoS) progressively build up the ability of the long short-term memory (LSTM) gates to detect different levels of salient dynamical patterns, with deeper stacked layers modeling higher orders of DoS. Finally, whereas existing LSTM-based crowd scene solutions explore deep temporal information and are claimed to be "deep in time," our proposed CNDRNN models spatial and temporal information in a unified architecture and is thus "deep in space and time." Extensive performance studies on the Violent-Flows, CUHK Crowd, and NUS-HGA datasets show that the proposed technique significantly outperforms state-of-the-art methods.
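
A minimal sketch of the derivative-of-states idea, assuming simple discrete time differences stacked as extra gate inputs; the published dRNN gating equations are more involved, so this shows only the core feature construction.

```python
import numpy as np

def derivative_of_states(states, order=2):
    """Given the internal state sequence of a recurrent layer, shape (T, D),
    return the stacked 0th..order-th discrete derivatives of states (DoS).
    In a differential RNN these derivatives, rather than the raw state
    alone, drive the LSTM gates, so salient motion at several temporal
    scales can open or close them."""
    feats = [states]
    d = states
    for _ in range(order):
        # discrete time derivative; prepend first row to keep length T
        d = np.diff(d, axis=0, prepend=d[:1])
        feats.append(d)
    return np.concatenate(feats, axis=1)   # shape (T, D * (order + 1))

# illustrative state trajectory: 6 time steps, 3 state dimensions
s = np.cumsum(np.random.default_rng(4).normal(size=(6, 3)), axis=0)
print(derivative_of_states(s).shape)        # (6, 9)
```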


Author(s):  
Christina Corbane ◽  
Vasileios Syrris ◽  
Filip Sabo ◽  
Panagiotis Politis ◽  
Michele Melchiorri ◽  
...  

Spatially consistent and up-to-date maps of human settlements are crucial for addressing policies related to urbanization and sustainability, especially in the era of an increasingly urbanized world. The availability of open and free Sentinel-2 data from the Copernicus Earth Observation program offers a new opportunity for wall-to-wall mapping of human settlements at a global scale. This paper presents a deep-learning-based framework for fully automated extraction of built-up areas at a spatial resolution of 10 m from a global composite of Sentinel-2 imagery. A multi-neuro modeling methodology building on a simple Convolutional Neural Network architecture for pixel-wise image classification of built-up areas is developed. The core features of the proposed model are an image patch of 5 × 5 pixels, adequate for describing built-up areas in Sentinel-2 imagery, and a lightweight topology with a total of 1,448,578 trainable parameters, comprising 4 2D convolutional layers and 2 flattened layers. Deployment of the model on the global Sentinel-2 image composite provides the most detailed and complete map of built-up areas for the reference year 2018. Validation of the results against an independent reference dataset of building footprints covering 277 sites across the world establishes the reliability of the built-up layer produced by the proposed framework and the robustness of the model. The results of this study contribute to cutting-edge research in the field of automated built-up area mapping from remote sensing data and establish a new reference layer for the analysis of the spatial distribution of human settlements across the rural–urban continuum.
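
As a sketch of a patch classifier in the spirit of the described topology, the following assumes illustrative channel widths and band count; the exact hyperparameters that reproduce the reported 1,448,578-parameter model are not given in the abstract.

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Sketch of a lightweight patch classifier: a 5x5 multispectral
    Sentinel-2 patch passes through 4 conv layers and 2 dense layers to a
    built-up probability. Channel counts and the number of bands are
    illustrative assumptions, not the published configuration."""
    def __init__(self, bands=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(bands, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 5 * 5, 256), nn.ReLU(),
            nn.Linear(256, 1),          # logit: built-up vs. not built-up
        )

    def forward(self, x):               # x: (N, bands, 5, 5)
        return torch.sigmoid(self.classifier(self.features(x)))

model = PatchCNN()
print(model(torch.randn(2, 10, 5, 5)).shape)   # torch.Size([2, 1])
```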

