Interpreting Clinical Latent Representations Using Autoencoders and Probabilistic Models

Author(s):  
David Chushig-Muzo ◽  
Cristina Soguero-Ruiz ◽  
Pablo de Miguel-Bohoyo ◽  
Inmaculada Mora-Jiménez

Perpetrating fraud for financial gain is a well-known phenomenon, and the rapid adoption of smartphones and growing internet penetration have accelerated the embrace of digital technology. Financial transactions have evolved over the years from paper currency to electronic media, in the form of credit cards and interbank electronic transfers. The consumer shift toward e-commerce has not deterred criminals; instead, they treat it as an opportunity to make money through fraudulent methods, and they are rapidly improving their fraud techniques. A limitation of current supervised and unsupervised machine learning approaches to fraud discovery is their inability to learn and explore all possible information representations. The proposed system, VAE-based fraud detection, uses a variational autoencoder to predict and detect fraudulent transactions. The model consists of three major components: an encoder, a decoder, and a fraud detector. It learns a latent-variable probabilistic model by optimizing the average likelihood of the observed data, and the fraud detector uses the latent representations obtained from the variational autoencoder to classify transactions as fraudulent or legitimate. The model is applied to a real-world credit card fraud dataset. The experimental results show that the implemented model performs better than supervised logistic regression, unsupervised autoencoders, and a random forest ensemble model.
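The three-component pipeline described in the abstract (encoder, latent sampling via the reparameterization trick, and a fraud detector on the latent codes) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the linear encoder, all function names, dimensions, and untrained random weights are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    """Linear encoder: maps a transaction vector to the parameters of q(z|x)."""
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps (reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_divergence(mu, logvar):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian, one value per sample.

    This is the regularization term of the ELBO that the VAE optimizes."""
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar), axis=1)

def fraud_score(z, w, b):
    """Logistic fraud detector operating on the latent representation."""
    return 1.0 / (1.0 + np.exp(-(z @ w + b)))

# Toy dimensions: 8 transaction features compressed to 2 latent dimensions.
n, d, k = 5, 8, 2
x = rng.standard_normal((n, d))                  # stand-in transaction batch
W_mu = rng.standard_normal((d, k))
W_logvar = rng.standard_normal((d, k)) * 0.1
mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
kl = kl_divergence(mu, logvar)
scores = fraud_score(z, rng.standard_normal(k), 0.0)  # in (0, 1): fraud probability
```

In a trained model the encoder/decoder weights would be fit by maximizing the ELBO (reconstruction term minus the KL term above), and the detector would be trained on labeled transactions; here the weights are random purely to show the data flow.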


2020 ◽  
Vol 34 (04) ◽  
pp. 6404-6412
Author(s):  
Mike Wu ◽  
Kristy Choi ◽  
Noah Goodman ◽  
Stefano Ermon

Despite the recent success of probabilistic models and their applications, generative models trained using traditional inference techniques struggle to adapt to new distributions, even when the target distribution is closely related to the ones seen during training. In this work, we present a doubly-amortized variational inference procedure as a way to address this challenge. By sharing computation across not only a set of query inputs, but also a set of different, related probabilistic models, we learn transferable latent representations that generalize across several related distributions. In particular, given a set of distributions over images, we find that the learned representations transfer to different data transformations. We empirically demonstrate the effectiveness of our method by introducing the MetaVAE, and show that it significantly outperforms baselines on downstream image classification tasks on MNIST (10-50%) and NORB (10-35%).
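The "doubly-amortized" idea, sharing one inference network across both query inputs and a set of related distributions, can be illustrated with a toy sketch. Everything here (the tanh encoder, the transformations standing in for related distributions, the shapes) is a hypothetical illustration of the sharing pattern, not the MetaVAE architecture itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def shared_encoder(x, W):
    """One amortized encoder reused unchanged for every related distribution."""
    return np.tanh(x @ W)

# Three related distributions: the same base data under different transformations
# (analogous to a set of image distributions related by data transformations).
base = rng.standard_normal((4, 6))
transforms = [lambda x: x, lambda x: 2.0 * x, lambda x: x + 1.0]

W = rng.standard_normal((6, 3))
# Doubly amortized: computation is shared across query inputs (the batch axis)
# and across the set of related probabilistic models (the transforms).
latents = [shared_encoder(t(base), W) for t in transforms]
```

Training such a shared encoder jointly on all the related distributions is what would encourage the latent representations to transfer across them.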


2020 ◽  
Vol 17 (6) ◽  
pp. 76-91
Author(s):  
E. D. Solozhentsev

The scientific problem in economics of "managing the quality of human life" is formulated on the basis of artificial intelligence, the algebra of logic, and logical-probabilistic calculus. Managing the quality of human life is represented as managing the processes of a person's treatment, education, and decision making. Events in these processes, and the corresponding logical variables, relate to the behavior of the person, of other persons, and of the infrastructure. The processes shaping the quality of human life are modeled, analyzed, and managed with the participation of the person himself. Scenarios and structural, logical, and probabilistic models for managing the quality of human life are given. Special software for quality management is described. The relationship between the quality of human life and the digital economy is examined. We also consider the role of public opinion in management "from the bottom", based on a synthesis of many studies on the management of the economy and the state; management from the bottom also serves as feedback to management from the top.


2016 ◽  
Author(s):  
Stewart M. Edie ◽  
Peter D. Smits ◽  
David Jablonski

2016 ◽  
Vol 51 (1) ◽  
pp. 469-484 ◽  
Author(s):  
Damien Octeau ◽  
Somesh Jha ◽  
Matthew Dering ◽  
Patrick McDaniel ◽  
Alexandre Bartel ◽  
...  

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Doan Cong Le ◽  
Krisana Chinnasarn ◽  
Jirapa Chansangrat ◽  
Nattawut Keeratibharat ◽  
Paramate Horkaew

Segmenting a liver and its peripherals from abdominal computed tomography is a crucial step toward computer-aided diagnosis and therapeutic intervention. Despite recent advances in computing methods, faithfully segmenting the liver has remained a challenging task, due to indefinite boundaries, intensity inhomogeneity, and anatomical variations across subjects. In this paper, a semi-automatic segmentation method based on a multivariable normal distribution of liver tissues and graph-cut subdivision is presented. Although it is not fully automated, the method involves minimal human interaction. Specifically, it consists of three main stages. First, a subject-specific probabilistic model is built from an interior patch surrounding a seed point specified by the user. Second, an iterative assignment of pixel labels gradually updates the probabilistic map of the tissues based on spatio-contextual information. Finally, a graph-cut model is optimized to extract the 3D liver from the image. During post-processing, over-segmented nodal regions caused by fuzzy tissue separation are removed, and the correct anatomy is maintained using robust bottleneck detection with an adjacent-contour constraint. The proposed system was implemented and validated on the MICCAI SLIVER07 dataset. The experimental results were benchmarked against state-of-the-art methods using major clinically relevant metrics. Both the visual and numerical assessments reported herein indicate that the proposed system improves the accuracy and reliability of asymptomatic liver segmentation.
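The first stage described above, fitting a subject-specific multivariable normal model to the tissue inside a seed patch and scoring other pixels against it, can be sketched as follows. The synthetic intensity values, feature dimensionality, and function names are assumptions for illustration; the paper's actual pipeline additionally uses spatio-contextual label updates and graph-cut optimization, which are omitted here.

```python
import numpy as np

def fit_tissue_model(patch):
    """Fit a multivariate normal to per-pixel feature vectors from a seed patch.

    A small ridge on the covariance keeps it invertible for thin patches."""
    mean = patch.mean(axis=0)
    cov = np.cov(patch, rowvar=False) + 1e-6 * np.eye(patch.shape[1])
    return mean, cov

def log_likelihood(pixels, mean, cov):
    """Per-pixel log-density under the fitted tissue model (the probabilistic map)."""
    d = pixels.shape[1]
    diff = pixels - mean
    maha = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (maha + logdet + d * np.log(2.0 * np.pi))

rng = np.random.default_rng(2)
# Hypothetical 2-feature pixels: liver tissue around intensity 100, other tissue 160.
liver_patch = rng.normal(100.0, 5.0, size=(200, 2))   # samples around the seed point
mean, cov = fit_tissue_model(liver_patch)
liver_like = log_likelihood(rng.normal(100.0, 5.0, (50, 2)), mean, cov)
other_like = log_likelihood(rng.normal(160.0, 5.0, (50, 2)), mean, cov)
```

Pixels resembling the seed patch receive much higher log-likelihood than dissimilar tissue, which is what makes the map a useful data term for the subsequent graph-cut stage.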


2018 ◽  
Vol 53 (4) ◽  
pp. 436-449 ◽  
Author(s):  
Woosuk Lee ◽  
Kihong Heo ◽  
Rajeev Alur ◽  
Mayur Naik

2020 ◽  
Vol 8 (1) ◽  
pp. 45-69
Author(s):  
Eckhard Liebscher ◽  
Wolf-Dieter Richter

We prove and describe in great detail a general method for constructing a wide range of multivariate probability density functions. We introduce probabilistic models for a large variety of clouds of multivariate data points. In the present paper, the focus is on star-shaped distributions of arbitrary dimension, where in the case of spherical distributions dependence is modeled by a non-Gaussian density-generating function.
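For the spherical special case mentioned above, the density depends on the point only through its radius, via a density-generating function that need not be Gaussian. A minimal sketch, using an assumed Laplace-like generator and omitting the normalizing constant for simplicity:

```python
import numpy as np

def spherical_density(x, generator):
    """Unnormalized density of a spherically contoured distribution: the value
    depends on x only through its squared radius (a special case of the
    star-shaped contours discussed in the paper)."""
    r2 = np.sum(np.asarray(x, dtype=float) ** 2)
    return generator(r2)

# A non-Gaussian density-generating function (illustrative, unnormalized).
laplace_like = lambda r2: np.exp(-np.sqrt(r2))

a = spherical_density([3.0, 4.0], laplace_like)   # radius 5
b = spherical_density([5.0, 0.0], laplace_like)   # radius 5, same contour
```

Because both points lie on the same spherical contour, they receive identical density values; a Gaussian generator would instead use `exp(-r2 / 2)`.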

