An Implementation of Patient-Specific Biventricular Mechanics Simulations With a Deep Learning and Computational Pipeline

2021 ◽  
Vol 12 ◽  
Author(s):  
Renee Miller ◽  
Eric Kerfoot ◽  
Charlène Mauger ◽  
Tevfik F. Ismail ◽  
Alistair A. Young ◽  
...  

Parameterised patient-specific models of the heart enable quantitative analysis of cardiac function as well as estimation of regional stress and intrinsic tissue stiffness. However, the development of personalised models and subsequent simulations have often required lengthy manual setup, from image labelling through to generating the finite element model and assigning boundary conditions. Recently, rapid patient-specific finite element modelling has been made possible through the use of machine learning techniques. In this paper, utilising multiple neural networks for image labelling and detection of valve landmarks, together with streamlined data integration, a pipeline for generating patient-specific biventricular models is applied to clinically-acquired data from a diverse cohort of individuals, including hypertrophic and dilated cardiomyopathy patients and healthy volunteers. Valve motion from tracked landmarks as well as cavity volumes measured from labelled images are used to drive realistic motion and estimate passive tissue stiffness values. The neural networks are shown to accurately label cardiac regions and features for these diverse morphologies. Furthermore, differences in global intrinsic parameters, such as tissue anisotropy and normalised active tension, between groups illustrate respective underlying changes in tissue composition and/or structure as a result of pathology. This study shows the successful application of a generic pipeline for biventricular modelling, incorporating artificial intelligence solutions, within a diverse cohort.
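One concrete step in such a pipeline, measuring a cavity volume from a labelled image, can be sketched as follows. This is a minimal illustration with a toy mask and hypothetical voxel dimensions, not the authors' implementation:

```python
import numpy as np

def cavity_volume_ml(label_map, cavity_label, voxel_dims_mm):
    """Cavity volume from a labelled image: count the voxels carrying the
    cavity label and scale by the voxel volume (mm^3 -> mL)."""
    n_voxels = np.count_nonzero(label_map == cavity_label)
    voxel_vol_mm3 = float(np.prod(voxel_dims_mm))
    return n_voxels * voxel_vol_mm3 / 1000.0  # 1 mL = 1000 mm^3

# Toy example: a 10x10x10 mask in which label 1 marks the LV cavity.
mask = np.zeros((10, 10, 10), dtype=int)
mask[2:6, 2:6, 2:6] = 1                                # 4*4*4 = 64 voxels
print(cavity_volume_ml(mask, 1, (1.0, 1.0, 1.0)))      # -> 0.064 mL
```

In the pipeline described above, volumes like this, computed per frame from the network's label maps, are what drive the cavity-volume boundary conditions of the finite element model.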

Author(s):  
Divya Choudhary ◽  
Siripong Malasri

This paper implements and compares machine learning algorithms to predict the amount of coolant required during transportation of temperature-sensitive products. The machine learning models use trip duration, product threshold temperature and ambient temperature as the independent variables to predict the weight of gel packs needed to keep the temperature of the product below its threshold value. The weight of the gel packs can be translated into the number of gel packs required. Regression using neural networks, support vector regression, gradient boosted regression and elastic net regression are compared. The neural network based model performs the best in terms of its mean absolute error and r-squared values. The neural network model is then deployed as a web service, allowing client applications to make REST calls to estimate gel pack weights.
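The four-way comparison described above can be sketched with scikit-learn. The data-generating relation below is entirely hypothetical and stands in for the real trip/temperature dataset:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(42)
n = 500
# Hypothetical features: trip duration (h), threshold temp (C), ambient temp (C)
X = np.column_stack([rng.uniform(1, 72, n),
                     rng.uniform(2, 8, n),
                     rng.uniform(10, 45, n)])
# Invented relation: more gel-pack weight for longer, hotter trips
y = 0.05 * X[:, 0] * np.maximum(X[:, 2] - X[:, 1], 0) + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "NeuralNet": make_pipeline(StandardScaler(),
                               MLPRegressor(hidden_layer_sizes=(32, 32),
                                            max_iter=2000, random_state=0)),
    "SVR": make_pipeline(StandardScaler(), SVR(C=100)),
    "GradientBoosted": GradientBoostingRegressor(random_state=0),
    "ElasticNet": ElasticNet(),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:>15}: MAE={mean_absolute_error(y_te, pred):6.2f}  "
          f"R2={r2_score(y_te, pred):5.2f}")
```

Scoring both MAE and R² on a held-out split mirrors the evaluation in the paper; the fitted pipeline could then be wrapped in any web-service framework to serve REST predictions.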


2019 ◽  
Vol 36 (9) ◽  
pp. 1889-1902
Author(s):  
Magnus Hieronymus ◽  
Jenny Hieronymus ◽  
Fredrik Hieronymus

Long sea level records with high temporal resolution are of paramount importance for future coastal protection and adaptation plans. Here we discuss the application of machine learning techniques to some regression problems commonly encountered when analyzing such time series. The performance of artificial neural networks is compared with that of multiple linear regression models on sea level data from the Swedish coast. The neural networks are found to be superior when local sea level forcing is used together with remote sea level forcing and meteorological forcing, whereas the linear models and the neural networks show similar performance when local sea level forcing is excluded. The overall performance of the machine learning algorithms is good, often surpassing that of the much more computationally costly numerical ocean models used at our institute.
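A minimal illustration of the linear-versus-neural comparison, using synthetic stand-ins for the local, remote and meteorological forcings rather than the Swedish coastal data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 2000
local = rng.normal(size=n)      # stand-in for local sea level forcing
remote = rng.normal(size=n)     # stand-in for remote sea level forcing
wind = rng.normal(size=n)       # stand-in for meteorological forcing
# Invented nonlinear response with an interaction a linear model cannot capture
sea_level = (0.6 * local + 0.3 * remote
             + 0.4 * np.tanh(2 * wind) + 0.2 * local * wind)

X = np.column_stack([local, remote, wind])
X_tr, X_te = X[:1500], X[1500:]
y_tr, y_te = sea_level[:1500], sea_level[1500:]

lin = LinearRegression().fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)
print("linear R2:", round(r2_score(y_te, lin.predict(X_te)), 3))
print("MLP R2:   ", round(r2_score(y_te, mlp.predict(X_te)), 3))
```

When the response contains interactions between forcings, as sketched here, the network can exploit them while the linear model cannot; when the response is effectively linear in the predictors, the two converge, matching the paper's finding for the case without local sea level forcing.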


Author(s):  
Pei-Yuan Peng ◽  
M. T. Yang

Neural network based diagnostics is studied in this paper for the fault/alert assessment of the mistuned bladed disk. By randomly varying the blade frequencies (±2% of the nominal frequencies), we experimentally collected 1000 sets of data. Each set of data is composed of the blade frequencies and the maximal vibrating magnitude for each blade with the corresponding frequency. The goal is to investigate the mapping relationship from the blade property (blade frequency) to the maximal vibrating displacement and the corresponding frequency. Assuming the system is forced by a travelling excitation, the vibration/frequency signature is derived. The challenging part stems from the mistuned bladed disk system, in which every blade responds differently to the same input excitation. The neural network based diagnostic system demonstrates its capability to identify the mapping relation for this complex system. The simulation is based on a finite element model of the bladed disk system, and the results exhibit the effectiveness of the proposed algorithm.
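The frequency-to-response mapping could be learned, for example, with a small feed-forward network. The surrogate response below is invented purely for illustration and does not reflect the paper's finite element model:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n_blades, n_sets = 12, 1000
f_nom = 100.0  # hypothetical nominal blade frequency (Hz)

# Mistune each blade within +/-2% of nominal, mirroring the data collection
freqs = f_nom * (1 + rng.uniform(-0.02, 0.02, (n_sets, n_blades)))
# Invented surrogate response: peak magnitude grows with mistuning spread;
# the real mapping would come from the finite element bladed disk model
peak_mag = 1 + 50 * freqs.std(axis=1) / f_nom + rng.normal(0, 0.01, n_sets)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                   random_state=0))
model.fit(freqs[:800], peak_mag[:800])
r2 = model.score(freqs[800:], peak_mag[800:])
print("hold-out R^2:", round(r2, 3))
```

The 1000-set, ±2% layout matches the experiment described above; everything else (blade count, nominal frequency, surrogate response) is an assumption for the sketch.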


Author(s):  
Regis Riveret ◽  
Son Tran ◽  
Artur d'Avila Garcez

Neural-symbolic systems combine the strengths of neural networks and symbolic formalisms. In this paper, we introduce a neural-symbolic system which combines restricted Boltzmann machines and probabilistic semi-abstract argumentation. We propose to train networks on argument labellings explaining the data, so that any sampled data outcome is associated with an argument labelling. Argument labellings are integrated as constraints within restricted Boltzmann machines, so that the neural networks are used to learn probabilistic dependencies amongst argument labels. Given a dataset and an argumentation graph as prior knowledge, for every example/case K in the dataset, we use a so-called K-maxconsistent labelling of the graph, and an explanation of case K refers to a K-maxconsistent labelling of the given argumentation graph. The abilities of the proposed system to predict correct labellings were evaluated and compared with standard machine learning techniques. Experiments revealed that such argumentation Boltzmann machines can outperform other classification models, especially in noisy settings.
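As background, a restricted Boltzmann machine of the kind used here is commonly trained with contrastive divergence. The sketch below is a generic Bernoulli RBM CD-1 step in NumPy; it deliberately omits the argument-labelling constraints that are the paper's contribution, and the unit counts are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 4   # e.g. one visible unit per argument label (hypothetical)
W = rng.normal(0, 0.1, (n_visible, n_hidden))
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, lr=0.05):
    """One contrastive-divergence (CD-1) update on a batch of visible vectors."""
    global W, b_v, b_h
    ph0 = sigmoid(v0 @ W + b_h)               # P(h=1 | v0), positive phase
    h0 = (rng.random(ph0.shape) < ph0) * 1.0  # sample hidden states
    pv1 = sigmoid(h0 @ W.T + b_v)             # reconstruction P(v=1 | h0)
    ph1 = sigmoid(pv1 @ W + b_h)              # negative phase
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b_v += lr * (v0 - pv1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)

data = (rng.random((32, n_visible)) < 0.5) * 1.0  # toy binary training batch
for _ in range(100):
    cd1_step(data)
```

In the proposed system, the visible configurations would not be free binary vectors but labellings constrained to be K-maxconsistent with the argumentation graph; this sketch only shows the underlying RBM machinery.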


Author(s):  
Jinfang Zeng ◽  
Youming Li ◽  
Yu Zhang ◽  
Da Chen

Environmental sound classification (ESC) is a challenging problem due to the complexity of sounds. To date, a variety of signal processing and machine learning techniques have been applied to the ESC task, including matrix factorization, dictionary learning, wavelet filterbanks and deep neural networks. It is observed that features extracted from deeper networks tend to achieve higher performance than those extracted from shallow networks. However, in the ESC task, only deep convolutional neural networks (CNNs) containing several layers have been used, and residual networks have been ignored, which leads to degraded performance. Meanwhile, a possible explanation for the limited exploration of CNNs and the difficulty of improving on simpler models is the relative scarcity of labeled data for ESC. In this paper, a residual network called EnvResNet is proposed for the ESC task. In addition, we propose to use audio data augmentation to overcome the problem of data scarcity. Experiments are performed on the ESC-50 database. Combined with data augmentation, the proposed model outperforms baseline implementations relying on mel-frequency cepstral coefficients and achieves results comparable to other state-of-the-art approaches in terms of classification accuracy.
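The augmentation idea can be sketched with simple waveform transforms. The particular transforms below (random time shift, additive noise) are generic examples of audio augmentation, not necessarily the ones used for EnvResNet:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(clip, sr=22050):
    """Two common waveform augmentations for ESC: a random circular time
    shift of up to 0.1 s, then low-level additive Gaussian noise.
    (Pitch or speed changes would need a resampling library.)"""
    shift = rng.integers(-sr // 10, sr // 10)      # up to +/-0.1 s
    shifted = np.roll(clip, shift)
    return shifted + 0.005 * rng.normal(size=clip.shape)

# Toy clip: 1 second of a 440 Hz tone at 22.05 kHz
clip = np.sin(2 * np.pi * 440 * np.arange(22050) / 22050)
batch = np.stack([augment(clip) for _ in range(4)])  # 4 augmented copies
print(batch.shape)  # (4, 22050)
```

Each training clip yields several perturbed copies this way, multiplying the effective size of a small labelled set such as ESC-50 before feature extraction and network training.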


2021 ◽  
Author(s):  
Rogini Runghen ◽  
Daniel B Stouffer ◽  
Giulio Valentino Dalla Riva

Collecting network interaction data is difficult. Non-exhaustive sampling and complex hidden processes often result in an incomplete data set. Thus, identifying potentially present but unobserved interactions is crucial both in understanding the structure of large-scale data, and in predicting how previously unseen elements will interact. Recent studies in network analysis have shown that accounting for metadata (such as node attributes) can improve both our understanding of how nodes interact with one another, and the accuracy of link prediction. However, the dimension of the object we need to learn to predict interactions in a network grows quickly with the number of nodes, so the task becomes computationally and conceptually challenging for large networks. Here, we present a new predictive procedure combining a graph embedding method with machine learning techniques to predict interactions on the basis of node metadata. Graph embedding methods project the nodes of a network onto a low-dimensional latent feature space. The position of the nodes in the latent feature space can then be used to predict interactions between nodes. Learning a mapping from the nodes' metadata to their position in the latent feature space corresponds to a classic, low-dimensional machine learning problem. In our current study we used the Random Dot Product Graph model to estimate the embedding of an observed network, and we tested different neural network architectures to predict the position of nodes in the latent feature space. Flexible machine learning techniques to map the nodes onto their latent positions allow us to account for multivariate and possibly complex node metadata. To illustrate the utility of the proposed procedure, we apply it to a large dataset of tourist visits to destinations across New Zealand.
We found that our procedure accurately predicts interactions for both existing nodes and nodes newly added to the network, while being computationally feasible even for very large networks. Overall, our study highlights that by exploiting the properties of a well understood statistical model for complex networks and combining it with standard machine learning techniques, we can simplify the link prediction problem when incorporating multivariate node metadata. Our procedure can be immediately applied to different types of networks, and to a wide variety of data from different systems. As such, both from a network science and data science perspective, our work offers a flexible and generalisable procedure for link prediction.
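The embedding step can be sketched via adjacency spectral embedding, one standard way to estimate RDPG latent positions (a toy illustration, not the authors' code). Mapping node metadata to the estimated positions is then an ordinary low-dimensional regression problem:

```python
import numpy as np

def rdpg_embedding(A, d):
    """Adjacency spectral embedding: under the Random Dot Product Graph
    model, the top-d eigenvectors of A, scaled by sqrt(|eigenvalue|),
    estimate latent positions X with P(i~j) ~ x_i . x_j."""
    vals, vecs = np.linalg.eigh(A.astype(float))
    idx = np.argsort(vals)[::-1][:d]              # top-d eigenpairs
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

rng = np.random.default_rng(3)
X_true = rng.uniform(0.2, 0.8, (60, 2))           # hypothetical latent positions
P = np.clip(X_true @ X_true.T, 0, 1)              # connection probabilities
A = (rng.random(P.shape) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                       # symmetric, no self-loops

X_hat = rdpg_embedding(A, d=2)
# Dot products of estimated positions should track the true probabilities
corr = np.corrcoef((X_hat @ X_hat.T).ravel(), P.ravel())[0, 1]
print(round(corr, 3))
```

A neural network regressing `X_hat` on node metadata would complete the procedure: for a new node, predicted latent coordinates give predicted interaction probabilities via dot products with existing nodes.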


2020 ◽  
Author(s):  
Georgios Kantidakis ◽  
Hein Putter ◽  
Carlo Lancia ◽  
Jacob de Boer ◽  
Andries E Braat ◽  
...  

Abstract Background: Predicting survival of recipients after liver transplantation is regarded as one of the most important challenges in contemporary medicine. Hence, improving on current prediction models is of great interest. Nowadays, there is a strong discussion in the medical field about machine learning (ML) and whether it has greater potential than traditional regression models when dealing with complex data. Criticism of ML relates to unsuitable performance measures and a lack of interpretability, which is important for clinicians. Methods: In this paper, ML techniques such as random forests and neural networks are applied to a large dataset of 62294 patients from the United States, with 97 predictors selected on clinical/statistical grounds out of more than 600, to predict survival from transplantation. Of particular interest is also the identification of potential risk factors. A comparison is performed between 3 different Cox models (with all variables, backward selection and LASSO) and 3 machine learning techniques: a random survival forest and 2 partial logistic artificial neural networks (PLANNs). For PLANNs, novel extensions to their original specification are tested. Emphasis is given to the advantages and pitfalls of each method and to the interpretability of the ML techniques. Results: Well-established predictive measures from the survival field are employed (C-index, Brier score and Integrated Brier Score) and the strongest prognostic factors are identified for each model. The clinical endpoint is overall graft survival, defined as the time between transplantation and the date of graft failure or death. The random survival forest shows slightly better predictive performance than the Cox models based on the C-index. Neural networks show better performance than both the Cox models and the random survival forest based on the Integrated Brier Score at 10 years. Conclusion: In this work, it is shown that machine learning techniques can be a useful tool for both prediction and interpretation in the survival context. Of the ML techniques examined here, the PLANN with 1 hidden layer predicts survival probabilities most accurately, being as well calibrated as the Cox model with all variables.
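For reference, Harrell's C-index used above as a predictive measure can be computed directly. This is a naive O(n²) sketch on invented toy data, not the study's evaluation code:

```python
import numpy as np

def harrell_c_index(time, event, risk):
    """Harrell's concordance index: among comparable pairs (the earlier
    time is an observed event, not censored), count how often the higher
    predicted risk belongs to the subject who failed earlier; ties in
    risk score count 0.5."""
    n_conc, n_comp = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] == 1 and time[i] < time[j]:   # comparable pair
                n_comp += 1
                if risk[i] > risk[j]:
                    n_conc += 1.0
                elif risk[i] == risk[j]:
                    n_conc += 0.5
    return n_conc / n_comp

# Toy example: higher predicted risk pairs with earlier graft failure
time  = np.array([2.0, 5.0, 7.0, 9.0])   # years to event/censoring
event = np.array([1,   1,   0,   1])     # 0 = censored
risk  = np.array([0.9, 0.6, 0.4, 0.2])   # model's predicted risk scores
print(harrell_c_index(time, event, risk))  # -> 1.0 (perfectly concordant)
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect concordance; the Brier-score family used alongside it additionally assesses calibration, which is why the paper reports both.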

