A CNN-LSTM Architecture for Detection of Intracranial Hemorrhage on CT scans

Author(s):  
Nhan T. Nguyen ◽  
Dat Q. Tran ◽  
Nghia T. Nguyen ◽  
Ha Q. Nguyen

Abstract
We propose a novel method that combines a convolutional neural network (CNN) with a long short-term memory (LSTM) mechanism for accurate prediction of intracranial hemorrhage on computed tomography (CT) scans. The CNN acts as a slice-wise feature extractor, while the LSTM links the features across slices. The whole architecture is trained end-to-end, with the input being an RGB-like image formed by stacking 3 different viewing windows of a single slice. We validate the method on the recent RSNA Intracranial Hemorrhage Detection challenge and on the CQ500 dataset. For the RSNA challenge, our best single model achieves a weighted log loss of 0.0522 on the leaderboard, which is comparable to the top 3% of performances, almost all of which use ensemble learning. Importantly, our method generalizes very well: on CQ500, the model trained on the RSNA dataset significantly outperforms the 2D model, which does not take the relationship between slices into account. Our code and models will be made public.
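The "3 different viewing windows" step above can be sketched in plain Python. The window settings below (brain, subdural, bone center/width pairs) are common radiology choices, not values stated in the abstract, so treat them as assumptions:

```python
def apply_window(hu, center, width):
    """Map a Hounsfield-unit value into [0, 1] for a given window center/width."""
    low, high = center - width / 2, center + width / 2
    if hu <= low:
        return 0.0
    if hu >= high:
        return 1.0
    return (hu - low) / (high - low)

# Hypothetical window settings (center, width); common choices, not from the paper.
WINDOWS = {"brain": (40, 80), "subdural": (80, 200), "bone": (600, 2800)}

def stack_windows(slice_hu):
    """Turn one CT slice (2D list of HU values) into an RGB-like 3-channel image
    by stacking the three windowed views per pixel."""
    return [
        [[apply_window(hu, c, w) for (c, w) in WINDOWS.values()] for hu in row]
        for row in slice_hu
    ]
```

Each stacked slice would then be fed to the CNN feature extractor, with the LSTM consuming the resulting per-slice features in scan order.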

Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5611 ◽  
Author(s):  
Mihail Burduja ◽  
Radu Tudor Ionescu ◽  
Nicolae Verga

In this paper, we present our system for the RSNA Intracranial Hemorrhage Detection challenge, which is based on the RSNA 2019 Brain CT Hemorrhage dataset. The proposed system is based on a lightweight deep neural network architecture composed of a convolutional neural network (CNN) that takes individual CT slices as input, and a Long Short-Term Memory (LSTM) network that takes as input multiple feature embeddings provided by the CNN. For efficient processing, we consider various feature selection methods to produce a subset of useful CNN features for the LSTM. Furthermore, we downscale the CT slices by a factor of 2×, which enables us to train the model faster. Although our model is designed to balance speed and accuracy, we report a weighted mean log loss of 0.04989 on the final test set, which places us in the top 30 (top 2%) of 1345 participants. While our computing infrastructure did not allow it, processing CT slices at their original scale would likely improve performance. To enable others to reproduce our results, we provide our code as open source. After the challenge, we conducted a subjective intracranial hemorrhage detection assessment with radiologists, which indicated that the performance of our deep model is on par with that of doctors specialized in reading CT scans. Another contribution of our work is the integration of Grad-CAM visualizations into our system, providing useful explanations for its predictions. We therefore consider our system a viable option when a fast diagnosis or a second opinion on intracranial hemorrhage detection is needed.
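The weighted mean log loss reported above can be illustrated with a minimal sketch. The per-label weights are left as a parameter, since the exact challenge weighting (e.g. double weight on the "any hemorrhage" label) is an assumption here, not something the abstract spells out:

```python
import math

def weighted_log_loss(y_true, y_pred, weights, eps=1e-15):
    """Weighted mean binary log loss over samples and labels.

    y_true, y_pred: lists of per-sample label lists; weights: per-label weights.
    """
    num, den = 0.0, 0.0
    for t_row, p_row in zip(y_true, y_pred):
        for t, p, w in zip(t_row, p_row, weights):
            p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
            num += -w * (t * math.log(p) + (1 - t) * math.log(1 - p))
            den += w
    return num / den
```

A maximally uncertain prediction of 0.5 for every label yields a loss of ln 2 ≈ 0.693, which puts the reported 0.04989 in perspective.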


2019 ◽  
Vol 30 (01) ◽  
pp. 1950027 ◽  
Author(s):  
Xiuhui Wang ◽  
Wei Qi Yan

Human gait recognition is one of the most promising biometric technologies, especially for unobtrusive video surveillance and human identification at a distance. Aiming to improve the recognition rate, in this paper we study gait recognition using deep learning and propose a novel method based on convolutional Long Short-Term Memory (Conv-LSTM). First, we present a variation of Gait Energy Images, the frame-by-frame GEI (ff-GEI), to expand the volume of available Gait Energy Image (GEI) data and relax the constraints of gait cycle segmentation required by existing gait recognition methods. Second, we demonstrate the effectiveness of ff-GEI by analyzing the cross-covariance of one person's gait data. Then, exploiting the temporal nature of human gait, we design a novel gait recognition model using Conv-LSTM. Finally, the proposed method is evaluated extensively on the CASIA Dataset B for cross-view gait recognition; furthermore, the OU-ISIR Large Population Dataset is employed to verify its generalization ability. Our experimental results show that the proposed method outperforms other algorithms on these two datasets. The results indicate that the proposed ff-GEI model using Conv-LSTM, coupled with the new gait representation, can effectively solve the problems of cross-view gait recognition.
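The classic GEI underlying ff-GEI is simply a pixel-wise mean of aligned binary silhouettes over a gait cycle; this minimal sketch shows that baseline computation (how ff-GEI then re-windows frames to enlarge the dataset is the paper's contribution and is not reproduced here):

```python
def gait_energy_image(silhouettes):
    """Pixel-wise mean of aligned binary silhouette frames (the classic GEI).

    silhouettes: list of frames, each a 2D list of 0/1 values, all the same shape.
    Bright pixels mark body regions that stay static across the cycle; mid-gray
    pixels mark moving limbs.
    """
    n = len(silhouettes)
    rows, cols = len(silhouettes[0]), len(silhouettes[0][0])
    return [
        [sum(frame[r][c] for frame in silhouettes) / n for c in range(cols)]
        for r in range(rows)
    ]
```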


2018 ◽  
Vol 7 (3.27) ◽  
pp. 258 ◽  
Author(s):  
Yecheng Yao ◽  
Jungho Yi ◽  
Shengjun Zhai ◽  
Yuwen Lin ◽  
Taekseung Kim ◽  
...  

The decentralization of cryptocurrencies has greatly reduced the level of central control over them, impacting international relations and trade. Further, wide fluctuations in cryptocurrency prices indicate an urgent need for an accurate way to forecast these prices. This paper proposes a novel method to predict cryptocurrency price by considering various factors such as market cap, volume, circulating supply, and maximum supply, based on deep learning techniques such as the recurrent neural network (RNN) and long short-term memory (LSTM), which are effective models for learning from sequential data, with the LSTM being better at recognizing longer-term dependencies. The proposed approach is implemented in Python and validated on benchmark datasets. The results verify the applicability of the proposed approach for the accurate prediction of cryptocurrency price.
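Before an RNN or LSTM can be trained on price history, the series must be framed as supervised (window, next-value) pairs. A minimal sketch of that step, assuming a univariate price series (the paper also feeds in market cap, volume, and supply as extra features):

```python
def make_sequences(series, window):
    """Split a series into (input window, next value) pairs, the usual
    supervised framing for LSTM price forecasting."""
    xs, ys = [], []
    for i in range(len(series) - window):
        xs.append(series[i:i + window])  # the last `window` observations
        ys.append(series[i + window])    # the value to predict
    return xs, ys
```

Each window in `xs` becomes one LSTM input sequence, with the matching entry in `ys` as the regression target.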


2021 ◽  
Author(s):  
Kevin Dsouza ◽  
Alexandra Maslova ◽  
Ediem Al-Jibury ◽  
Matthias Merkenschlager ◽  
Vijay Bhargava ◽  
...  

Abstract Despite the availability of chromatin conformation capture experiments, discerning the relationship between the 1D genome and 3D conformation remains a challenge, which limits our understanding of their effect on gene expression and disease. We propose Hi-C-LSTM, a method that produces low-dimensional latent representations that summarize intra-chromosomal Hi-C contacts via a recurrent long short-term memory (LSTM) neural network model. We find that these representations contain all the information needed to recreate the original Hi-C matrix with high accuracy, outperforming existing methods. These representations enable the identification of a variety of conformation-defining genomic elements, including nuclear compartments and conformation-related transcription factors. They furthermore enable in-silico perturbation experiments that measure the influence of cis-regulatory elements on conformation.


2021 ◽  
Author(s):  
Pai-Feng Teng ◽  
John Nieber

<p>Flooding is one of the most financially devastating natural hazards in the world. Studying storage-discharge relations has the potential to improve existing flood forecasting systems, which are based on rainfall-runoff models. This presentation will assess the non-linear relation between daily water storage (ΔS) and discharge (Q) simulated by physics-based hydrological models at the Rum River Watershed, a HUC-8 watershed in Minnesota, between 1995 and 2015, by training Long Short-Term Memory (LSTM) networks and other machine learning (ML) algorithms. Currently, linear regression models do not adequately represent the relationship between the simulated total ΔS and total Q at the HUC-8 watershed (R<sup>2</sup> = 0.3667). Since ML algorithms can approximate arbitrary non-linear functions between predictors and predictands, they will be used to improve the accuracy of the non-linear storage-discharge dynamics. This research will mainly use LSTM networks, time-series deep learning networks that have already been used for predicting rainfall-runoff relations. The LSTM network will be trained to evaluate the storage-discharge relationship by comparing two sets of non-linear hydrological variables simulated by the semi-distributed Hydrological Simulation Program—FORTRAN (HSPF): first, the relationship between the simulated discharge and input hydrological variables at selected HUC-8 watersheds, including air temperature, cloud cover, dew point, potential evapotranspiration, precipitation, solar radiation, wind speed, and total water storage; and second, the dynamics between simulated discharge and the same input variables excluding total water storage. The results of this research will lay the foundation for assessing the accuracy of downscaled storage-discharge dynamics by applying similar methods at small-scale HUC-12 watersheds. Furthermore, the results will allow us to evaluate whether downscaling storage-discharge dynamics to the HUC-12 watershed can improve the accuracy of discharge prediction by comparing results from the HUC-8 and HUC-12 watersheds.</p>


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Rahele Kafieh ◽  
Roya Arian ◽  
Narges Saeedizadeh ◽  
Zahra Amini ◽  
Nasim Dadashi Serej ◽  
...  

COVID-19 has led to a pandemic, affecting almost all countries within a few months. In this work, we applied selected machine and deep learning models, including the multilayer perceptron, random forest, and different versions of long short-term memory (LSTM), using three data sources to train the models: COVID-19 occurrences, basic information such as coded country names, and detailed information such as the population and area of different countries. The main goal is to forecast the outbreak in nine countries (Iran, Germany, Italy, Japan, Korea, Switzerland, Spain, China, and the USA). The performance of the models is measured using four metrics: mean absolute percentage error (MAPE), root mean square error (RMSE), normalized RMSE (NRMSE), and R². The best performance was found for a modified version of LSTM, called M-LSTM (the winner model), used to forecast the future trajectory of the pandemic in the mentioned countries. For this purpose, we collected data from 22 January to 30 July 2020 for training, and from 1 August 2020 to 31 August 2020 for the testing phase. In our experiments, the winner model achieved reasonably accurate predictions (MAPE, RMSE, NRMSE, and R² of 0.509, 458.12, 0.001624, and 0.99997, respectively). Furthermore, we stopped the training of the model on dates related to major country actions to investigate the effect of those actions on the model's predictions.
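The four evaluation metrics above can be sketched in a few lines. Note that NRMSE has several conventions; normalizing by the range of the actual values is assumed here and may differ from the paper's choice:

```python
import math

def forecast_metrics(actual, predicted):
    """Return (MAPE %, RMSE, NRMSE, R^2) for a forecast.

    NRMSE is normalized by the range of the actual values (one common
    convention; the paper may use another).
    """
    n = len(actual)
    mape = sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n * 100
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
    nrmse = rmse / (max(actual) - min(actual))
    mean_a = sum(actual) / n
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    r2 = 1 - ss_res / ss_tot
    return mape, rmse, nrmse, r2
```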


2021 ◽  
Author(s):  
Aram Ter-Sarkisov

Abstract We present a model that fuses instance segmentation, a Long Short-Term Memory network and an attention mechanism to predict COVID-19 and segment chest CT scans. The model works by extracting a sequence of Regions of Interest that contain class-relevant information and applying two Long Short-Term Memory networks with attention to this sequence to extract class-relevant features. The model is trained in one shot: both the segmentation and classification branches are trained together, using two different sets of data. We achieve 95.74% COVID-19 sensitivity, 98.13% Common Pneumonia sensitivity, 99.27% Control sensitivity and a 98.15% class-adjusted F1 score on the main dataset of 21191 chest CT scan slices, and also run a number of ablation studies, in which we achieve 97.73% COVID-19 sensitivity and a 98.41% F1 score. All source code and models are available at https://github.com/AlexTS1980/COVID-LSTM-Attention.
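The core attention step over a sequence of per-region features can be sketched as a softmax-weighted sum. This is illustrative only: in the actual model the scores come from a learned scoring network, and the aggregation sits between the two LSTMs:

```python
import math

def attend(features, scores):
    """Softmax-weighted sum of a sequence of feature vectors.

    features: list of equal-length feature vectors (one per Region of Interest);
    scores: one relevance score per vector (here supplied directly; normally
    produced by a learned scoring function).
    """
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(features[0])
    return [sum(w * f[d] for w, f in zip(weights, features)) for d in range(dim)]
```

Regions with higher scores dominate the pooled vector, which is what lets the classifier focus on class-relevant regions.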

