Deep Learning Models for Predicting Severe Progression in COVID-19-Infected Patients: Retrospective Study

10.2196/24973 ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. e24973
Author(s):  
Thao Thi Ho ◽  
Jongmin Park ◽  
Taewoo Kim ◽  
Byunggeon Park ◽  
Jaehee Lee ◽  
...  

Background Many COVID-19 patients rapidly progress to respiratory failure with a broad range of severities. Identification of high-risk cases is critical for early intervention. Objective The aim of this study is to develop deep learning models that can rapidly identify high-risk COVID-19 patients based on computed tomography (CT) images and clinical data. Methods We analyzed 297 COVID-19 patients from five hospitals in Daegu, South Korea. A mixed artificial convolutional neural network (ACNN) model, combining an artificial neural network for clinical data and a convolutional neural network for 3D CT imaging data, was developed to classify these cases as either high risk of severe progression (ie, event) or low risk (ie, event-free). Results Using the mixed ACNN model, we were able to obtain high classification performance using novel coronavirus pneumonia lesion images (ie, 93.9% accuracy, 80.8% sensitivity, 96.9% specificity, and 0.916 area under the curve [AUC] score) and lung segmentation images (ie, 94.3% accuracy, 74.7% sensitivity, 95.9% specificity, and 0.928 AUC score) for event versus event-free groups. Conclusions Our study successfully differentiated high-risk cases among COVID-19 patients using imaging and clinical features. The developed model can be used as a predictive tool for interventions in aggressive therapies.
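
The mixed-input design described in this abstract (an ANN branch for tabular clinical data fused with a CNN branch for 3D CT volumes) can be illustrated with a simplified late-fusion forward pass. This is an editorial sketch with hypothetical layer sizes, not the authors' implementation; the CNN branch is replaced by plain global average pooling for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def ann_branch(clinical, weights):
    # One dense layer with ReLU over the tabular clinical features
    return np.maximum(clinical @ weights, 0.0)

def cnn_branch(ct_volume):
    # Stand-in for the 3D CNN branch: global average pooling per channel
    # (the real model would apply learned 3D convolutions first)
    return ct_volume.mean(axis=(2, 3, 4))

def mixed_forward(clinical, ct_volume, w_clin, w_out):
    h_clin = ann_branch(clinical, w_clin)            # (batch, 8)
    h_img = cnn_branch(ct_volume)                    # (batch, 4)
    fused = np.concatenate([h_clin, h_img], axis=1)  # late fusion
    logits = fused @ w_out
    return 1.0 / (1.0 + np.exp(-logits))             # P(event)

# Hypothetical sizes: 10 clinical variables, a 4-channel 16x16x16 CT crop
clinical = rng.normal(size=(2, 10))
ct = rng.normal(size=(2, 4, 16, 16, 16))
w_clin = rng.normal(size=(10, 8))
w_out = rng.normal(size=(12,))
risk = mixed_forward(clinical, ct, w_clin, w_out)
print(risk.shape)  # (2,)
```

The key design point is that the two modalities are projected into feature vectors of compatible shape before concatenation, so a single classification head can weigh clinical and imaging evidence jointly.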

2020 ◽  
Author(s):  
Sanghun Choi ◽  
Jae-Kwang Lim ◽  
Thao Thi Ho ◽  
Jongmin Park ◽  
Taewoo Kim ◽  
...  

BACKGROUND Many COVID-19 patients rapidly progress to respiratory failure with a broad range of severities. Identification of high-risk cases is critical for early intervention. OBJECTIVE The aim of this study is to develop deep learning models that can rapidly diagnose high-risk COVID-19 patients based on computed tomography (CT) images and clinical data. METHODS We analyzed 297 COVID-19 patients from five hospitals in Daegu, South Korea. A mixed model (ACNN), combining an artificial neural network for clinical data and a convolutional neural network for 3D CT imaging data, was developed to distinguish high-risk cases with severe progression (event) from low-risk COVID-19 patients (event-free). RESULTS Using the mixed ACNN model, we obtained high classification performance using novel coronavirus pneumonia (NCP) lesion images (93.9% accuracy, 80.8% sensitivity, 96.9% specificity, and 0.916 AUC) and lung segmentation images (94.3% accuracy, 74.7% sensitivity, 95.9% specificity, and 0.928 AUC) for event vs. event-free groups. CONCLUSIONS Our study successfully differentiated high-risk cases among COVID-19 patients using imaging and clinical features. The developed model could potentially be used as a predictive tool to guide early aggressive therapy.


Sensors ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 210 ◽  
Author(s):  
Zied Tayeb ◽  
Juri Fedjaev ◽  
Nejla Ghaboosi ◽  
Christoph Richter ◽  
Lukas Everding ◽  
...  

Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) on motor imagery movements translate the subject’s motor intention into control signals by classifying the EEG patterns caused by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. The extraction of such features is a difficult task due to the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning methods allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models: (1) a long short-term memory (LSTM) network; (2) a spectrogram-based convolutional neural network (CNN); and (3) a recurrent convolutional neural network (RCNN), for decoding motor imagery movements directly from raw EEG signals without any manual feature engineering. Results were evaluated on our own publicly available EEG data collected from 20 subjects and on an existing dataset known as the 2b EEG dataset from “BCI Competition IV”. Overall, better classification performance was achieved with deep learning models compared to state-of-the-art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI.
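
The spectrogram-based CNN mentioned above first converts each raw EEG channel into a time-frequency image. A minimal sketch of that preprocessing step, using a Hann-windowed short-time Fourier transform (parameters here are illustrative, not the paper's):

```python
import numpy as np

def spectrogram(signal, win_len=64, hop=32):
    # Magnitude spectrogram of a 1-D EEG channel: slide a Hann window
    # over the signal and take the FFT magnitude of each frame.
    window = np.hanning(win_len)
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([signal[i * hop : i * hop + win_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))  # (n_frames, win_len//2 + 1)

# Hypothetical 250 Hz EEG trace: a 10 Hz alpha-band sinusoid plus noise
fs = 250
t = np.arange(2 * fs) / fs
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
spec = spectrogram(eeg)
print(spec.shape)  # (14, 33)
```

The resulting 2-D array can then be fed to an image-style CNN, which is what makes convolutional architectures applicable to 1-D EEG in the first place.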


Author(s):  
Himadri Mukherjee ◽  
Subhankar Ghosh ◽  
Ankita Dhar ◽  
Sk. Md. Obaidullah ◽  
KC Santosh ◽  
...  

Among radiological imaging data, chest X-rays are of great use in observing COVID-19 manifestations. For mass screening using chest X-rays, a computationally efficient AI-driven tool is essential to detect COVID-19 positive cases from non-COVID ones. For this purpose, we proposed a lightweight Convolutional Neural Network (CNN)-tailored shallow architecture that can automatically detect COVID-19 positive cases using chest X-rays, with no false positives. The shallow CNN-tailored architecture was designed with fewer parameters than other deep learning models and was validated using 130 COVID-19 positive chest X-rays. In this study, in addition to the COVID-19 positive cases, another set of non-COVID-19 cases (equal in size to the COVID-19 set) was taken into account, comprising MERS, SARS, pneumonia, and healthy chest X-rays. In the experimental tests, 5-fold cross validation was followed to avoid possible bias. Using 260 chest X-rays, the proposed model achieved an accuracy of 96.92%, a sensitivity of 0.942, and an AUC of 0.9869. Further, the reported false positive rate was 0 for the 130 COVID-19 positive cases. This suggests that the proposed tool could be used for mass screening; note, however, that the study does not include any clinical implications. Using the exact same collection of chest X-rays, the current results were better than those of other deep learning models and state-of-the-art works.
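
The 5-fold cross validation protocol used above partitions the 260 X-rays into five disjoint test folds, training on the rest each time. A stdlib-only sketch of the index bookkeeping (the fold count and seed here are illustrative):

```python
import random

def k_fold_indices(n_samples, k=5, seed=42):
    # Yield (train_idx, test_idx) pairs for k-fold cross validation.
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    # Distribute any remainder across the first folds
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        yield train, test
        start += size

# 260 chest X-rays split into 5 folds of 52 test images each
folds = list(k_fold_indices(260, k=5))
print([len(test) for _, test in folds])  # [52, 52, 52, 52, 52]
```

Every sample appears in exactly one test fold, so the reported accuracy averages over predictions made on held-out data only.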


2021 ◽  
Author(s):  
Naveen Kumari ◽  
Rekha Bhatia

Abstract Facial emotion recognition extracts human emotions from images and videos. As such, it requires an algorithm to understand and model the relationships between faces and facial expressions, and to recognize human emotions. Recently, deep learning models have been extensively utilized to enhance the facial emotion recognition rate. However, deep learning models suffer from overfitting, and they perform poorly on images with poor visibility and noise. Therefore, in this paper, a novel deep learning based facial emotion recognition tool is proposed. Initially, a joint trilateral filter is applied to the obtained dataset to remove noise. Thereafter, contrast-limited adaptive histogram equalization (CLAHE) is applied to the filtered images to improve their visibility. Finally, a deep convolutional neural network is trained, with the Nadam optimizer used to optimize its cost function. Experiments are performed using a benchmark dataset and competitive human emotion recognition models. Comparative analysis demonstrates that the proposed facial emotion recognition model performs considerably better than the competitive models.
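
CLAHE, the visibility-enhancement step named above, is a tile-based, clip-limited refinement of plain histogram equalization. A simplified global histogram equalization sketch shows the core contrast-stretching idea (CLAHE additionally works on local tiles and clips the histogram before building the lookup table):

```python
import numpy as np

def equalize(img):
    # Global histogram equalization of an 8-bit grayscale image:
    # map each gray level through the normalized cumulative histogram.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# Low-contrast test image: pixel values squeezed into [100, 140]
rng = np.random.default_rng(0)
img = rng.integers(100, 141, size=(32, 32), dtype=np.uint8)
out = equalize(img)
print(out.min(), out.max())  # 0 255
```

After equalization the narrow input range spans the full 0–255 scale, which is the effect that makes low-visibility faces easier for the downstream CNN to discriminate.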


2019 ◽  
Vol 9 (13) ◽  
pp. 2758 ◽  
Author(s):  
Mujtaba Husnain ◽  
Malik Muhammad Saad Missen ◽  
Shahzad Mumtaz ◽  
Muhammad Zeeshan Jhanidr ◽  
Mickaël Coustaty ◽  
...  

In the area of pattern recognition and pattern matching, methods based on deep learning models have recently attracted many researchers by achieving impressive performance. In this paper, we propose the use of a convolutional neural network to recognize multifont offline Urdu handwritten characters in an unconstrained environment. We also propose a novel dataset of Urdu handwritten characters, since no publicly available dataset of this kind exists. A series of experiments was performed on the proposed dataset. The character recognition accuracy achieved is among the best reported in the literature for this task.


2019 ◽  
Vol 11 (23) ◽  
pp. 2788 ◽  
Author(s):  
Uwe Knauer ◽  
Cornelius Styp von Rekowski ◽  
Marianne Stecklina ◽  
Tilman Krokotsch ◽  
Tuan Pham Minh ◽  
...  

In this paper, we evaluate different popular voting strategies for the fusion of classifier results. A convolutional neural network (CNN) and different variants of random forest (RF) classifiers were trained to discriminate between 15 tree species based on airborne hyperspectral imaging data. The spectral data were preprocessed with a multi-class linear discriminant analysis (MCLDA) as a means to reduce dimensionality and to obtain spatial–spectral features. The best individual classifier was a CNN with a classification accuracy of 0.73 ± 0.086. The classification performance increased to an accuracy of 0.78 ± 0.053 by using precision-weighted voting for a hybrid ensemble of the CNN and two RF classifiers. This voting strategy clearly outperformed majority voting (0.74), accuracy-weighted voting (0.75), and presidential voting (0.75).
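
Precision-weighted voting, the winning fusion strategy above, sums each classifier's vote weighted by that classifier's precision instead of counting votes equally. A minimal sketch with hypothetical class labels and weights (not the paper's data):

```python
def weighted_vote(predictions, weights):
    # Each classifier votes for one class label; votes are weighted,
    # e.g. by the classifier's precision on a validation set.
    scores = {}
    for label, w in zip(predictions, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

# Plain majority voting would pick "beech" here (2 votes to 1), but
# precision weighting lets one strong classifier overrule two weak ones.
preds = ["oak", "beech", "beech"]
precisions = [0.90, 0.40, 0.40]
print(weighted_vote(preds, precisions))  # oak
```

With equal weights the function reduces to majority voting, which makes the comparison between the two strategies straightforward to implement.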


2019 ◽  
Vol 2019 ◽  
pp. 1-7 ◽  
Author(s):  
Okeke Stephen ◽  
Mangal Sain ◽  
Uchenna Joseph Maduh ◽  
Do-Un Jeong

This study proposes a convolutional neural network model trained from scratch to classify and detect the presence of pneumonia from a collection of chest X-ray image samples. Unlike other methods that rely solely on transfer learning approaches or traditional handcrafted techniques to achieve a remarkable classification performance, we constructed a convolutional neural network model from scratch to extract features from a given chest X-ray image and classify it to determine if a person is infected with pneumonia. This model could help mitigate the reliability and interpretability challenges often faced when dealing with medical imagery. Unlike other deep learning classification tasks with sufficiently large image repositories, it is difficult to obtain a large pneumonia dataset for this classification task; therefore, we deployed several data augmentation algorithms to improve the validation and classification accuracy of the CNN model and achieved remarkable validation accuracy.
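
Data augmentation of the kind mentioned above generates label-preserving variants of each training image so a small dataset covers more of the input distribution. A generic sketch with two common transforms (the specific augmentations and parameters here are illustrative, not those of the study):

```python
import numpy as np

def augment(image, rng):
    # Two simple label-preserving transforms:
    # a random horizontal flip and a small random horizontal shift.
    out = image
    if rng.random() < 0.5:
        out = out[:, ::-1]               # horizontal flip
    shift = int(rng.integers(-2, 3))     # shift by up to 2 pixels
    out = np.roll(out, shift, axis=1)
    return out

rng = np.random.default_rng(0)
img = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 "X-ray"
batch = np.stack([augment(img, rng) for _ in range(4)])
print(batch.shape)  # (4, 8, 8)
```

Each epoch the model then sees slightly different versions of the same images, which acts as a regularizer against overfitting on a small pneumonia dataset.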


2020 ◽  
Vol 17 (8) ◽  
pp. 3478-3483
Author(s):  
V. Sravan Chowdary ◽  
G. Penchala Sai Teja ◽  
D. Mounesh ◽  
G. Manideep ◽  
C. T. Manimegalai

Road injuries have been a major societal problem for some time now, and ignoring sign boards while driving has become a leading cause of road accidents. We therefore propose an approach to address this issue by detecting and recognizing sign boards. Several deep learning models for object detection exist, based on different algorithms such as R-CNN, Faster R-CNN, SPP-net, etc. We chose to use YOLOv3, which improves the speed and precision of object detection. This algorithm increases accuracy by utilizing residual units, skip connections, and up-sampling, and it is built on a framework named Darknet, designed specifically to construct the neural network for training the YOLO algorithm. We used this algorithm to detect sign boards reliably.
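
Detectors in the YOLO family score candidate boxes by intersection-over-union (IoU), both for matching predictions to ground truth and for non-maximum suppression of overlapping detections. A minimal sketch of the IoU computation (the example boxes are hypothetical):

```python
def iou(box_a, box_b):
    # Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two hypothetical sign-board detections, half-overlapping horizontally
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # ≈ 0.333
```

During non-maximum suppression, a detection whose IoU with a higher-confidence detection exceeds a threshold (commonly around 0.5) is discarded as a duplicate.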

