NIMG-08. PREDICTION OF LOWER-GRADE GLIOMA MOLECULAR SUBTYPES USING DEEP LEARNING

2020 · Vol 22 (Supplement_2) · pp. ii148-ii148
Author(s): Yoshihiro Muragaki, Yutaka Matsui, Takashi Maruyama, Masayuki Nitta, Taiichi Saito, ...

Abstract INTRODUCTION It is useful to know the molecular subtype of lower-grade gliomas (LGG) when deciding on a treatment strategy. This study aims to diagnose the subtype preoperatively. METHODS A deep learning model was developed to predict the 3-group molecular subtype using multimodal data, including magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT). Performance was evaluated using leave-one-out cross-validation on a dataset containing information from 217 LGG patients. RESULTS The model performed best when the dataset contained MRI, PET, and CT data, predicting the molecular subtype with an accuracy of 96.6% on the training dataset and 68.7% on the test dataset. The model achieved test accuracies of 58.5%, 60.4%, and 59.4% when the dataset contained only MRI, MRI and PET, and MRI and CT data, respectively. The conventional method, which predicts mutations in the isocitrate dehydrogenase (IDH) gene and the codeletion of chromosome arms 1p and 19q (1p/19q) sequentially, had an overall accuracy of 65.9%, 2.8 percentage points lower than the proposed method, which predicts the 3-group molecular subtype directly. CONCLUSIONS AND FUTURE PERSPECTIVE A deep learning model was developed to diagnose the molecular subtype preoperatively from multimodal data by predicting the 3-group classification directly. Cross-validation showed that the proposed model had an overall accuracy of 68.7% on the test dataset. This is the first model to double the expected chance-level accuracy (33.3%) for a 3-group classification problem when predicting the LGG molecular subtype. We plan to apply heat-map and/or segmentation techniques to further increase prediction accuracy.
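As a concrete illustration of the evaluation protocol, the sketch below runs leave-one-out cross-validation for a direct 3-group classifier. The fused MRI/PET/CT feature matrix, the logistic-regression stand-in, and all sizes are hypothetical placeholders, not the authors' deep learning model.

```python
# Minimal sketch of leave-one-out cross-validation for direct 3-class
# molecular-subtype prediction. X holds one row of (hypothetical) fused
# MRI/PET/CT features per patient; the classifier is a simple stand-in.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(217, 64))        # 217 patients, hypothetical fused image features
y = rng.integers(0, 3, size=217)      # 3-group molecular subtype labels

correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[train_idx], y[train_idx])
    correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])

# Chance-level accuracy for a 3-group problem is 33.3%, the baseline against
# which the reported 68.7% test accuracy is compared.
print(f"LOO accuracy: {correct / len(y):.3f}")
```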

Author(s): Adán Mora-Fallas, Hervé Goëau, Susan Mazer, Natalie Love, Erick Mata-Montero, ...

Millions of herbarium records provide an invaluable legacy and knowledge of the spatial and temporal distributions of plants over centuries across all continents (Soltis et al. 2018). Due to recent efforts to digitize and make publicly accessible most major natural collections, investigations of ecological and evolutionary patterns at unprecedented geographic scales are now possible (Carranza-Rojas et al. 2017, Lorieul et al. 2019). Nevertheless, biologists now face the problem of extracting basic information, such as textual descriptions, organ counts, and measurements of various morphological traits, from a huge number of herbarium sheets. Deep learning technologies can dramatically accelerate the extraction of such basic information by automating the routines of organ identification, counting, and measurement, thereby allowing biologists to spend more time on investigations such as phenological or geographic distribution studies. Recent progress in instance segmentation, demonstrated by the Mask R-CNN method, is very promising in the context of herbarium sheets, in particular for detecting with high precision the different organs of interest on each specimen, including leaves, flowers, and fruits. However, like any deep learning approach, this method requires a significant number of labeled examples with fairly detailed outlines of individual organs. Creating such a training dataset can be very time-consuming and may be discouraging for researchers. In this work, we propose to integrate the Mask R-CNN approach into a global system with an active learning mechanism (Sener and Savarese 2018) in order to minimize the number of organ outlines that researchers must annotate manually. The principle is to alternate cycles of manual annotation, training updates of the deep learning model, and prediction over the entire collection to be processed. The challenge for the active learning mechanism is then to estimate automatically, at each cycle, which objects would be most useful to annotate manually in the next cycle, so that an accurate model is learned in as few cycles as possible. We discuss experiments addressing the effectiveness, limitations, and annotation time required by our approach, in the context of a phenological study of more than 10,000 reproductive organs (buds, flowers, fruits and immature fruits) of Streptanthus tortuosus, a species known to be highly variable in appearance and therefore very difficult for an instance segmentation deep learning model to process.
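The alternating annotate-train-predict cycle described above can be sketched independently of Mask R-CNN. In the toy example below, the pool, the stand-in classifier, the least-confidence selection criterion, and the annotation budget per cycle are all illustrative assumptions.

```python
# Minimal, runnable sketch of pool-based active learning with
# least-confidence sampling, using a toy classifier instead of Mask R-CNN.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_pool = rng.normal(size=(2000, 16))                      # unlabeled specimens (features)
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)    # oracle labels (the annotator)

labeled = list(rng.choice(len(X_pool), 20, replace=False))  # initial manual annotations
for cycle in range(5):                                    # alternate training and annotation
    clf = LogisticRegression(max_iter=1000).fit(X_pool[labeled], y_pool[labeled])
    proba = clf.predict_proba(X_pool)
    uncertainty = 1.0 - proba.max(axis=1)                 # least-confidence score
    uncertainty[labeled] = -np.inf                        # never re-select labeled items
    new_idx = np.argsort(uncertainty)[-20:]               # most useful items to annotate next
    labeled.extend(new_idx.tolist())
    print(f"cycle {cycle}: {len(labeled)} labeled, "
          f"pool accuracy {clf.score(X_pool, y_pool):.3f}")
```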


2021 · Vol 11 (12) · pp. 3199-3208
Author(s): K. Ganapriya, N. Uma Maheswari, R. Venkatesh

Prediction of the occurrence of a seizure would greatly help caregivers take the necessary precautions for the patient. A deep learning model, a recurrent neural network (RNN), is designed to predict upcoming values in the EEG signal. An in-depth data analysis is first performed to find the parameter that best differentiates normal values from seizure values. Next, a recurrent neural network model is built to predict these values ahead of time. Four variants of the recurrent neural network, differing in the number of time steps and the number of LSTM layers, are designed, and the best model is identified. The best RNN model is then used for prediction. The performance of the model is evaluated in terms of the explained variance score and the R2 score. The model is found to perform well only when the number of elements in the test dataset is minimal, so it can predict seizure values only a few seconds in advance.
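A minimal sketch of one such RNN variant is shown below: a single-layer LSTM that forecasts the next EEG value from a short window of past values and is scored with the explained variance and R2 metrics. The synthetic signal, window length, and layer sizes are assumptions, not the study's configuration.

```python
# Minimal sketch of an LSTM forecaster for the next value of an EEG-like signal.
import numpy as np
import tensorflow as tf
from sklearn.metrics import explained_variance_score, r2_score

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 100, 3000)) + 0.1 * rng.normal(size=3000)  # toy EEG

timesteps = 20
X = np.stack([signal[i:i + timesteps] for i in range(len(signal) - timesteps)])
y = signal[timesteps:]
X = X[..., np.newaxis]                      # shape: (samples, timesteps, 1)
split = int(0.8 * len(X))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(timesteps, 1)),
    tf.keras.layers.LSTM(32),               # one LSTM layer; variants add more
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=5, batch_size=64, verbose=0)

pred = model.predict(X[split:], verbose=0).ravel()
print("explained variance:", explained_variance_score(y[split:], pred))
print("R2 score:", r2_score(y[split:], pred))
```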


2021 · Vol 10 (12) · pp. 2681
Author(s): Yuna Kim, Hyun-Il Kim, Geun-Seok Park, Seo-Young Kim, Sang-Il Choi, ...

Computer-assisted analysis is expected to improve the reliability of videofluoroscopic swallowing studies (VFSSs), but its usefulness is limited. Previously, we proposed a deep learning model that can detect laryngeal penetration or aspiration fully automatically in VFSS video images, but the evidence for its reliability was insufficient. This study aims to compare the intra- and inter-rater reliability of the computer model and human raters. The test dataset consisted of 173 video files, from which the presence of laryngeal penetration or aspiration was judged by the computer model and three physicians in two sessions separated by a one-month interval. Intra- and inter-rater reliability were calculated using Cohen's kappa coefficient, the positive reliability ratio (PRR), and the negative reliability ratio (NRR). Intra-rater reliability was almost perfect for the computer and the two experienced physicians. Inter-rater reliability was moderate to substantial between the model and each human rater and between the human raters. The average PRR and NRR between the model and the human raters were similar to those between the human raters. The results demonstrate that the deep learning model can detect laryngeal penetration or aspiration from VFSS video as reliably as human examiners.
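For reference, the agreement statistics can be sketched as follows with synthetic ratings. Cohen's kappa is computed with scikit-learn; the PRR/NRR lines use a generic "proportion of specific agreement" stand-in, an assumption, since the paper's exact formulas are not reproduced here.

```python
# Minimal sketch of rater-agreement computation on binary
# penetration/aspiration judgements (synthetic data).
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

rng = np.random.default_rng(0)
rater_a = rng.integers(0, 2, size=173)                             # e.g. model output per video
rater_b = np.where(rng.random(173) < 0.9, rater_a, 1 - rater_a)    # mostly agreeing human rater

print("Cohen's kappa:", round(cohen_kappa_score(rater_a, rater_b), 3))

tn, fp, fn, tp = confusion_matrix(rater_a, rater_b).ravel()
print("positive agreement:", round(2 * tp / (2 * tp + fp + fn), 3))  # assumed PRR analogue
print("negative agreement:", round(2 * tn / (2 * tn + fp + fn), 3))  # assumed NRR analogue
```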


2020 · Vol 10 (1) · pp. 421
Author(s): Kwang Sun Ryu, Sang Won Lee, Erdenebileg Batbaatar, Jae Wook Lee, Kui Son Choi, ...

A screening model for undiagnosed diabetes mellitus (DM) is important for enabling early medical care. Insufficient research has been carried out on developing such a screening model using machine learning techniques. Thus, the primary objective of this study was to develop a screening model for patients with undiagnosed DM using a deep neural network. We conducted a cross-sectional study using data from the Korean National Health and Nutrition Examination Survey (KNHANES) 2013–2016. A total of 11,456 participants were selected after excluding those with diagnosed DM, an age < 20 years, or missing data. KNHANES 2013–2015 was used as the training dataset and analyzed to develop a deep learning model (DLM) for undiagnosed DM. The DLM was evaluated with 4444 participants who were surveyed in the 2016 KNHANES. The DLM was constructed using seven non-invasive variables (NIV): age, waist circumference, body mass index, gender, smoking status, hypertension, and family history of diabetes. The model showed appropriate performance (area under the curve (AUC): 80.11) compared with previous screening models. The DLM developed in this study for patients with undiagnosed diabetes could contribute to early medical care.
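A minimal sketch of such a screening network on the seven non-invasive variables is given below; the synthetic data, layer sizes, and training settings are illustrative assumptions rather than the authors' DLM configuration.

```python
# Minimal sketch of a deep-neural-network screening model on seven
# non-invasive variables, evaluated with AUC on a held-out set.
import numpy as np
import tensorflow as tf
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# columns: age, waist circumference, BMI, gender, smoking, hypertension, family history
X_train = rng.normal(size=(7012, 7)).astype("float32")
y_train = rng.integers(0, 2, size=7012)
X_test = rng.normal(size=(4444, 7)).astype("float32")
y_test = rng.integers(0, 2, size=4444)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(7,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of undiagnosed DM
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_train, y_train, epochs=5, batch_size=128, verbose=0)

# Synthetic data, so this AUC is meaningless; the paper reports 80.11 on KNHANES 2016.
auc = roc_auc_score(y_test, model.predict(X_test, verbose=0).ravel())
print(f"AUC: {auc:.4f}")
```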


2021
Author(s): Shinae Lee, Sang-il Oh, Junik Jo, Sumi Kang, Yooseok Shin, ...

Abstract The early detection of incipient dental caries enables preventive treatment, and bitewing radiography is a good diagnostic tool for posterior incipient caries. In the field of medical imaging, the use of deep learning with convolutional neural networks (CNNs) to process various types of images has been actively researched and has shown promising performance. In this study, we developed a CNN model using a U-shaped deep CNN (U-Net) for dental caries detection on bitewing radiographs and investigated whether this model can improve clinicians' performance. In total, 304 bitewing radiographs were used to train the deep learning model and 50 radiographs were used for performance evaluation. The diagnostic performance of the CNN model on the total test dataset was as follows: precision, 63.29%; recall, 65.02%; and F1-score, 64.14%, showing quite accurate performance. When three dentists detected dental caries using the results of the CNN model as reference data, the overall diagnostic performance of all three clinicians improved significantly, as shown by increased recall ratios (D1, 85.34%; D1', 92.15%; D2, 85.86%; D2', 93.72%; D3, 69.11%; D3', 79.06%). These increases were especially significant in the incipient and moderate caries subgroups. The deep learning model may help clinicians to diagnose dental caries more accurately.
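The U-shaped architecture referred to above can be sketched compactly. The single-level encoder-decoder below only illustrates the skip-connection pattern; channel counts, depth, and input size are assumed rather than taken from the study.

```python
# Minimal sketch of a U-shaped encoder-decoder (one down/up level) that
# outputs a per-pixel caries probability map for a bitewing radiograph.
import tensorflow as tf
from tensorflow.keras import layers

def tiny_unet(input_shape=(256, 256, 1)):
    inputs = tf.keras.Input(shape=input_shape)
    # encoder
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(c1)
    p1 = layers.MaxPooling2D()(c1)
    # bottleneck
    b = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    # decoder with skip connection (the "U" shape)
    u1 = layers.UpSampling2D()(b)
    u1 = layers.Concatenate()([u1, c1])
    c2 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)
    # per-pixel caries probability map
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c2)
    return tf.keras.Model(inputs, outputs)

model = tiny_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```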


2022 · Vol 12 (1)
Author(s): Hiroto Ozaki, Takeshi Aoyagi

Abstract Considerable attention has been given to deep-learning and machine-learning techniques in an effort to reduce the computational cost of computational fluid dynamics simulation. The present paper addresses the prediction of steady flows passing many fixed cylinders using a deep-learning model and investigates the accuracy of the predicted velocity field. The deep-learning model outputs the x- and y-components of the flow velocity field when the cylinder arrangement is input. The accuracy of the predicted velocity field is investigated, focusing on the velocity profile of the fluid flow and the fluid force acting on the cylinders. The present model accurately predicts the flow when the number of cylinders is equal to or close to that set in the training dataset. The extrapolation of the prediction to a smaller number of cylinders results in error, which can be interpreted as internal friction of the fluid. The results of the fluid force acting on the cylinders suggest that the present deep-learning model has good generalization performance for systems with a larger number of cylinders.
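The mapping described above, from a cylinder-arrangement image to the x- and y-velocity components, can be sketched as a simple image-to-image convolutional network; the grid resolution and layer sizes below are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of a convolutional surrogate that maps a binary
# cylinder-arrangement mask to the (u, v) components of the steady velocity field.
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(128, 128, 1))       # 1 = inside a cylinder, 0 = fluid
x = layers.Conv2D(32, 5, padding="same", activation="relu")(inputs)
x = layers.Conv2D(64, 5, padding="same", activation="relu")(x)
x = layers.Conv2D(32, 5, padding="same", activation="relu")(x)
outputs = layers.Conv2D(2, 1)(x)                   # channels: (u, v) velocity components

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")        # regress the steady velocity field
model.summary()
```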


2021 · Vol 310 · pp. 04002
Author(s): Nguyen Thanh Doan

Nowadays, expanding the application of deep learning technology is attracting the attention of many researchers in the field of remote sensing. This paper presents a methodology for using a deep convolutional neural network model to determine the position of the shoreline in Sentinel-2 satellite images. The methodology also provides techniques to reduce model retraining while ensuring the accuracy of the results. Methodological evaluation and analysis were conducted in the Mekong Delta region. The results of the study showed that interpolating the input images and calibrating the result thresholds improve accuracy and allow the trained deep learning model to be tested externally on different images. The paper also evaluates the impact of the training dataset on the quality of the results obtained. Suggestions are given for the number of files in the training dataset, as well as the information used for model training, to solve the shoreline detection problem.
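The threshold-calibration step can be illustrated as follows: a water-probability map predicted by the network (here synthetic) is binarized at a calibrated threshold, and the shoreline is extracted as the land/water contour. The threshold value and the scene are assumptions for illustration only.

```python
# Minimal sketch of shoreline extraction from a predicted water-probability map.
import numpy as np
from skimage import measure

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:200, 0:200]
prob_water = 1 / (1 + np.exp(-(yy - 100 + 5 * np.sin(xx / 10)) / 5))  # fake CNN output
prob_water += 0.02 * rng.normal(size=prob_water.shape)

threshold = 0.5                                     # calibrated per scene in practice
contours = measure.find_contours(prob_water, threshold)
shoreline = max(contours, key=len)                  # longest land/water boundary
print("shoreline vertices:", shoreline.shape)
```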


2021 · Vol 1 (1) · pp. 44-46
Author(s): Ashar Mirza, Rishav Kumar Rajak

In this paper, we present a UNet architecture-based deep learning method used to segment polyps and instruments from the image dataset provided in the MedAI Challenge 2021. For the polyp segmentation task, we developed a UNet-based algorithm for segmenting polyps in images taken from endoscopies; the main focus of this task is to achieve high segmentation metrics on the supplied test dataset. Similarly, for the instrument segmentation task, we developed UNet-based algorithms for segmenting instruments present in colonoscopy videos.
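Segmentation quality in such challenges is typically reported with overlap metrics such as the Dice coefficient and IoU; a minimal sketch of these computations on placeholder masks is shown below.

```python
# Minimal sketch of Dice and IoU between a predicted and a ground-truth mask.
import numpy as np

def dice(pred, target, eps=1e-7):
    inter = np.logical_and(pred, target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

rng = np.random.default_rng(0)
gt = rng.random((256, 256)) > 0.7          # placeholder ground-truth polyp mask
pr = rng.random((256, 256)) > 0.7          # placeholder predicted mask
print(f"Dice: {dice(pr, gt):.3f}  IoU: {iou(pr, gt):.3f}")
```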


2020
Author(s): Aaron E. Kornblith, Newton Addo, Ruolei Dong, Robert Rogers, Jacqueline Grupp-Phelan, ...

ABSTRACT The pediatric Focused Assessment with Sonography for Trauma (FAST) is a sequence of ultrasound views rapidly performed by the clinician to diagnose hemorrhage. One limitation of FAST is inconsistent acquisition of the required views. We sought to develop a deep learning model to classify FAST views using a heterogeneous dataset of pediatric FAST studies. This diagnostic test study developed and tested a deep learning model for view classification of archived real-world pediatric FAST studies collected from two pediatric emergency departments. FAST frames were randomly distributed to training, validation, and test datasets in a 70:20:10 ratio; each patient was represented in only one dataset to maintain sample independence. The outcome was the prediction accuracy of the model in classifying FAST frames and video clips. FAST studies performed by 30 different clinicians on 699 injured children included 4,925 videos representing 1,062,612 frames from children with a median age of 9 years. On the test dataset, the overall view classification accuracy of the model was 93.4% (95% CI: 93.3-93.6) for frames and 97.8% (95% CI: 96.0-99.0) for video clips. Frames were correctly classified with an accuracy of 96.0% (95% CI: 95.9-96.1) for cardiac, 99.8% (95% CI: 99.8-99.8) for thoracic, 95.2% (95% CI: 95.0-95.3) for abdominal upper quadrants, and 95.9% (95% CI: 95.8-96.0) for suprapubic views. A deep learning model can be developed to accurately classify pediatric FAST views. Accurate view classification is an important first step toward developing a consistent and accurate multi-stage deep learning model for pediatric FAST interpretation.
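The patient-level 70:20:10 split that preserves sample independence can be sketched as follows; the patient IDs, frame counts, and assignment logic below are synthetic placeholders, not the study's data pipeline.

```python
# Minimal sketch of a patient-level 70:20:10 split: frames from the same
# patient never appear in more than one dataset.
import numpy as np

rng = np.random.default_rng(0)
patient_ids = np.arange(699)
rng.shuffle(patient_ids)

n_train = int(0.7 * len(patient_ids))
n_val = int(0.2 * len(patient_ids))
train_p = set(patient_ids[:n_train])
val_p = set(patient_ids[n_train:n_train + n_val])

# assign each frame to the split of its patient
frames = [{"patient": int(rng.integers(0, 699)), "frame_idx": i} for i in range(10000)]
splits = {"train": [], "val": [], "test": []}
for f in frames:
    if f["patient"] in train_p:
        splits["train"].append(f)
    elif f["patient"] in val_p:
        splits["val"].append(f)
    else:
        splits["test"].append(f)
print({k: len(v) for k, v in splits.items()})
```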


2019 · Vol 9 (1)
Author(s): Kai-Yao Huang, Justin Bo-Kai Hsu, Tzong-Yi Lee

Abstract Succinylation is a type of protein post-translational modification (PTM) that can play important roles in a variety of cellular processes. Due to an increasing number of site-specific succinylated peptides obtained from high-throughput mass spectrometry (MS), various tools have been developed for computationally identifying succinylated sites on proteins. However, most of these tools predict succinylation sites based on traditional machine learning methods. Hence, this work aimed to carry out succinylation site prediction using a deep learning model. The abundance of MS-verified succinylated peptides enabled the investigation of the substrate site specificity of succinylation sites through sequence-based attributes, such as position-specific amino acid composition, the composition of k-spaced amino acid pairs (CKSAAP), and the position-specific scoring matrix (PSSM). Additionally, maximal dependence decomposition (MDD) was adopted to detect the substrate signatures of lysine succinylation sites by dividing all succinylated sequences into several groups with conserved substrate motifs. According to the results of ten-fold cross-validation, the deep learning model trained using PSSM and informative CKSAAP attributes achieved the best predictive performance and also performed better than traditional machine learning methods. Moreover, an independent testing dataset containing no instances from the training dataset was used to compare the proposed method with six existing prediction tools. The testing dataset comprised 218 positive and 2621 negative instances, and the proposed model yielded a promising performance with 84.40% sensitivity, 86.99% specificity, 86.79% accuracy, and an MCC value of 0.489. Finally, the proposed method has been implemented as a web-based prediction tool (CNN-SuccSite), which is now freely accessible at http://csb.cse.yzu.edu.tw/CNN-SuccSite/.
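The CKSAAP encoding mentioned above can be sketched directly: for each gap size k, count amino-acid pairs separated by exactly k residues within the window around the candidate lysine, then normalize. The example window and k range below are illustrative assumptions.

```python
# Minimal sketch of the CKSAAP (composition of k-spaced amino acid pairs) encoding.
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
PAIRS = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]  # 400 ordered pairs

def cksaap(seq, k_max=3):
    features = []
    for k in range(k_max + 1):
        counts = dict.fromkeys(PAIRS, 0)
        n_pairs = max(len(seq) - k - 1, 1)
        for i in range(len(seq) - k - 1):
            pair = seq[i] + seq[i + k + 1]     # residues separated by k positions
            if pair in counts:
                counts[pair] += 1
        features.extend(c / n_pairs for c in counts.values())
    return features                             # length = 400 * (k_max + 1)

window = "MKVLAAGKSTLLKQAFHEKVSAQLK"   # hypothetical window around a candidate lysine
vec = cksaap(window)
print(len(vec), round(sum(vec[:400]), 3))      # k = 0 block sums to 1.0
```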

