Automatic Meningioma Segmentation and Grading Prediction: A Hybrid Deep-Learning Method

2021
Vol 11 (8)
pp. 786
Author(s):
Chaoyue Chen
Yisong Cheng
Jianfeng Xu
Ting Zhang
Xin Shu
et al.

The purpose of this study was to determine whether a deep-learning-based assessment system could facilitate preoperative grading of meningioma. This was a retrospective study conducted at two institutions covering 643 patients. The system, designed with a cascade network structure, was developed using deep-learning technology for automatic tumor detection, visual assessment, and grading prediction. Specifically, a modified U-Net convolutional neural network was first established to segment tumor images. Subsequently, the segmentations were introduced into rendering algorithms for spatial reconstruction and into another DenseNet convolutional neural network for grading prediction. The trained models were integrated into a system, and its robustness was tested based on its performance on an external dataset from the second institution involving different magnetic resonance imaging platforms. The results showed that the segmentation model delivered noteworthy performance, with a Dice coefficient of 0.920 ± 0.009 in the validation group. With accurately segmented tumor images, the rendering model faithfully reconstructed the tumor body and clearly displayed the important intracranial vessels. The DenseNet model also achieved high accuracy, with an area under the curve of 0.918 ± 0.006 and an accuracy of 0.901 ± 0.039 when classifying tumors into low-grade and high-grade meningiomas. Moreover, the system exhibited good performance on the external validation dataset.
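For reference, the Dice coefficient reported for the segmentation model measures overlap between two binary masks. A minimal NumPy sketch (the masks here are illustrative toy data, not from the study):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between two binary masks: 2*|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: two overlapping 4x4 masks
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:3] = 1     # 4 pixels
target = np.zeros((4, 4), dtype=int); target[1:3, 1:4] = 1  # 6 pixels
print(round(dice_coefficient(pred, target), 3))  # 2*4/(4+6) = 0.8
```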

2020
Vol 10 (1)
Author(s):
Young-Gon Kim
Sungchul Kim
Cristina Eunbee Cho
In Hye Song
Hee Jin Lee
et al.

Abstract Fast and accurate confirmation of metastasis on the frozen tissue section of intraoperative sentinel lymph node biopsy is an essential tool for critical surgical decisions. However, accurate diagnosis by pathologists is difficult within the time limitations. Training a robust and accurate deep learning model is also difficult owing to the limited number of frozen datasets with high-quality labels. To overcome these issues, we validated the effectiveness of transfer learning from CAMELYON16 in improving the performance of a convolutional neural network (CNN)-based classification model on our frozen dataset (N = 297) from Asan Medical Center (AMC). Among the 297 whole slide images (WSIs), 157 and 40 WSIs were used to train deep learning models with different dataset ratios of 2, 4, 8, 20, 40, and 100%. The remaining 100 WSIs were used to validate model performance in terms of patch- and slide-level classification. An additional 228 WSIs from Seoul National University Bundang Hospital (SNUBH) were used as an external validation set. Three initializations, i.e., scratch-based (random initialization), ImageNet-based, and CAMELYON16-based models, were compared to validate their effectiveness in external validation. In the patch-level classification results on the AMC dataset, CAMELYON16-based models trained with a small dataset (up to 40%, i.e., 62 WSIs) showed a significantly higher area under the curve (AUC) of 0.929 than the scratch- and ImageNet-based models at 0.897 and 0.919, respectively, while CAMELYON16-based and ImageNet-based models trained with 100% of the training dataset showed comparable AUCs of 0.944 and 0.943, respectively. For the external validation, CAMELYON16-based models showed higher AUCs than the scratch- and ImageNet-based models. These results validate the feasibility of transfer learning to enhance model performance on frozen section datasets with limited numbers of samples.
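Patch-level probabilities are typically aggregated into a slide-level decision; a common rule in CAMELYON-style pipelines is to take the maximum patch probability. A minimal sketch (this aggregation rule is an assumption for illustration; the abstract does not specify the authors' method):

```python
import numpy as np

def slide_score(patch_probs):
    """Aggregate patch-level metastasis probabilities into one slide-level
    score by taking the maximum over all patches of the slide."""
    return float(np.max(patch_probs))

def classify_slide(patch_probs, threshold=0.5):
    """Slide is called metastasis-positive if any patch is suspicious enough."""
    return slide_score(patch_probs) >= threshold

probs = [0.02, 0.10, 0.91, 0.05]   # one patch strongly suspicious
print(slide_score(probs))          # 0.91
print(classify_slide(probs))       # True
```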


Endoscopy
2019
Vol 51 (12)
pp. 1121-1129
Author(s):
Bum-Joo Cho
Chang Seok Bang
Se Woo Park
Young Joo Yang
Seung In Seo
et al.

Abstract Background Visual inspection, lesion detection, and differentiation between malignant and benign features are key aspects of an endoscopist’s role. The use of machine learning for the recognition and differentiation of images has been increasingly adopted in clinical practice. This study aimed to establish convolutional neural network (CNN) models to automatically classify gastric neoplasms based on endoscopic images. Methods Endoscopic white-light images of pathologically confirmed gastric lesions were collected and classified into five categories: advanced gastric cancer, early gastric cancer, high grade dysplasia, low grade dysplasia, and non-neoplasm. Three pretrained CNN models were fine-tuned using a training dataset. The classifying performance of the models was evaluated using a test dataset and a prospective validation dataset. Results A total of 5017 images were collected from 1269 patients, among which 812 images from 212 patients were used as the test dataset. An additional 200 images from 200 patients were collected and used for prospective validation. For the five-category classification, the weighted average accuracy of the Inception-Resnet-v2 model reached 84.6 %. The mean area under the curve (AUC) of the model for differentiating gastric cancer and neoplasm was 0.877 and 0.927, respectively. In prospective validation, the Inception-Resnet-v2 model showed lower performance compared with the endoscopist with the best performance (five-category accuracy 76.4 % vs. 87.6 %; cancer 76.0 % vs. 97.5 %; neoplasm 73.5 % vs. 96.5 %; P  < 0.001). However, there was no statistical difference between the Inception-Resnet-v2 model and the endoscopist with the worst performance in the differentiation of gastric cancer (accuracy 76.0 % vs. 82.0 %) and neoplasm (AUC 0.776 vs. 0.865). Conclusion The evaluated deep-learning models have the potential for clinical application in classifying gastric cancer or neoplasm on endoscopic white-light images.
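The weighted average accuracy reported for the five-category task can be read as per-class accuracy weighted by class prevalence in the test set. A minimal sketch with hypothetical per-class accuracies and counts (the exact weighting scheme and values are assumptions, not from the study):

```python
import numpy as np

def weighted_accuracy(per_class_acc, class_counts):
    """Average of per-class accuracies weighted by the number of test
    images in each class."""
    acc = np.asarray(per_class_acc, dtype=float)
    w = np.asarray(class_counts, dtype=float)
    return float(np.sum(acc * w) / np.sum(w))

# Hypothetical per-class accuracies for the five categories
# (AGC, EGC, HGD, LGD, non-neoplasm) over a 812-image test set
acc = [0.90, 0.80, 0.70, 0.85, 0.88]
counts = [100, 150, 50, 200, 312]
print(round(weighted_accuracy(acc, counts), 4))
```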


2021
Vol 2137 (1)
pp. 012056
Author(s):
Hongli Ma
Fang Xie
Tao Chen
Lei Liang
Jie Lu

Abstract The convolutional neural network is a very important research direction in deep learning technology. In view of the current development of convolutional networks, this paper surveys convolutional neural networks. First, it reviews the development history of the convolutional neural network; then it introduces the structure of the convolutional neural network and several typical convolutional neural network architectures. Finally, several application examples of deep learning are introduced.
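As a concrete illustration of the core operation in the networks surveyed here, a valid-mode 2D convolution (technically cross-correlation, as most deep-learning frameworks implement it) can be sketched in a few lines of NumPy:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: slide the kernel over the image
    and take the elementwise product-sum at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
edge = np.array([[1.0, -1.0]])      # horizontal difference filter
print(conv2d(img, edge))            # every horizontal step in img is +1
```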


Author(s):  
Gauri Jain
Manisha Sharma
Basant Agarwal

This article describes how spam detection in social media text is becoming increasingly important because of the exponential increase in spam volume over the network. It is challenging, especially for text within a limited number of characters. Effective spam detection requires a larger number of efficient features to be learned. In this article, the use of a deep learning technique known as a convolutional neural network (CNN) is proposed for spam detection, with a semantic layer added on top of it. The resultant model is known as a semantic convolutional neural network (SCNN). The semantic layer is built by training random word vectors with Word2vec to obtain semantically enriched word embeddings. WordNet and ConceptNet are used to find a word similar to a given word when it is missing from the Word2vec vocabulary. The architecture is evaluated on two corpora: the SMS Spam dataset (UCI repository) and a Twitter dataset (tweets scraped from public live tweets). The authors' approach outperforms state-of-the-art results with 98.65% accuracy on the SMS spam dataset and 94.40% accuracy on the Twitter dataset.
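The semantic layer's fallback lookup can be sketched as follows; the toy dictionaries below stand in for real Word2vec vectors and WordNet/ConceptNet similarity queries, which are far larger in practice:

```python
import numpy as np

# Toy stand-ins (hypothetical): the real pipeline queries trained Word2vec
# vectors and WordNet/ConceptNet for the nearest in-vocabulary word.
word2vec = {"free": np.array([0.9, 0.1]), "win": np.array([0.8, 0.3])}
similar_words = {"gratis": "free", "triumph": "win"}  # OOV -> known synonym

def embed(word, dim=2):
    """Return the word's vector; for out-of-vocabulary words, fall back to
    a semantically similar in-vocabulary word, else a zero vector."""
    if word in word2vec:
        return word2vec[word]
    fallback = similar_words.get(word)
    if fallback in word2vec:
        return word2vec[fallback]
    return np.zeros(dim)

print(embed("gratis"))  # reuses the vector for "free"
```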


2020
Vol 21 (Supplement_1)
Author(s):
A Karuzas
K Sablauskas
D Verikas
E Teleisyte
L Skrodenis
et al.

Abstract INTRODUCTION Deep learning (DL) has seen increasing use in the field of echocardiography. The importance of segmentation and recognition of the different heart chambers has already been demonstrated in several studies. However, no studies have addressed functional heart measurements. Although functional measurement of the right ventricle (RV) remains the "dark side of the moon", there is no doubt that the severity of RV dysfunction influences worse outcomes. PURPOSE To evaluate DL for recognition of geometrical features of the RV and measurement of RV fractional area change (FAC). METHODS A total of 1896 end-systolic and end-diastolic frames from 129 patients (with various indications for the study) were used to train and validate the neural networks. Raw pixel data were extracted from an EPIQ 7G (Philips) imaging platform. All images were 2D echocardiographic apical four-chamber views. The RV was annotated in each image, with 1716 images used for training and 180 for validation. We used the state-of-the-art mask regional convolutional neural network (MR-CNN) and attention U-Net convolutional neural network models for the RV segmentation task. Intersection over Union (IoU) was used as the primary metric for model evaluation; IoU is the number of pixels common to the target and prediction masks divided by the total number of pixels present across both masks. Additionally, FAC was calculated using the frames with the minimal and maximal areas segmented by the network. RESULTS The U-Net architecture trained considerably faster than MR-CNN, with a time per training step of 85 ms versus 750 ms (p < 0.001). MR-CNN and U-Net had IoUs of 0.91 and 0.89, respectively, on the validation dataset, which corresponds to good model performance, and there was no significant difference between the two networks (p = 0.876).
Comparing the evaluation of FAC by the physician and by U-Net, the mean squared difference was 12% when using the minimum and maximum RV areas detected by the network. CONCLUSION Even with a small dataset, deep learning gives us the ability to recognize the RV and measure RV FAC in the apical four-chamber view with high accuracy. This method brings RV assessment closer to routine use in cardiology practice; moreover, in the near future automated measurements will reduce the need for manual evaluation by an observer. Improvements can be made in FAC calculation by also improving techniques for end-systolic and end-diastolic frame detection.
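The FAC derived from the network's minimal and maximal segmented areas follows the standard definition; a minimal sketch with illustrative area values (not from the study):

```python
def fractional_area_change(eda_cm2, esa_cm2):
    """RV fractional area change: (end-diastolic area - end-systolic area)
    divided by end-diastolic area, expressed as a percentage."""
    return 100.0 * (eda_cm2 - esa_cm2) / eda_cm2

# Illustrative example: EDA 24 cm^2, ESA 14 cm^2; a commonly cited
# lower limit of normal for RV FAC is 35 %.
print(round(fractional_area_change(24.0, 14.0), 1))  # 41.7
```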


2021
Vol 12 (1)
Author(s):
Xiang Liu
Chao Han
He Wang
Jingyun Wu
Yingpu Cui
et al.

Abstract Background Accurate segmentation of pelvic bones is an initial step towards accurate detection and localisation of pelvic bone metastases. This study presents a deep learning-based approach for automated segmentation of normal pelvic bony structures in multiparametric magnetic resonance imaging (mpMRI) using a 3D convolutional neural network (CNN). Methods This retrospective study included 264 pelvic mpMRI examinations obtained between 2018 and 2019. Manual annotations of the pelvic bony structures (lumbar vertebrae, sacrococcyx, ilium, acetabulum, femoral head, femoral neck, ischium, and pubis) on diffusion-weighted imaging (DWI) and apparent diffusion coefficient (ADC) images were used to create the reference standards. A 3D U-Net CNN was employed for automatic pelvic bone segmentation. Additionally, 60 mpMRI examinations from 2020 were included and used to evaluate the model externally. Results The CNN achieved high average Dice similarity coefficients (DSC) in both the testing (0.80 [DWI images] and 0.85 [ADC images]) and external (0.79 [DWI images] and 0.84 [ADC images]) validation sets. Pelvic bone volumes measured with manual and CNN-predicted segmentations were highly correlated (R2 of 0.84–0.97) and in close agreement (mean bias of 2.6–4.5 cm3). A SCORE system was designed for qualitative evaluation of the model; both the testing and external validation sets achieved high scores, with good concordance between the two readers (ICC = 0.904; 95% confidence interval: 0.871–0.929). Conclusions A deep learning-based method can achieve automated pelvic bone segmentation on DWI and ADC images with suitable quantitative and qualitative performance.
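The volume comparison between manual and CNN-predicted segmentations reduces to counting mask voxels and scaling by voxel size; a minimal sketch (the voxel spacings and mask here are illustrative, not from the study):

```python
import numpy as np

def mask_volume_cm3(mask, voxel_dims_mm):
    """Volume of a binary segmentation mask: voxel count times the volume
    of one voxel. voxel_dims_mm are the (x, y, z) spacings in millimetres."""
    voxel_mm3 = float(np.prod(voxel_dims_mm))
    return mask.sum() * voxel_mm3 / 1000.0  # mm^3 -> cm^3

mask = np.zeros((10, 10, 10), dtype=np.uint8)
mask[2:8, 2:8, 2:8] = 1                        # 6*6*6 = 216 voxels
print(mask_volume_cm3(mask, (2.0, 2.0, 5.0)))  # 216 * 20 mm^3 = 4.32 cm^3
```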


2021
Author(s):
Chao-Hsin Chen
Kuo-Fong Tung
Wen-Chang Lin

Abstract Background With the advancement of NGS platforms, large numbers of human variations and SNPs have been discovered in human genomes. It is essential to utilize these massive nucleotide variations for the discovery of disease genes and human phenotypic traits, but using such large numbers of nucleotide variants for polygenic disease studies poses new challenges. In recent years, deep-learning-based machine learning approaches have achieved great success in many areas, especially image classification. In this preliminary study, we explore a deep convolutional neural network algorithm on genome-wide SNP images for the classification of human populations. Results We processed SNP information from more than 2,500 samples of the 1000 Genomes Project. Five major human races were used as classification categories. We first generated SNP image graphs of chromosome 22, which contains about one million SNPs. Using a residual network (ResNet-50) pipeline as the CNN architecture, we successfully obtained classification models for the validation dataset. F1 scores of the trained CNN models are 95 to 99%, and validation with an additional, separate set of 150 samples indicates 95.8% accuracy of the CNN model. Misclassification was often observed between the American and European categories, which could be attributed to shared ancestral origins. We further attempted to use SNP image graphs in reduced color representations, or images generated in spiral shapes, which also provided good prediction accuracy. When we instead used SNP image graphs from chromosome 20, almost all CNN models failed to classify the human race category successfully, except for the African samples. Conclusions We have developed a human race prediction model with a deep convolutional neural network. It is feasible to use SNP image graphs for the classification of individual genomes.
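The SNP-image encoding described here can be sketched as mapping a genotype vector onto a grayscale grid; the 0/1/2-to-pixel mapping below is an illustrative assumption, not the authors' exact scheme:

```python
import numpy as np

def snp_image(genotypes, width):
    """Map a 1-D genotype vector (0/1/2 alternate-allele counts) onto a
    2-D grayscale image grid, zero-padding the tail of the last row."""
    g = np.asarray(genotypes, dtype=np.uint8)
    height = int(np.ceil(len(g) / width))
    img = np.zeros(height * width, dtype=np.uint8)
    img[:len(g)] = g * 127          # 0 -> 0, 1 -> 127, 2 -> 254
    return img.reshape(height, width)

img = snp_image([0, 1, 2, 2, 1, 0, 0], width=4)
print(img.shape)   # (2, 4)
print(img[0])      # [  0 127 254 254]
```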

