Development of a Fully Automated Glioma-Grading Pipeline Using Post-Contrast T1-Weighted Images Combined with Cloud-Based 3D Convolutional Neural Network

2021 ◽  
Vol 11 (11) ◽  
pp. 5118
Author(s):  
Hiroto Yamashiro ◽  
Atsushi Teramoto ◽  
Kuniaki Saito ◽  
Hiroshi Fujita

Glioma is the most common type of brain tumor, and its grade influences treatment policy and prognosis. Therefore, artificial-intelligence-based tumor grading methods have been studied. However, most studies performed two-dimensional (2D) analysis and manual tumor-region extraction. Additionally, deep learning research using medical images faces difficulties in collecting image data and preparing hardware, hindering its widespread use. We therefore developed a 3D convolutional neural network (3D CNN) pipeline that realizes a fully automated glioma-grading system by combining the pretrained Clara segmentation model provided by NVIDIA with our original classification model. In this method, the brain tumor region is extracted using the Clara segmentation model, and the volume of interest (VOI) created from this extracted region is fed to a grading 3D CNN and classified as grade II, III, or IV. In an evaluation using 46 regions, the grading accuracy over all tumors was 91.3%, comparable to that of methods that use multiple sequences. The proposed scheme may enable a fully automated glioma-grading pipeline operating on a single sequence by combining the pretrained 3D CNN and our original 3D CNN.
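To make the two-stage design concrete, the sketch below wires a tumor mask produced by an external segmentation model into a VOI crop that is then fed to a small grading classifier. This is a minimal PyTorch illustration under stated assumptions, not the authors' implementation: the layer layout, the VOI size, and the names `Grading3DCNN` and `extract_voi` are placeholders.

```python
# Hedged sketch of a segmentation-then-grading pipeline; all names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Grading3DCNN(nn.Module):
    """Small 3D CNN mapping a tumor VOI to grade II/III/IV logits."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, voi):                       # voi: (B, 1, D, H, W)
        x = self.features(voi).flatten(1)         # (B, 64)
        return self.classifier(x)                 # grade logits

def extract_voi(volume: torch.Tensor, mask: torch.Tensor, size=(64, 64, 64)):
    """Crop the bounding box of the predicted tumor mask and resample it."""
    idx = mask.nonzero()                          # (N, 3) tumor voxel coordinates
    lo, hi = idx.min(0).values, idx.max(0).values + 1
    crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    crop = crop[None, None]                       # add batch and channel dims
    return F.interpolate(crop, size=size, mode="trilinear", align_corners=False)
```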

2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Yongchao Jiang ◽  
Mingquan Ye ◽  
Daobin Huang ◽  
Xiaojie Lu

Automatic and accurate segmentation of brain tumors plays an important role in their diagnosis and treatment. To improve the accuracy of brain tumor segmentation, this paper proposes an improved multimodal MRI brain tumor segmentation algorithm based on U-net. In the original U-net, the contracting path uses pooling layers to reduce the resolution of the feature maps and increase the receptive field, and the expanding path uses upsampling to restore the feature map size. In this process, some image details are lost, lowering segmentation accuracy. This paper proposes an improved convolutional neural network named AIU-net (Atrous-Inception U-net). In the encoder of U-net, an A-inception (atrous-inception) module is introduced to replace the original convolution block. The A-inception module is an inception structure with atrous convolution, which increases the depth and width of the network and expands the receptive field without adding extra parameters. To capture multiscale features, the atrous spatial pyramid pooling (ASPP) module is also introduced. Experimental results on the BraTS (Multimodal Brain Tumor Segmentation Challenge) dataset show that the Dice score obtained by this method is 0.93 for the enhancing tumor region, 0.86 for the whole tumor region, and 0.92 for the tumor core region, an improvement in segmentation accuracy.
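As a rough illustration of the A-inception idea, the block below runs parallel branches of dilated 3x3 convolutions and concatenates them inception-style; the branch widths and dilation rates here are assumptions, not the paper's exact configuration.

```python
# Minimal sketch of an inception-style block with atrous (dilated) convolutions.
import torch
import torch.nn as nn

class AtrousInception(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        branch_ch = out_ch // 4
        def branch(dilation):
            # A dilated 3x3 conv enlarges the receptive field without using
            # more parameters than a plain 3x3 conv.
            return nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, 3, padding=dilation, dilation=dilation),
                nn.BatchNorm2d(branch_ch), nn.ReLU(inplace=True),
            )
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 1),
                                nn.BatchNorm2d(branch_ch), nn.ReLU(inplace=True))
        self.b2, self.b3, self.b4 = branch(1), branch(2), branch(4)

    def forward(self, x):
        # Concatenate the parallel branches along the channel axis.
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)
```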


2018 ◽  
Vol 2018 ◽  
pp. 1-14
Author(s):  
Shaoguo Cui ◽  
Lei Mao ◽  
Jingfeng Jiang ◽  
Chang Liu ◽  
Shuyu Xiong

Brain tumors can appear anywhere in the brain and have vastly different sizes and morphology. Additionally, these tumors are often diffuse and poorly contrasted. Consequently, segmenting brain tumors and intratumor subregions from magnetic resonance imaging (MRI) data with minimal human intervention remains a challenging task. In this paper, we present a novel fully automatic method for segmenting in vivo brain gliomas from MRI data. This approach can not only localize the entire tumor region but also accurately segment the intratumor structure. The proposed work is based on a cascaded deep learning convolutional neural network consisting of two subnetworks: (1) a tumor localization network (TLN) and (2) an intratumor classification network (ITCN). The TLN, a fully convolutional network (FCN) combined with transfer learning, first processes the MRI data; its goal is to delineate the tumor region in an MRI slice. The ITCN then labels the delineated tumor region into multiple subregions. In particular, the ITCN exploits a convolutional neural network (CNN) with a deeper architecture and smaller kernels. The proposed approach was validated on the multimodal brain tumor segmentation (BRATS 2015) datasets, which contain 220 high-grade glioma (HGG) and 54 low-grade glioma (LGG) cases. The Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity were used as evaluation metrics. Our experimental results indicate that the method obtains promising segmentation results at a faster segmentation speed. More specifically, it obtained comparable and, overall, better DSC values (0.89, 0.77, and 0.80) on the combined (HGG + LGG) testing set than other methods reported in the literature, and completed a segmentation task at a rate of 1.54 seconds per slice.
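The cascade can be pictured as a coarse-to-fine loop: the localization network proposes tumor pixels, and a patch classifier labels each of them. The sketch below shows that wiring under stated assumptions; `tln`, `itcn`, and the 33-pixel patch size are placeholders, not the authors' exact choices.

```python
# Hedged sketch of the cascaded TLN -> ITCN inference flow.
import torch
import torch.nn.functional as F

@torch.no_grad()
def cascade_segment(slice_2d, tln, itcn, patch=33):
    """slice_2d: (C, H, W) MRI slice -> (H, W) per-pixel subregion labels."""
    tumor_mask = tln(slice_2d[None]).argmax(1)[0] > 0     # stage 1: coarse tumor region
    half = patch // 2
    # Reflect-pad so patches centered near the border stay full-sized.
    padded = F.pad(slice_2d[None], (half, half, half, half), mode="reflect")[0]
    labels = torch.zeros(slice_2d.shape[1:], dtype=torch.long)
    ys, xs = tumor_mask.nonzero(as_tuple=True)
    for y, x in zip(ys.tolist(), xs.tolist()):            # stage 2: classify subregions
        p = padded[:, y:y + patch, x:x + patch]           # patch centered at (y, x)
        labels[y, x] = itcn(p[None]).argmax(1).item()
    return labels
```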


Healthcare ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 153
Author(s):  
Francisco Javier Díaz-Pernas ◽  
Mario Martínez-Zarzuela ◽  
Míriam Antón-Rodríguez ◽  
David González-Ortega

In this paper, we present a fully automatic brain tumor segmentation and classification model using a deep convolutional neural network that includes a multiscale approach. One difference of our proposal with respect to previous works is that input images are processed at three spatial scales along different processing pathways, a mechanism inspired by the inherent operation of the human visual system. The proposed neural model can analyze MRI images containing three types of tumors (meningioma, glioma, and pituitary tumor) over sagittal, coronal, and axial views, and does not need preprocessing to remove the skull or vertebral column from input images in advance. The performance of our method on a publicly available MRI image dataset of 3064 slices from 233 patients is compared with previously published classical machine learning and deep learning methods. In the comparison, our method obtained a remarkable tumor classification accuracy of 0.973, higher than that of the other approaches using the same database.
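A minimal sketch of the three-scale idea follows: the same slice is analyzed at three resolutions along parallel pathways and the pooled features are fused for classification. The pathway depths, channel counts, and scale factors are illustrative assumptions, not the authors' exact architecture.

```python
# Hedged sketch of parallel multiscale processing pathways.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScalePathways(nn.Module):
    def __init__(self, in_ch=1, n_classes=3, scales=(1.0, 0.75, 0.5)):
        super().__init__()
        self.scales = scales
        self.pathways = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1))          # one pathway per scale
            for _ in scales
        ])
        self.head = nn.Linear(64 * len(scales), n_classes)

    def forward(self, x):                                   # x: (B, C, H, W)
        feats = []
        for s, path in zip(self.scales, self.pathways):
            xs = x if s == 1.0 else F.interpolate(
                x, scale_factor=s, mode="bilinear", align_corners=False)
            feats.append(path(xs).flatten(1))               # (B, 64) per pathway
        return self.head(torch.cat(feats, dim=1))           # fused multiscale logits
```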


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Young-Gon Kim ◽  
Sungchul Kim ◽  
Cristina Eunbee Cho ◽  
In Hye Song ◽  
Hee Jin Lee ◽  
...  

Fast and accurate confirmation of metastasis on frozen tissue sections from intraoperative sentinel lymph node biopsy is an essential tool for critical surgical decisions. However, accurate diagnosis by pathologists is difficult within the time limitations. Training a robust and accurate deep learning model is also difficult owing to the limited number of frozen datasets with high-quality labels. To overcome these issues, we validated the effectiveness of transfer learning from CAMELYON16 in improving the performance of a convolutional neural network (CNN)-based classification model on our frozen dataset (N = 297) from Asan Medical Center (AMC). Among the 297 whole slide images (WSIs), 157 and 40 WSIs were used to train deep learning models with different dataset ratios at 2, 4, 8, 20, 40, and 100%. The remaining 100 WSIs were used to validate model performance in terms of patch- and slide-level classification. An additional 228 WSIs from Seoul National University Bundang Hospital (SNUBH) were used for external validation. Three initial weights, i.e., scratch-based (random initialization), ImageNet-based, and CAMELYON16-based models, were used to validate their effectiveness in external validation. In the patch-level classification results on the AMC dataset, CAMELYON16-based models trained with a small dataset (up to 40%, i.e., 62 WSIs) showed a significantly higher area under the curve (AUC) of 0.929 than the scratch- and ImageNet-based models at 0.897 and 0.919, respectively, while CAMELYON16-based and ImageNet-based models trained with 100% of the training dataset showed comparable AUCs at 0.944 and 0.943, respectively. For the external validation, CAMELYON16-based models showed higher AUCs than the scratch- and ImageNet-based models. These results demonstrate the feasibility of transfer learning to enhance model performance on frozen section datasets with limited numbers of WSIs.
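The three initialization strategies compared above can be sketched as follows. This is an illustration only: the ResNet-50 backbone is an assumption (the study does not specify it here), and the checkpoint path `camelyon16_pretrained.pt` is hypothetical.

```python
# Hedged sketch of scratch / ImageNet / CAMELYON16 initialization strategies.
import torch
import torchvision.models as models

def build_classifier(init: str, n_classes: int = 2):
    if init == "imagenet":
        # ImageNet-pretrained backbone from torchvision.
        net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    else:
        # Random ("scratch") initialization.
        net = models.resnet50(weights=None)
    # Replace the final layer for binary metastasis classification.
    net.fc = torch.nn.Linear(net.fc.in_features, n_classes)
    if init == "camelyon16":
        # Warm-start from weights previously fine-tuned on CAMELYON16 patches
        # (hypothetical checkpoint file).
        state = torch.load("camelyon16_pretrained.pt", map_location="cpu")
        net.load_state_dict(state, strict=False)
    return net
```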


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4916
Author(s):  
Ali Usman Gondal ◽  
Muhammad Imran Sadiq ◽  
Tariq Ali ◽  
Muhammad Irfan ◽  
Ahmad Shaf ◽  
...  

Urbanization is a major concern for both developed and developing countries in recent years. People move with their families to urban areas for the sake of better education and a modern lifestyle. Due to rapid urbanization, cities face huge challenges, one of which is waste management, as the volume of waste is directly proportional to the number of people living in the city. Municipalities and city administrations use traditional waste classification techniques, which are manual, slow, inefficient, and costly. Therefore, automatic waste classification and management is essential for urbanizing cities to achieve better recycling of waste. Better recycling reduces the amount of waste sent to landfills by reducing the need to collect new raw material. In this paper, the idea of a real-time smart waste classification model is presented that uses a hybrid approach to classify waste into various classes. Two machine learning models, a multilayer perceptron and a multilayer convolutional neural network (ML-CNN), are implemented. The multilayer perceptron provides binary classification, i.e., metal or non-metal waste, and the CNN identifies the class of non-metal waste. A camera placed in front of the waste conveyor belt takes a picture of the waste, which is then classified. Upon successful classification, an automatic hand hammer pushes the waste into the assigned labeled bucket. Experiments were carried out in a real-time environment with image segmentation. The training, testing, and validation accuracy of the proposed model was 0.99 under different training batches with different input features.
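The hybrid decision flow can be sketched as a short two-stage routine: the MLP first decides metal versus non-metal, and only non-metal items are passed to the CNN. The model handles, the threshold, and the class list below are assumptions for illustration.

```python
# Hedged sketch of the two-stage MLP -> CNN waste classification decision.
import torch

NON_METAL_CLASSES = ["paper", "plastic", "glass", "organic"]  # assumed label set

@torch.no_grad()
def classify_waste(image_feats, image_tensor, mlp, cnn):
    """image_feats: (1, F) flat features for the MLP; image_tensor: (1, 3, H, W)."""
    # Stage 1: binary metal / non-metal decision.
    is_metal = torch.sigmoid(mlp(image_feats)).item() > 0.5
    if is_metal:
        return "metal"
    # Stage 2: fine-grained class for non-metal waste.
    class_idx = cnn(image_tensor).argmax(1).item()
    return NON_METAL_CLASSES[class_idx]
```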


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 816
Author(s):  
Pingping Liu ◽  
Xiaokang Yang ◽  
Baixin Jin ◽  
Qiuzhan Zhou

Diabetic retinopathy (DR) is a common complication of diabetes mellitus (DM), and it is necessary to diagnose DR in the early stages of treatment. With the rapid development of convolutional neural networks in the field of image processing, deep learning methods have achieved great success in medical image processing, and various medical lesion detection systems have been proposed to detect fundus lesions. At present, image classification of diabetic retinopathy ignores the fine-grained properties of diseased images, and most retinopathy image datasets suffer from seriously uneven class distributions, which greatly limits the network's ability to classify lesions. We propose a new non-homologous bilinear pooling convolutional neural network model combined with an attention mechanism to further improve the network's ability to extract image-specific features. The experimental results show that, compared with the most popular fundus image classification models, our network model greatly improves prediction accuracy while maintaining computational efficiency.
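Bilinear pooling combines feature maps from two networks by an outer product pooled over spatial positions; "non-homologous" means the two maps come from different backbones. The sketch below shows this mechanism with a simple spatial attention reweighting; the channel sizes and attention form are assumptions, not the paper's exact design.

```python
# Hedged sketch of non-homologous bilinear pooling with spatial attention.
import torch
import torch.nn as nn

class BilinearPooling(nn.Module):
    def __init__(self, ch_a: int, ch_b: int, n_classes: int):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(ch_a, 1, 1), nn.Sigmoid())  # spatial attention
        self.fc = nn.Linear(ch_a * ch_b, n_classes)

    def forward(self, feat_a, feat_b):
        # feat_a: (B, Ca, H, W) from backbone A; feat_b: (B, Cb, H, W) from backbone B.
        feat_a = feat_a * self.attn(feat_a)              # reweight salient lesion regions
        B, Ca, H, W = feat_a.shape
        Cb = feat_b.shape[1]
        a = feat_a.reshape(B, Ca, H * W)
        b = feat_b.reshape(B, Cb, H * W)
        x = torch.bmm(a, b.transpose(1, 2)) / (H * W)    # (B, Ca, Cb) bilinear features
        x = x.reshape(B, Ca * Cb)
        # Signed square-root and L2 normalization, standard for bilinear features.
        x = torch.sign(x) * torch.sqrt(torch.abs(x) + 1e-10)
        x = nn.functional.normalize(x)
        return self.fc(x)
```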


2021 ◽  
Vol 11 (9) ◽  
pp. 4292
Author(s):  
Mónica Y. Moreno-Revelo ◽  
Lorena Guachi-Guachi ◽  
Juan Bernardo Gómez-Mendoza ◽  
Javier Revelo-Fuelagán ◽  
Diego H. Peluffo-Ordóñez

Automatic crop identification and monitoring is a key element in enhancing food production processes as well as diminishing the related environmental impact. Although several efficient deep learning techniques have emerged in the field of multispectral imagery analysis, the crop classification problem still needs more accurate solutions. This work introduces a competitive methodology for crop classification from multispectral satellite imagery, mainly using an enhanced 2D convolutional neural network (2D-CNN) with a smaller-scale architecture, as well as a novel post-processing step. The proposed methodology comprises four steps: image stacking, patch extraction, classification model design (based on a 2D-CNN architecture), and post-processing. First, the images are stacked to increase the number of features. Second, the input images are split into patches and fed into the 2D-CNN model. Then, the 2D-CNN model is constructed within a small-scale framework and properly trained to recognize 10 different types of crops. Finally, a post-processing step is performed to reduce the classification error caused by lower-spatial-resolution images. Experiments were carried out on the Campo Verde database, a set of satellite images captured by the Landsat and Sentinel satellites over the municipality of Campo Verde, Brazil. Compared with the maximum accuracy values reached by notable works reported in the literature (an overall accuracy of about 81%, an F1 score of 75.89%, and an average accuracy of 73.35%), the proposed methodology achieves a competitive overall accuracy of 81.20%, an F1 score of 75.89%, and an average accuracy of 88.72% when classifying 10 different crops, while ensuring an adequate trade-off between the number of multiply-accumulate operations (MACs) and accuracy. Furthermore, given its ability to effectively classify patches from two image sequences, this methodology may prove appealing for other real-world applications, such as the classification of urban materials.
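The first two steps (stacking and patch extraction) can be sketched as follows. The array shapes, patch size, and stride are illustrative assumptions; the paper's exact values may differ.

```python
# Hedged sketch of image stacking and patch extraction for the 2D-CNN.
import numpy as np

def stack_images(image_list):
    """Stack co-registered multispectral acquisitions along the band axis."""
    # Each image: (H, W, bands); result: (H, W, total_bands).
    return np.concatenate(image_list, axis=-1)

def extract_patches(stacked, patch=5, stride=1):
    """Split the stacked image into small overlapping patches for the 2D-CNN."""
    H, W, _ = stacked.shape
    patches, centers = [], []
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            patches.append(stacked[y:y + patch, x:x + patch])
            centers.append((y + patch // 2, x + patch // 2))  # pixel being labeled
    return np.stack(patches), centers
```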

