Combined Transfer Learning and Test-Time Augmentation Improves Convolutional Neural Network-Based Semantic Segmentation of Prostate Cancer from Multi-Parametric MR Images

Author(s):  
David Hoar ◽  
Peter Q. Lee ◽  
Alessandro Guida ◽  
Steven Patterson ◽  
Chris V. Bowen ◽  
...  
2021 ◽  
Vol 8 (1) ◽  
pp. 29

Author(s):  
Sandra Pozzer ◽  
Marcos Paulo Vieira de Souza ◽  
Bata Hena ◽  
Reza Khoshkbary Rezayiye ◽  
Setayesh Hesam ◽  
...  

This study investigates the semantic segmentation of common concrete defects using different imaging modalities. A pre-trained Convolutional Neural Network (CNN) model was fine-tuned via transfer learning and tested for detecting concrete defect indications such as cracks, spalling, and internal voids. The model's performance was compared across datasets of visible, thermal, and fused images. The data were collected from four different concrete structures using four infrared cameras with different sensitivities and resolutions, with imaging campaigns conducted during the autumn, summer, and winter periods. Although specific defects can be detected in monomodal images, the results demonstrate that a larger number of defect classes can be accurately detected using multimodal fused images with the same viewpoint and resolution as the single-sensor images.
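As a rough illustration of the setup described above (not the authors' code), the following sketch fine-tunes a torchvision FCN-ResNet50, pre-trained on natural images, for defect segmentation on 4-channel fused inputs (RGB plus a registered thermal channel). The class list, channel layout, and hyperparameters are assumptions made for illustration.

    # Sketch: transfer learning for multimodal (visible + thermal) defect segmentation.
    import torch
    import torch.nn as nn
    from torchvision.models.segmentation import fcn_resnet50, FCN_ResNet50_Weights

    NUM_CLASSES = 4  # assumed labels: background, crack, spalling, internal void

    model = fcn_resnet50(weights=FCN_ResNet50_Weights.DEFAULT)

    # Accept a 4-channel fused image: keep the pretrained RGB filters and
    # initialize the extra thermal channel with their mean.
    old_conv = model.backbone.conv1
    new_conv = nn.Conv2d(4, old_conv.out_channels, kernel_size=7, stride=2, padding=3, bias=False)
    with torch.no_grad():
        new_conv.weight[:, :3] = old_conv.weight
        new_conv.weight[:, 3:] = old_conv.weight.mean(dim=1, keepdim=True)
    model.backbone.conv1 = new_conv

    # Re-head the main and auxiliary classifiers for the defect classes.
    model.classifier[4] = nn.Conv2d(512, NUM_CLASSES, kernel_size=1)
    model.aux_classifier[4] = nn.Conv2d(256, NUM_CLASSES, kernel_size=1)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_step(fused_batch, mask_batch):
        """One fine-tuning step on fused tensors (B, 4, H, W) with index masks (B, H, W)."""
        model.train()
        optimizer.zero_grad()
        logits = model(fused_batch)["out"]
        loss = criterion(logits, mask_batch)
        loss.backward()
        optimizer.step()
        return loss.item()

The same fine-tuned model can then be evaluated separately on visible-only, thermal-only, and fused datasets to reproduce the modality comparison.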


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Sahar Gull ◽  
Shahzad Akbar ◽  
Habib Ullah Khan

A brain tumor is a fatal disease caused by the growth of abnormal cells in brain tissue; early and accurate detection can therefore save a patient's life. This paper proposes a novel framework for the detection of brain tumors from magnetic resonance (MR) images. The framework is based on a fully convolutional neural network (FCNN) and transfer learning techniques. The proposed framework has five stages: preprocessing, skull stripping, CNN-based tumor segmentation, postprocessing, and transfer-learning-based binary classification of brain tumors. In preprocessing, the MR images are filtered to eliminate noise and improve contrast. The proposed CNN architecture is used to segment the tumor, and in postprocessing a global thresholding technique is applied to eliminate small non-tumor regions, which improves the segmentation results. For classification, the GoogLeNet model is employed on three publicly available datasets. The experimental results show that the proposed method achieved average accuracies of 96.50%, 97.50%, and 98% for segmentation and 96.49%, 97.31%, and 98.79% for classification of brain tumors on the BRATS2018, BRATS2019, and BRATS2020 datasets, respectively. The outcomes demonstrate that the proposed framework is effective and efficient, attaining higher performance on the BRATS2020 dataset than on the other two. According to these results, the proposed framework outperforms other recent studies in the literature. In addition, this research can support doctors and clinicians in the automatic diagnosis of brain tumor disease.
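A minimal sketch of the postprocessing and classification stages described above is given below; the threshold value, minimum region size, optimizer settings, and input size are illustrative assumptions rather than the authors' exact choices.

    # Sketch: global-threshold postprocessing of a tumor probability map and
    # GoogLeNet-based binary (tumor / no-tumor) classification via transfer learning.
    import numpy as np
    import torch
    import torch.nn as nn
    from skimage.morphology import remove_small_objects
    from torchvision.models import googlenet, GoogLeNet_Weights

    def postprocess(prob_map: np.ndarray, threshold: float = 0.5, min_size: int = 100) -> np.ndarray:
        """Binarize a CNN probability map and drop small non-tumor regions."""
        mask = prob_map > threshold                      # global threshold
        mask = remove_small_objects(mask, min_size=min_size)
        return mask.astype(np.uint8)

    # GoogLeNet pretrained on ImageNet, re-headed for binary classification.
    clf = googlenet(weights=GoogLeNet_Weights.DEFAULT)
    clf.fc = nn.Linear(clf.fc.in_features, 2)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(clf.parameters(), lr=1e-3, momentum=0.9)

    def classify_step(images: torch.Tensor, labels: torch.Tensor) -> float:
        """One fine-tuning step on (B, 3, 224, 224) MR slices with 0/1 labels."""
        clf.train()
        optimizer.zero_grad()
        loss = criterion(clf(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()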


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Young-Gon Kim ◽  
Sungchul Kim ◽  
Cristina Eunbee Cho ◽  
In Hye Song ◽  
Hee Jin Lee ◽  
...  

Fast and accurate confirmation of metastasis on the frozen tissue section of an intraoperative sentinel lymph node biopsy is an essential tool for critical surgical decisions. However, accurate diagnosis by pathologists is difficult within the time limitations. Training a robust and accurate deep learning model is also difficult owing to the limited number of frozen datasets with high-quality labels. To overcome these issues, we validated the effectiveness of transfer learning from CAMELYON16 to improve the performance of a convolutional neural network (CNN)-based classification model on our frozen dataset (N = 297) from Asan Medical Center (AMC). Among the 297 whole slide images (WSIs), 157 and 40 WSIs were used to train deep learning models with different dataset ratios of 2, 4, 8, 20, 40, and 100%. The remaining 100 WSIs were used to validate model performance in terms of patch- and slide-level classification. An additional 228 WSIs from Seoul National University Bundang Hospital (SNUBH) were used as an external validation set. Three initial weights, i.e., scratch-based (random initialization), ImageNet-based, and CAMELYON16-based models, were used to validate their effectiveness in the external validation. In the patch-level classification results on the AMC dataset, CAMELYON16-based models trained with a small dataset (up to 40%, i.e., 62 WSIs) showed a significantly higher area under the curve (AUC) of 0.929 than the scratch- and ImageNet-based models at 0.897 and 0.919, respectively, while CAMELYON16-based and ImageNet-based models trained with 100% of the training dataset showed comparable AUCs of 0.944 and 0.943, respectively. For the external validation, CAMELYON16-based models showed higher AUCs than the scratch- and ImageNet-based models. The feasibility of transfer learning to enhance model performance was thus validated for frozen-section datasets with limited numbers of samples.
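The weight-initialization comparison described above could be sketched as follows; the ResNet-50 backbone, checkpoint filename, and slide-subsampling scheme are assumptions for illustration and are not taken from the paper.

    # Sketch: build a patch classifier from scratch, ImageNet, or CAMELYON16 weights,
    # then fine-tune on a fraction of the frozen-section training slides.
    import random
    import torch
    import torch.nn as nn
    from torchvision.models import resnet50, ResNet50_Weights

    def build_model(init: str = "camelyon16") -> nn.Module:
        if init == "scratch":
            model = resnet50(weights=None)                      # random initialization
        elif init == "imagenet":
            model = resnet50(weights=ResNet50_Weights.DEFAULT)  # ImageNet weights
        elif init == "camelyon16":
            model = resnet50(weights=None)
            # Hypothetical checkpoint trained beforehand on CAMELYON16 patches.
            state = torch.load("camelyon16_pretrained.pth", map_location="cpu")
            state = {k: v for k, v in state.items() if not k.startswith("fc.")}
            model.load_state_dict(state, strict=False)
        else:
            raise ValueError(f"unknown init: {init}")
        model.fc = nn.Linear(model.fc.in_features, 2)           # tumor vs. normal patch
        return model

    def subsample_slides(train_slides: list, ratio: float, seed: int = 0) -> list:
        """Keep only a fraction (e.g., 0.02, 0.04, ..., 1.0) of the training WSIs."""
        rng = random.Random(seed)
        k = max(1, int(round(len(train_slides) * ratio)))
        return rng.sample(train_slides, k)

Training each initialization on the subsampled slides and comparing patch-level AUCs on a held-out set mirrors the study's comparison across dataset ratios.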

