NIMG-40. MRI-BASED ESTIMATION OF THE ABUNDANCE OF IMMUNOHISTOCHEMISTRY MARKERS IN GBM BRAIN USING DEEP LEARNING

2021 ◽  
Vol 23 (Supplement_6) ◽  
pp. vi137-vi138
Author(s):  
Sara Ranjbar ◽  
Kyle Singleton ◽  
Deborah Boyett ◽  
Michael Argenziano ◽  
Jack Grinband ◽  
...  

Abstract Glioblastoma (GBM) is a devastating primary brain tumor known for its heterogeneity, with a median survival of 15 months. Clinical imaging remains the primary modality for assessing brain tumors, but it is nearly impossible to distinguish tumor growth from treatment response on imaging alone. Ki67 is a marker of active cell proliferation that shows inter- and intra-patient heterogeneity and should change under many therapies. In this work, we assessed the utility of a semi-supervised deep learning approach for regionally predicting high-vs-low Ki67 in GBM patients based on MRI. We used both labeled and unlabeled datasets to train the model. Labeled data included 114 MRI-localized biopsies from 43 unique GBM patients with available immunohistochemistry Ki67 labels. Unlabeled data included nine repeat routine pretreatment paired scans of newly diagnosed GBM patients acquired within three days. Data augmentation techniques were utilized to enlarge our dataset and increase generalizability. Data were split into training, validation, and testing sets in a 65-15-20 percent ratio. Model inputs were 16x16x3 patches around biopsies on T1Gd and T2 MRIs for labeled data, and around randomly selected patches inside the T2 abnormal region for unlabeled data. The network was a four-convolutional-layer VGG-inspired architecture. The training objective was accurate prediction of Ki67 in labeled patches and consistency of predictions across repeat unlabeled patches. We measured final model accuracy on held-out test samples. Our promising preliminary results suggest potential for deep learning in deconvolving the spatial heterogeneity of proliferative GBM subpopulations. If successful, this model can provide a non-invasive readout of cell proliferation and dynamically reveal the effectiveness of a given cytotoxic therapy during the patient's routine follow-up.
Further, the spatial resolution of our approach provides insights into the intra-tumoral heterogeneity of response which can be related to heterogeneity in localization of therapies (e.g. radiation therapy, drug dose heterogeneity).
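The training objective described in the abstract (supervised Ki67 prediction on labeled patches plus consistency across repeat unlabeled scans) can be sketched as a two-term loss. A minimal NumPy sketch, assuming sigmoid outputs in (0, 1) and an unspecified weighting `lam` between the two terms; the exact formulation is not given in the abstract:

```python
import numpy as np

def combined_loss(p_labeled, y_labeled, p_scan1, p_scan2, lam=1.0):
    """Semi-supervised objective: supervised binary cross-entropy on
    biopsy-labeled patches plus a consistency penalty that pushes the
    model toward identical predictions on paired repeat-scan patches."""
    eps = 1e-7
    p = np.clip(p_labeled, eps, 1 - eps)
    # Supervised term: high-vs-low Ki67 as binary cross-entropy.
    bce = -np.mean(y_labeled * np.log(p) + (1 - y_labeled) * np.log(1 - p))
    # Consistency term: paired pretreatment scans (acquired days apart)
    # should yield the same prediction for the same region.
    consistency = np.mean((p_scan1 - p_scan2) ** 2)
    return bce + lam * consistency
```

Under this sketch, disagreement between the paired repeat-scan predictions raises the loss even without any label, which is what lets the unlabeled scans contribute to training.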

2021 ◽  
Author(s):  
Loay Hassan ◽  
Mohamed Abdel-Nasser ◽  
Adel Saleh ◽  
Domenec Puig

Digital breast tomosynthesis (DBT) is one of the most powerful breast cancer screening technologies. DBT can improve radiologists' ability to detect breast cancer, especially in dense breasts, where it outperforms mammography. Although many automated methods have been proposed to detect breast lesions in mammographic images, very few have been proposed for DBT because there are not enough annotated DBT images to train object detectors. In this paper, we present fully automated deep-learning breast lesion detection methods. Specifically, we study the effectiveness of two data augmentation techniques (channel replication and channel concatenation) with five state-of-the-art deep learning detection models. Our preliminary results on a challenging publicly available DBT dataset showed that the channel-concatenation data augmentation technique can significantly improve breast lesion detection results for deep learning-based breast lesion detectors.
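One plausible reading of the two augmentation techniques, sketched in NumPy: channel replication copies a single grayscale DBT slice into the three input channels a pretrained detector expects, while channel concatenation stacks a slice with its neighbors to give the detector some depth context. The abstract does not specify the exact construction, so treat this as illustrative:

```python
import numpy as np

def channel_replication(slice_2d):
    """Repeat one grayscale DBT slice across three channels -> (H, W, 3)."""
    return np.repeat(slice_2d[..., np.newaxis], 3, axis=-1)

def channel_concatenation(volume, i):
    """Stack the slice at index i with its immediate neighbors into a
    3-channel image (H, W, 3); edge slices clamp to themselves."""
    lo = max(i - 1, 0)
    hi = min(i + 1, volume.shape[0] - 1)
    return np.stack([volume[lo], volume[i], volume[hi]], axis=-1)
```

Both produce inputs with the (H, W, 3) shape that standard detection backbones expect, which is why they let off-the-shelf detectors be applied to single-channel DBT data.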


2020 ◽  
Vol 13 (4) ◽  
pp. 389-406
Author(s):  
Jiten Chaudhary ◽  
Rajneesh Rani ◽  
Aman Kamboj

Purpose
Brain tumors are among the most dangerous and life-threatening diseases. To decide the type of tumor, devise a treatment plan, and estimate the patient's overall survival time, accurate segmentation of the tumor region from images is extremely important. Manual segmentation is very time-consuming and prone to errors; therefore, this paper aims to provide a deep learning-based method that automatically segments the tumor region from MR images.

Design/methodology/approach
In this paper, the authors propose a deep neural network for automatic brain tumor (glioma) segmentation. Intensity normalization and data augmentation have been incorporated as pre-processing steps for the images. The proposed model is trained on multichannel magnetic resonance imaging (MRI) images and outputs high-resolution segmentations of brain tumor regions in the input images.

Findings
The proposed model is evaluated on the benchmark BRATS 2013 dataset. To evaluate performance, the authors used the Dice score, sensitivity, and positive predictive value (PPV). The superior performance of the proposed model is validated by training the very popular UNet model under similar conditions. The results indicate that the proposed model obtains promising results and is effective for segmenting glioma regions in MRI at a clinical level.

Practical implications
The model can be used by doctors to identify the exact location of the tumorous region.

Originality/value
The proposed model is an improvement on the UNet model, with fewer layers and a smaller number of parameters. This helps the network train on databases with fewer images and gives superior results. Moreover, the bottleneck features learned by the network are fused with the skip connection path to enrich the feature maps.
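The two pre-processing steps named in the approach (intensity normalization and data augmentation) might look like the following NumPy sketch; the optional brain mask and the left-right flip are illustrative assumptions, not details from the paper:

```python
import numpy as np

def normalize_intensity(img, mask=None):
    """Z-score normalize an MRI slice or volume. Optionally compute the
    statistics only inside a (hypothetical) brain mask, since background
    air voxels would otherwise skew the mean and standard deviation."""
    vox = img[mask] if mask is not None else img
    return (img - vox.mean()) / (vox.std() + 1e-8)

def augment(img, rng):
    """Minimal augmentation example: random left-right flip of a slice."""
    return img[:, ::-1] if rng.random() < 0.5 else img
```

Z-scoring per scan is a common way to make intensities comparable across MRI acquisitions, since raw MR intensities have no absolute scale.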


Author(s):  
Kottilingam Kottursamy

The role of facial expression recognition in social science and human-computer interaction has received a lot of attention. Advances in deep learning have pushed this field beyond human-level accuracy. This article discusses several common deep learning algorithms for emotion recognition, utilising the eXnet library to achieve improved accuracy. Memory and computation costs, however, have yet to be overcome, and overfitting is an issue with large models. One solution to this challenge is to reduce the generalization error. We employ a novel convolutional neural network (CNN) named eXnet to construct a new CNN model using parallel feature extraction. The most recent eXnet (Expression Net) model reduces the previous model's error while using far fewer parameters. Long-established data augmentation techniques are used with the generalized eXnet, which employs effective ways to reduce overfitting while keeping the overall size under control.
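The parallel feature extraction idea can be illustrated with a minimal sketch: several branch functions process the same input and their outputs are concatenated along the channel axis. The branches below are placeholders for illustration, not eXnet's actual layers:

```python
import numpy as np

def smooth(x):
    """Illustrative branch: 3-tap horizontal box filter, same shape out."""
    padded = np.pad(x, ((0, 0), (1, 1), (0, 0)), mode="edge")
    return (padded[:, :-2] + padded[:, 1:-1] + padded[:, 2:]) / 3.0

def parallel_features(x, branches):
    """Parallel feature extraction in the spirit of eXnet: run several
    branch functions on the same (H, W, C) input and concatenate their
    outputs along the channel axis."""
    return np.concatenate([b(x) for b in branches], axis=-1)
```

Concatenating branch outputs lets each branch specialize (e.g., different receptive fields) while keeping the per-branch parameter count small, which is one way such designs trade size against accuracy.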


Electronics ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 73
Author(s):  
Kuldoshbay Avazov ◽  
Mukhriddin Mukhiddinov ◽  
Fazliddin Makhmudov ◽  
Young Im Cho

In the construction of new smart cities, traditional fire-detection systems can be replaced with vision-based systems to establish fire safety in society using emerging technologies such as digital cameras, computer vision, artificial intelligence, and deep learning. In this study, we developed a fire detector that accurately detects even small sparks and sounds an alarm within 8 s of a fire outbreak. A novel convolutional neural network was developed to detect fire regions using an enhanced You Only Look Once (YOLO) v4 network. Based on the improved YOLOv4 algorithm, we adapted the network to operate on the Banana Pi M3 board using only three layers. Initially, we examined the original YOLOv4 approach to determine the accuracy of its candidate fire-region predictions. However, the anticipated results were not observed after several experiments with this approach. We therefore improved the traditional YOLOv4 network by enlarging the training dataset with data augmentation techniques for the real-time monitoring of fire disasters. By modifying the network structure through automatic color augmentation, reducing parameters, etc., the proposed method successfully detected and reported disastrous fires with high speed and accuracy in different weather environments, whether sunny or cloudy, day or night. Experimental results revealed that the proposed method can be used successfully to protect smart cities and monitor fires in urban areas. Finally, we compared the performance of our method with that of recently reported fire-detection approaches, using widely adopted performance metrics to test the fire classification results.
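The automatic color augmentation mentioned in the abstract might resemble the following sketch, which jitters brightness and saturation on float RGB images in [0, 1]; the parameter names and jitter ranges are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def color_augment(img_rgb, rng, brightness=0.2, saturation=0.2):
    """Illustrative color augmentation for fire images: random brightness
    scaling plus a random saturation shift toward or away from the
    per-pixel gray value. img_rgb is float RGB in [0, 1]."""
    b = 1.0 + rng.uniform(-brightness, brightness)
    out = img_rgb * b
    # Interpolate between the grayscale image (s=0) and the original (s=1).
    gray = out.mean(axis=-1, keepdims=True)
    s = 1.0 + rng.uniform(-saturation, saturation)
    out = gray + (out - gray) * s
    return np.clip(out, 0.0, 1.0)
```

Color jitter of this kind helps a fire detector generalize across the lighting differences the abstract mentions (sunny vs. cloudy, day vs. night).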


2021 ◽  
Vol 11 (9) ◽  
pp. 842
Author(s):  
Shruti Atul Mali ◽  
Abdalla Ibrahim ◽  
Henry C. Woodruff ◽  
Vincent Andrearczyk ◽  
Henning Müller ◽  
...  

Radiomics converts medical images into mineable data via a high-throughput extraction of quantitative features used for clinical decision support. However, these radiomic features are susceptible to variation across scanners, acquisition protocols, and reconstruction settings. Various investigations have assessed the reproducibility and validation of radiomic features across these discrepancies. In this narrative review, we combine systematic keyword searches with prior domain knowledge to discuss harmonization solutions that make radiomic features more reproducible across scanners and protocol settings. The harmonization solutions discussed are divided into two main categories: image domain and feature domain. The image domain category comprises methods such as the standardization of image acquisition, post-processing of raw sensor-level image data, data augmentation techniques, and style transfer. The feature domain category consists of methods such as the identification of reproducible features and normalization techniques such as statistical normalization, intensity harmonization, ComBat and its derivatives, and normalization using deep learning. We also reflect upon the importance of deep learning solutions for addressing variability across multi-centric radiomic studies, especially those using generative adversarial networks (GANs), neural style transfer (NST) techniques, or a combination of both. We cover a broader range of methods than previous reviews, treating GANs and NST methods in particular detail.
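As a concrete instance of the statistical normalization methods in the feature-domain category, the sketch below z-scores each radiomic feature within its scanner batch. This is a deliberately simplified stand-in for ComBat, which additionally models additive and multiplicative batch effects with empirical Bayes pooling across features:

```python
import numpy as np

def zscore_harmonize(features, batch_ids):
    """Per-batch z-scoring of a radiomic feature matrix.

    features  : (n_samples, n_features) array of extracted features.
    batch_ids : (n_samples,) array identifying each sample's scanner
                or protocol batch.
    Removes batch-level shifts in mean and scale from each feature."""
    out = np.empty_like(features, dtype=float)
    for b in np.unique(batch_ids):
        idx = batch_ids == b
        sub = features[idx]
        out[idx] = (sub - sub.mean(axis=0)) / (sub.std(axis=0) + 1e-8)
    return out
```

After this transform, every feature has zero mean and unit variance within each batch, so downstream models no longer see scanner identity encoded in the feature scale, though genuine biological differences between cohorts scanned on different machines are flattened as well, which is the trade-off ComBat's more careful modeling tries to avoid.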

