Deep Learning: a Promising Method for Histological Class Prediction of Breast Tumors in Mammography

Author(s):  
Raluca-Elena Nica ◽  
Mircea-Sebastian Șerbănescu ◽  
Lucian-Mihai Florescu ◽  
Georgiana-Cristiana Camen ◽  
Costin Teodor Streba ◽  
...  
2021 ◽  
Vol 56 (5) ◽  
pp. 241-252
Author(s):  
Shereen A. El-Aal ◽  
Neveen I. Ghali

Alzheimer's disease (AD) is an advanced and incurable neurodegenerative disease that causes progressive impairment of memory and cognitive function due to the deterioration of brain cells. Early diagnosis is essential to avoid permanent memory loss and to develop treatments. Deep learning (DL) is a vital technique for medical imaging systems used in AD diagnostics, where the problem is a multi-class classification task seeking high accuracy, and DL models have shown strong performance on multi-class prediction. In this paper, a DL architecture is proposed to classify magnetic resonance imaging (MRI) scans and predict different stages of AD, based on pre-trained Convolutional Neural Network (CNN) models and optimization algorithms. The proposed architecture attempts to find the optimal subset of features to improve classification accuracy and reduce classification time. The pre-trained DL models ResNet-101 and DenseNet-201 are used to extract features from their last layer, and the Rival Genetic Algorithm (RGA) and Pbest-Guide Binary Particle Swarm Optimization (PBPSO) are applied to select the optimal features. The full DL features and the selected features are then passed separately through the created classifier. The results are compared and analyzed in terms of accuracy, performance metrics, and execution time. Experimental results showed that the highest accuracies were obtained with the PBPSO-selected features, reaching 87.3% and 94.8% with shorter execution times of 46.7 s and 32.7 s for features based on ResNet-101 and DenseNet-201, respectively.
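To illustrate the selection stage described above, here is a minimal binary particle swarm feature-selection sketch in NumPy. The synthetic "deep features", the nearest-centroid fitness function, and all hyperparameters are illustrative assumptions, not the paper's PBPSO variant or its actual ResNet/DenseNet features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for extracted deep features: 3 informative
# columns (0, 3, 7) out of 10.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 3] - X[:, 7] > 0).astype(int)

def fitness(mask):
    # Accuracy of a nearest-centroid classifier restricted to the
    # selected feature subset; empty subsets score zero.
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = (np.linalg.norm(Xs - c1, axis=1)
            < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred == y).mean()

def bpso(n_particles=20, n_iter=40, dim=10):
    # Binary PSO: positions are 0/1 masks, velocities are real-valued
    # and squashed through a sigmoid to give bit-flip probabilities.
    pos = (rng.random((n_particles, dim)) > 0.5).astype(int)
    vel = rng.normal(scale=0.1, size=(n_particles, dim))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))        # sigmoid transfer function
        pos = (rng.random((n_particles, dim)) < prob).astype(int)
        fit = np.array([fitness(p) for p in pos])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest, pbest_fit.max()

mask, acc = bpso()
```

The paper's PBPSO additionally guides the update with personal-best information; the plain sigmoid-transfer BPSO above only shows the shared skeleton: binary masks as particles, classifier accuracy as fitness.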


Ultrasonics ◽  
2016 ◽  
Vol 72 ◽  
pp. 150-157 ◽  
Author(s):  
Qi Zhang ◽  
Yang Xiao ◽  
Wei Dai ◽  
Jingfeng Suo ◽  
Congzhi Wang ◽  
...  

2021 ◽  
Vol 10 (5) ◽  
pp. 953
Author(s):  
Peng Guo ◽  
Zhiyun Xue ◽  
Jose Jeronimo ◽  
Julia C. Gage ◽  
Kanan T. Desai ◽  
...  

Uterine cervical cancer is a leading cause of women's mortality worldwide. Cervical tissue ablation is an effective surgical treatment for high-grade lesions determined to be precancerous. Our prior work on the Automated Visual Examination (AVE) method demonstrated a highly effective technique for analyzing digital images of the cervix to identify precancer. The next step is to determine whether a woman with precancer is treatable using ablation. However, not all women are eligible for this therapy due to cervical characteristics. We present a machine learning algorithm that uses a deep learning object detection architecture to determine whether a cervix is eligible for ablative treatment based on visual characteristics presented in the image. The algorithm builds on the well-known RetinaNet architecture to derive a simpler, novel architecture in which the last convolutional layer is constructed by upsampling and concatenating specific pre-trained RetinaNet layers, followed by an output module consisting of a Global Average Pooling (GAP) layer and a fully connected layer. To explain the recommendations of the deep learning algorithm and determine whether they are consistent with lesion presentation on the cervical anatomy, we visualize classification results using two techniques: (i) our Class-selective Relevance Map (CRM), which has been reported earlier, and (ii) the Class Activation Map (CAM). The class prediction heatmaps were evaluated by a gynecologic oncologist with more than 20 years of experience. Based on our observations and the expert's opinion, the customized architecture not only outperforms the baseline RetinaNet network in treatability classification, but also provides insights into the features and regions the network considers significant, helping explain the reasons for a treatment recommendation.
Furthermore, by investigating the heatmaps on Gaussian-blurred images that serve as surrogates for out-of-focus cervical pictures, we demonstrate the effect of image-quality degradation on cervical treatability classification and underscore the need for images with good visual quality.
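The GAP + fully connected output module and the CAM computation can be sketched in NumPy. The feature-map shape, channel count, and random weights below are stand-ins for the actual RetinaNet-derived layers, which are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the last convolutional output: (channels, H, W).
feats = rng.random((8, 7, 7))

# Output module: Global Average Pooling followed by a fully connected
# layer with one row of weights per class (treatable / not treatable).
W = rng.normal(size=(2, 8))
b = np.zeros(2)

gap = feats.mean(axis=(1, 2))   # GAP  -> (8,)
logits = W @ gap + b            # FC   -> (2,)
pred = int(logits.argmax())

# Class Activation Map for the predicted class: weight each feature map
# by that class's FC weights and sum over channels.
cam_raw = np.tensordot(W[pred], feats, axes=1)               # (7, 7)
cam = (cam_raw - cam_raw.min()) / (cam_raw.max() - cam_raw.min() + 1e-8)
```

With this head, the spatial mean of the raw CAM equals the predicted class's logit minus its bias, which is why CAM is an exact spatial decomposition of the score for GAP-based classifiers.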


2020 ◽  
Vol 22 (1) ◽  
Author(s):  
Mustafa I. Jaber ◽  
Bing Song ◽  
Clive Taylor ◽  
Charles J. Vaske ◽  
Stephen C. Benz ◽  
...  

2022 ◽  
Author(s):  
Melek Tassoker ◽  
Muhammet Usame Ozic ◽  
Fatma Yuce

Abstract
Objective: The aim of the present study was to predict osteoporosis on panoramic radiographs of women over 50 years of age using deep learning algorithms.
Method: Panoramic radiographs of 744 female patients over 50 years of age were labeled as C1, C2, or C3 according to the mandibular cortical index (MCI): C1, a smooth and sharp mandibular cortex (normal); C2, resorption cavities at the endosteal margin with 1- to 3-layer stratification (osteopenia); C3, a completely porotic cortex (osteoporosis). The data were examined as two-class and three-class prediction problems in the categories C1-C2-C3, C1-C2, C1-C3, and C1-(C2+C3). Twenty percent of the data was set aside as a random test set; the remaining data were used for training and validation with 5-fold cross-validation. The AlexNet, GoogleNet, ResNet-50, SqueezeNet, and ShuffleNet deep learning models were trained via transfer learning. The results were evaluated by performance criteria including accuracy, sensitivity, specificity, F1-score, AUC, and training duration.
Findings: The C1-C2-C3 dataset reached an accuracy of 81.14% with AlexNet; C1-C2 reached 88.94% with GoogleNet; C1-C3 reached 98.56% with AlexNet; and C1-(C2+C3) reached 92.79% with GoogleNet.
Conclusion: The highest accuracy was obtained in differentiating C3 from C1, where osseous structure characteristics change significantly. Because the C2 score represents the intermediate stage (osteopenia), the structural characteristics of the bone behave closer to both the C1 and C3 scores; datasets including the C2 score therefore yielded relatively lower accuracy.
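The splitting protocol described (a 20% random test set, then 5-fold cross-validation on the remainder) can be sketched as plain index bookkeeping; the random seed is an arbitrary assumption and nothing here is tied to a particular deep learning framework.

```python
import numpy as np

rng = np.random.default_rng(42)

n = 744                          # labeled panoramic radiographs
idx = rng.permutation(n)

# Hold out 20% as a random test set, untouched during model selection.
n_test = int(0.2 * n)
test_idx, rest = idx[:n_test], idx[n_test:]

# 5-fold cross-validation on the remaining images: each fold serves
# once as validation while the other four folds are used for training.
folds = np.array_split(rest, 5)
splits = [
    (np.concatenate([folds[j] for j in range(5) if j != k]), folds[k])
    for k in range(5)
]
```

Each of the five (train, validation) pairs partitions the non-test images exactly, and the test set never leaks into any fold.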


2020 ◽  
Author(s):  
Chen Liu ◽  
Nanyan Zhu ◽  
Dipika Sikka ◽  
Xinyang Feng ◽  
Haoran Sun ◽  
...  

Abstract
While MRI contrast agents such as those based on gadolinium are needed to enhance the detection of structural and functional brain lesions, there are rising concerns over their safety. Here, we hypothesize that a deep learning model, trained using quantitative steady-state contrast-enhanced MRI datasets in mice and humans, could generate contrast-equivalent information from a single non-contrast MRI scan. The model was first trained, optimized, and validated in mice. It was then transferred and adapted to human data, and we find that it can substitute for gadolinium-based contrast agents in detecting functional lesions caused by aging, schizophrenia, or Alzheimer's disease, and in enhancing structural lesions caused by brain or breast tumors. Since it is derived from a commonly acquired MRI, this framework has the potential for broad clinical utility and can be applied retrospectively to research scans across a host of diseases.


2019 ◽  
Author(s):  
Anupama Jha ◽  
Joseph K. Aicher ◽  
Deependra Singh ◽  
Yoseph Barash

Abstract
Despite the success and rapid adoption of deep learning models across a wide range of fields, lack of interpretability remains an issue, especially in biomedical domains. A recent promising method to address this limitation is Integrated Gradients (IG), which identifies features associated with a prediction by traversing a linear path from a baseline to a sample. We extend IG with nonlinear paths, embedding in latent space, alternative baselines, and a framework to identify important features, making it suitable for interpreting deep models in genomics.
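The linear-path IG that this work extends can be sketched in a few lines of NumPy. The toy sigmoid model, its closed-form gradient, and the midpoint Riemann approximation of the path integral are all illustrative assumptions, not the authors' genomics models or nonlinear-path extensions.

```python
import numpy as np

def integrated_gradients(f, grad_f, x, baseline, steps=100):
    # IG_i = (x_i - x'_i) * integral_0^1 df/dx_i(x' + a*(x - x')) da,
    # approximated here by a midpoint Riemann sum along the linear path.
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Toy differentiable model: f(x) = sigmoid(w . x), gradient in closed form.
w = np.array([1.0, -2.0, 0.5])
sig = lambda z: 1.0 / (1.0 + np.exp(-z))
f = lambda x: sig(w @ x)
grad_f = lambda x: sig(w @ x) * (1.0 - sig(w @ x)) * w

x = np.array([1.0, 1.0, 1.0])
baseline = np.zeros(3)
ig = integrated_gradients(f, grad_f, x, baseline)
```

A useful sanity check is IG's completeness axiom: the attributions sum to f(x) minus f(baseline), which holds here up to the discretization error of the Riemann sum.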

