DeepHBSP: A Deep Learning Framework for Predicting Human Blood-Secretory Proteins Using Transfer Learning

2021, Vol 36 (2), pp. 234-247
Author(s): Wei Du, Yu Sun, Hui-Min Bao, Liang Chen, Ying Li, ...
2021, Vol 4
Author(s): Ruqian Hao, Khashayar Namdar, Lin Liu, Farzad Khalvati

Brain tumors are among the leading causes of cancer-related death globally in both children and adults. Precise classification of brain tumor grade (low-grade versus high-grade glioma) at an early stage plays a key role in successful prognosis and treatment planning. With recent advances in deep learning, artificial intelligence-enabled brain tumor grading systems can assist radiologists in the interpretation of medical images within seconds. The performance of deep learning techniques is, however, highly dependent on the size of the annotated dataset, and it is extremely challenging to label a large quantity of medical images given the complexity and volume of medical data. In this work, we propose a novel transfer learning-based active learning framework to reduce the annotation cost while maintaining the stability and robustness of the model performance for brain tumor classification. In this retrospective study, we employed a 2D slice-based approach to train and fine-tune our model on a magnetic resonance imaging (MRI) training dataset of 203 patients and a validation dataset of 66 patients, which served as the baseline. With our proposed method, the model achieved an area under the receiver operating characteristic (ROC) curve (AUC) of 82.89% on a separate test dataset of 66 patients, which was 2.92% higher than the baseline AUC while saving at least 40% of the labeling cost. To further examine the robustness of our method, we created a balanced dataset, which underwent the same procedure. The model achieved an AUC of 82% compared with an AUC of 78.48% for the baseline, which confirms the robustness and stability of our proposed transfer learning framework augmented with active learning while significantly reducing the size of the training data.
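The core of an active learning loop like the one described is the query strategy that decides which unlabeled slices are worth annotating. The abstract does not specify the strategy used, so the following is a minimal sketch of one common choice, least-confidence (uncertainty) sampling for a binary classifier; the function name and toy probabilities are illustrative, not taken from the paper:

```python
import numpy as np

def uncertainty_sample(probs, budget):
    """Select the `budget` unlabeled slices whose predicted
    glioma-grade probabilities are closest to 0.5, i.e. the ones
    the current model is least confident about."""
    probs = np.asarray(probs, dtype=float)
    # uncertainty is 1.0 at p = 0.5 and 0.0 at p in {0, 1}
    uncertainty = 1.0 - 2.0 * np.abs(probs - 0.5)
    # indices of the most uncertain samples, highest first
    return np.argsort(-uncertainty)[:budget]

# toy predicted probabilities for 6 unlabeled MRI slices
probs = [0.95, 0.52, 0.10, 0.49, 0.80, 0.33]
picked = uncertainty_sample(probs, budget=2)  # indices 3 and 1
```

Only the selected slices are sent for expert annotation before the next fine-tuning round, which is how such a framework can cut labeling cost while keeping performance close to (or above) the fully labeled baseline.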


2019, Vol 275, pp. 310-328
Author(s): Joey Tianyi Zhou, Sinno Jialin Pan, Ivor W. Tsang

Author(s): Thanasekhar Balaiah, Timothy Jones Thomas Jeyadoss, Sri Sainee Thirumurugan, Rahul Chander Ravi

2021, Vol 14 (3), pp. 1231-1247
Author(s): Lokesh Singh, Rekh Ram Janghel, Satya Prakash Sahu

Purpose: Low contrast between lesions and skin, blurriness, darkened lesion images, and the presence of bubbles and hairs are artifacts that make timely and accurate diagnosis of melanoma challenging. In addition, the strong similarity between nevus lesions and melanoma complicates the investigation of melanoma even for expert dermatologists. Method: In this work, a computer-aided diagnosis system for melanoma detection (CAD-MD) is designed and evaluated for the early and accurate detection of melanoma, using the potential of machine learning and deep learning-based transfer learning for the classification of pigmented skin lesions. The designed CAD-MD system comprises preprocessing, segmentation, feature extraction, and classification. Experiments are conducted on dermoscopic images from the publicly available PH2 and ISIC 2016 datasets using machine learning and deep learning-based transfer learning models in two settings: first with the original images, and second with augmented images. Results: Optimal results are obtained on augmented lesion images using machine learning and deep learning models on the PH2 and ISIC-16 datasets. The performance of the CAD-MD system is evaluated using accuracy, sensitivity, specificity, Dice coefficient, and Jaccard index. Conclusion: Empirical results show that the deep learning-based transfer learning model VGG-16 significantly outperformed all other employed models, with an accuracy of 99.1% on the PH2 dataset.
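The abstract names five evaluation metrics. All of them can be derived from the binary confusion matrix, as the following sketch shows; the toy labels are illustrative and not data from the paper:

```python
import numpy as np

def lesion_metrics(y_true, y_pred):
    """Compute the metrics named in the abstract from binary labels
    (1 = melanoma, 0 = nevus) via the confusion-matrix counts."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))  # true positives
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))  # true negatives
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))  # false positives
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))  # false negatives
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),   # recall on melanoma
        "specificity": tn / (tn + fp),   # recall on nevus
        "dice": 2 * tp / (2 * tp + fp + fn),
        "jaccard": tp / (tp + fp + fn),
    }

# toy example: 8 lesions, 2 misclassified
m = lesion_metrics([1, 1, 1, 0, 0, 0, 1, 0],
                   [1, 1, 0, 0, 0, 1, 1, 0])
```

Note that the Dice coefficient and Jaccard index are monotonically related (Dice = 2J / (1 + J)), so they rank models identically; reporting both is conventional in lesion segmentation work.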


2020, Vol 52 (1), pp. 93-102
Author(s): P. Khuwaja, S.A. Khowaja, B.R. Memon, M.A. Memon, G. Laghari, ...

2020
Author(s): Travis S. Johnson, Christina Y. Yu, Zhi Huang, Siwen Xu, Tongxin Wang, ...

Abstract: With the rapid advance of single-cell sequencing techniques, single-cell molecular data are accumulating quickly. However, a sound approach to properly integrate single-cell data with the existing large body of patient-level disease data has been lacking. To address this need, we propose DEGAS (Diagnostic Evidence GAuge of Single cells), a novel deep transfer-learning framework that allows cellular and clinical information, including cell types, disease risk, and patient subtypes, to be cross-mapped between single-cell and patient data, provided they share at least one common type of molecular data. We call such transferable information "impressions", which are generated by the deep learning models learned in the DEGAS framework. Using eight datasets from a wide range of diseases, including Glioblastoma Multiforme (GBM), Alzheimer's Disease (AD), and Multiple Myeloma (MM), we demonstrate the feasibility and broad applicability of DEGAS in cross-mapping clinical and cellular information across disparate single-cell and patient-level transcriptomic datasets. Specifically, we correctly mapped clinically known GBM patient subtypes onto single-cell data. We also identified previously known neuron loss in AD brains and then mapped the "impression" of AD risk onto single-cell data. Furthermore, we discovered novel differences in excitatory and inhibitory neuron loss in the AD data. From the exploratory MM data, we identified differences in the malignancy of different CD138+ cellular subtypes based on "impressions" of relapse information transferred from MM patients. Through this work, we demonstrate that DEGAS is a powerful framework for cross-inferring cellular and patient-level characteristics: it not only unites single-cell and patient-level transcriptomic data by identifying their latent links using a deep learning approach, but can also prioritize both patient subtypes and cellular subtypes for precision medicine.
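The key requirement DEGAS states is that patient-level and single-cell data share at least one common molecular data type (e.g. transcriptomics), so that a model trained on patient outcomes can score individual cells in the same feature space. DEGAS itself uses deep models; the sketch below substitutes a trivial least-squares fit purely to illustrate the transfer idea, with hypothetical variable names and toy data not drawn from the paper:

```python
import numpy as np

def transfer_impressions(patient_X, patient_risk, cell_X):
    """Toy 'impression' transfer: learn a linear risk direction from
    patient-level expression (rows = patients, cols = shared genes),
    then score single cells embedded in the same gene space.
    A hypothetical stand-in for DEGAS's deep transfer-learning model."""
    # least-squares fit of clinical risk against patient expression
    w, *_ = np.linalg.lstsq(patient_X, patient_risk, rcond=None)
    # project each cell onto the learned risk direction
    return cell_X @ w

# toy data: risk is driven by the first shared gene
patient_X = np.array([[1.0, 0.0], [2.0, 0.0], [0.0, 1.0], [0.0, 2.0]])
patient_risk = np.array([1.0, 2.0, 0.0, 0.0])
cell_X = np.array([[3.0, 0.0], [0.0, 3.0]])
impressions = transfer_impressions(patient_X, patient_risk, cell_X)
```

Cells expressing the risk-associated gene receive a higher impression score, which is the sense in which patient-level labels (risk, subtype, relapse) are "cross-mapped" onto individual cells.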

