A deep learning approach to predict inter-omics interactions in multi-layer networks

Author(s):  
Niloofar Borhani ◽  
Jafar Ghaisari ◽  
Maryam Abedi ◽  
Marzieh Kamali ◽  
Yousof Gheisari

Abstract Despite enormous achievements in the production of high-throughput datasets, constructing comprehensive maps of interactions remains a major challenge. The lack of sufficient experimental evidence on interactions is more significant for heterogeneous molecular types. Hence, developing strategies to predict inter-omics connections is essential for constructing holistic maps of disease. Here, Data Integration with Deep Learning (DIDL), a novel nonlinear deep learning method, is proposed to predict inter-omics interactions. It consists of an encoder that automatically extracts features for biomolecules according to existing interactions, and a decoder that predicts novel interactions. The applicability of DIDL is assessed on three different networks, namely drug–target protein, transcription factor–DNA element, and miRNA–mRNA, and the validity of novel predictions is assessed by literature surveys. DIDL outperformed state-of-the-art methods: the area under the curve and the area under the precision–recall curve for all three networks were more than 0.85 and 0.83, respectively. DIDL has several advantages, including automatic feature extraction from raw data, end-to-end training, and robustness to sparsity. In addition, its tensor decomposition structure, predictions based solely on existing interactions, and independence from biochemical data make DIDL applicable to a variety of biological networks. DIDL paves the way to understanding the underlying mechanisms of complex disorders through the construction of integrative networks.
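The encoder–decoder idea in this abstract can be sketched with a minimal bilinear decoder, one simple form a tensor-decomposition-style scoring function can take: each biomolecule is an embedding vector, and an interaction score is a bilinear form squashed to (0, 1). All names, dimensions, and values below are illustrative, not from the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bilinear_score(u, v, W):
    """Score an interaction between two biomolecule embeddings u and v
    via the bilinear form u^T W v, squashed to (0, 1) with a sigmoid."""
    s = sum(u[i] * W[i][j] * v[j]
            for i in range(len(u)) for j in range(len(v)))
    return sigmoid(s)

# Toy 2-dimensional embeddings for, say, a drug and a target protein.
drug = [0.9, -0.2]
target = [0.7, 0.4]
W = [[1.0, 0.0],
     [0.0, 1.0]]  # identity weight matrix: score reduces to a dot product

p = bilinear_score(drug, target, W)  # probability-like score in (0, 1)
```

In a trained model, the embeddings and the weight matrix would be learned end-to-end from the observed interaction network; here they are fixed toy values.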

2021 ◽  
Vol 22 (6) ◽  
pp. 2903
Author(s):  
Noam Auslander ◽  
Ayal B. Gussow ◽  
Eugene V. Koonin

The exponential growth of biomedical data in recent years has spurred the application of numerous machine learning techniques to address emerging problems in biology and clinical research. By enabling automatic feature extraction, feature selection, and generation of predictive models, these methods can be used to efficiently study complex biological systems. Machine learning techniques are frequently integrated with bioinformatic methods, as well as curated databases and biological networks, to enhance training and validation, identify the most interpretable features, and enable feature and model investigation. Here, we review recently developed methods that incorporate machine learning within the same framework as techniques from molecular evolution, protein structure analysis, systems biology, and disease genomics. We outline the challenges posed for machine learning, and in particular deep learning, in biomedicine, and suggest unique opportunities for machine learning techniques integrated with established bioinformatics approaches to overcome some of these challenges.


Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1579
Author(s):  
Dongqi Wang ◽  
Qinghua Meng ◽  
Dongming Chen ◽  
Hupo Zhang ◽  
Lisheng Xu

Automatic detection of arrhythmia is of great significance for the early prevention and diagnosis of cardiovascular disease. Traditional feature engineering methods based on expert knowledge lack the ability to abstract and represent data from multiple dimensions and views, so traditional pattern recognition research on arrhythmia detection has not achieved satisfactory results. Recently, with the rise of deep learning, automatic feature extraction from ECG data with deep neural networks has been widely discussed. To exploit the complementary strengths of different schemes, in this paper we propose an arrhythmia detection method based on a multi-resolution representation (MRR) of ECG signals. The method uses four different up-to-date deep neural networks as four channel models for learning ECG vector representations. These deep learning-based representations, together with hand-crafted ECG features, form the MRR, which is the input to the downstream classification strategy. Experimental results on multi-label classification of a large ECG dataset confirm that the F1 score of the proposed method is 0.9238, which is 1.31%, 0.62%, 1.18%, and 0.6% higher than that of each individual channel model. Architecturally, the proposed method is highly scalable and can serve as a template for arrhythmia recognition.
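The MRR construction described above is, at its core, a concatenation of per-channel deep representations with hand-crafted features. A minimal sketch, with toy vectors standing in for the real channel outputs:

```python
def build_mrr(channel_reprs, handcrafted):
    """Concatenate per-channel deep representations with hand-crafted
    features into a single multi-resolution representation (MRR) vector."""
    mrr = []
    for r in channel_reprs:
        mrr.extend(r)
    mrr.extend(handcrafted)
    return mrr

# Toy vectors: four channel-model outputs plus two hand-crafted ECG
# features (e.g. heart rate and RR-interval variability; values invented).
channels = [[0.1, 0.2], [0.3], [0.4, 0.5], [0.6]]
handcrafted = [72.0, 0.12]
mrr = build_mrr(channels, handcrafted)
# mrr has 6 deep-feature entries followed by 2 hand-crafted entries
```

The downstream classifier then consumes `mrr` as a single feature vector, which is what lets the deep and hand-crafted views complement each other.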


Author(s):  
Yongfeng Gao ◽  
Jiaxing Tan ◽  
Zhengrong Liang ◽  
Lihong Li ◽  
Yumei Huo

Abstract Computer-aided detection (CADe) of pulmonary nodules plays an important role in assisting radiologists' diagnosis and alleviating the interpretation burden for lung cancer. Current CADe systems, which aim to simulate radiologists' examination procedure, are built upon computed tomography (CT) images, with feature extraction for detection and diagnosis. Yet the CT image a human views is itself reconstructed from the sinogram, the original raw data acquired from the CT scanner. In this work, unlike conventional image-based CADe systems, we propose a novel sinogram-based CADe system in which the full projection information is used to explore additional effective features of nodules in the sinogram domain. Facing the challenges of limited prior research on this concept and unknown effective features in the sinogram domain, we design a new CADe system that uses the self-learning power of a convolutional neural network to learn and extract effective features from the sinogram. The proposed system was validated on 208 patient cases from the publicly available Lung Image Database Consortium database, each case having at least one juxtapleural nodule annotation. Experimental results demonstrated that our proposed method obtained an area under the receiver operating characteristic curve (AUC) of 0.91 based on the sinogram alone, compared to 0.89 based on the CT image alone. Moreover, a combination of sinogram and CT image further improved the AUC to 0.92. This study indicates that pulmonary nodule detection in the sinogram domain is feasible with deep learning.
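The sinogram domain the abstract works in can be illustrated with a toy parallel-beam projector. A real sinogram sweeps many angles; this two-angle sketch (0° sums rows, 90° sums columns) only shows what kind of data the CNN sees instead of the reconstructed image:

```python
def project(image, angle_deg):
    """Parallel-beam projection of a square image at 0 or 90 degrees:
    0 degrees sums along rows, 90 degrees sums along columns.
    (A toy stand-in for the many-angle projections of a real scanner.)"""
    if angle_deg == 0:
        return [sum(row) for row in image]
    if angle_deg == 90:
        return [sum(row[j] for row in image) for j in range(len(image[0]))]
    raise ValueError("toy projector supports only 0 and 90 degrees")

# A 3x3 'CT slice' with a bright nodule-like pixel in the centre.
slice_ = [[0, 0, 0],
          [0, 5, 0],
          [0, 0, 0]]
sinogram = [project(slice_, a) for a in (0, 90)]
# Both projections localize the bright pixel in their middle bin.
```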


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Isabella Castiglioni ◽  
Davide Ippolito ◽  
Matteo Interlenghi ◽  
Caterina Beatrice Monti ◽  
Christian Salvatore ◽  
...  

Abstract Background We aimed to train and test a deep learning classifier to support the diagnosis of coronavirus disease 2019 (COVID-19) using chest x-ray (CXR) on a cohort of subjects from two hospitals in Lombardy, Italy. Methods For training and validation, we used an ensemble of ten convolutional neural networks (CNNs) with mainly bedside CXRs of 250 COVID-19 and 250 non-COVID-19 subjects from two hospitals (Centres 1 and 2). We then tested the system on bedside CXRs of an independent group of 110 patients (74 COVID-19, 36 non-COVID-19) from one of the two hospitals. A retrospective reading was performed by two radiologists in the absence of any clinical information, with the aim of differentiating COVID-19 from non-COVID-19 patients. Real-time polymerase chain reaction served as the reference standard. Results At 10-fold cross-validation, our deep learning model classified COVID-19 and non-COVID-19 patients with 0.78 sensitivity (95% confidence interval [CI] 0.74–0.81), 0.82 specificity (95% CI 0.78–0.85), and 0.89 area under the curve (AUC) (95% CI 0.86–0.91). On the independent dataset, deep learning showed 0.80 sensitivity (59/74; 95% CI 0.72–0.86), 0.81 specificity (29/36; 95% CI 0.73–0.87), and 0.81 AUC (95% CI 0.73–0.87). The radiologists' reading obtained 0.63 sensitivity (95% CI 0.52–0.74) and 0.78 specificity (95% CI 0.61–0.90) in Centre 1, and 0.64 sensitivity (95% CI 0.52–0.74) and 0.86 specificity (95% CI 0.71–0.95) in Centre 2. Conclusions This preliminary experience, based on ten CNNs trained on a limited dataset, shows the potential of deep learning for COVID-19 diagnosis. The tool is being trained on new CXRs to further increase its performance.
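The paper does not specify how the ten CNNs' outputs are combined; soft voting (averaging per-model probabilities, then thresholding) is one common choice and is sketched below with invented per-model scores:

```python
def ensemble_predict(model_probs, threshold=0.5):
    """Average per-model COVID-19 probabilities (soft voting) and
    compare the mean against a decision threshold."""
    avg = sum(model_probs) / len(model_probs)
    return avg, avg >= threshold

# Hypothetical outputs of ten CNNs for one chest x-ray.
probs = [0.71, 0.64, 0.58, 0.80, 0.55, 0.49, 0.66, 0.73, 0.60, 0.68]
avg, is_covid = ensemble_predict(probs)
```

Averaging tends to smooth out individual models' errors, which is the usual motivation for using an ensemble over any single CNN.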


2020 ◽  
pp. 000313482098255
Author(s):  
Michael D. Watson ◽  
Maria R. Baimas-George ◽  
Keith J. Murphy ◽  
Ryan C. Pickens ◽  
David A. Iannitti ◽  
...  

Background Neoadjuvant therapy may improve survival of patients with pancreatic adenocarcinoma; however, determining response to therapy is difficult. Artificial intelligence allows for novel analysis of images. We hypothesized that a deep learning model could predict tumor response to neoadjuvant therapy. Methods Patients with pancreatic cancer receiving neoadjuvant therapy prior to pancreatoduodenectomy were identified between November 2009 and January 2018. College of American Pathologists Tumor Regression Grades 0-2 were defined as pathologic response (PR) and grade 3 as no response (NR). Axial images from preoperative computed tomography scans were used to create a 5-layer convolutional neural network and LeNet deep learning model to predict PR. The hybrid model additionally incorporated a 10% decrease in carbohydrate antigen 19-9 (CA19-9). Accuracy was determined by the area under the curve (AUC). Results A total of 81 patients were included in the study. Patients were divided between PR (333 images) and NR (443 images). The pure imaging model had an AUC of 0.738 (P < .001), whereas the hybrid model had an AUC of 0.785 (P < .001). CA19-9 decrease alone was a poor predictor of response, with an AUC of 0.564 (P = .096). Conclusions A deep learning model can predict pathologic tumor response to neoadjuvant therapy for patients with pancreatic adenocarcinoma, and the model is improved by incorporating decreases in serum CA19-9. Further model development is needed before clinical application.
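One simple way such a hybrid model can combine an image-based probability with the binary CA19-9 feature is a weighted blend. The 0.8/0.2 weighting and all values below are illustrative assumptions, not the paper's actual fusion scheme (which is not described in the abstract):

```python
def hybrid_predict(image_prob, ca19_9_before, ca19_9_after,
                   drop_threshold=0.10, weight=0.8):
    """Blend a CNN's image-based response probability with a binary
    CA19-9 feature (1.0 if the marker fell by at least drop_threshold)."""
    drop = (ca19_9_before - ca19_9_after) / ca19_9_before
    marker = 1.0 if drop >= drop_threshold else 0.0
    return weight * image_prob + (1.0 - weight) * marker

# Hypothetical patient: CNN says 0.6, CA19-9 fell from 200 to 120 (40%).
score = hybrid_predict(image_prob=0.6,
                       ca19_9_before=200.0, ca19_9_after=120.0)
```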


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Yuanyuan Xu ◽  
Genke Yang ◽  
Jiliang Luo ◽  
Jianan He

Electronic component recognition plays an important role in industrial production, electronic manufacturing, and testing. To address the low recognition recall and accuracy of traditional image recognition technologies (such as principal component analysis (PCA) and support vector machines (SVM)), this paper evaluates multiple deep learning networks, optimizes the SqueezeNet network, and then presents an electronic component recognition algorithm based on the resulting Faster SqueezeNet network. This structure reduces the size of the network parameters and the computational complexity without deteriorating the performance of the network. The results show that the proposed algorithm performs well: the area under the receiver operating characteristic (ROC) curve (AUC) for capacitors and inductors reaches 1.0, and when the false positive rate (FPR) is at or below the 10^-6 level, the true positive rate (TPR) is greater than or equal to 0.99. Its inference time is about 2.67 ms, reaching the industrial application level in terms of both time consumption and performance.
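The "TPR at a bounded FPR" operating point quoted above can be computed from raw classifier scores by sweeping thresholds, keeping only those where the false positive rate stays within the bound. A small sketch with invented scores:

```python
def tpr_at_fpr(scores_pos, scores_neg, max_fpr):
    """Best true-positive rate achievable while keeping the
    false-positive rate at or below max_fpr, by sweeping thresholds."""
    thresholds = sorted(set(scores_pos + scores_neg), reverse=True)
    best_tpr = 0.0
    for t in thresholds:
        fpr = sum(s >= t for s in scores_neg) / len(scores_neg)
        if fpr <= max_fpr:
            tpr = sum(s >= t for s in scores_pos) / len(scores_pos)
            best_tpr = max(best_tpr, tpr)
    return best_tpr

# Toy scores for a well-separated classifier.
pos = [0.95, 0.9, 0.85, 0.2]   # scores of true component matches
neg = [0.1, 0.05, 0.3, 0.15]   # scores of non-matches
tpr = tpr_at_fpr(pos, neg, max_fpr=0.0)  # no false positives allowed
```

Verifying an FPR bound as tight as 10^-6 obviously requires on the order of millions of negative samples; the toy data here only shows the mechanics.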


Biomolecules ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 815
Author(s):  
Shintaro Sukegawa ◽  
Kazumasa Yoshii ◽  
Takeshi Hara ◽  
Tamamo Matsuyama ◽  
Katsusuke Yamashita ◽  
...  

It is necessary to accurately identify dental implant brands and the stage of treatment to ensure efficient care. Thus, the purpose of this study was to use multi-task deep learning to investigate a classifier that categorizes implant brands and treatment stages from dental panoramic radiographic images. For objective labeling, 9767 dental implant images of 12 implant brands and treatment stages were obtained from the digital panoramic radiographs of patients who underwent procedures at Kagawa Prefectural Central Hospital, Japan, between 2005 and 2020. Five deep convolutional neural network (CNN) models (ResNet18, 34, 50, 101 and 152) were evaluated. The accuracy, precision, recall, specificity, F1 score, and area under the curve were calculated for each CNN. We also compared the multi-task and single-task accuracies of brand classification and implant treatment stage classification. Our analysis revealed that the larger the number of parameters and the deeper the network, the better the performance on both classifications. Multi-tasking significantly improved brand classification on all performance indicators except recall, and significantly improved all metrics in treatment stage classification. CNNs thus classified dental implant brands and treatment stages with high validity, and multi-task learning further improved classification accuracy.
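Multi-task training of the kind described above is usually implemented as a shared backbone with two classification heads, optimized with a weighted sum of the per-task losses. The equal 0.5/0.5 weighting and the class counts below are illustrative assumptions:

```python
import math

def cross_entropy(probs, true_idx):
    """Negative log-likelihood of the true class."""
    return -math.log(probs[true_idx])

def multi_task_loss(brand_probs, brand_true, stage_probs, stage_true,
                    w_brand=0.5, w_stage=0.5):
    """Weighted sum of the brand-classification and treatment-stage
    losses, the usual objective for a shared backbone with two heads."""
    return (w_brand * cross_entropy(brand_probs, brand_true)
            + w_stage * cross_entropy(stage_probs, stage_true))

# One sample: softmax outputs for 3 hypothetical brands and 2 stages.
loss = multi_task_loss([0.7, 0.2, 0.1], 0, [0.6, 0.4], 1)
```

Because both heads backpropagate through the same backbone, each task acts as a regularizer for the other, which is one common explanation for the accuracy gains the study reports.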


2021 ◽  
Vol 23 (Supplement_6) ◽  
pp. vi202-vi203
Author(s):  
Alvaro Sandino ◽  
Ruchika Verma ◽  
Yijiang Chen ◽  
David Becerra ◽  
Eduardo Romero ◽  
...  

Abstract PURPOSE Glioblastoma is a highly heterogeneous brain tumor. Primary treatment for glioblastoma involves maximally safe surgical resection. After surgery, resected tissue slides are visually analyzed by neuropathologists to identify the distinct histological hallmarks characterizing glioblastoma, including high cellularity, necrosis, and vascular proliferation. In this work, we present a hierarchical deep learning-based strategy to automatically segment distinct glioblastoma niches, including necrosis, cellular tumor, and hyperplastic blood vessels, on digitized histopathology slides. METHODS We employed the IvyGAP cohort, for which hematoxylin and eosin (H&E) slides (digitized at 20X magnification) from n=41 glioblastoma patients were available, along with expert-driven segmentations of cellular tumor, necrosis, and hyperplastic blood vessels (and other histological attributes). We randomly assigned n=120 slides from 29 patients for training, n=38 slides from 6 cases for validation, and n=30 slides from 6 patients for testing our deep learning model, which is based on the residual network architecture (ResNet-50). Approximately 2,000 patches of 224x224 pixels were sampled from every slide. Our hierarchical model first segments necrotic from non-necrotic (i.e., cellular tumor) regions, and then, within the regions segmented as non-necrotic, identifies hyperplastic blood vessels against the rest of the cellular tumor. RESULTS Our model achieved a training accuracy of 94% and a testing accuracy of 88%, with an area under the curve (AUC) of 92%, in distinguishing necrotic from non-necrotic (i.e., cellular tumor) regions. Similarly, we obtained a training accuracy of 78% and a testing accuracy of 87% (with an AUC of 94%) in identifying hyperplastic blood vessels against the rest of the cellular tumor.
CONCLUSION We developed a reliable hierarchical segmentation model for the automatic segmentation of necrosis, cellular tumor, and hyperplastic blood vessels on digitized H&E-stained glioblastoma tissue images. Future work will extend the model to the segmentation of pseudopalisading patterns and microvascular proliferation.
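The hierarchical decision flow (necrosis first, vessels only among non-necrotic patches) can be sketched as a two-stage cascade; the probability thresholds here are toy values, not the trained model's:

```python
def classify_patch(p_necrosis, p_vessel, t1=0.5, t2=0.5):
    """Stage 1 separates necrosis from non-necrotic tissue; only
    non-necrotic patches reach stage 2, which separates hyperplastic
    blood vessels from the rest of the cellular tumor."""
    if p_necrosis >= t1:
        return "necrosis"
    if p_vessel >= t2:
        return "hyperplastic blood vessel"
    return "cellular tumor"

labels = [classify_patch(0.9, 0.8),   # stage 1 fires: necrosis
          classify_patch(0.1, 0.7),   # stage 2 fires: vessel
          classify_patch(0.2, 0.3)]   # neither fires: cellular tumor
```

In the full system each `p_*` would come from a ResNet-50 applied to a 224x224 patch; the cascade ensures the vessel classifier never has to reason about necrotic tissue.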


2021 ◽  
Vol 9 (Suppl 3) ◽  
pp. A874-A874
Author(s):  
David Soong ◽  
Anantharaman Muthuswamy ◽  
Clifton Drew ◽  
...  

Background Recent advances in machine learning and digital pathology have enabled a variety of applications, including predicting tumor grade and genetic subtypes, quantifying the tumor microenvironment (TME), and identifying prognostic morphological features from H&E whole slide images (WSI). These supervised deep learning models require large quantities of images manually annotated with cellular- and tissue-level detail by pathologists, which limits scale and generalizability across cancer types and imaging platforms. Here we propose a semi-supervised deep learning framework that automatically annotates biologically relevant image content from hundreds of solid tumor WSI with minimal pathologist intervention, thus improving the quality and speed of analytical workflows aimed at deriving clinically relevant features. Methods The dataset consisted of >200 H&E images across >10 solid tumor types (e.g. breast, lung, colorectal, cervical, and urothelial cancers) from advanced disease patients. WSI were first partitioned into small tiles of 128μm for feature extraction using a 50-layer convolutional neural network pre-trained on the ImageNet database. Dimensionality reduction and unsupervised clustering were applied to the resultant embeddings, and image clusters with enriched histological and morphological characteristics were identified. A random subset of representative tiles (<0.5% of whole slide tissue areas) from these distinct image clusters was manually reviewed by pathologists and assigned to eight histological and morphological categories: tumor, stroma/connective tissue, necrotic cells, lymphocytes, red blood cells, white blood cells, normal tissue, and glass/background.
This dataset allowed the development of a multi-label deep neural network to segment morphologically distinct regions and detect and quantify histopathological features in WSI. Results As representative image tiles within each image cluster were morphologically similar, expert pathologists were able to assign annotations to multiple images in parallel, effectively at 150 images/hour. Five-fold cross-validation showed an average prediction accuracy of 0.93 [0.8–1.0] and an area under the curve of 0.90 [0.8–1.0] over the eight image categories. As an extension of this classifier framework, all whole slide H&E images were segmented, and composite lymphocyte, stromal, and necrotic content per patient tumor was derived and correlated with pathologists' estimates (p<0.05). Conclusions A novel and scalable deep learning framework for annotating and learning H&E features from a large unlabeled WSI dataset across tumor types was developed. This automated approach accurately identified distinct histomorphological features, with significantly reduced labeling time and effort required from pathologists. Further, the classifier framework was extended to annotate regions enriched in lymphocytes, stromal, and necrotic cells – important TME contexture with clinical relevance for patient prognosis and treatment decisions.
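The labeling-efficiency trick in this framework is cluster-then-label: pathologists review a few representative tiles per cluster, and their label propagates to every tile in that cluster. A minimal sketch with invented tile and cluster names:

```python
def propagate_labels(cluster_of, cluster_label):
    """Assign every tile the label its cluster received from the
    pathologists' review of a few representative tiles."""
    return {tile: cluster_label[c] for tile, c in cluster_of.items()}

# Three clusters of tiles; a pathologist labeled one representative each.
cluster_of = {"tile_01": 0, "tile_02": 0, "tile_03": 1, "tile_04": 2}
cluster_label = {0: "tumor",
                 1: "stroma/connective tissue",
                 2: "necrotic cells"}
annotations = propagate_labels(cluster_of, cluster_label)
```

Because tiles within a cluster are morphologically similar by construction, reviewing under 0.5% of the tissue area can still yield labels for the full dataset, which is where the reported 150 images/hour throughput comes from.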


2015 ◽  
Vol 2015 ◽  
pp. 1-6 ◽  
Author(s):  
Oh Jeong Kwon ◽  
Munsoo Kim ◽  
Ho Sub Lee ◽  
Kang-keyng Sung ◽  
Sangkwan Lee

It is important to reduce poststroke depression (PSD) to improve stroke outcomes and quality of life in stroke patients, but the underlying mechanisms of PSD are not completely understood. As many studies implicate dysregulation of the hypothalamic-pituitary-adrenal (HPA) axis in the etiology of both major depression and stroke, we compared the cortisol awakening response (CAR) of 28 admitted PSD patients with that of 23 age-matched caregiver controls. Saliva samples for cortisol measurement were collected immediately and 15, 30, and 45 min after awakening for two consecutive days. Depressive mood status in PSD patients was assessed with the Beck Depression Inventory and the Hamilton Depression Rating Scale. Salivary cortisol levels of PSD patients did not rise significantly at any sampling time, showing a somewhat flat curve. Caregiver controls showed a significantly higher CAR at 15 and 30 min after awakening compared to PSD patients, even though the two groups did not differ at awakening or 45 min after awakening. Area-under-the-curve analysis revealed a significant negative correlation between the CAR and the degree of depression in PSD patients. Thus, our findings suggest that poststroke depression is closely related to a dysfunctional HPA axis, as indicated by a blunted CAR.
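The area-under-the-curve analysis over the four sampling times is typically computed with the trapezoidal rule. A sketch with invented cortisol values contrasting a blunted and a typical awakening response:

```python
def auc_trapezoid(times_min, cortisol):
    """Area under the cortisol curve by the trapezoidal rule over the
    post-awakening sampling times."""
    area = 0.0
    for i in range(1, len(times_min)):
        dt = times_min[i] - times_min[i - 1]
        area += dt * (cortisol[i] + cortisol[i - 1]) / 2.0
    return area

times = [0, 15, 30, 45]                 # minutes after awakening
flat = [10.0, 10.5, 10.2, 10.0]         # blunted response, as in PSD
rising = [10.0, 16.0, 18.0, 14.0]       # typical awakening response
blunted_auc = auc_trapezoid(times, flat)
control_auc = auc_trapezoid(times, rising)
```

The blunted curve yields a markedly smaller area, which is the quantity the study correlates (negatively) with depression severity.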

