AutoGenome V2: New Multimodal Approach Developed for Multi-Omics Research

2020 ◽  
Author(s):  
Chi Xu ◽  
Denghui Liu ◽  
Lei Zhang ◽  
Zhimeng Xu ◽  
Wenjun He ◽  
...  

Abstract Deep learning is very promising in solving problems in omics research, such as genomics, epigenomics, proteomics, and metabolomics. The design of the neural network architecture is very important in modeling omics data for different scientific problems. The residual fully-connected neural network (RFCN) was proposed to provide better neural network architectures for modeling omics data. The next challenge for omics research is how to integrate information from different omics data using deep learning, so that information from different molecular system levels can be combined to predict the target. In this paper, we present a novel multimodal approach that can efficiently integrate information from different omics data and achieve better accuracy than previous approaches. We evaluate our method on four different tasks: drug repositioning, target gene prediction, breast cancer subtyping, and cancer type prediction, and all four tasks achieve state-of-the-art performance. The multimodal approach is implemented in AutoGenome V2 and retains all the previous AutoML conveniences to facilitate biomedical researchers.
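A minimal sketch of how residual fully-connected blocks and late fusion of several omics modalities could be wired up in Keras; the layer widths, modality names, and class count are illustrative assumptions, not the AutoGenome V2 implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def residual_fc_block(x, units):
    """One residual fully-connected block: Dense -> BatchNorm -> ReLU with a skip connection."""
    h = layers.Dense(units)(x)
    h = layers.BatchNormalization()(h)
    h = layers.Activation("relu")(h)
    # Project the input if its width differs from the block width.
    skip = layers.Dense(units)(x) if x.shape[-1] != units else x
    return layers.Add()([skip, h])

def build_multimodal_rfcn(input_dims, n_classes):
    """Encode each omics modality with residual FC blocks, then fuse and classify."""
    inputs, encoded = [], []
    for name, dim in input_dims.items():
        inp = layers.Input(shape=(dim,), name=name)
        h = residual_fc_block(inp, 256)
        h = residual_fc_block(h, 128)
        inputs.append(inp)
        encoded.append(h)
    fused = layers.Concatenate()(encoded)          # late fusion of modality embeddings
    fused = residual_fc_block(fused, 128)
    out = layers.Dense(n_classes, activation="softmax")(fused)
    return Model(inputs, out)

# Hypothetical example: expression, methylation and protein abundance as three modalities.
model = build_multimodal_rfcn(
    {"expression": 20000, "methylation": 25000, "protein": 200}, n_classes=5)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```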

Cancers ◽  
2021 ◽  
Vol 13 (9) ◽  
pp. 2013
Author(s):  
Edian F. Franco ◽  
Pratip Rana ◽  
Aline Cruz ◽  
Víctor V. Calderón ◽  
Vasco Azevedo ◽  
...  

A heterogeneous disease such as cancer is activated through multiple pathways and different perturbations. Depending upon the activated pathway(s), patient survival varies significantly and response to various drugs differs. Therefore, cancer subtype detection using genomics-level data is a significant research problem. Subtype detection is often a complex problem, and in most cases needs multi-omics data fusion to achieve accurate subtyping. Different data fusion and subtyping approaches have been proposed over the years, such as kernel-based fusion, matrix factorization, and deep learning autoencoders. In this paper, we compared the performance of different deep learning autoencoders for cancer subtype detection. We performed cancer subtype detection on four different cancer types from The Cancer Genome Atlas (TCGA) datasets using four autoencoder implementations. We also predicted the optimal number of subtypes in a cancer type using the silhouette score and found that the detected subtypes exhibit significant differences in survival profiles. Furthermore, we compared the effect of feature selection and similarity measures on subtype detection. For further evaluation, we used the Glioblastoma multiforme (GBM) dataset and identified the differentially expressed genes in each of the subtypes. The results obtained are consistent with other genomic studies and can be corroborated with the involved pathways and biological functions. Thus, the results from the autoencoders, obtained through the integration of different cancer data types, can be used for the prediction and characterization of patient subgroups and survival profiles.
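A small sketch of the general autoencoder-plus-silhouette workflow described above, assuming a vanilla Keras autoencoder, KMeans clustering of the latent space, and randomly generated stand-in data; the study itself compares four specific autoencoder variants on fused TCGA omics matrices.

```python
import numpy as np
from tensorflow.keras import layers, Model
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def fit_autoencoder(x, latent_dim=32, epochs=50):
    """Train a plain autoencoder and return the low-dimensional embedding of the samples."""
    inp = layers.Input(shape=(x.shape[1],))
    h = layers.Dense(256, activation="relu")(inp)
    z = layers.Dense(latent_dim, activation="relu", name="latent")(h)
    h = layers.Dense(256, activation="relu")(z)
    out = layers.Dense(x.shape[1], activation="linear")(h)
    ae = Model(inp, out)
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(x, x, epochs=epochs, batch_size=32, verbose=0)
    return Model(inp, z).predict(x, verbose=0)

def choose_n_subtypes(embedding, k_range=range(2, 7)):
    """Pick the cluster count whose KMeans partition maximises the silhouette score."""
    scores = {k: silhouette_score(embedding,
                                  KMeans(n_clusters=k, n_init=10).fit_predict(embedding))
              for k in k_range}
    return max(scores, key=scores.get), scores

# x: samples x concatenated multi-omics features (stand-in for fused TCGA data).
x = np.random.rand(200, 500).astype("float32")
z = fit_autoencoder(x)
best_k, all_scores = choose_n_subtypes(z)
print("suggested number of subtypes:", best_k)
```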


2020 ◽  
Vol 6 (3) ◽  
pp. 501-504
Author(s):  
Dennis Schmidt ◽  
Andreas Rausch ◽  
Thomas Schanze

Abstract The Institute of Virology at the Philipps-Universität Marburg is currently researching possible drugs to combat the Marburg virus. This involves classifying cell structures based on fluorescence microscopy image sequences. Conventionally, cell membranes must be marked for better analysis, which is time-consuming. In this work, an approach is presented to identify cell structures in images in which subviral particles are marked. It could be shown that there is a correlation between the distribution of subviral particles in an infected cell and the position of the cell's structures. The segmentation is performed with a Mask R-CNN algorithm, presented in this work. The model (a region-based convolutional neural network) is applied to enable robust and fast recognition of cell structures. Furthermore, the network architecture is described. The proposed method is tested on data evaluated by experts. The results show high potential and demonstrate that the method is suitable.
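For orientation, a generic Mask R-CNN inference sketch using torchvision's off-the-shelf model; the authors train their own network on cell-structure classes, so the pretrained COCO weights, score threshold, and random input used here are purely illustrative.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Mask R-CNN with a ResNet-50 FPN backbone; only the generic inference flow is shown.
model = maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

# A microscopy frame as a float tensor in [0, 1], shape (C, H, W); here a random stand-in.
image = torch.rand(3, 512, 512)

with torch.no_grad():
    prediction = model([image])[0]   # dict with 'boxes', 'labels', 'scores', 'masks'

# Keep only confident instances and binarise their soft masks.
keep = prediction["scores"] > 0.5
masks = prediction["masks"][keep, 0] > 0.5   # (N, H, W) boolean instance masks
print(f"{int(keep.sum())} candidate instances detected")
```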


Neurosurgery ◽  
2020 ◽  
Vol 67 (Supplement_1) ◽  
Author(s):  
Syed M Adil ◽  
Lefko T Charalambous ◽  
Kelly R Murphy ◽  
Shervin Rahimpour ◽  
Stephen C Harward ◽  
...  

Abstract INTRODUCTION Opioid misuse persists as a public health crisis affecting approximately one in four Americans [1]. Spinal cord stimulation (SCS) is a neuromodulation strategy to treat chronic pain, with one goal being decreased opioid consumption. Accurate prognostication about SCS success is key to optimizing surgical decision making for both physicians and patients. Deep learning, using neural network models such as the multilayer perceptron (MLP), enables accurate prediction of non-linear patterns and has widespread applications in healthcare. METHODS The IBM MarketScan® (IBM) database was queried for all patients ≥ 18 years old undergoing SCS from January 2010 to December 2015. Patients were categorized into opioid dose groups as follows: No Use, ≤ 20 morphine milligram equivalents (MME), 20–50 MME, 50–90 MME, and > 90 MME. We defined “opiate weaning” as moving into a lower opioid dose group (or remaining in the No Use group) during the 12 months following permanent SCS implantation. After pre-processing, there were 62 predictors spanning demographics, comorbidities, and pain medication history. We compared an MLP with four hidden layers to a logistic regression (LR) model with L1 regularization. Model performance was assessed using the area under the receiver operating characteristic curve (AUC) with 5-fold nested cross-validation. RESULTS Ultimately, 6,124 patients were included, of whom 77% had used opioids for > 90 days in the year before SCS and 72% had used > 5 types of medications during the 90 days prior to SCS. The mean age was 56 ± 13 years. Collectively, 2,037 (33%) patients experienced opiate weaning. The AUC was 0.74 for the MLP and 0.73 for the LR model. CONCLUSION To our knowledge, we present the first use of deep learning to predict opioid weaning after SCS. Model performance was slightly better than regularized LR. Future efforts should focus on optimization of the neural network architecture and hyperparameters to further improve model performance. Models should also be calibrated and externally validated on an independent dataset. Ultimately, such tools may assist both physicians and patients in predicting opioid dose reduction after SCS.
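As a rough illustration of the comparison described in METHODS, the sketch below contrasts an L1-regularized logistic regression with a four-hidden-layer MLP under 5-fold cross-validation (plain rather than nested, for brevity); the feature matrix, labels, and layer widths are stand-ins, not the MarketScan data or the authors' configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: 62 pre-processed predictors (demographics, comorbidities, medication history);
# y: 1 if the patient moved to a lower opioid dose group after SCS, else 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(6124, 62))        # stand-in for the real feature matrix
y = rng.integers(0, 2, size=6124)      # stand-in for the weaning label

models = {
    "LR (L1)": make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="l1", solver="liblinear", C=1.0)),
    "MLP (4 hidden layers)": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(64, 64, 32, 16), max_iter=500)),
}

for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.2f} ± {auc.std():.2f}")
```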


2021 ◽  
Vol 11 (15) ◽  
pp. 7148
Author(s):  
Bedada Endale ◽  
Abera Tullu ◽  
Hayoung Shi ◽  
Beom-Soo Kang

Unmanned aerial vehicles (UAVs) are being widely utilized for various missions in both civilian and military sectors. Many of these missions require UAVs to perceive and understand the environments they navigate in. This perception can be realized by training a computing machine to classify objects in the environment. One of the well-known machine training approaches is supervised deep learning, which enables a machine to classify objects. However, supervised deep learning comes at a high cost in time and computational resources. Collecting large amounts of input data, pre-training processes such as labeling the training data, and the need for a high-performance computer for training are some of the challenges that supervised deep learning poses. To address these setbacks, this study proposes mission-specific input data augmentation techniques and the design of a lightweight deep neural network architecture capable of real-time object classification. Semi-direct visual odometry (SVO) data of augmented images are used to train the network for object classification. Ten classes with 10,000 different images per class were used as input data, where 80% were used for training the network and the remaining 20% for network validation. For the optimization of the designed deep neural network, a sequential gradient descent algorithm was implemented. This algorithm has the advantage of handling redundancy in the data more efficiently than other algorithms.
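A rough sketch of a lightweight CNN with simple on-the-fly augmentation, trained with mini-batch (sequential) gradient descent; the input size, filter counts, and augmentation choices are assumptions, not the authors' exact design.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_lightweight_cnn(input_shape=(96, 96, 3), n_classes=10):
    """Small CNN intended for real-time 10-class object classification."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        # Simple on-the-fly augmentation applied only during training.
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.1),
        layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.GlobalAveragePooling2D(),
        layers.Dense(n_classes, activation="softmax"),
    ])

model = build_lightweight_cnn()
# Sequential (mini-batch) gradient descent: weights are updated after every batch.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=30)  # 80/20 train/validation split
```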


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2852
Author(s):  
Parvathaneni Naga Srinivasu ◽  
Jalluri Gnana SivaSai ◽  
Muhammad Fazal Ijaz ◽  
Akash Kumar Bhoi ◽  
Wonjoon Kim ◽  
...  

Deep learning models are efficient at learning the features that assist in understanding complex patterns precisely. This study proposes a computerized process for classifying skin disease through deep learning based on MobileNet V2 and Long Short-Term Memory (LSTM). The MobileNet V2 model proved efficient, achieving better accuracy while remaining able to run on lightweight computational devices. The proposed model is efficient in maintaining stateful information for precise predictions. A grey-level co-occurrence matrix is used for assessing the progression of diseased growth. The performance has been compared against other state-of-the-art models such as Fine-Tuned Neural Networks (FTNN), Convolutional Neural Networks (CNN), the Very Deep Convolutional Networks for Large-Scale Image Recognition developed by the Visual Geometry Group (VGG), and a convolutional neural network architecture extended with a few changes. The HAM10000 dataset is used, and the proposed method outperforms the other methods with more than 85% accuracy. It recognizes the affected region much faster, with almost 2× fewer computations than the conventional MobileNet model, resulting in minimal computational effort. Furthermore, a mobile application is designed for instant and proper action. It helps the patient and dermatologists identify the type of disease from an image of the affected region at the initial stage of the skin disease. These findings suggest that the proposed system can help general practitioners efficiently and effectively diagnose skin conditions, thereby reducing further complications and morbidity.
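A hedged sketch of how MobileNet V2 features could be combined with an LSTM for classification; the sequence length, input size, class count, and frozen ImageNet backbone are assumptions for illustration and do not reproduce the authors' trained model.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

N_CLASSES, SEQ_LEN, IMG_SHAPE = 7, 4, (224, 224, 3)   # illustrative values

# MobileNet V2 extracts a feature vector per image; an LSTM adds stateful context.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SHAPE, include_top=False, pooling="avg", weights="imagenet")
backbone.trainable = False   # reuse ImageNet features as-is

# A short sequence of lesion images (e.g. successive crops of the affected region).
seq_in = layers.Input(shape=(SEQ_LEN,) + IMG_SHAPE)
features = layers.TimeDistributed(backbone)(seq_in)   # (batch, SEQ_LEN, 1280)
state = layers.LSTM(128)(features)                    # summarises the sequence
out = layers.Dense(N_CLASSES, activation="softmax")(state)

model = Model(seq_in, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```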


2020 ◽  
Vol 9 (1) ◽  
pp. 7-10
Author(s):  
Hendry Fonda

ABSTRACT Riau batik has been known since the 18th century and was used by royal nobility. Riau batik is made using a stamp mixed with dye and then printed on fabric, usually silk. Over the course of its development, and in comparison with Javanese batik, Riau batik has been accepted by the public only very slowly. Convolutional Neural Networks (CNN) are a combination of artificial neural networks and deep learning methods. A CNN consists of one or more convolutional layers, often with a subsampling layer, followed by one or more fully connected layers as in a standard neural network. In the process, the CNN performs training and testing on Riau batik so that a collection of batik models, classified according to the characteristics of Riau batik, can be obtained and used to determine whether an image is Riau batik or not. Classification using the CNN distinguishes Riau batik from non-Riau batik with an accuracy of 65%. The accuracy of 65% is mainly because many motifs are shared between Riau batik and other batik, with the difference lying chiefly in the dye colors of Riau batik. Keywords: Batik; Batik Riau; CNN; Image; Deep Learning
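A minimal Keras sketch of the kind of CNN described above (convolutional layers with subsampling followed by fully connected layers) for the binary Riau batik vs. non-Riau batik decision; the layer sizes and input resolution are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),                      # subsampling layer after convolution
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),        # fully connected layers
    layers.Dense(1, activation="sigmoid"),      # Riau batik (1) vs. other (0)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# Training data would come from labelled batik images, e.g. via
# tf.keras.utils.image_dataset_from_directory("batik/", image_size=(128, 128))
```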


Author(s):  
Ranganath Singari ◽  
Karun Singla ◽  
Gangesh Chawla

Deep learning has offered new avenues in the field of industrial management. Traditional methods of quality inspection, such as acceptance sampling, rely on a probabilistic measure derived from inspecting a sample of finished products. Evaluating a fixed number of products to derive the quality level for the complete batch is not a robust approach. Visual inspection solutions based on deep learning can be employed in large manufacturing units to improve quality inspection for steel surface defect detection. This leads to optimization of human capital due to a reduction in manual intervention and turnaround time in the overall supply chain of the industry. Consequently, the sample size in acceptance sampling can be increased with minimal effort, alongside an increase in the overall accuracy of the inspection. The learning in this work is performed by a Convolutional Neural Network, which extracts feature representations from grayscale images to classify the inputs into six types of surface defects. The neural network architecture is compiled in the Keras framework with a TensorFlow backend, using the Nesterov-accelerated Adam (NADAM) optimizer. The proposed classification algorithm holds the potential to identify the dominant flaws in the manufacturing system responsible for cost leakage.
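A short sketch, assuming a small Keras CNN on grayscale defect patches with six output classes and the Nadam optimizer; the patch size and layer widths are placeholders rather than the authors' architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(200, 200, 1)),          # grayscale surface-defect patch
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(6, activation="softmax"),      # six defect classes
])
model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```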


2018 ◽  
Vol 8 (1) ◽  
Author(s):  
Zohreh Gholami Doborjeh ◽  
Nikola Kasabov ◽  
Maryam Gholami Doborjeh ◽  
Alexander Sumich

Author(s):  
Nishanth Krishnaraj ◽  
A. Mary Mekala ◽  
Bhaskar M. ◽  
Ruban Nersisson ◽  
Alex Noel Joseph Raj

Early prediction of cancer type has become very crucial. Breast cancer is common among women and can be life-threatening. Several imaging techniques have been suggested for timely detection and treatment of breast cancer, and considerable research has been done to detect breast cancer accurately. Automated whole breast ultrasound (AWBUS) is a new breast imaging technology that can render the entire breast anatomy as a 3-D volume. The tissue layers in the breast are segmented, and the type of lesion in the breast tissue can be identified, which is essential for cancer detection. In this chapter, a U-Net convolutional neural network architecture is used to segment breast tissue from AWBUS images into the different layers, that is, the epidermis, subcutaneous, and muscular layers. The architecture was trained and tested with the AWBUS dataset images. The performance of the proposed scheme was assessed using the accuracy, loss, and F1 score of the neural network, calculated for each layer of the breast tissue.
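A compact U-Net sketch for per-pixel segmentation into background plus the three tissue layers named above; the input resolution, depth, and filter counts are illustrative assumptions, not the chapter's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    """Two 3x3 convolutions, as in a standard U-Net stage."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(input_shape=(256, 256, 1), n_classes=4):
    """Small U-Net: background + epidermis, subcutaneous and muscular layers."""
    inp = layers.Input(shape=input_shape)
    # Encoder with skip connections.
    c1 = conv_block(inp, 32); p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64);  p2 = layers.MaxPooling2D()(c2)
    # Bottleneck.
    b = conv_block(p2, 128)
    # Decoder: upsample and concatenate the matching encoder feature map.
    u2 = layers.Concatenate()([layers.UpSampling2D()(b), c2]); c3 = conv_block(u2, 64)
    u1 = layers.Concatenate()([layers.UpSampling2D()(c3), c1]); c4 = conv_block(u1, 32)
    out = layers.Conv2D(n_classes, 1, activation="softmax")(c4)  # per-pixel class map
    return Model(inp, out)

model = build_unet()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```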

