Classification of remote sensed images using random forests and deep learning framework

Author(s):  
S. Piramanayagam ◽  
W. Schwartzkopf ◽  
F. W. Koehler ◽  
E. Saber
Author(s):  
Bethany K. Bracken ◽  
Shashank Manjunath ◽  
Stan German ◽  
Camille Monnier ◽  
Mike Farry

Current methods of assessing health are infrequent, costly, and require advanced medical equipment. 92% of US adults carry mobile phones, and 77% carry smartphones with advanced sensors (Smith, 2017). Smartphone apps are already being used to identify disease (e.g., skin cancer), but these apps require active participation by the user (e.g., uploading images). The goal of this research is to develop algorithms that enable continuous, real-time assessment of individuals by leveraging data that is passively and unobtrusively captured by cellphone sensors. Our first step is to identify the activity context in which the device is used, as this affects the accuracy and reliability of sensor data for measuring and inferring a user’s health; data should be interpreted differently when the user is walking or running versus on a plane or bus. To do this, we use DeepSense, a deep learning approach to feature learning first developed by Yao, Hu, Zhao, Zhang, and Abdelzaher (2017). Here we present six experiments validating our model on: (1) a baseline implementation of DeepSense on the same data used by Yao et al. (2017), achieving a balanced accuracy (BA) of 95% over the six main contexts; (2) its ability to classify context using a different publicly available dataset (the ExtraSensory dataset) with the same 70/30 train/test split used by Vaizman et al. (2018), with a BA of 75%; (3) its ability to achieve improved classification when training on a single user, with a BA of 78%; (4) its ability to achieve accurate classification of a new user, with a BA of 63%; (5) its improvement to 70% BA for new users when we considered phone placement to remove confounding information; and (6) its ability to accurately classify contexts over all 51 contexts collected by Vaizman et al. (2018), achieving a BA of 80% on 9 contexts, 75% on 12, and 70% on 17. We are now working to improve these results by adding other sensors available through smartphone data collection included in the ExtraSensory dataset (e.g., microphone). This will allow us to more accurately assess minor deviations in user behaviors that could indicate changes in health or injury status, by accounting for irrelevant, inaccurate, or misleading readings due to contextual effects that may confound interpretation.
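
A minimal sketch of a DeepSense-style model may help make the architecture concrete: per-sensor 1-D convolutions extract features from short time windows, and a recurrent layer fuses those features across windows before context classification. This PyTorch fragment is illustrative only; the sensor count, window sizes, and layer widths are assumptions, not the configuration used by Yao et al. (2017) or in our experiments.

```python
# Minimal DeepSense-style sketch (PyTorch): per-sensor convolutions over
# short time windows, a recurrent layer across windows, then a context
# classifier. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class DeepSenseSketch(nn.Module):
    def __init__(self, n_sensors=2, n_channels=3, n_contexts=6):
        super().__init__()
        # One small 1-D conv stack per sensor (e.g. accelerometer, gyroscope).
        self.sensor_convs = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(n_channels, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),  # one feature vector per window
            )
            for _ in range(n_sensors)
        ])
        # GRU fuses the per-window sensor features across time.
        self.gru = nn.GRU(32 * n_sensors, 64, batch_first=True)
        self.head = nn.Linear(64, n_contexts)

    def forward(self, x):
        # x: (batch, n_sensors, n_windows, n_channels, window_len)
        b, s, w, c, t = x.shape
        feats = []
        for i, conv in enumerate(self.sensor_convs):
            xi = x[:, i].reshape(b * w, c, t)        # fold windows into batch
            fi = conv(xi).squeeze(-1).reshape(b, w, -1)
            feats.append(fi)
        seq = torch.cat(feats, dim=-1)               # (batch, windows, feat)
        out, _ = self.gru(seq)
        return self.head(out[:, -1])                 # logits per context

model = DeepSenseSketch()
logits = model(torch.randn(4, 2, 20, 3, 10))         # toy batch
print(logits.shape)                                  # torch.Size([4, 6])
```

Balanced accuracy over the predicted contexts can then be computed with, for example, sklearn.metrics.balanced_accuracy_score.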


2021 ◽  
Author(s):  
Nicolas Renaud ◽  
Cunliang Geng ◽  
Sonja Georgievska ◽  
Francesco Ambrosetti ◽  
Lars Ridder ◽  
...  

Abstract Three-dimensional (3D) structures of protein complexes provide fundamental information to decipher biological processes at the molecular scale. The vast amount of experimentally and computationally resolved protein-protein interfaces (PPIs) offers the possibility of training deep learning models to aid the prediction of their biological relevance. We present here DeepRank, a general, configurable deep learning framework for data mining PPIs using 3D convolutional neural networks (CNNs). DeepRank maps features of PPIs onto 3D grids and trains a user-specified CNN on these 3D grids. DeepRank allows for efficient training of 3D CNNs with data sets containing millions of PPIs and supports both classification and regression. We demonstrate the performance of DeepRank on two distinct challenges: the classification of biological versus crystallographic PPIs, and the ranking of docking models. For both problems, DeepRank is competitive with or outperforms state-of-the-art methods, demonstrating the versatility of the framework for research in structural biology.
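
As an illustration of the kind of model DeepRank trains, the sketch below maps a multi-feature 3D grid through a small 3D CNN to class logits. The grid resolution, feature count, and layer sizes are assumptions for demonstration; in DeepRank itself the CNN architecture is user-specified.

```python
# Minimal 3-D CNN sketch (PyTorch) in the spirit of DeepRank: PPI features
# mapped onto a 3-D grid are classified as biological vs. crystallographic.
# Grid size and feature count are illustrative assumptions.
import torch
import torch.nn as nn

class Interface3DCNN(nn.Module):
    def __init__(self, n_features=8, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(n_features, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),                 # 20 -> 10 per spatial axis
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),                 # 10 -> 5
        )
        self.head = nn.Linear(32 * 5 * 5 * 5, n_classes)

    def forward(self, x):
        # x: (batch, n_features, 20, 20, 20)
        h = self.conv(x).flatten(1)
        return self.head(h)                  # logits; a single regression
                                             # output would rank docking models

model = Interface3DCNN()
print(model(torch.randn(2, 8, 20, 20, 20)).shape)    # torch.Size([2, 2])
```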


Author(s):  
Ahmed Fares ◽  
Sheng-hua Zhong ◽  
Jianmin Jiang

Abstract Background As a physiological signal, EEG data cannot be subjectively changed or hidden. Compared with other physiological signals, EEG signals are directly related to human cortical activity and offer excellent temporal resolution. With the rapid development of machine learning and artificial intelligence, the analysis of EEG signals has made great progress, leading to a significant boost in performance for content understanding and pattern recognition of brain activities across both neuroscience and computer vision. While this advance has attracted wide interest among the relevant research communities, EEG-based classification of brain activities evoked by images still demands further improvement in accuracy, generalization, and interpretation, and some characteristics of the human brain remain relatively unexplored. Methods We propose a region-level stacked bi-directional deep learning framework for EEG-based image classification. Inspired by the hemispheric lateralization of the human brain, we extract additional information at the regional level to strengthen and emphasize the differences between the two hemispheres. Stacked bi-directional long short-term memory (LSTM) networks are used to capture the dynamic correlations hidden, from both the past and the future to the current state, in EEG sequences. Results Extensive experiments were carried out, and the results demonstrate the effectiveness of the proposed framework. Compared with existing state-of-the-art methods, our framework achieves outstanding performance in EEG-based classification of brain activities evoked by images. In addition, we find that Gamma-band signals are not only useful for achieving good performance in EEG-based image classification but also play a significant role in capturing relationships between neural activations and specific emotional states. Conclusions Our proposed framework provides an improved solution to the following problem: given an image used to stimulate brain activity, identify which class the stimulus image comes from by analyzing the EEG signals. Region-level information is extracted to preserve and emphasize the hemispheric lateralization of neural functions or cognitive processes. Further, stacked bi-directional LSTMs are used to capture the dynamic correlations hidden in the EEG data. Extensive experiments on a standard EEG-based image classification dataset validate that our framework outperforms existing state-of-the-art methods under various contexts and experimental setups.
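
To make the region-level idea concrete, the sketch below splits EEG channels by hemisphere, encodes each half with a stacked bidirectional LSTM, and classifies from the concatenated region codes. The channel counts, sequence length, and number of image classes are illustrative assumptions, not the paper's configuration.

```python
# Sketch of a region-level stacked bidirectional LSTM (PyTorch): EEG channels
# are split by hemisphere, each half is encoded separately, and the two
# region codes are combined for image-class prediction.
import torch
import torch.nn as nn

class RegionBiLSTM(nn.Module):
    def __init__(self, chans_per_side=31, hidden=64, n_classes=40):
        super().__init__()
        # one stacked (2-layer) bi-LSTM per hemisphere
        self.left = nn.LSTM(chans_per_side, hidden, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.right = nn.LSTM(chans_per_side, hidden, num_layers=2,
                             bidirectional=True, batch_first=True)
        self.head = nn.Linear(4 * hidden, n_classes)

    def forward(self, left_eeg, right_eeg):
        # each input: (batch, time, chans_per_side)
        l, _ = self.left(left_eeg)
        r, _ = self.right(right_eeg)
        # concatenate the final time-step encodings of both hemispheres
        h = torch.cat([l[:, -1], r[:, -1]], dim=-1)
        return self.head(h)

model = RegionBiLSTM()
x = torch.randn(4, 128, 31)          # toy batch: 128 time steps per trial
print(model(x, x).shape)             # torch.Size([4, 40])
```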


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
John-William Sidhom ◽  
H. Benjamin Larman ◽  
Drew M. Pardoll ◽  
Alexander S. Baras

Abstract Deep learning algorithms have been utilized to achieve enhanced performance in pattern-recognition tasks. The ability to learn complex patterns in data has tremendous implications in immunogenomics. T-cell receptor (TCR) sequencing assesses the diversity of the adaptive immune system and allows for modeling its sequence determinants of antigenicity. We present DeepTCR, a suite of unsupervised and supervised deep learning methods able to model highly complex TCR sequencing data by learning a joint representation of a TCR by its CDR3 sequences and V/D/J gene usage. We demonstrate the utility of deep learning to provide an improved ‘featurization’ of the TCR across multiple human and murine datasets, including improved classification of antigen-specific TCRs and extraction of antigen-specific TCRs from noisy single-cell RNA-Seq and T-cell culture-based assays. Our results highlight the flexibility and capacity for deep neural networks to extract meaningful information from complex immunogenomic data for both descriptive and predictive purposes.
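
The joint representation described, the CDR3 sequence together with V/D/J gene usage, can be sketched as a sequence encoder concatenated with learned gene embeddings. The PyTorch fragment below is a hypothetical illustration, not DeepTCR's API; the vocabulary sizes, embedding dimensions, and convolutional sequence encoder are assumptions.

```python
# Sketch of a joint TCR representation (PyTorch): a convolutional encoder
# over the one-hot CDR3 amino-acid sequence is concatenated with learned
# V/D/J gene embeddings, then classified. All sizes are assumptions.
import torch
import torch.nn as nn

class TCRJointEncoder(nn.Module):
    def __init__(self, n_aa=21, n_v=60, n_d=3, n_j=14, n_classes=3):
        super().__init__()
        # convolutional encoder over the one-hot CDR3 sequence
        self.seq_enc = nn.Sequential(
            nn.Conv1d(n_aa, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        # learned embeddings for V/D/J gene usage
        self.v_emb = nn.Embedding(n_v, 16)
        self.d_emb = nn.Embedding(n_d, 16)
        self.j_emb = nn.Embedding(n_j, 16)
        self.head = nn.Linear(32 + 3 * 16, n_classes)

    def forward(self, cdr3_onehot, v, d, j):
        # cdr3_onehot: (batch, n_aa, max_len); v, d, j: (batch,) gene indices
        s = self.seq_enc(cdr3_onehot).squeeze(-1)
        joint = torch.cat([s, self.v_emb(v), self.d_emb(d), self.j_emb(j)],
                          dim=-1)
        return self.head(joint)              # antigen-specificity logits

model = TCRJointEncoder()
x = torch.randn(8, 21, 40)                   # toy batch, CDR3 length 40
v = torch.randint(0, 60, (8,))
d = torch.randint(0, 3, (8,))
j = torch.randint(0, 14, (8,))
print(model(x, v, d, j).shape)               # torch.Size([8, 3])
```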


Author(s):  
Yasir Eltigani Ali Mustaf ◽  
Bashir Hassan Ismail

Diagnosis of diabetic retinopathy (DR) from colour fundus images requires experienced clinicians to determine the presence and importance of a large number of small features. This work proposes a novel deep learning framework for DR, named Adapted Stacked Auto Encoder (ASAE-DNN), in which three hidden layers are used to extract features, followed by a Softmax layer for classification. The proposed model is evaluated on the Messidor dataset, comprising 800 training images and 150 test images. Accuracy, precision, recall, and computation time are assessed for the outcomes of the proposed model. The results show that ASAE-DNN achieves 97% accuracy.
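
As an illustration of the described architecture, the sketch below stacks three hidden encoder layers and attaches a softmax classifier to the deepest code (the softmax is applied implicitly by the cross-entropy loss in PyTorch). The input dimensionality and layer widths are assumptions; training would typically combine a reconstruction loss on the decoder output with a classification loss on the logits.

```python
# Sketch of a stacked auto-encoder with three hidden layers and a softmax
# classifier, in the spirit of ASAE-DNN (PyTorch). Input size assumes a
# flattened fundus-image feature vector; all sizes are illustrative.
import torch
import torch.nn as nn

class StackedAESketch(nn.Module):
    def __init__(self, n_in=1024, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(        # three hidden layers
            nn.Linear(n_in, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.decoder = nn.Sequential(        # used for reconstruction loss
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, n_in),
        )
        # softmax classification via nn.CrossEntropyLoss on these logits
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):
        code = self.encoder(x)
        return self.classifier(code), self.decoder(code)

model = StackedAESketch()
logits, recon = model(torch.randn(4, 1024))
print(logits.shape, recon.shape)     # torch.Size([4, 2]) torch.Size([4, 1024])
```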

