A hierarchical deep learning framework for the consistent classification of land use objects in geospatial databases

2021 ◽  
Vol 177 ◽  
pp. 38-56
Author(s):  
Chun Yang ◽  
Franz Rottensteiner ◽  
Christian Heipke
2016 ◽  
Author(s):  
S. Piramanayagam ◽  
W. Schwartzkopf ◽  
F. W. Koehler ◽  
E. Saber

Author(s):  
Bethany K. Bracken ◽  
Shashank Manjunath ◽  
Stan German ◽  
Camille Monnier ◽  
Mike Farry

Current methods of assessing health are infrequent, costly, and require advanced medical equipment. In the US, 92% of adults carry mobile phones, and 77% carry smartphones with advanced sensors (Smith, 2017). Smartphone apps are already being used to identify disease (e.g., skin cancer), but these apps require active participation by the user (e.g., uploading images). The goal of this research is to develop algorithms that enable continuous, real-time assessment of individuals by leveraging data that are passively and unobtrusively captured by cellphone sensors. Our first step is to identify the activity context in which the device is used, as this affects the accuracy and reliability of sensor data for measuring and inferring a user's health; data should be interpreted differently when the user is walking or running versus on a plane or bus. To do this, we use DeepSense, a deep learning approach to feature learning first developed by Yao, Hu, Zhao, Zhang, and Abdelzaher (2017). Here we present six experiments validating our model: (1) a baseline implementation of DeepSense on the same data used by Yao et al. (2017), achieving a balanced accuracy (BA) of 95% over the six main contexts; (2) classification of context using a different publicly available dataset (the ExtraSensory dataset) with the same 70/30 train/test split used by Vaizman et al. (2018), with a BA of 75%; (3) improved classification when training on a single user, with a BA of 78%; (4) classification of a new user, with a BA of 63%; (5) improvement to 70% BA for new users when phone placement is considered to remove confounding information; and (6) classification over all 51 contexts collected by Vaizman et al. (2018), achieving a BA of 80% on 9 contexts, 75% on 12, and 70% on 17.
We are now working to improve these results by adding other sensors available in the ExtraSensory dataset's smartphone data collection (e.g., the microphone). This will allow us to more accurately detect minor deviations in user behavior that could indicate changes in health or injury status, by accounting for irrelevant, inaccurate, or misleading readings caused by contextual effects that would otherwise confound interpretation.
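The balanced accuracy (BA) reported across these experiments is commonly defined as the mean of per-class recall, so that frequent and rare contexts contribute equally to the score. A minimal pure-Python sketch under that assumption (the function name and example data are illustrative, not from the paper):

```python
def balanced_accuracy(y_true, y_pred, labels):
    """Mean per-class recall: each context class contributes equally,
    regardless of how many samples it has."""
    recalls = []
    for c in labels:
        idx = [i for i, t in enumerate(y_true) if t == c]
        if not idx:
            continue  # skip classes absent from the test set
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)

# Toy example with three context classes of unequal size
y_true = [0, 0, 0, 0, 1, 1, 2, 2, 2, 2, 2, 2]
y_pred = [0, 0, 0, 1, 1, 1, 2, 2, 2, 2, 0, 0]
print(balanced_accuracy(y_true, y_pred, labels=[0, 1, 2]))  # ≈ 0.806
```

Note that plain accuracy on the same toy data would be 9/12 = 0.75; BA differs because it averages recalls (3/4, 2/2, 4/6) rather than pooling samples.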


2019 ◽  
Vol 32 (12) ◽  
pp. 8529-8544
Author(s):  
Victor Alhassan ◽  
Christopher Henry ◽  
Sheela Ramanna ◽  
Christopher Storie

Author(s):  
C. Yang ◽  
F. Rottensteiner ◽  
C. Heipke

Abstract. Land use and land cover are two important variables in remote sensing. Commonly, information on land use is stored in geospatial databases. In order to update such databases, we present a new approach to determine the land cover and to classify land use objects using convolutional neural networks (CNN). High-resolution aerial images and derived data such as digital surface models serve as input. An encoder-decoder based CNN is used for land cover classification; we found a composite including the infrared band and height data to outperform RGB images for this task. We also propose a CNN-based methodology for the prediction of land use labels from the geospatial databases, in which masks representing object shape, the RGB images, and the pixel-wise class scores of land cover serve as input. For this task, we developed a two-branch network in which the first branch considers the whole area of an image, while the second branch focuses on a smaller, relevant area. We evaluated our methods on two sites and achieved an overall accuracy of up to 89.6% for land cover and 81.7% for land use. We also tested our land cover classification method on the Vaihingen dataset of the ISPRS 2D semantic labelling challenge and achieved an overall accuracy of 90.7%.
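One plausible way for the second branch to "focus on a smaller relevant area" is to crop the input to the bounding box of the object mask. The sketch below illustrates that preprocessing idea only; it is not the authors' implementation, and the names `mask_bbox` and `crop_to_mask` are invented:

```python
def mask_bbox(mask):
    """Bounding box (rmin, rmax, cmin, cmax), inclusive, of the nonzero
    pixels in a 2D binary object mask given as nested lists."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    return rows[0], rows[-1], cols[0], cols[-1]

def crop_to_mask(image, mask):
    """Crop an image (same spatial size as mask) to the mask's bounding box,
    yielding the smaller relevant area a focused branch might receive."""
    r0, r1, c0, c1 = mask_bbox(mask)
    return [row[c0:c1 + 1] for row in image[r0:r1 + 1]]

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
print(crop_to_mask(image, mask))  # [[6, 7], [10, 11]]
```

In practice the crop would usually be padded or resized to a fixed input size before being fed to the second branch.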


2021 ◽  
Author(s):  
Nicolas Renaud ◽  
Cunliang Geng ◽  
Sonja Georgievska ◽  
Francesco Ambrosetti ◽  
Lars Ridder ◽  
...  

Abstract. Three-dimensional (3D) structures of protein complexes provide fundamental information to decipher biological processes at the molecular scale. The vast amount of experimentally and computationally resolved protein-protein interfaces (PPIs) offers the possibility of training deep learning models to aid the prediction of their biological relevance.

We present here DeepRank, a general, configurable deep learning framework for data mining PPIs using 3D convolutional neural networks (CNNs). DeepRank maps features of PPIs onto 3D grids and trains a user-specified CNN on these 3D grids. DeepRank allows for efficient training of 3D CNNs with data sets containing millions of PPIs and supports both classification and regression.

We demonstrate the performance of DeepRank on two distinct challenges: the classification of biological versus crystallographic PPIs, and the ranking of docking models. For both problems DeepRank is competitive with or outperforms state-of-the-art methods, demonstrating the versatility of the framework for research in structural biology.
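The core grid-mapping step, in which per-atom features of an interface are placed onto a 3D grid that a CNN can consume, can be sketched as a simple nearest-voxel accumulation. This is only an illustration of the idea with invented names; the actual DeepRank mapping may differ (e.g., it may spread each feature over neighbouring voxels rather than assigning it to one):

```python
def map_to_grid(points, values, grid_size, box_min, voxel):
    """Accumulate point features into the nearest voxels of a cubic
    grid_size^3 grid whose origin is at box_min with spacing `voxel`.
    Points falling outside the box are ignored."""
    g = [[[0.0] * grid_size for _ in range(grid_size)]
         for _ in range(grid_size)]
    for (x, y, z), v in zip(points, values):
        i = int((x - box_min) / voxel)
        j = int((y - box_min) / voxel)
        k = int((z - box_min) / voxel)
        if 0 <= i < grid_size and 0 <= j < grid_size and 0 <= k < grid_size:
            g[i][j][k] += v
    return g

# Two feature-carrying points mapped into a tiny 2x2x2 grid
grid = map_to_grid([(0.5, 0.5, 0.5), (1.5, 0.5, 0.5)],
                   [1.0, 2.0], grid_size=2, box_min=0.0, voxel=1.0)
print(grid[0][0][0], grid[1][0][0])  # 1.0 2.0
```

Each feature type would yield one such grid, and stacking them gives the multi-channel 3D input volume on which a 3D CNN is trained.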

