Micro-climate Prediction - Multi Scale Encoder-decoder based Deep Learning Framework

Author(s):  
Peeyush Kumar ◽  
Ranveer Chandra ◽  
Chetan Bansal ◽  
Shivkumar Kalyanaraman ◽  
Tanuja Ganu ◽  
...  
2019 ◽  
Vol 57 (11) ◽  
pp. 9362-9377 ◽  
Author(s):  
Xiaoyan Lu ◽  
Yanfei Zhong ◽  
Zhuo Zheng ◽  
Yanfei Liu ◽  
Ji Zhao ◽  
...  

2020 ◽  
Vol 191 ◽  
pp. 105387
Author(s):  
Floris Heutink ◽  
Valentin Koch ◽  
Berit Verbist ◽  
Willem Jan van der Woude ◽  
Emmanuel Mylanus ◽  
...  

2021 ◽  
Vol 11 (16) ◽  
pp. 7731
Author(s):  
Rao Zeng ◽  
Minghong Liao

DNA methylation is one of the most extensive epigenetic modifications. DNA N6-methyladenine (6mA) plays a key role in many biological regulatory processes. Accurate and reliable genome-wide identification of 6mA sites is therefore crucial for systematically understanding its biological functions. Some machine learning tools can identify 6mA sites, but their limited prediction accuracy and lack of robustness restrict their usability in epigenetic studies, underscoring the need for new computational methods for this problem. In this paper, we developed a novel computational predictor, 6mAPred-MSFF, a deep learning framework based on a multi-scale feature fusion mechanism that identifies 6mA sites across different species. In the predictor, we integrate inverted residual blocks and a multi-scale attention mechanism to build lightweight, deep neural networks. Compared with existing predictors built on traditional machine learning, our deep learning framework needs no prior knowledge of 6mA and no manually crafted sequence features, yet captures the characteristics of 6mA sites more fully. In benchmarking comparisons under 5-fold cross-validation on seven datasets from six species, our method outperforms the state-of-the-art methods, demonstrating that the proposed 6mAPred-MSFF is more effective and more generic. Specifically, 6mAPred-MSFF achieves a 5-fold cross-validation sensitivity and specificity on the 6mA-rice-Lv dataset of 97.88% and 94.64%, respectively. Our model trained on the rice data also predicts well the 6mA sites of five other species, Arabidopsis thaliana, Fragaria vesca, Rosa chinensis, Homo sapiens, and Drosophila melanogaster, with reported prediction accuracies of 98.51%, 93.02%, and 91.53%. Moreover, via experimental comparison, we explored the impact on performance of training and testing the proposed model under different encoding schemes and feature descriptors.
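The abstract's mention of encoding schemes points to how raw sequences enter such a network. A minimal sketch of one common choice, one-hot encoding of DNA (the function name and 41 nt window are illustrative assumptions, not details from the paper):

```python
import numpy as np

def one_hot_encode(seq):
    """One-hot encode a DNA sequence (A, C, G, T) into a (len, 4) array.
    Unknown bases (e.g. N) map to an all-zero row."""
    mapping = {"A": 0, "C": 1, "G": 2, "T": 3}
    arr = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        if base in mapping:
            arr[i, mapping[base]] = 1.0
    return arr

# A fixed-length window centred on a candidate adenine is a common setup
window = "GTACA" * 8 + "A"        # 41 nt, hypothetical example
x = one_hot_encode(window)
print(x.shape)                    # (41, 4)
```

The resulting (length, 4) matrix can be fed directly to 1-D convolutional layers, letting the network learn sequence features without hand-crafted descriptors.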


2019 ◽  
Vol 20 (S16) ◽  
Author(s):  
Min Zeng ◽  
Min Li ◽  
Fang-Xiang Wu ◽  
Yaohang Li ◽  
Yi Pan

Abstract

Background: Essential proteins are crucial for cellular life, so identifying them is an important and challenging problem for researchers. Recently, many computational approaches have been proposed to address it. However, traditional centrality methods cannot fully represent the topological features of biological networks. In addition, identifying essential proteins is an imbalanced learning problem, yet few current shallow machine-learning-based methods are designed to handle the imbalanced characteristics.

Results: We develop DeepEP, a deep learning framework that combines the node2vec technique, multi-scale convolutional neural networks, and a sampling technique to identify essential proteins. In DeepEP, node2vec is applied to automatically learn topological and semantic features for each protein in a protein-protein interaction (PPI) network. Gene expression profiles are treated as images, and multi-scale convolutional neural networks are applied to extract their patterns. In addition, DeepEP uses a sampling method to alleviate the class imbalance: it draws equal numbers of majority and minority samples in each training epoch, so training is not biased toward either class. The experimental results show that DeepEP outperforms both traditional centrality methods and shallow machine-learning-based methods. Detailed analyses show that the dense vectors generated by node2vec contribute substantially to the improved performance, indicating that node2vec effectively captures the topological and semantic properties of the PPI network. The sampling method also improves the performance of identifying essential proteins.

Conclusion: We demonstrate that DeepEP improves prediction performance by integrating multiple deep learning techniques with a sampling method, and is more effective than existing methods.
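The per-epoch balanced sampling described above can be sketched with plain NumPy. This is an illustrative undersampling scheme consistent with the abstract's description, not the authors' exact implementation; the function name is hypothetical:

```python
import numpy as np

def balanced_epoch_indices(labels, rng=None):
    """Return indices for one training epoch containing equal numbers of
    majority- and minority-class samples (undersampling the majority),
    so neither class dominates the gradient updates."""
    if rng is None:
        rng = np.random.default_rng(0)
    labels = np.asarray(labels)
    pos = np.flatnonzero(labels == 1)   # e.g. essential proteins (minority)
    neg = np.flatnonzero(labels == 0)   # e.g. non-essential (majority)
    minority, majority = (pos, neg) if len(pos) <= len(neg) else (neg, pos)
    picked = rng.choice(majority, size=len(minority), replace=False)
    idx = np.concatenate([minority, picked])
    rng.shuffle(idx)
    return idx

# 10 essential vs 90 non-essential -> epoch of 20 balanced samples
epoch = balanced_epoch_indices([1] * 10 + [0] * 90)
print(len(epoch))                   # 20
```

Redrawing the majority subset each epoch lets the model eventually see most majority samples while keeping every batch balanced.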


2019 ◽  
Vol 19 (2) ◽  
pp. 424-442 ◽  
Author(s):  
Tian Guo ◽  
Lianping Wu ◽  
Cunjun Wang ◽  
Zili Xu

Extracting damage features precisely while overcoming the adverse interference of measurement noise and incomplete data is a pressing problem in structural health monitoring (SHM). In this article, we present a deep-learning-based method that extracts damage features from mode shapes without any hand-engineered features or prior knowledge. To meet the requirements of various damage scenarios, we use a convolutional neural network (CNN) and design a new network architecture comprising: a multi-scale module, which extracts features at various scales and reduces the interference of contaminated data; stacked residual learning modules, which accelerate network convergence; and a global average pooling layer, which reduces the consumption of computing resources and yields the regression output. An extensive evaluation of the proposed method is conducted on datasets based on numerical simulations, along with two datasets based on laboratory measurements. A parameter-transfer methodology is introduced to reduce retraining requirements without any loss in precision. Furthermore, we plot the feature vectors of each layer to discuss the damage features learned at those layers, providing a basis for explaining the working principle of the neural network. The results show that our proposed method improves accuracy by at least 10% over other network architectures.
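The combination of multi-scale filtering and global average pooling can be illustrated with a toy NumPy sketch. Fixed averaging filters stand in for the learned convolution kernels, and the function name is an assumption for illustration only:

```python
import numpy as np

def multi_scale_gap(signal, kernel_sizes=(3, 5, 9)):
    """Filter a 1-D signal (e.g. a sampled mode shape) at several scales
    and reduce each scale with global average pooling, yielding one
    feature per scale. Averaging filters are a toy stand-in for the
    learned multi-scale convolution kernels."""
    feats = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k                       # fixed mean filter
        response = np.convolve(signal, kernel, mode="valid")
        feats.append(response.mean())                 # global average pooling
    return np.array(feats)

# Wider kernels smooth out short-wavelength noise in the mode shape,
# which is why multi-scale features help with contaminated data.
feats = multi_scale_gap(np.ones(100))
print(feats)                                          # [1. 1. 1.]
```

Global average pooling collapses each filtered signal to a scalar regardless of input length, which is also what keeps the downstream regression head small.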


2021 ◽  
Vol 11 (11) ◽  
pp. 1397
Author(s):  
Bingxue Zhang ◽  
Yang Shi ◽  
Longfeng Hou ◽  
Zhong Yin ◽  
Chengliang Chai

Educational theory holds that integrating learning style into learning-related activities can improve academic performance. Traditional methods of recognizing learning styles are mostly based on questionnaires and analyses of online behavior; these methods are highly subjective and inaccurate. Electroencephalography (EEG) signals have significant potential for measuring learning style. This study uses EEG signals to design a deep-learning-based recognition model, named the TSMG model (Temporal-Spatial-Multiscale-Global model), that identifies a person's learning style from EEG features by using a non-overlapping sliding window, one-dimensional spatio-temporal convolutions, multi-scale feature extraction, global average pooling, and a group voting mechanism. It solves the problem of processing EEG data of variable length, improves the accuracy of learning-style recognition by nearly 5% over prevalent methods, and reduces the computational cost by 41.93%. The proposed TSMG model can also recognize variable-length data in other fields. The authors also compiled a dataset of EEG signals (the LSEEG dataset) containing features of the learning-style processing dimension, which can be used to test and compare recognition models. This dataset is also conducive to the application and further development of EEG technology for recognizing learning styles.
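The non-overlapping sliding window is what lets such a model handle variable-length recordings: each recording is cut into fixed-size windows that can be classified independently and combined by voting. A minimal sketch (function name and window length are illustrative assumptions):

```python
import numpy as np

def split_windows(eeg, win_len):
    """Split a variable-length multi-channel EEG recording, shaped
    (channels, samples), into non-overlapping windows of win_len
    samples, dropping the incomplete tail.
    Returns an array shaped (n_windows, channels, win_len)."""
    n_ch, n_samp = eeg.shape
    n_win = n_samp // win_len
    trimmed = eeg[:, : n_win * win_len]
    return trimmed.reshape(n_ch, n_win, win_len).transpose(1, 0, 2)

# A 2-channel recording of 250 samples with 100-sample windows
# yields 2 windows; the trailing 50 samples are discarded.
windows = split_windows(np.arange(500).reshape(2, 250), 100)
print(windows.shape)                 # (2, 2, 100)
```

Per-window predictions can then be aggregated (e.g. by majority vote across windows) to produce one label per recording, which matches the group-voting idea described above.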


2021 ◽  
Vol 7 (4) ◽  
pp. 67
Author(s):  
Lina Liu ◽  
Ying Y. Tsui ◽  
Mrinal Mandal

Skin lesion segmentation is a primary step in skin lesion analysis and can benefit the subsequent classification task. It is challenging because the boundaries of pigmented regions may be fuzzy and the entire lesion may share a similar color. Prevalent deep learning methods for skin lesion segmentation make predictions by ensembling different convolutional neural networks (CNNs), aggregating multi-scale information, or using a multi-task learning framework, all aiming to exploit as much information as possible to make robust predictions. A multi-task learning framework has been shown to benefit the skin lesion segmentation task, usually by incorporating a skin lesion classification task. However, multi-task learning requires extra labeling information that may not be available for skin lesion images. In this paper, a novel CNN architecture using auxiliary information is proposed. Edge prediction, as an auxiliary task, is performed simultaneously with the segmentation task. A cross-connection layer module is proposed, in which the intermediate feature maps of each task are fed into the sub-blocks of the other task, implicitly guiding the neural network to focus on the boundary region of the segmentation task. In addition, a multi-scale feature aggregation module is proposed, which makes use of features at different scales and enhances the performance of the proposed method. Experimental results show that the proposed method outperforms the state-of-the-art methods, with a Jaccard Index (JA) of 79.46, Accuracy (ACC) of 94.32, and Sensitivity (SEN) of 88.76 using only one integrated model, which can be learned in an end-to-end manner.
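The Jaccard Index reported above is the standard intersection-over-union between the predicted and ground-truth binary masks. A minimal NumPy sketch of the metric (not the authors' evaluation code):

```python
import numpy as np

def jaccard_index(pred, target):
    """Jaccard index (IoU) between two binary segmentation masks:
    |pred AND target| / |pred OR target|. Two empty masks are
    treated as a perfect match."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return 1.0 if union == 0 else inter / union

# Prediction covers the lesion plus one extra pixel -> IoU of 0.5
score = jaccard_index(np.array([1, 1, 0, 0]), np.array([1, 0, 0, 0]))
print(score)                        # 0.5
```

Because IoU penalizes both false positives and false negatives on the boundary, it is a natural headline metric for a method whose auxiliary edge-prediction task targets exactly that boundary region.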

