Semi-Supervised Learning and Deep Neural Network on Detection of Roadway Cracking Using Unmanned Aerial System Imagery

2021 ◽  
Author(s):  
Long Ngo Hoang Truong ◽  
Edward Clay ◽  
Omar E. Mora ◽  
Wen Cheng ◽  
Maninder Kaur ◽  
...


2018 ◽  
Author(s):  
Hiroyuki Fukuda ◽  
Kentaro Tomii

Protein contact prediction is a crucially important step for protein structure prediction. To predict a contact, two types of approaches are used: evolutionary coupling analysis (ECA) and supervised learning. ECA uses a large multiple sequence alignment (MSA) of homologous sequences and extracts correlation information between residues. Supervised learning uses ECA results as input features and can produce higher accuracy. As described herein, we present a new approach to contact prediction which can both extract correlation information and predict contacts in a supervised manner directly from the MSA using a deep neural network (DNN). Using a DNN, we can obtain higher accuracy than with earlier ECA methods. Simultaneously, we can weight each sequence in the MSA to eliminate noisy sequences automatically in a supervised way. It is expected that combining our method with other meta-learning methods can provide much higher contact prediction accuracy.
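A minimal sketch of the kind of pairwise correlation extraction such an MSA-based predictor starts from, assuming a toy one-hot encoding; the per-sequence weight vector mirrors the paper's idea of down-weighting noisy sequences (there, the weights come from the DNN itself), but the function names and shapes here are illustrative, not the authors' implementation.

```python
# Hedged sketch: pairwise covariance features from an MSA, with optional
# per-sequence weights standing in for the learned noise suppression.
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY-"          # 20 amino acids plus gap
AA_IDX = {a: i for i, a in enumerate(AA)}

def one_hot(msa):
    """Encode an MSA (list of equal-length strings) as (N, L, 21)."""
    n, length = len(msa), len(msa[0])
    x = np.zeros((n, length, len(AA)))
    for i, seq in enumerate(msa):
        for j, aa in enumerate(seq):
            x[i, j, AA_IDX.get(aa, len(AA) - 1)] = 1.0
    return x

def pair_covariance(x, w=None):
    """Weighted covariance between residue columns i and j."""
    n, length, a = x.shape
    if w is None:
        w = np.ones(n) / n                            # uniform sequence weights
    mean = np.einsum("n,nla->la", w, x)               # per-column frequencies
    xc = x - mean                                     # centered encoding
    # cov[i, j] is a (21, 21) block; flatten it to one feature vector per pair
    cov = np.einsum("n,nia,njb->ijab", w, xc, xc)
    return cov.reshape(length, length, -1)            # (L, L, 441) features

msa = ["ACDE", "ACDF", "GCDE"]
feats = pair_covariance(one_hot(msa))
print(feats.shape)                                    # (4, 4, 441)
```

These per-pair feature maps are the sort of input a contact-predicting DNN would consume, one prediction per residue pair.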


2022 ◽  
Author(s):  
Jinxin Wei

An auto-encoder which can be split into two parts is designed, and the two parts can work well separately. The top half is an abstract network, trained by supervised learning, which can be used for classification and regression. The bottom half is a concrete network, obtained as the inverse function of the abstract network and trained by self-supervised learning; it can generate the input of the abstract network from a concept or label. The design is tested with TensorFlow and the MNIST dataset. The abstract network is similar to LeNet-5, and the concrete network is its inverse. Lossy compression is achieved in the test, with a large compression ratio of 19.6, and the decompression performance is acceptable through regression, which treats classification as regression.
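A hedged Keras sketch of the split design, assuming MNIST as in the abstract: the "abstract" half is a small LeNet-style classifier trained with labels, and the "concrete" half learns to regenerate the image from the class code, with the original input as its self-supervised target. Layer sizes are illustrative, not the paper's exact architecture.

```python
import tensorflow as tf
from tensorflow import keras

(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0

# Top half: abstract network (image -> class), supervised.
abstract = keras.Sequential([
    keras.layers.Conv2D(6, 5, activation="relu", input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(16, 5, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
])
abstract.compile("adam", "sparse_categorical_crossentropy", ["accuracy"])
abstract.fit(x_train, y_train, epochs=1, batch_size=128)

# Bottom half: concrete network (class code -> image), self-supervised:
# its target is the original input, so no extra labels are needed.
concrete = keras.Sequential([
    keras.layers.Dense(7 * 7 * 16, activation="relu", input_shape=(10,)),
    keras.layers.Reshape((7, 7, 16)),
    keras.layers.Conv2DTranspose(8, 3, strides=2, padding="same", activation="relu"),
    keras.layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid"),
])
concrete.compile("adam", "mse")
codes = abstract.predict(x_train, batch_size=256)     # class probabilities
concrete.fit(codes, x_train, epochs=1, batch_size=128)

# Generation from a concept/label alone: feed a one-hot class code.
digit_three = tf.one_hot([3], 10)
image = concrete.predict(digit_three)                 # (1, 28, 28, 1)
```

The 10-dimensional class code between the two halves is what yields the roughly 28×28/10 ≈ 19.6× lossy compression ratio the abstract reports.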


Mathematics ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. 298 ◽  
Author(s):  
Shenshen Gu ◽  
Yue Yang

The Max-cut problem is a well-known combinatorial optimization problem with many real-world applications. However, the problem has been proven to be NP-hard (non-deterministic polynomial-time hard), which means that exact solution algorithms are not suitable for large-scale situations, as obtaining a solution is too time-consuming. Therefore, designing heuristic algorithms is a promising but challenging direction for effectively solving large-scale Max-cut problems. For this reason, we propose in this paper a method which combines a pointer network with two deep learning strategies (supervised learning and reinforcement learning) to address this challenge. A pointer network is a sequence-to-sequence deep neural network which can extract data features in a purely data-driven way to discover the hidden laws behind the data. Taking the characteristics of the Max-cut problem into account, we designed the input and output mechanisms of the pointer network model, trained the model with supervised learning and reinforcement learning, and evaluated its performance. Through experiments, we show that our model can be applied well to solving large-scale Max-cut problems. Our experimental results also suggest that the new method will further encourage broader exploration of deep neural networks for large-scale combinatorial optimization problems.
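For concreteness, a short sketch of the Max-cut objective that any such model must optimize: given an edge-weight matrix and a binary node partition, the cut value is the total weight of edges crossing the partition. In the reinforcement-learning setting this quantity would serve as the reward signal; the function name is illustrative.

```python
import numpy as np

def cut_value(weights, partition):
    """weights: symmetric (n, n) edge-weight matrix; partition: length-n 0/1 array."""
    partition = np.asarray(partition)
    crossing = partition[:, None] != partition[None, :]   # True where an edge crosses
    return weights[crossing].sum() / 2.0                   # each edge is counted twice

# Tiny example: a 4-cycle with unit weights; separating opposite corners cuts all 4 edges.
w = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(cut_value(w, [0, 1, 0, 1]))   # 4.0 -- the maximum cut for this graph
```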


2020 ◽  
Vol 10 (16) ◽  
pp. 5640
Author(s):  
Jingyu Yao ◽  
Shengwu Qin ◽  
Shuangshuang Qiao ◽  
Wenchao Che ◽  
Yang Chen ◽  
...  

Accurate and timely landslide susceptibility mapping (LSM) is essential to effectively reduce the risk of landslides. In recent years, deep learning has been successfully applied to landslide susceptibility assessment due to its strong fitting ability. However, in actual applications, the number of labeled samples is usually not sufficient for the training component. In this paper, a deep neural network model based on semi-supervised learning (SSL-DNN) for landslide susceptibility is proposed, which makes full use of a large amount of spatial information (unlabeled data) together with the limited labeled data in the region to train the model. Taking Jiaohe County in Jilin Province, China as an example, the landslide inventory from 2000 to 2017 was collected and 12 meteorological, geographical, and human explanatory factors were compiled. Meanwhile, supervised models such as a deep neural network (DNN), support vector machine (SVM), and logistic regression (LR) were implemented for comparison. Then, the landslide susceptibility was mapped, and a series of evaluation measures such as class accuracy, predictive rate curves (AUC), and information gain ratio (IGR) were calculated to compare the predictions of the models and factors. Experimental results indicate that the proposed SSL-DNN model (AUC = 0.898) outperformed all the comparison models. Therefore, semi-supervised deep learning can be considered a promising approach for LSM.
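The abstract does not spell out the SSL-DNN training procedure, so the following is a hedged sketch of one common semi-supervised scheme (self-training with pseudo-labels) that is consistent with exploiting abundant unlabeled spatial samples; scikit-learn's MLPClassifier stands in for the DNN, and all arrays are synthetic placeholders, not the Jiaohe County data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(200, 12))          # 12 explanatory factors, labeled sites
y_lab = (X_lab[:, 0] + X_lab[:, 1] > 0).astype(int)   # toy landslide/no-landslide labels
X_unlab = rng.normal(size=(2000, 12))       # abundant unlabeled locations

model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
for _ in range(3):                          # a few self-training rounds
    model.fit(X_lab, y_lab)
    proba = model.predict_proba(X_unlab)
    confident = proba.max(axis=1) > 0.95    # keep only high-confidence pseudo-labels
    X_lab = np.vstack([X_lab, X_unlab[confident]])
    y_lab = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
    X_unlab = X_unlab[~confident]

susceptibility = model.predict_proba(X_unlab)[:, 1]   # per-location landslide probability
```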


2017 ◽  
Vol 48 (1) ◽  
pp. 375-388 ◽  
Author(s):  
Peiju Chang ◽  
Jiangshe Zhang ◽  
Junying Hu ◽  
Zengjie Song

2020 ◽  
Vol 7 (4) ◽  
pp. 727
Author(s):  
Larasati Larasati ◽  
Wisnu Ananta Kusuma ◽  
Annisa Annisa

<p class="Abstrak"><em>Drug repositioning</em> adalah penggunaan senyawa obat yang sudah lolos uji sebelumnya untuk mengatasi penyakit baru selain penyakit awal obat tersebut ditujukan. <em>Drug repositioning </em>dapat dilakukan dengan memprediksi interaksi senyawa obat dengan protein penyakit yang bereaksi positif. Salah satu tantangan dalam prediksi interaksi senyawa dan protein adalah masalah ketidakseimbangan data. <em>Deep semi-supervised learning </em>dapat menjadi alternatif untuk menangani model prediksi dengan data yang tidak seimbang. Proses <em>pre-training </em>berbasis <em>unsupervised learning</em> pada <em>deep semi-supervised learning </em>dapat merepresentasikan input dari <em>unlabeled data</em> (data mayoritas) dengan baik dan mengoptimasi inisialisasi bobot pada <em>classifier</em>. Penelitian ini mengimplementasikan <em>Deep Belief Network</em> (DBN) sebagai <em>pre-training</em> dan <em>Deep Neural Network</em> (DNN) sebagai <em>classifier</em>. Data yang digunakan pada penelitian ini adalah <em>dataset</em> ion channel, GPCR, dan nuclear receptor yang bersumber dari pangkalan data KEGG BRITE, BRENDA, SuperTarget, dan DrugBank. Hasil penelitian ini menunjukkan pada <em>dataset</em> tersebut, <em>pre-training</em> berupa ekstraksi fitur memberikan efek optimasi dilihat dari peningkatan performa model DNN pada akurasi (3-4.5%), AUC (4.5%), <em>precision</em><em> </em>(5.9-6%), dan F-measure (3.8%).</p><p class="Abstrak"> </p><p class="Abstrak"><em><strong>Abstract</strong></em></p><p class="Abstract"><em>Drug repositioning is the reuse of an existing drug to treat a new disease other than its original medical indication. Drug repositioning can be done by predicting the interaction of drug compounds with disease proteins that react positively. One of the challenges in predicting the interaction of compounds and proteins is imbalanced data. Deep semi-supervised learning can be an alternative to handle prediction models with imbalanced data. The unsupervised learning based pre-training process in deep semi-supervised learning can represent input from unlabeled data (majority data) properly and optimize initialization of weights on the classifier. This study implements the Deep Belief Network (DBN) as a pre-training with Deep Neural Network (DNN) as a classifier. The data used in this study are ion channel, GPCR, and nuclear receptor dataset sourced from KEGG BRITE, BRENDA, SuperTarget, and DrugBank databases. The results of this study indicate that pre-training as feature extraction had an optimization effect. This can be seen from DNN performance improvement in accuracy (3-4.5%), AUC (4.5%), precision (5.9-6%), and F-measure (3.8%).<strong></strong></em></p><p class="Abstrak"><em><strong><br /></strong></em></p>


Deep learning has brought a great number of advances to machine learning research and its models, especially in fields like NLP and computer vision. In supervised learning, we must decide on a dataset in advance, train our model completely on it, and then make predictions; if new data samples arrive on which we want the model to make predictions, we must retrain the model from scratch, which is computationally costly. To avoid retraining, we can instead add the new samples on top of the features already learnt by the pre-trained model, an approach called incremental learning. In this paper we propose a system that overcomes catastrophic forgetting by building on a pre-trained model.
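A minimal sketch of the building-on-a-pre-trained-model idea the paragraph describes: freeze the previously learnt feature layers and train only a new head on the new samples, avoiding full retraining. The choice of MobileNetV2 and the class count are illustrative assumptions, not the paper's setup.

```python
from tensorflow import keras

# Pre-trained feature extractor; its weights stay fixed across increments.
base = keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet", pooling="avg")
base.trainable = False                       # keep previously learnt features intact

incremental = keras.Sequential([
    base,
    keras.layers.Dense(5, activation="softmax"),   # new head for the new classes
])
incremental.compile("adam", "sparse_categorical_crossentropy", ["accuracy"])
# incremental.fit(new_x, new_y, epochs=3)    # train only on the newly arrived samples
```

Because only the small head is updated, the frozen base cannot drift away from its earlier knowledge, which is one simple way to limit catastrophic forgetting.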


2021 ◽  
Author(s):  
Hoifung Poon ◽  
Hai Wang ◽  
Hunter Lang

Deep learning has proven effective for various application tasks, but its applicability is limited by the reliance on annotated examples. Self-supervised learning has emerged as a promising direction to alleviate the supervision bottleneck, but existing work focuses on leveraging co-occurrences in unlabeled data for task-agnostic representation learning, as exemplified by masked language model pretraining. In this chapter, we explore task-specific self-supervision, which leverages domain knowledge to automatically annotate noisy training examples for end applications, either by introducing labeling functions for annotating individual instances or by imposing constraints over interdependent label decisions. We first present deep probabilistic logic (DPL), which offers a unifying framework for task-specific self-supervision by composing probabilistic logic with deep learning. DPL represents unknown labels as latent variables and incorporates diverse self-supervision using probabilistic logic to train a deep neural network end-to-end using variational EM. Next, we present self-supervised self-supervision (S4), which adds to DPL the capability to learn new self-supervision automatically. Starting from an initial seed of self-supervision, S4 iteratively uses the deep neural network to propose new self-supervision. These proposals are either added directly (a form of structured self-training) or verified by a human expert (as in feature-based active learning). Experiments on real-world applications such as biomedical machine reading and various text classification tasks show that task-specific self-supervision can effectively leverage domain expertise and often match the accuracy of supervised methods with a tiny fraction of human effort.
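A hedged sketch of the labeling-function idea behind task-specific self-supervision: hand-written domain rules emit noisy votes that are aggregated into training labels for a neural network. DPL resolves such votes with probabilistic logic and variational EM; the plain majority vote and the example rules below are illustrative simplifications, not the chapter's method.

```python
# Each labeling function encodes one piece of domain knowledge; returning
# None means the rule abstains on that example.
def lf_mentions_interaction(text):
    return 1 if "interacts with" in text else None

def lf_negation(text):
    return 0 if "does not" in text or "no effect" in text else None

def lf_binding(text):
    return 1 if "binds" in text else None

LABELING_FUNCTIONS = [lf_mentions_interaction, lf_negation, lf_binding]

def weak_label(text):
    """Aggregate labeling-function votes; None means every rule abstained."""
    votes = [lf(text) for lf in LABELING_FUNCTIONS if lf(text) is not None]
    if not votes:
        return None
    return int(sum(votes) > len(votes) / 2)       # majority vote over the rules

corpus = [
    "Protein A interacts with protein B.",
    "Drug X does not alter expression of gene Y.",
]
labels = [weak_label(t) for t in corpus]          # [1, 0] -- noisy training labels
```

S4's extension would then let the trained network itself propose new rules of this kind, to be accepted directly or vetted by an expert.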


2020 ◽  
Vol 10 (21) ◽  
pp. 7865
Author(s):  
Minjeong Kim ◽  
Daseon Hong ◽  
Sungsu Park

This paper proposes a deep neural network-based guidance (DNNG) law to replace the proportional navigation guidance (PNG) law. This is done by adopting a supervised learning (SL) method using a large amount of simulation data from a missile system with PNG. The proposed DNNG is then compared with PNG, and its performance is evaluated via the hitting rate and an energy function. In addition, a DNN-based guidance law using only the line-of-sight (LOS) rate as input (DNNLG) is introduced and compared with the PNG and DNNG laws. Finally, the behavior of the DNNG and DNNLG laws is examined for initial positions outside the training data.
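A hedged sketch of this supervised setup: the classical PN law a = N · Vc · (LOS rate) labels the data, and a small network is regressed onto the command. The input variables, value ranges, and network size are illustrative assumptions; the paper trains on full missile-system simulation data.

```python
import numpy as np
from tensorflow import keras

N = 3.0                                               # navigation constant
rng = np.random.default_rng(1)
los_rate = rng.uniform(-0.5, 0.5, size=(10000, 1))    # LOS rate, rad/s
v_close = rng.uniform(200.0, 600.0, size=(10000, 1))  # closing velocity, m/s
X = np.hstack([los_rate, v_close])
a_cmd = N * v_close * los_rate                        # PNG acceleration command (label)

dnng = keras.Sequential([
    keras.layers.Dense(32, activation="tanh", input_shape=(2,)),
    keras.layers.Dense(32, activation="tanh"),
    keras.layers.Dense(1),                            # guidance command output
])
dnng.compile("adam", "mse")
dnng.fit(X, a_cmd, epochs=10, batch_size=128, verbose=0)

# The DNNLG variant would use only the LOS rate as input:
# X_lg = los_rate, with the same architecture and input_shape=(1,).
```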

