Compression and Decompression which Use Deep Neural Network

Author(s):  
Jinxin Wei

An auto-encoder that can be split into two parts is designed, and the two parts can work well separately. The top half is an abstract network, trained by supervised learning, which can be used to classify and regress. The bottom half is a concrete network, obtained as the inverse function of the abstract network and trained by self-supervised learning; it can generate the input of the abstract network from a concept or label. The design is tested with TensorFlow on the MNIST dataset: the abstract network is LeNet-5-like, and the concrete network is its inverse. The test shows that lossy compression is achieved, with a large compression ratio of 19.6, and decompression quality is acceptable when classification is treated as regression.
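A minimal sketch of the split design in TensorFlow/Keras: a LeNet-5-like encoder trained on labels (supervised), then a mirrored decoder trained to map the fixed encoder's codes back to images (self-supervised). Layer sizes are illustrative assumptions, not the paper's exact architecture. One plausible reading of the 19.6 figure, likewise an assumption: a 28×28 one-byte-per-pixel MNIST image occupies 784 bytes, a 10-dimensional float32 code occupies 40 bytes, and 784 / 40 = 19.6.

```python
# Hedged sketch of the split auto-encoder: a LeNet-5-like "abstract"
# network (encoder) trained with labels, and a mirrored "concrete"
# network (decoder) trained self-supervised to invert it. Layer sizes
# are illustrative assumptions, not the paper's exact architecture.
import tensorflow as tf

# Abstract network: image -> 10-dimensional class code.
encoder = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(6, 5, padding="same", activation="relu"),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Conv2D(16, 5, activation="relu"),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(120, activation="relu"),
    tf.keras.layers.Dense(84, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Concrete network: 10-dimensional code -> reconstructed image.
decoder = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(84, activation="relu"),
    tf.keras.layers.Dense(120, activation="relu"),
    tf.keras.layers.Dense(7 * 7 * 16, activation="relu"),
    tf.keras.layers.Reshape((7, 7, 16)),
    tf.keras.layers.Conv2DTranspose(6, 5, strides=2, padding="same",
                                    activation="relu"),
    tf.keras.layers.Conv2DTranspose(1, 5, strides=2, padding="same",
                                    activation="sigmoid"),
])

(x, y), _ = tf.keras.datasets.mnist.load_data()
x = x[..., None].astype("float32") / 255.0

# Step 1: supervised training of the abstract network.
encoder.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
encoder.fit(x, y, epochs=1)

# Step 2: self-supervised training of the concrete network to map the
# fixed encoder's codes back to the original images.
codes = encoder.predict(x, batch_size=256)
decoder.compile(optimizer="adam", loss="mse")
decoder.fit(codes, x, epochs=1)
```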


2020 ◽  
Author(s):  
Jinxin Wei

According to kids’ learning process, an auto-encoder which can be split into two parts is designed; the two parts can work well separately. The top half is an abstract network, trained by supervised learning, which can be used to classify and regress. The bottom half is a concrete network, obtained as the inverse function of the abstract network and trained by self-supervised learning; it can generate the input of the abstract network from a concept or label. The network achieves its intended functionality in tests with the MNIST dataset and a convolutional neural network. A round function is added between the abstract network and the concrete network in order to obtain the representative generation of each class, and the generation ability can be increased by adding jump connections and negative feedback. At last, the characteristics of the network are discussed: the input can be changed into any form by the encoder and changed back by the decoder through the inverse function, and the concrete network can be seen as a memory stored in the parameters. Lethe (forgetting) occurs when new knowledge is input, because the training process changes the parameters.
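The round function can be read as snapping the encoder's soft class code to a crisp 0/1 vector before decoding, so the decoder emits one representative image per class. A hedged sketch of that reading, reusing the encoder/decoder halves from the earlier sketch:

```python
# Hedged sketch of the round function between the abstract and concrete
# networks: rounding the soft class code to 0/1 makes the decoder emit
# one representative image per class instead of a per-example
# reconstruction. `encoder`/`decoder` are the halves sketched earlier.
import tensorflow as tf

def reconstruct_rounded(encoder, decoder, images):
    """Encode, round the soft code to 0/1, then decode."""
    soft_code = encoder(images)        # softmax class probabilities
    hard_code = tf.round(soft_code)    # the round function
    return decoder(hard_code)

def generate_from_label(decoder, class_id, num_classes=10):
    """Generate a class representative directly from a label."""
    code = tf.one_hot([class_id], num_classes)  # already a rounded code
    return decoder(code)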


2021 ◽  
Author(s):  
Long Ngo Hoang Truong ◽  
Edward Clay ◽  
Omar E. Mora ◽  
Wen Cheng ◽  
Maninder Kaur ◽  
...  

2021 ◽  
Author(s):  
Jinxin Wei

According to kids’ learning process, an auto-encoder which can be split into two parts is designed; the two parts can work well separately. The top half is an abstract network, trained by supervised learning, which can be used to classify and regress. The bottom half is a concrete network, obtained as the inverse function of the abstract network and trained by self-supervised learning; it can generate the input of the abstract network from a concept or label. The network achieves its intended functionality in tests with the MNIST dataset and a convolutional neural network. A round function is added between the abstract network and the concrete network in order to obtain the representative generation of each class, and the generation ability can be increased by adding jump connections and negative feedback. The characteristics of the network are then discussed: the input can be changed into any form by the encoder and changed back by the decoder through the inverse function, and the concrete network can be seen as a memory stored in the parameters. Lethe (forgetting) occurs when new knowledge is input, because the training process changes the parameters. At last, the applications of the network are discussed: it can be used for logic generation through deep reinforcement learning, and also for language translation, zip and unzip, encryption and decryption, compiling and decompiling, and modulation and demodulation.
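One plausible reading of the "jump connection" that increases generation ability is an ordinary skip connection on the decoder side; the sketch below shows this under assumed shapes (negative feedback, presumably routing reconstruction error back into the input, is omitted for brevity):

```python
# Hedged sketch of a "jump connection" (skip connection) on the decoder
# side, one plausible reading of how generation ability is increased.
# Shapes are assumptions; negative feedback is omitted for brevity.
import tensorflow as tf

inp = tf.keras.Input(shape=(10,))
h1 = tf.keras.layers.Dense(84, activation="relu")(inp)
h2 = tf.keras.layers.Dense(84, activation="relu")(h1)
h2 = tf.keras.layers.Add()([h1, h2])                 # jump connection
out = tf.keras.layers.Dense(28 * 28, activation="sigmoid")(h2)
out = tf.keras.layers.Reshape((28, 28, 1))(out)
decoder_with_jump = tf.keras.Model(inp, out)
```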


2018 ◽  
Author(s):  
Hiroyuki Fukuda ◽  
Kentaro Tomii

Protein contact prediction is a crucially important step for protein structure prediction. To predict a contact, approaches of two types are used: evolutionary coupling analysis (ECA) and supervised learning. ECA uses a large multiple sequence alignment (MSA) of homologous sequences and extracts correlation information between residues. Supervised learning uses ECA results as input features and can produce higher accuracy. As described herein, we present a new approach to contact prediction which can both extract correlation information and predict contacts in a supervised manner directly from the MSA, using a deep neural network (DNN). With the DNN, we can obtain higher accuracy than with earlier ECA methods. Simultaneously, we can weight each sequence in the MSA to eliminate noise sequences automatically in a supervised way. It is expected that combining our method with other meta-learning methods can provide much higher accuracy of contact prediction.
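A toy illustration of the central mechanism described here, learnable per-sequence weights feeding a weighted covariance between alignment columns; this is an assumption-laden sketch of the idea, not the authors' DNN architecture:

```python
# Toy illustration (not the authors' architecture): learnable
# per-sequence weights over an MSA feed a weighted covariance between
# alignment columns, giving one coupling score per residue pair.
import tensorflow as tf

N, L, A = 64, 50, 21   # sequences, alignment length, amino-acid alphabet
msa = tf.one_hot(tf.random.uniform((N, L), 0, A, tf.int32), A)  # toy MSA

seq_logits = tf.Variable(tf.zeros(N))   # learnable sequence weights

def weighted_couplings(msa, seq_logits):
    w = tf.nn.softmax(seq_logits)                   # weights sum to 1
    x = tf.reshape(msa, (N, L * A))
    xc = x - tf.reduce_sum(w[:, None] * x, axis=0)  # weighted centering
    cov = tf.einsum("n,ni,nj->ij", w, xc, xc)       # weighted covariance
    cov = tf.reshape(cov, (L, A, L, A))
    # collapse alphabet dimensions: Frobenius norm per residue pair
    return tf.sqrt(tf.reduce_sum(tf.square(cov), axis=[1, 3]))  # (L, L)
```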


Mathematics ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. 298 ◽  
Author(s):  
Shenshen Gu ◽  
Yue Yang

The Max-cut problem is a well-known combinatorial optimization problem with many real-world applications. However, the problem has been proven to be non-deterministic polynomial-hard (NP-hard), which means that exact solution algorithms are not suitable for large-scale situations, as obtaining a solution is too time-consuming. Therefore, designing heuristic algorithms is a promising but challenging direction for effectively solving large-scale Max-cut problems. For this reason, in this paper we propose a method which combines a pointer network with two deep learning strategies (supervised learning and reinforcement learning) to address this challenge. A pointer network is a sequence-to-sequence deep neural network which can extract data features in a purely data-driven way to discover the hidden laws behind the data. Drawing on the characteristics of the Max-cut problem, we designed the input and output mechanisms of the pointer network model, and we trained the model with both supervised learning and reinforcement learning to evaluate its performance. Through experiments, we show that our model can be applied well to solving large-scale Max-cut problems. Our experimental results also suggest that the new method will encourage broader exploration of deep neural networks for large-scale combinatorial optimization problems.
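A hedged sketch of the pointer-network selection step such a model relies on: at each decoding step, additive attention over the node encodings yields a distribution that "points" at the next node to assign to one side of the cut. The hidden size and masking scheme are assumptions:

```python
# Hedged sketch of the pointer-network selection step: additive
# attention over node encodings yields a distribution that "points" at
# the next node to assign to one side of the cut. The hidden size and
# masking scheme are assumptions.
import tensorflow as tf

d = 128
W1 = tf.keras.layers.Dense(d, use_bias=False)  # transforms node encodings
W2 = tf.keras.layers.Dense(d, use_bias=False)  # transforms decoder state
v = tf.keras.layers.Dense(1, use_bias=False)   # scoring vector

def pointer_step(enc_states, dec_state, assigned_mask):
    """enc_states: (n, d) node encodings; dec_state: (1, d);
    assigned_mask: (n,) bool, True for nodes already placed."""
    scores = tf.squeeze(v(tf.tanh(W1(enc_states) + W2(dec_state))), -1)
    scores = tf.where(assigned_mask, -1e9, scores)  # never re-pick a node
    return tf.nn.softmax(scores)                    # pointer distribution
```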


2020 ◽  
Vol 10 (16) ◽  
pp. 5640
Author(s):  
Jingyu Yao ◽  
Shengwu Qin ◽  
Shuangshuang Qiao ◽  
Wenchao Che ◽  
Yang Chen ◽  
...  

Accurate and timely landslide susceptibility mapping (LSM) is essential to effectively reduce landslide risk. In recent years, deep learning has been successfully applied to landslide susceptibility assessment due to its strong fitting ability. In practical applications, however, the number of labeled samples is usually insufficient for training. In this paper, a deep neural network model based on semi-supervised learning (SSL-DNN) for landslide susceptibility is proposed, which makes full use of the large amount of spatial information (unlabeled data) in the region, together with the limited labeled data, to train the model. Taking Jiaohe County in Jilin Province, China as an example, the landslide inventory from 2000 to 2017 was collected and 12 meteorological, geographical, and human explanatory factors were compiled. Meanwhile, supervised models such as a deep neural network (DNN), support vector machine (SVM), and logistic regression (LR) were implemented for comparison. The landslide susceptibility was then mapped, and a series of evaluation tools such as class accuracy, prediction rate curves with the area under the curve (AUC), and the information gain ratio (IGR) were calculated to compare the models' predictions and the factors' contributions. Experimental results indicate that the proposed SSL-DNN model (AUC = 0.898) outperformed all the comparison models. Therefore, semi-supervised deep learning could be considered a promising approach for LSM.
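The abstract does not spell out the SSL procedure; one common scheme consistent with the description (exploiting abundant unlabeled spatial samples alongside few labels) is self-training with pseudo-labels, sketched below under that assumption:

```python
# Hedged sketch of self-training with pseudo-labels, one common SSL
# scheme consistent with the description; the paper's exact procedure
# may differ. Architecture and thresholds are assumptions.
import numpy as np
import tensorflow as tf

def build_dnn(n_features):
    return tf.keras.Sequential([
        tf.keras.Input(shape=(n_features,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # susceptibility
    ])

def self_train(x_lab, y_lab, x_unlab, rounds=3, thresh=0.95):
    model = build_dnn(x_lab.shape[1])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    for _ in range(rounds):
        model.fit(x_lab, y_lab, epochs=10, verbose=0)
        p = model.predict(x_unlab, verbose=0).ravel()
        sure = (p > thresh) | (p < 1 - thresh)  # confident predictions
        if not sure.any():
            break
        # promote confident unlabeled samples to pseudo-labeled data
        x_lab = np.concatenate([x_lab, x_unlab[sure]])
        y_lab = np.concatenate([y_lab, (p[sure] > 0.5).astype("float32")])
        x_unlab = x_unlab[~sure]
    return model
```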


2017 ◽  
Vol 48 (1) ◽  
pp. 375-388 ◽  
Author(s):  
Peiju Chang ◽  
Jiangshe Zhang ◽  
Junying Hu ◽  
Zengjie Song
