A Functionally Separate Autoencoder

Author(s):  
Jinxin Wei

Modeled on the way children learn, an autoencoder is designed that can be split into two parts, and the two parts work well separately. The top half is an abstract network, trained by supervised learning, which can be used for classification and regression. The bottom half is a concrete network, built as the inverse function of the abstract network and trained by self-supervised learning; it can generate the input of the abstract network from a concept or label. The intended functionality is verified on the MNIST dataset with a convolutional neural network. A rounding function is added between the abstract network and the concrete network in order to obtain a representative generation for each class, and the generation ability can be increased further by adding jump connections and negative feedback. Finally, the characteristics of the network are discussed: the encoder can change the input into any form and the decoder can change it back through the inverse function; the concrete network can be seen as memory stored in the parameters; and forgetting (Lethe) occurs when new knowledge is learned, because the training process changes those parameters.
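As a concrete illustration of this two-part design, here is a minimal sketch in TensorFlow/Keras. The layer sizes, fully connected architecture, and one-epoch training are illustrative assumptions, not the paper's actual model: the abstract half is trained with labels, the concrete half is then trained to invert it, and a rounding step between them yields a class-representative generation.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Abstract network: image -> 10 class probabilities (supervised half).
abstract_net = models.Sequential([
    tf.keras.Input(shape=(28, 28)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

# Concrete network: class probabilities -> image (self-supervised half,
# trained to reproduce the original input from the abstract output).
concrete_net = models.Sequential([
    tf.keras.Input(shape=(10,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(28 * 28, activation="sigmoid"),
    layers.Reshape((28, 28)),
])

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255.0

# Step 1: train the abstract half alone, with labels.
abstract_net.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
abstract_net.fit(x_train, y_train, epochs=1, batch_size=128)

# Step 2: with the abstract half fixed, train the concrete half to invert
# its output (the abstract predictions become the concrete half's input).
codes = abstract_net.predict(x_train, batch_size=256)
concrete_net.compile(optimizer="adam", loss="mse")
concrete_net.fit(codes, x_train, epochs=1, batch_size=128)

# Generation: round the code to a (near) one-hot "concept" and decode it,
# which yields a representative image for that class.
representative = concrete_net.predict(np.round(codes[:1]))
```

Because the two halves are trained in separate steps, either one can be used on its own afterwards: the abstract half as a classifier, the concrete half as a label-to-image generator.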


2021 ◽  
Author(s):  
Jinxin Wei

Modeled on the way children learn, an autoencoder is designed that can be split into two parts, and the two parts work well separately. The top half is an abstract network, trained by supervised learning, which can be used for classification and regression. The bottom half is a concrete network, built as the inverse function of the abstract network and trained by self-supervised learning; it can generate the input of the abstract network from a concept or label. The intended functionality is verified on the MNIST dataset with a convolutional neural network. A rounding function is added between the abstract network and the concrete network in order to obtain a representative generation for each class, and the generation ability can be increased further by adding jump connections and negative feedback. The characteristics of the network are then discussed: the encoder can change the input into any form and the decoder can change it back through the inverse function; the concrete network can be seen as memory stored in the parameters; and forgetting (Lethe) occurs when new knowledge is learned, because the training process changes those parameters. Finally, the applications of the network are discussed. The network can be used for logic generation through deep reinforcement learning, and also for language translation, compression and decompression, encryption and decryption, compilation and decompilation, and modulation and demodulation.
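One plausible reading of the "jump connection" mentioned above is sketched below in Keras; the layer sizes and the exact placement of the skip path are assumptions for illustration. An intermediate feature of the abstract half is carried directly into the concrete half, so reconstruction is not limited to what survives the 10-value label code.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = tf.keras.Input(shape=(28, 28))
x = layers.Flatten()(inputs)
hidden = layers.Dense(128, activation="relu")(x)        # intermediate feature
code = layers.Dense(10, activation="softmax")(hidden)   # class-level concept

# Concrete (decoder) half: starts from the code but also receives the
# skipped feature -- the assumed "jump connection".
d = layers.Dense(128, activation="relu")(code)
d = layers.Add()([d, hidden])
out = layers.Dense(28 * 28, activation="sigmoid")(d)
out = layers.Reshape((28, 28))(out)

autoencoder = Model(inputs, [code, out])
autoencoder.compile(
    optimizer="adam",
    loss=["sparse_categorical_crossentropy", "mse"],
)
# Trained jointly on (images, [labels, images]), e.g.:
# autoencoder.fit(x_train, [y_train, x_train], epochs=5, batch_size=128)
```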


2021 ◽  
Vol 248 ◽  
pp. 01012
Author(s):  
Anton Starodub ◽  
Natalia Eliseeva ◽  
Milen Georgiev

The research presented in this paper is in the field of machine learning. Its main object is the learning process of an artificial neural network, with the aim of increasing its efficiency. The proposed algorithm is based on the analysis of retrospective learning data. The dynamics of the changes in the values of the network's weights during training is an important indicator of training efficiency, so the algorithm tracks the changes in the weight gradients: these changes make it possible to understand how actively the network's weights change during training. This knowledge helps to diagnose the training process and to adjust the training parameters. The results of the algorithm can be used when training an artificial neural network, helping to determine the set of measures (actions) needed to optimize the learning process.
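A minimal sketch of the general idea, not the authors' algorithm: per-layer gradient norms are recorded at every training step so that the retrospective learning data can be inspected afterwards. The model, dataset, and the 100-step limit are illustrative assumptions.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255.0
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(128)

gradient_history = []  # one list of per-tensor gradient norms per step
for step, (x, y) in enumerate(dataset.take(100)):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    gradient_history.append([float(tf.norm(g)) for g in grads])
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

# How these norms evolve over the steps shows how actively each layer's
# weights are being updated, which can guide adjustments to the training
# parameters (learning rate, batch size, and so on).
```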


2022 ◽  
Author(s):  
Jinxin Wei

An autoencoder which can be split into two parts is designed, and the two parts can work well separately. The top half is an abstract network, trained by supervised learning, which can be used for classification and regression. The bottom half is a concrete network, built as the inverse function of the abstract network and trained by self-supervised learning; it can generate the input of the abstract network from a concept or label. The design is tested with TensorFlow on the MNIST dataset. The abstract network is similar to LeNet-5, and the concrete network is its inverse. The tests show that lossy compression can be achieved, with a large compression ratio of 19.6, and the decompression quality is acceptable when classification is treated as regression.
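A hedged sketch of the compression setup in Keras: a LeNet-5-like abstract network whose output code is kept as the compressed form, and an inverse-shaped concrete network for decompression. The 40-value code is an assumption chosen only because 784 / 40 equals the reported ratio of 19.6; the paper's actual layer widths may differ.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# LeNet-5-like abstract network (encoder); the 40-unit code layer is an
# illustrative assumption (28 * 28 / 40 = 19.6, the reported ratio).
encoder = models.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(6, 5, activation="tanh", padding="same"),
    layers.AveragePooling2D(2),
    layers.Conv2D(16, 5, activation="tanh"),
    layers.AveragePooling2D(2),
    layers.Flatten(),
    layers.Dense(40, activation="tanh"),   # compressed code
])

# Inverse-shaped concrete network (decoder) for decompression.
decoder = models.Sequential([
    tf.keras.Input(shape=(40,)),
    layers.Dense(16 * 5 * 5, activation="tanh"),
    layers.Reshape((5, 5, 16)),
    layers.UpSampling2D(2),
    layers.Conv2DTranspose(6, 5, activation="tanh"),
    layers.UpSampling2D(2),
    layers.Conv2DTranspose(1, 5, activation="sigmoid", padding="same"),
])

original_size = 28 * 28   # values per input image
code_size = 40            # values stored per compressed image
print("compression ratio:", original_size / code_size)  # 19.6
```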


Author(s):  
Nanang Kasim ◽  
Gibran Satya Nugraha

Arabic is the language of the holy book of Islam, the Qur'an. Learning Arabic by recognizing the shapes of its letters is a very effective method. Handwritten Arabic character recognition has been studied before, with accuracies that vary depending on the research method used. Arabic character recognition faces many challenges, one of which is that handwriting style and letter forms differ from person to person. This study aims to build a machine learning model, to measure the accuracy obtained when recognizing handwritten Arabic characters with a convolutional neural network, and to remedy some shortcomings of previous studies on Arabic character recognition that used CNNs. A convolutional neural network is a classification method in which labels are provided during learning, so it belongs to supervised learning. The data used in this study consist of handwriting collected on A4 HVS paper with a marker pen from two groups, people aged 5 to 20 and people over 20, including both those who had studied Arabic script and those who had not, in order to obtain a variety of handwriting.
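A hedged sketch of the kind of CNN classifier described; the directory path, image size, and the 28-class assumption (one per Arabic letter) are illustrative and not taken from the study.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

train_ds = tf.keras.utils.image_dataset_from_directory(
    "arabic_handwriting/train",   # hypothetical path: one folder per letter
    image_size=(64, 64),
    color_mode="grayscale",
    batch_size=32,
)

model = models.Sequential([
    tf.keras.Input(shape=(64, 64, 1)),
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(28, activation="softmax"),  # assumed: one class per letter
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```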


2021 ◽  
Author(s):  
Chenshuai Bai ◽  
Kaijun Wu ◽  
Dicong Wang ◽  
Hong Li ◽  
Mingjun Yan ◽  
...  

Because the detection performance of the EfficientNet-YOLOv3 target detection algorithm is not very good, this paper proposes a small-target detection method based on a dynamic convolutional neural network. Firstly, dynamic convolution is introduced to replace traditional convolution, which makes the model more robust; secondly, the optimization parameters are continuously adjusted during training to further strengthen the model; finally, the learning rate and batch size are modified during training to prevent overfitting. On the RSOD remote sensing image dataset, the proposed algorithm increases the average precision (AP) by 1.93% and reduces the log-average miss rate (LAMR) by 0.0500 compared with the original EfficientNet-YOLOv3; on the TGRS-HRRSD remote sensing image dataset, it increases the mAP by 0.07% and reduces the mLAMR by 0.0007.
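A simplified sketch of what a dynamic convolution block can look like: several parallel kernels whose outputs are mixed by input-dependent attention weights. This is not the authors' implementation, and the kernel count and toy backbone are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers


class DynamicConv2D(layers.Layer):
    """Mixes several parallel convolutions with input-dependent weights."""

    def __init__(self, filters, kernel_size, num_kernels=4, **kwargs):
        super().__init__(**kwargs)
        self.convs = [layers.Conv2D(filters, kernel_size, padding="same")
                      for _ in range(num_kernels)]
        self.pool = layers.GlobalAveragePooling2D()
        self.attn = layers.Dense(num_kernels, activation="softmax")

    def call(self, x):
        # Attention over the parallel kernels, computed from the input itself.
        weights = self.attn(self.pool(x))              # (batch, num_kernels)
        outputs = tf.stack([conv(x) for conv in self.convs], axis=-1)
        weights = weights[:, None, None, None, :]      # broadcast over H, W, C
        return tf.reduce_sum(outputs * weights, axis=-1)


# Usage: a drop-in replacement for a standard Conv2D inside a backbone.
inputs = tf.keras.Input(shape=(256, 256, 3))
x = DynamicConv2D(32, 3)(inputs)
x = layers.ReLU()(x)
model = tf.keras.Model(inputs, x)
```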


Several methods now exist for retrieving images; TBIR, CBIR and SBIR (semantic-based image retrieval) are among the most significant. In this article we propose an effective CNN tool for image retrieval based on eigenvalues. This work extends our recently proposed CNN-based SBIR scheme into a cyber-forensic tool. Eigenvalues play a prominent role in image retrieval applications: they are useful in measuring and segmenting an image's sharpness and in the compression process. In this research we used the PCA algorithm to generate eigenvalues, together with their corresponding images, from an input image. The generated eigenvalues and corresponding images are used to train AlexNet, a pre-trained deep convolutional neural network (CNN). After training, eigenvalues are given as input to AlexNet (the CNN tool) and the corresponding images are retrieved based on those eigenvalues. Thanks to the AlexNet training, the output images retrieved by their eigenvalues are obtained with an outstanding accuracy of 96.44 percent.
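A hedged sketch of the eigenvalue-extraction step only; the AlexNet training and retrieval stages are not reproduced here, and taking the covariance over pixel columns with k = 32 eigenvalues is an assumption for illustration.

```python
import numpy as np


def image_eigenvalues(image: np.ndarray, k: int = 32) -> np.ndarray:
    """Return the k largest PCA eigenvalues of a grayscale image."""
    rows = image.astype("float64")
    rows = rows - rows.mean(axis=0, keepdims=True)   # center the data
    cov = np.cov(rows, rowvar=False)                 # covariance of pixel columns
    eigvals = np.linalg.eigvalsh(cov)                # symmetric matrix -> real values
    return eigvals[::-1][:k]                         # largest k, descending


# Example: a random array stands in for a real query image.
query = np.random.rand(128, 128)
signature = image_eigenvalues(query)
print(signature.shape)   # (32,)
```

The resulting eigenvalue signature would then serve as the compact descriptor fed to the retrieval network.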

