Speed-up of learning in second order neural networks and its application to model synthesis of electrical devices

Author(s):  
J. Wilk ◽  
E. Wilk ◽  
B. Morgenstern

Mathematics ◽
2021 ◽  
Vol 9 (11) ◽  
pp. 1159
Author(s):  
Shyam Sundar Santra ◽  
Omar Bazighifan ◽  
Mihai Postolache

Neutral differential equations arise when modeling many problems and phenomena in continuous applications such as electrodynamics, neural networks, quantum mechanics, electromagnetism, time-symmetric fields, and fluid dynamics; it is therefore interesting to study the qualitative behavior of solutions of such equations. In this study, we obtain new sufficient conditions for the oscillation of solutions of second-order delay differential equations with sub-linear neutral terms. The results improve and complement relevant results in the literature. Finally, we present an example validating the main results, and an open problem is included.
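The abstract does not write out the equation class; a commonly studied canonical form for a second-order delay differential equation with a sub-linear neutral term (an assumption for illustration, not necessarily the exact class treated in the paper) is:

```latex
% Assumed canonical form (illustrative): a second-order delay
% equation with a sub-linear neutral term.
\begin{equation}
  \bigl( r(t)\, z'(t) \bigr)' + q(t)\, x^{\beta}\!\bigl(\sigma(t)\bigr) = 0,
  \qquad t \ge t_0,
\end{equation}
\begin{equation}
  z(t) = x(t) + p(t)\, x^{\alpha}\!\bigl(\tau(t)\bigr),
  \qquad 0 < \alpha < 1,
\end{equation}
% tau(t) <= t and sigma(t) <= t are the delay arguments; the
% exponent alpha < 1 is what makes the neutral term sub-linear.
```

Oscillation criteria for such equations give conditions on r, p, q, and the delays under which every solution has arbitrarily large zeros.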


2015 ◽  
Vol 164 ◽  
pp. 252-261 ◽  
Author(s):  
Wu Yang ◽  
Yan-Wu Wang ◽  
Zhi-Gang Zeng ◽  
Ding-Fu Zheng

2020 ◽  
Vol 32 (12) ◽  
pp. 2557-2600
Author(s):  
Ruizhi Chen ◽  
Ling Li

Spiking neural networks (SNNs), which transmit spikes in an event-driven manner, consume ultra-low power on neuromorphic chips. However, training deep SNNs remains challenging compared to convolutional neural networks (CNNs): SNN training algorithms have not yet matched CNN performance. In this letter, we aim to understand the intrinsic limitations of SNN training in order to design better algorithms. First, the pros and cons of typical SNN training algorithms are analyzed, and the spatiotemporal backpropagation (STBP) algorithm is found to have potential for training deep SNNs owing to its simplicity and fast convergence. Next, the main bottlenecks of the STBP algorithm are analyzed, and three conditions for training deep SNNs with STBP are derived. By analyzing the connection between CNNs and SNNs, we propose a weight initialization algorithm that satisfies the three conditions. Moreover, we propose an error minimization method and a modified loss function to further improve training performance. Experimental results show that the proposed method achieves 91.53% accuracy on the CIFAR10 data set (a 1% increase over the STBP algorithm) and reduces training on the MNIST data set to 15 epochs (over 13 times faster than STBP). The proposed method also decreases classification latency by over 25 times compared to CNN-SNN conversion algorithms. In addition, it works robustly for very deep SNNs, whereas the STBP algorithm fails in a 19-layer SNN.
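STBP trains SNNs by backpropagating through both the layer (spatial) and time-step (temporal) dimensions, replacing the non-differentiable spike function with a surrogate gradient. A minimal PyTorch sketch of a leaky integrate-and-fire step with a rectangular surrogate (constants and names are illustrative, not the paper's code):

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient (STBP-style)."""

    @staticmethod
    def forward(ctx, membrane, threshold=1.0):
        ctx.save_for_backward(membrane)
        ctx.threshold = threshold
        return (membrane >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        membrane, = ctx.saved_tensors
        # Rectangular window around the threshold approximates dS/dU.
        surrogate = (torch.abs(membrane - ctx.threshold) < 0.5).float()
        return grad_output * surrogate, None

def lif_step(x, membrane, spike, decay=0.2, threshold=1.0):
    """One leaky integrate-and-fire step; reset is gated by the last spike."""
    membrane = decay * membrane * (1.0 - spike) + x
    spike = SpikeFn.apply(membrane, threshold)
    return membrane, spike
```

Unrolling `lif_step` over time and backpropagating through the resulting graph is what gives STBP its spatiotemporal credit assignment; the width of the surrogate window is one of the knobs that the derived training conditions constrain.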


2022 ◽  
Vol 6 (1) ◽  
Author(s):  
Marco Rossi ◽  
Sofia Vallecorsa

Abstract In this work, we investigate different machine learning-based strategies for denoising raw simulation data from the ProtoDUNE experiment. The ProtoDUNE detector is hosted by CERN and aims to test and calibrate the technologies for DUNE, a forthcoming experiment in neutrino physics. The reconstruction workflow consists of converting digital detector signals into high-level physical quantities. We address the first step in reconstruction, namely raw data denoising, leveraging deep learning algorithms. We design two architectures based on graph neural networks, aiming to enhance the receptive field of basic convolutional neural networks. We benchmark this approach against traditional algorithms implemented by the DUNE collaboration, and we test the capability of graph neural network hardware accelerator setups to speed up training and inference.
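The idea of enlarging a CNN's receptive field with a graph layer is that each element aggregates features from non-local neighbors rather than only from a fixed convolution window. A minimal sketch of a k-nearest-neighbor feature aggregation in plain PyTorch (illustrative only; not the ProtoDUNE architecture):

```python
import torch

def knn_graph_conv(x, k=8):
    """Aggregate each node's features from its k nearest neighbors in feature space.

    x: (N, F) tensor of node features (e.g. waveform patches).
    Returns: (N, 2F) tensor of [own, neighbor-mean] features.
    """
    # Pairwise Euclidean distances between feature vectors.
    dist = torch.cdist(x, x)                               # (N, N)
    # Indices of the k closest nodes, excluding self (column 0).
    idx = dist.topk(k + 1, largest=False).indices[:, 1:]   # (N, k)
    neighbors = x[idx]                                     # (N, k, F)
    aggregated = neighbors.mean(dim=1)                     # (N, F)
    # Concatenate local and non-local information, edge-conv style.
    return torch.cat([x, aggregated], dim=-1)
```

Because the neighborhood is recomputed from the features themselves, stacking such layers lets information travel across the whole detector plane in a few hops, which a small convolution kernel cannot do.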


2017 ◽  
Vol 109 (1) ◽  
pp. 29-38 ◽  
Author(s):  
Valentin Deyringer ◽  
Alexander Fraser ◽  
Helmut Schmid ◽  
Tsuyoshi Okita

Abstract Neural networks are prevalent in today's NLP research. Despite their success on different tasks, training times are relatively long. We use Hogwild! to counteract this and show that it is a suitable method for speeding up the training of neural networks of different architectures and complexity. For POS tagging and translation we report considerable training speedups, especially for the latter. We show that Hogwild! can be an important tool for training complex NLP architectures.
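Hogwild! runs several workers that update one shared parameter vector asynchronously and without locks; when updates are sparse enough, collisions are rare and convergence survives. A minimal PyTorch sketch using shared-memory parameters (an illustrative pattern, not the authors' implementation):

```python
import torch
import torch.multiprocessing as mp

def train_worker(model, data, epochs=1, lr=0.1):
    # Each worker has its own optimizer, but the parameters live in
    # shared memory, so updates from all workers interleave lock-free.
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in data:
            optimizer.zero_grad()
            loss = torch.nn.functional.cross_entropy(model(x), y)
            loss.backward()
            optimizer.step()   # no lock: Hogwild!-style update

if __name__ == "__main__":
    model = torch.nn.Linear(100, 10)
    model.share_memory()       # place parameters in shared memory
    data = [(torch.randn(32, 100), torch.randint(0, 10, (32,)))
            for _ in range(50)]
    workers = [mp.Process(target=train_worker, args=(model, data))
               for _ in range(4)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()
```

The design choice is to trade strict update ordering for throughput: four workers give close to a 4x wall-clock speedup as long as gradient collisions stay rare.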


2019 ◽  
Vol 49 (1) ◽  
pp. 14-26 ◽  
Author(s):  
Honggui Han ◽  
Lu Zhang ◽  
Xiaolong Wu ◽  
Junfei Qiao

2021 ◽  
Vol 5 (2) ◽  
pp. 312-318
Author(s):  
Rima Dias Ramadhani ◽  
Afandi Nur Aziz Thohari ◽  
Condro Kartiko ◽  
Apri Junaidi ◽  
Tri Ginanjar Laksana ◽  
...  

Waste comprises goods and materials that have no value in production; in some cases it is disposed of carelessly and damages the environment. In 2019 the Indonesian government recorded 66-67 million tons of waste, up from 64 million tons the previous year. Waste is differentiated by type into organic and inorganic waste. In computer science, the type of waste can be recognized using a camera and the Convolutional Neural Network (CNN) method, a type of neural network that receives input in the form of images. The input is trained with a CNN architecture to produce a model that can recognize the input object. This study optimizes the CNN method to obtain accurate results in identifying types of waste. Optimization is done by tuning several hyperparameters of the CNN architecture: with them, the accuracy reaches 91.2%; without them, it is only 67.6%. Three hyperparameters are used to increase the model's accuracy: dropout, padding, and stride. A dropout rate of 20% is added to reduce overfitting during training, while padding and stride are used to speed up the model training process.
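A minimal PyTorch sketch of a classifier using the three hyperparameters named in the abstract (layer sizes and kernel shapes are assumptions for illustration, not the study's architecture):

```python
import torch.nn as nn

# Illustrative CNN for two-class waste images: padding of 1 preserves
# spatial size at each kernel, stride 2 downsamples (fewer operations,
# faster training), and dropout of 0.2 regularizes against overfitting.
waste_classifier = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),  # padding + stride
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Dropout(p=0.2),          # 20% dropout, as in the abstract
    nn.Linear(64, 2),           # two classes: organic vs. inorganic
)
```

Stride and padding affect speed and accuracy through the feature-map sizes they produce, while dropout only changes training-time behavior; this is why the abstract treats the three as complementary knobs.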

