Automation of a Weir System Using the Backpropagation Method to Regulate Water Discharge Based on the Internet of Things

2021 ◽  
Vol 3 (2) ◽  
pp. 73-86
Author(s):  
Ridwan Ridwan ◽  
Maulana Hakim Swistiawan ◽  
Susetyo Bagas Bhaskoro

A weir is a device for controlling and monitoring an entire water-management system and also serves as a flood-prevention measure. The weir is used to handle large water discharges that could cause flooding in an area. Flooding is caused by high rainfall, which makes the water level at the weir rise drastically. The overall design is divided into two parts: an Internet of Things-based monitoring system accessed through a smartphone, and discharge control by moving the sluice gate. Moving the sluice gate involves sensors that read the water level and compute the outgoing discharge, and a controller that acts as the main unit to separate the data, perform the prediction computation, and run the gate-opening control algorithm. The artificial neural network uses a 5-8-1 architecture, with a hidden layer of eight nodes and a single output. The backpropagation method classifies the gate opening with an accuracy of 91.78% during training, and testing yields an error of 12.98%.
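The 5-8-1 architecture described above can be sketched in a few lines. The following Python snippet is an illustrative reconstruction, not the authors' code; the feature set, learning rate, and epoch count are assumptions.

```python
# A minimal sketch (not the authors' code) of a 5-8-1 backpropagation network
# as described in the abstract: 5 inputs, one hidden layer of 8 nodes, 1 output.
# Feature names and hyperparameters are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# X: (n, 5) rows of assumed features (e.g. water level, rainfall, discharge, ...),
# y: (n, 1) normalized gate opening in [0, 1].
def train(X, y, epochs=5000, lr=0.1):
    W1 = rng.normal(scale=0.5, size=(5, 8))   # input -> hidden
    b1 = np.zeros(8)
    W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output
    b2 = np.zeros(1)
    for _ in range(epochs):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backpropagation of the squared error
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)
    return W1, b1, W2, b2
```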

2019 ◽  
Vol 12 (3) ◽  
pp. 156-161 ◽  
Author(s):  
Aman Dureja ◽  
Payal Pahwa

Background: Activation functions play an important role in building deep neural networks, and their choice affects both optimization and the quality of the results. Several activation functions have been introduced in machine learning for many practical applications, but which one should be used in the hidden layers of deep neural networks has not been clearly established. Objective: The primary objective of this analysis was to determine which activation function should be used in the hidden layers of deep neural networks to solve complex non-linear problems. Methods: The comparative model was configured on a two-class (Cat/Dog) dataset. The network used three convolutional layers, each followed by a pooling layer. The dataset was divided into two parts: the first 8000 images were used for training the network and the remaining 2000 images for testing. Results: The experimental comparison was performed by analyzing the network with different activation functions (ReLU, Tanh, SELU, PReLU, ELU) in the hidden layers and recording the validation error and accuracy on the Cat/Dog dataset. Overall, ReLU gave the best performance, with a validation loss of 0.3912 and a validation accuracy of 0.8320 at the 25th epoch. Conclusion: A CNN model with ReLU in its hidden layers (three hidden layers here) gives the best results and improves overall performance in terms of both accuracy and speed. These advantages of ReLU in the hidden layers of a CNN help retrieve images from databases effectively and quickly.
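For readers who want to reproduce the comparison, a sketch of such a three-convolutional-layer network with a swappable hidden activation might look as follows. Filter counts, the 64x64 input size, and the optimizer are assumptions, not details reported in the abstract.

```python
# A minimal Keras sketch (an illustration, not the authors' exact model) of the
# comparison described above: three convolution+pooling blocks whose hidden
# activation can be swapped among ReLU, tanh, SELU and ELU.
from tensorflow.keras import layers, models

def build_cnn(activation="relu"):
    # PReLU is parametric and would need layers.PReLU() instead of a string.
    model = models.Sequential([
        layers.Input(shape=(64, 64, 3)),
        layers.Conv2D(32, 3, activation=activation),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation=activation),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation=activation),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation=activation),
        layers.Dense(1, activation="sigmoid"),   # binary Cat vs. Dog output
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Train each variant for 25 epochs and compare validation loss/accuracy, e.g.:
# for act in ["relu", "tanh", "selu", "elu"]:
#     build_cnn(act).fit(train_ds, validation_data=val_ds, epochs=25)
```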


2020 ◽  
Author(s):  
Dianbo Liu

BACKGROUND Applications of machine learning (ML) in health care can have a great impact on people's lives. At the same time, medical data are usually large, requiring a significant amount of computational resources. Although this may not hinder the wide adoption of ML tools in developed nations, the availability of computational resources can be limited in developing nations and on mobile devices, which can prevent many people from benefiting from advances in ML applications for healthcare. OBJECTIVE In this paper we explored three methods to increase the computational efficiency of either a recurrent neural network (RNN) or a feedforward (deep) neural network (DNN) without compromising accuracy. We used in-patient mortality prediction on an intensive-care dataset as our case study. METHODS We reduced the size of the RNN and DNN by pruning "unused" neurons. Additionally, we modified the RNN structure by adding a hidden layer to the RNN cell while reducing the total number of recurrent layers, lowering the total number of parameters in the network. Finally, we applied quantization to the DNN, forcing the weights to be 8 bits instead of 32 bits. RESULTS We found that all methods increased implementation efficiency, including training speed, memory size, and inference speed, without reducing the accuracy of mortality prediction. CONCLUSIONS These improvements allow sophisticated NN algorithms to be implemented on devices with lower computational resources.
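As an illustration of the third method, a post-training 8-bit weight quantization scheme can be sketched as follows. This is a generic example under an assumed per-tensor scaling, not the paper's implementation.

```python
# A minimal sketch (an assumption, not the paper's code) of post-training 8-bit
# weight quantization: each 32-bit float weight matrix is mapped to int8 plus
# a per-tensor scale, and dequantized on the fly at inference time.
import numpy as np

def quantize_int8(w):
    """Map a float32 weight tensor to (int8 tensor, scale)."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Example: a dense layer y = relu(x @ W) with quantized weights.
W = np.random.randn(64, 32).astype(np.float32)
qW, s = quantize_int8(W)                       # 4x smaller storage than float32
x = np.random.randn(1, 64).astype(np.float32)
y = np.maximum(x @ dequantize(qW, s), 0.0)     # bias omitted for brevity
```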


Energies ◽  
2020 ◽  
Vol 13 (5) ◽  
pp. 1094 ◽  
Author(s):  
Lanjun Wan ◽  
Hongyang Li ◽  
Yiwei Chen ◽  
Changyun Li

To effectively predict rolling bearing faults under different working conditions, a fault prediction method based on a quantum particle swarm optimization (QPSO) backpropagation (BP) neural network and Dempster–Shafer evidence theory is proposed. First, the original vibration signals of the rolling bearing are decomposed by a three-layer wavelet packet transform, and feature vectors for the different states of the rolling bearing are constructed as input data for the BP neural network. Second, the optimal number of hidden-layer nodes of the BP neural network is found automatically by the dichotomy method, which improves the efficiency of selecting the number of hidden-layer nodes. Third, the initial weights and thresholds of the BP neural network are optimized by the QPSO algorithm, which improves the convergence speed and classification accuracy of the BP neural network. Finally, the fault classification results of multiple QPSO-BP neural networks are fused by Dempster–Shafer evidence theory to obtain the final rolling bearing fault prediction model. The experiments demonstrate that different types of rolling bearing faults can be effectively and efficiently predicted under various working conditions.
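The dichotomy step could look roughly like the following sketch, where train_and_score is a hypothetical callback that trains a BP network with a given number of hidden nodes and returns its validation error; the exact search rule used by the authors is not given in the abstract.

```python
# A minimal sketch (an assumption, not the authors' implementation) of a
# dichotomy (bisection-style) search for the number of hidden-layer nodes:
# the interval is repeatedly halved, keeping the half whose probe point
# gives the lower validation error.
def dichotomy_hidden_nodes(train_and_score, lo=4, hi=64):
    """train_and_score(n): hypothetical callback returning validation error
    of a BP network trained with n hidden-layer nodes."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        left, right = (lo + mid) // 2, (mid + hi) // 2
        if train_and_score(left) < train_and_score(right):
            hi = mid    # the better region is the lower half
        else:
            lo = mid    # the better region is the upper half
    return lo if train_and_score(lo) <= train_and_score(hi) else hi
```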


Author(s):  
Volodymyr Shymkovych ◽  
Sergii Telenyk ◽  
Petro Kravets

Abstract This article introduces a method for realizing the Gaussian activation function of radial-basis function (RBF) neural networks in hardware on field-programmable gate arrays (FPGAs). The results of modeling the Gaussian function on FPGA chips of different families are presented, and RBF neural networks of various topologies have been synthesized and investigated. The hardware component implemented with this algorithm is an RBF neural network with four hidden-layer neurons and one output neuron with a sigmoid activation function, realized on an FPGA using 16-bit fixed-point numbers and occupying 1193 lookup tables (LUTs). Each hidden-layer neuron of the RBF network is implemented on the FPGA as a separate computing unit. The speed, measured as the total delay of the network's combinational circuit, was 101.579 ns. The implementation of the Gaussian activation functions of the hidden layer of the RBF network occupies 106 LUTs, and their delay is 29.33 ns. The absolute error is ± 0.005. The Spartan-3 family of chips was used to obtain these results; modeling on chips of other series is also presented in the article. Hardware implementation of RBF neural networks at such speeds allows them to be used in real-time control systems for high-speed objects.
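A software sketch of a table-based fixed-point Gaussian activation, in the spirit of (but not identical to) the FPGA design described above, is shown below; the Q8.8 scaling and table size are assumptions.

```python
# A minimal fixed-point sketch (an illustration, not the authors' FPGA design)
# of the Gaussian RBF activation phi(x) = exp(-((x - c)^2) / (2*sigma^2)),
# using 16-bit words with an assumed Q8.8 scaling and a small lookup table,
# a common way to approximate such functions in FPGA logic.
import numpy as np

FRAC_BITS = 8
SCALE = 1 << FRAC_BITS                      # Q8.8: 16-bit words, 8 fractional bits

# Precompute exp(-t) for t in [0, 8) as a 256-entry table of Q8.8 values.
TABLE = np.round(np.exp(-np.linspace(0, 8, 256, endpoint=False)) * SCALE).astype(np.int32)

def gaussian_rbf_q88(x, c, inv_two_sigma2):
    """All arguments are Q8.8 fixed-point integers."""
    d = x - c
    d2 = (d * d) >> FRAC_BITS                      # (x - c)^2 in Q8.8
    t = (d2 * inv_two_sigma2) >> FRAC_BITS         # (x - c)^2 / (2*sigma^2) in Q8.8
    idx = min((t * 256) >> (FRAC_BITS + 3), 255)   # map t in [0, 8) onto the table
    return int(TABLE[idx])                         # phi(x) in Q8.8

# Example: x = 0.5, c = 0.0, sigma = 1.0  ->  phi ~ exp(-0.125) ~ 0.88
phi = gaussian_rbf_q88(int(0.5 * SCALE), 0, int(0.5 * SCALE)) / SCALE
```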


2021 ◽  
Vol 14 (7) ◽  
pp. 308
Author(s):  
Usha Rekha Chinthapalli

In recent years, cryptocurrency has attracted growing attention from investors, practitioners, and academics. Cryptocurrency was initially designed as a viable implementation of digital currency, and numerous derivatives have since been produced in a range of sectors, including nonmonetary activities, financial transactions, and even capital management. The high volatility of exchange rates is one of the main features of cryptocurrencies. This article presents a way to estimate the probability of cryptocurrency volatility clusters. To this end, the paper explores exponential GARCH (EGARCH) and ANN models, treating bitcoin as a financial asset. More flexible modelling such as ANNs is needed to fit the characteristics of financial variables because of the dynamic, nonlinear association structure between them. For financial forecasting, backpropagation (BP) is among the most popular methods of neural network training, and it is employed here to train the two models and determine which performs best in prediction. The ANN architecture consists of an input layer with N neurons and one hidden layer. This research supports recent theoretical work on crypto-asset return behavior and risk management. Compared with traditional asset classes, the results provide useful data on this behavior, allowing investors to make suitable investment decisions. The study's conclusions are based on a comparison between the dynamic features of cryptocurrencies and FOREX currencies as traditional mass financial assets; the results illustrate how well the probability clusters capture volatility in cryptocurrencies and conventional currencies. The research covers the sample period from August 2017 to August 2020, when cryptocurrency became popular. The methodology was implemented and simulated using EViews and SPSS software, and the performance of the cryptocurrencies is compared with that of FOREX currencies for a better comparative study.
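The EGARCH side of such a comparison can be sketched in Python with the arch package as an illustrative stand-in for the EViews/SPSS workflow used in the study; the data file and column names are hypothetical.

```python
# A minimal Python sketch of fitting an EGARCH(1,1) model to daily crypto
# returns to capture volatility clustering. The paper used EViews and SPSS;
# the "arch" package serves here only as an illustrative equivalent.
import numpy as np
import pandas as pd
from arch import arch_model

# Hypothetical daily close-price series for, e.g., BTC/USD.
prices = pd.read_csv("btc_usd_2017_2020.csv", index_col=0, parse_dates=True)["close"]
returns = 100 * np.log(prices).diff().dropna()     # percentage log returns

# EGARCH(1,1) with an asymmetry (leverage) term and Student-t errors.
model = arch_model(returns, vol="EGARCH", p=1, o=1, q=1, dist="t")
result = model.fit(disp="off")
print(result.summary())

# One-step-ahead conditional volatility forecast.
forecast = result.forecast(horizon=1)
print(forecast.variance.iloc[-1] ** 0.5)
```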


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 711
Author(s):  
Mina Basirat ◽  
Bernhard C. Geiger ◽  
Peter M. Roth

Information plane analysis, describing the mutual information between the input and a hidden layer and between a hidden layer and the target over time, has recently been proposed to analyze the training of neural networks. Since the activations of a hidden layer are typically continuous-valued, this mutual information cannot be computed analytically and must thus be estimated, resulting in apparently inconsistent or even contradicting results in the literature. The goal of this paper is to demonstrate how information plane analysis can still be a valuable tool for analyzing neural network training. To this end, we complement the prevailing binning estimator for mutual information with a geometric interpretation. With this geometric interpretation in mind, we evaluate the impact of regularization and interpret phenomena such as underfitting and overfitting. In addition, we investigate neural network learning in the presence of noisy data and noisy labels.
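The binning estimator discussed above can be sketched as follows for the mutual information between a hidden layer and a discrete target; the bin count and the treatment of whole activation vectors as discrete states are illustrative choices, not the authors' exact setup.

```python
# A minimal sketch (an illustration, not the authors' code) of the binning
# estimator of mutual information I(T; Y) between a hidden layer's continuous
# activations T and discrete labels Y: activations are discretized into equal
# bins and MI is computed from the resulting joint histogram.
import numpy as np

def binned_mutual_information(activations, labels, n_bins=30):
    """activations: (n_samples, n_units) hidden-layer outputs;
    labels: (n_samples,) integer class labels."""
    # Discretize each activation value into one of n_bins equal-width bins.
    edges = np.linspace(activations.min(), activations.max(), n_bins + 1)
    digitized = np.digitize(activations, edges[1:-1])          # per-unit bin ids
    # Treat each distinct row of bin ids as one discrete state of T.
    _, t_states = np.unique(digitized, axis=0, return_inverse=True)

    joint = np.zeros((t_states.max() + 1, labels.max() + 1))
    for t, y in zip(t_states, labels):
        joint[t, y] += 1
    p = joint / joint.sum()
    pt = p.sum(axis=1, keepdims=True)                          # marginal of T
    py = p.sum(axis=0, keepdims=True)                          # marginal of Y
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (pt @ py)[nz])))
```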


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Taolan Zhao ◽  
Yan-Ming Chen ◽  
Yu Li ◽  
Jia Wang ◽  
Siyu Chen ◽  
...  

Abstract Background The folding of proteins is challenging in the highly crowded and sticky environment of a cell. Regulation of translation elongation may play a crucial role in ensuring the correct folding of proteins. Much of our knowledge regarding translation elongation comes from the sequencing of mRNA fragments protected by single ribosomes by ribo-seq. However, larger protected mRNA fragments have been observed, suggesting the existence of an alternative and previously hidden layer of regulation. Results In this study, we performed disome-seq to sequence mRNA fragments protected by two stacked ribosomes, a product of translational pauses during which the 5′-elongating ribosome collides with the 3′-paused one. We detected widespread ribosome collisions that are related to slow ribosome release when stop codons are at the A-site, slow peptide bond formation from proline, glycine, asparagine, and cysteine when they are at the P-site, and slow leaving of polylysine from the exit tunnel of ribosomes. The structure of disomes obtained by cryo-electron microscopy suggests a different conformation from the substrate of the ribosome-associated protein quality control pathway. Collisions occurred more frequently in the gap regions between α-helices, where a translational pause can prevent the folding interference from the downstream peptides. Paused or collided ribosomes are associated with specific chaperones, which can aid in the cotranslational folding of the nascent peptides. Conclusions Therefore, cells use regulated ribosome collisions to ensure protein homeostasis.


2021 ◽  
Vol 118 ◽  
pp. 48-55
Author(s):  
Stefano Marrone ◽  
Cristina Papa ◽  
Carlo Sansone