Designing Forecasting Parameter Algorithm of Environmental Shrimp Using Recurrent Neural Network

Author(s):  
Phat Huu Nguyen ◽  
Quynh Diem Duong ◽  
Minh Van Luong ◽  
Hoang Duc Chu

With the rapid development of science and technology, the study of technologies for environmental forecasting has become important, and in recent years smart technology has been widely applied in aquaculture. Based on this requirement, we focus on predicting the environmental parameters used in shrimp farming, especially for white shrimp, one of the seafood species farmed in our country. In this paper we address a small branch of the identification problem: we propose a method for constructing an algorithm that predicts changes in shrimp-farm environmental parameters and simulates the next parameters from the current ones. The goal of the paper is to reduce the number of parameters of the Recurrent Neural Network (RNN) while maintaining data accuracy. Experimental results show that the proposed algorithm yields an improvement of up to 85 percent when a suitable learning factor of the neural network is selected.
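As a rough illustration of the forecasting setup described above (not the authors' algorithm), the sketch below uses a small GRU that maps a window of recent water-quality readings to the next reading; the feature set (temperature, pH, dissolved oxygen, salinity), the layer sizes, and the random data are assumptions.

```python
# Minimal sketch of next-step forecasting of farm water parameters with a small RNN.
# All names and hyperparameters are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn

class EnvForecaster(nn.Module):
    def __init__(self, n_features=4, hidden=16):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, x):                    # x: (batch, time, n_features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])         # forecast of the next time step

model = EnvForecaster()
window = torch.randn(8, 24, 4)               # 8 sequences of 24 hourly readings
next_reading = model(window)                 # (8, 4) forecast of the next hour
loss = nn.functional.mse_loss(next_reading, torch.randn(8, 4))
loss.backward()                              # one standard gradient-based training step
```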

Author(s):  
E. Yu. Shchetinin

The recognition of human emotions is one of the most relevant and dynamically developing areas of modern speech technology, and recognition of emotions in speech (RER) is its most demanded part. This paper proposes a computer model of emotion recognition based on an ensemble of a bidirectional recurrent neural network with LSTM memory cells and the deep convolutional neural network ResNet18. Computer experiments are carried out on the RAVDESS database of emotional human speech, a data set containing 7356 files. The recordings cover the following emotions: 0 – neutral, 1 – calm, 2 – happiness, 3 – sadness, 4 – anger, 5 – fear, 6 – disgust, 7 – surprise. In total, the database provides 16 classes (8 emotions, split by male and female speakers) for a total of 1440 speech-only samples. To train machine learning algorithms and deep neural networks to recognize emotions, the audio recordings must first be pre-processed so as to extract the main characteristic features of each emotion; this was done using Mel-frequency cepstral coefficients, chroma coefficients, and characteristics of the frequency spectrum of the recordings. Various neural network models for emotion recognition are studied on this data, and machine learning algorithms are used for comparative analysis. The following models were trained in the experiments: logistic regression (LR), a classifier based on the support vector machine (SVM), a decision tree (DT), a random forest (RF), gradient boosting over trees (XGBoost), a convolutional neural network CNN, a recurrent neural network RNN (ResNet18), and an ensemble of convolutional and recurrent networks, Stacked CNN-RNN. The results show that the neural networks achieved much higher accuracy in recognizing and classifying emotions than the machine learning algorithms used. Of the neural network models presented, the CNN + BLSTM ensemble showed the highest accuracy.
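The following sketch illustrates the preprocessing and ensemble idea outlined above; it is not the authors' code, and the layer widths, kernel size, and synthetic test clip are assumptions. MFCC and chroma features are stacked per frame and fed to a small convolutional layer followed by a bidirectional LSTM.

```python
# Illustrative sketch of MFCC/chroma extraction and a stacked CNN-BLSTM classifier.
import numpy as np
import librosa
import torch
import torch.nn as nn

sr = 22050
y = np.sin(2 * np.pi * 220 * np.arange(sr) / sr).astype(np.float32)  # stand-in clip
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)        # (40, frames)
chroma = librosa.feature.chroma_stft(y=y, sr=sr)          # (12, frames)
feats = np.concatenate([mfcc, chroma], axis=0).T          # (frames, 52)

class CnnBlstm(nn.Module):
    def __init__(self, n_feats=52, n_classes=16):
        super().__init__()
        self.conv = nn.Conv1d(n_feats, 64, kernel_size=5, padding=2)
        self.lstm = nn.LSTM(64, 64, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):                      # x: (batch, frames, n_feats)
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        out, _ = self.lstm(h)
        return self.fc(out[:, -1])             # logits over the 16 emotion classes

x = torch.tensor(feats, dtype=torch.float32).unsqueeze(0)  # one utterance
logits = CnnBlstm()(x)
```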


2017 ◽  
Author(s):  
Michelle J Wu ◽  
Johan OL Andreasson ◽  
Wipapat Kladwang ◽  
William J Greenleaf ◽  
Rhiju Das ◽  
...  

RNA is a functionally versatile molecule that plays key roles in genetic regulation and in emerging technologies to control biological processes. Computational models of RNA secondary structure are well developed but often fall short in making quantitative predictions of the behavior of multi-RNA complexes. Recently, large datasets characterizing hundreds of thousands of individual RNA complexes have emerged as rich sources of information about RNA energetics. Meanwhile, advances in machine learning have enabled the training of complex neural networks from large datasets. Here, we assess whether a recurrent neural network model, Ribonet, can learn from high-throughput binding data, using simulation and experimental studies to test model accuracy and to determine whether it learned meaningful information about the biophysics of RNA folding. We began by evaluating the model on energetic values predicted by the Turner model to assess whether the neural network could learn a representation that recovers known biophysical principles. First, we trained Ribonet to predict the simulated free energy of an RNA in complex with multiple input RNAs. Our model accurately predicts free energies of new sequences and also shows evidence of having learned base-pairing information, as assessed by in silico double mutant analysis. Next, we extended this model to predict the simulated affinity between an arbitrary RNA sequence and a reporter RNA. While these more indirect measurements precluded the learning of basic principles of RNA biophysics, the resulting model achieved sub-kcal/mol accuracy and enabled the design of simple RNA-input-responsive riboswitches with high activation ratios as predicted by the Turner model from which the training data were generated. Finally, we compiled and trained on an experimental dataset comprising over 600,000 experimental affinity measurements published on the Eterna open laboratory. Though our tests revealed that the model likely did not learn a physically realistic representation of RNA interactions, it nevertheless achieved good performance of 0.76 kcal/mol on test sets with the application of transfer learning and novel sequence-specific data augmentation strategies. These results suggest that recurrent neural network architectures, despite being naïve to the physics of RNA folding, have the potential to capture complex biophysical information. However, more diverse datasets, ideally involving more direct free energy measurements, may be necessary to train de novo predictive models that are consistent with the fundamentals of RNA biophysics.

Author Summary: The precise design of RNA interactions is essential to gaining greater control over RNA-based biotechnology tools, including designer riboswitches and CRISPR-Cas9 gene editing. However, the classic model for the energetics governing these interactions fails to quantitatively predict the behavior of RNA molecules. We developed a recurrent neural network model, Ribonet, to quantitatively predict these values from sequence alone. Using simulated data, we show that this model is able to learn simple base-pairing rules, despite having no a priori knowledge about RNA folding encoded in the network architecture. This model also enables the design of new switching RNAs that are predicted to be effective by the "ground truth" simulated model. We applied transfer learning to retrain Ribonet using hundreds of thousands of RNA-RNA affinity measurements and demonstrate simple data augmentation techniques that improve model performance. At the same time, the diversity of the data currently available sets limits on Ribonet's accuracy. Recurrent neural networks are a promising tool for modeling nucleic acid biophysics and may enable the design of complex RNAs for novel applications.
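As a hedged illustration of this kind of model (not the published Ribonet), the sketch below uses a bidirectional GRU that reads a one-hot-encoded RNA sequence and regresses a single free-energy value; the encoding, layer widths, and example sequence are assumptions.

```python
# Minimal sketch: sequence-to-free-energy regression with a recurrent network.
import torch
import torch.nn as nn

BASES = "ACGU"

def one_hot(seq):
    x = torch.zeros(len(seq), 4)
    for i, b in enumerate(seq):
        x[i, BASES.index(b)] = 1.0
    return x

class EnergyRNN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(4, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                       # x: (batch, length, 4)
        out, _ = self.rnn(x)
        return self.head(out.mean(dim=1)).squeeze(-1)   # predicted dG in kcal/mol

seq = one_hot("GGGAAACCC").unsqueeze(0)
dg_pred = EnergyRNN()(seq)                      # would be trained against Turner-model dG
```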


2008 ◽  
Vol 20 (3) ◽  
pp. 844-872 ◽  
Author(s):  
Youshen Xia ◽  
Mohamed S. Kamel

The constrained L1 estimation is an attractive alternative to both the unconstrained L1 estimation and the least square estimation. In this letter, we propose a cooperative recurrent neural network (CRNN) for solving L1 estimation problems with general linear constraints. The proposed CRNN model combines four individual neural network models automatically and is suitable for parallel implementation. As a special case, the proposed CRNN includes two existing neural networks for solving unconstrained and constrained L1 estimation problems, respectively. Unlike existing neural networks with penalty parameters for solving the constrained L1 estimation problem, the proposed CRNN is guaranteed to converge globally to the exact optimal solution without any additional condition. Compared with conventional numerical algorithms, the proposed CRNN has a low computational complexity and can deal with the L1 estimation problem with degeneracy. Several applied examples show that the proposed CRNN can obtain more accurate estimates than several existing algorithms.
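For readers unfamiliar with the underlying problem, the sketch below solves a small constrained L1 estimation instance, min ||Ax - b||_1 subject to Gx <= h, via the standard linear programming reformulation; it is a conventional baseline for comparison, not the proposed CRNN, and the data are synthetic.

```python
# Baseline sketch: constrained L1 estimation recast as a linear program with
# auxiliary variables t >= |Ax - b|, solved with scipy's linprog.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))
b = A @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=20)
G = np.array([[1.0, 1.0, 1.0]])                 # example linear constraint Gx <= h
h = np.array([1.0])

m, n = A.shape
c = np.concatenate([np.zeros(n), np.ones(m)])   # minimize sum of t over variables [x; t]
A_ub = np.block([[A, -np.eye(m)],               #  Ax - t <= b
                 [-A, -np.eye(m)],              # -Ax - t <= -b
                 [G, np.zeros((1, m))]])        #  Gx      <= h
b_ub = np.concatenate([b, -b, h])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)] * m)
x_hat = res.x[:n]                               # constrained L1 estimate
```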


2022 ◽  
Vol 14 (4) ◽  
pp. 5-12
Author(s):  
Ol'ga Ermilina ◽  
Elena Aksenova ◽  
Anatoliy Semenov

The paper formalizes and constructs a model of the electrical discharge machining process. The process is described using a T-shaped equivalent circuit containing an RLC circuit, and the transfer function of the proposed equivalent circuit is determined. A problem is then formulated, and an algorithm is proposed, for neural network parametric identification of the T-shaped equivalent circuit: a computational experiment is carried out, training samples are formed on its basis, and the dynamic and static neural networks used in the identification problem are subsequently trained. The process was simulated in Simulink (MATLAB). The acceptable agreement of the calculated data with the experimental data shows that the proposed model of electrical discharge machining reflects the real electromagnetic processes occurring in the interelectrode gap.
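To make the sample-generation step concrete, the sketch below produces training samples from the step response of a simple series RLC stage; the paper's T-shaped equivalent circuit has a different transfer function, so this is only an illustrative stand-in, and the component values are arbitrary assumptions.

```python
# Illustrative sketch: generating (parameters, response) training pairs from an
# RLC transfer function, the kind of data a parametric-identification network uses.
import numpy as np
from scipy import signal

def rlc_step_response(R, L, C, t):
    # Voltage across the capacitor of a series RLC stage: H(s) = 1 / (L*C*s^2 + R*C*s + 1)
    sys = signal.TransferFunction([1.0], [L * C, R * C, 1.0])
    _, y = signal.step(sys, T=t)
    return y

t = np.linspace(0, 1e-3, 500)
samples = []                                    # training set for the identification network
for R in (0.5, 1.0, 2.0):
    for L in (1e-4, 2e-4):
        for C in (1e-6, 2e-6):
            samples.append(((R, L, C), rlc_step_response(R, L, C, t)))
```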


2020 ◽  
Author(s):  
Rahil Sarikhani ◽  
Farshid Keynia

The Cognitive Radio (CR) network was introduced as a promising approach to utilizing spectrum holes. Spectrum sensing is the first stage of this utilization, and it can be improved through cooperation, namely Cooperative Spectrum Sensing (CSS), in which several Secondary Users (SUs) collaborate to detect the presence of the Primary User (PU). In this paper, Deep Learning (DL) is used to improve the accuracy of detection. To make the approach more practical, a Recurrent Neural Network (RNN) is used, since there is memory in the channel and in the states of the PUs in the network. The proposed RNN is compared with a Convolutional Neural Network (CNN) and shows clear advantages over it, as demonstrated by simulation.
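A hedged sketch of the sensing idea (not the authors' network): an LSTM reads a sequence of energy-like measurements reported by several secondary users and decides whether the primary user is present. The number of SUs, sequence length, and toy signal model are assumptions.

```python
# Minimal sketch of cooperative spectrum sensing posed as sequence classification.
import torch
import torch.nn as nn

n_sus, seq_len = 5, 32                          # 5 cooperating SUs, 32 sensing slots

class SensingRNN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_sus, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 2)          # classes: PU absent / PU present

    def forward(self, x):                       # x: (batch, seq_len, n_sus)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])

h0 = torch.randn(64, seq_len, n_sus)            # hypothesis H0: noise only
h1 = torch.randn(64, seq_len, n_sus) + 0.5      # hypothesis H1: noise plus PU energy (toy)
x = torch.cat([h0, h1])
y = torch.cat([torch.zeros(64), torch.ones(64)]).long()
loss = nn.functional.cross_entropy(SensingRNN()(x), y)   # one training step's loss
```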


2019 ◽  
Vol 5 (12) ◽  
pp. eaay6946 ◽  
Author(s):  
Tyler W. Hughes ◽  
Ian A. D. Williamson ◽  
Momchil Minkov ◽  
Shanhui Fan

Analog machine learning hardware platforms promise to be faster and more energy efficient than their digital counterparts. Wave physics, as found in acoustics and optics, is a natural candidate for building analog processors for time-varying signals. Here, we identify a mapping between the dynamics of wave physics and the computation in recurrent neural networks. This mapping indicates that physical wave systems can be trained to learn complex features in temporal data, using standard training techniques for neural networks. As a demonstration, we show that an inverse-designed inhomogeneous medium can perform vowel classification on raw audio signals as their waveforms scatter and propagate through it, achieving performance comparable to a standard digital implementation of a recurrent neural network. These findings pave the way for a new class of analog machine learning platforms, capable of fast and efficient processing of information in its native domain.
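The mapping can be made concrete with a toy example: below, a 1-D scalar wave equation is stepped with finite differences, and each time step plays the role of one recurrent update. The grid size, wave-speed profile, and source are illustrative assumptions, not the inverse-designed medium of the paper.

```python
# Sketch: the finite-difference wave update u(t+1) = 2u(t) - u(t-1) + (c*dt/dx)^2 * lap(u(t))
# acts like a recurrent cell whose hidden state is the field on the grid.
import numpy as np

nx, nt = 200, 400
c = np.ones(nx)
c[80:120] = 0.6                                 # inhomogeneous region (the "trainable" medium)
dx, dt = 1.0, 0.5                               # satisfies the CFL condition c*dt/dx <= 1
u_prev, u = np.zeros(nx), np.zeros(nx)
probe = []
for t in range(nt):
    lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)
    u_next = 2 * u - u_prev + (c * dt / dx) ** 2 * lap
    u_next[0] += np.sin(0.2 * t)                # injected input signal at one boundary
    u_prev, u = u, u_next
    probe.append(u[150])                        # field at a probe point acts as the read-out
```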


2020 ◽  
Vol 14 ◽  
Author(s):  
Luis Arturo Soriano ◽  
Erik Zamora ◽  
J. M. Vazquez-Nicolas ◽  
Gerardo Hernández ◽  
José Antonio Barraza Madrigal ◽  
...  

A Proportional Integral Derivative (PID) controller is commonly used for tasks such as position tracking in industrial robot manipulators; however, over time the PID integral gain causes degradation within the controller, which reduces stability and bandwidth. A Proportional Derivative (PD) controller has been proposed to deal with the increase in integral gain, but it is limited if gravity is not compensated for. In practice, the non-linearities of the dynamic system are frequently unknown or hard to obtain. Adaptive controllers are online schemes used to deal with systems that present non-linear and uncertain dynamics; they use measured data of the system trajectory to learn and compensate for uncertainties and external disturbances. However, these techniques can adopt more efficient learning methods to improve their performance. In this work, a nominal control law is used to achieve sub-optimal performance, and a scheme based on a cascade neural network is implemented as a non-linear compensation whose task is to improve the performance of the nominal controller. The main contributions of this work are the neural compensation based on cascade neural networks and the function used to update the weights of the neural network. The algorithm is implemented using radial basis function neural networks and a recompense function that leads to longer traces for the identification problem. A two-degree-of-freedom robot manipulator is used to validate the proposed scheme and compare it with conventional PD control compensation.
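The sketch below illustrates the general idea of adding a learned compensation term to a PD law, here with a single-joint toy plant and a radial basis function network adapted from the tracking error; it is not the authors' cascade scheme, and the plant, gains, and adaptation law are assumptions.

```python
# Toy sketch: PD control plus an RBF-network term that learns to cancel an unknown
# gravity-like disturbance while the joint tracks a slow sinusoidal reference.
import numpy as np

centers = np.linspace(-1.0, 1.0, 9)             # RBF centers over the tracking error
w = np.zeros_like(centers)                      # RBF output weights (learned online)
kp, kd, eta, dt = 20.0, 5.0, 0.5, 0.01
q, dq = 0.0, 0.0                                # joint angle and velocity

def rbf(e):
    return np.exp(-((e - centers) ** 2) / 0.1)

for k in range(2000):
    q_des = np.sin(0.01 * k)                    # desired trajectory
    e, de = q_des - q, -dq                      # error and its rate (slow reference neglected)
    tau = kp * e + kd * de + w @ rbf(e)         # PD law plus neural compensation
    w += eta * rbf(e) * e * dt                  # simple gradient-like adaptation rule
    disturbance = 2.0 * np.sin(q)               # unknown gravity-like term
    ddq = tau - disturbance - 0.5 * dq          # toy 1-DOF dynamics (unit inertia)
    dq += ddq * dt
    q += dq * dt
```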


2019 ◽  
Vol 9 (16) ◽  
pp. 3391 ◽  
Author(s):  
Santiago Pascual ◽  
Joan Serrà ◽  
Antonio Bonafonte

Conversion from text to speech relies on the accurate mapping from linguistic to acoustic symbol sequences, for which current practice employs recurrent statistical models such as recurrent neural networks. Despite the good performance of such models (in terms of low distortion in the generated speech), their recursive structure with intermediate affine transformations tends to make them slow to train and to sample from. In this work, we explore two different mechanisms that enhance the operational efficiency of recurrent neural networks, and study their performance–speed trade-off. The first mechanism is based on the quasi-recurrent neural network, where expensive affine transformations are removed from temporal connections and placed only on feed-forward computational directions. The second mechanism includes a module based on the transformer decoder network, designed without recurrent connections but emulating them with attention and positioning codes. Our results show that the proposed decoder networks are competitive in terms of distortion when compared to a recurrent baseline, whilst being significantly faster in terms of CPU and GPU inference time. The best performing model is the one based on the quasi-recurrent mechanism, reaching the same level of naturalness as the recurrent neural network based model with a speedup of 11.2 on CPU and 3.3 on GPU.
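As an illustration of the first mechanism, the sketch below implements a quasi-recurrent layer in which convolutions produce the gates and only an elementwise pooling runs sequentially; it is not the paper's code, and the causal-padding choice and sizes are assumptions.

```python
# Sketch of a quasi-recurrent layer: the expensive affine work is done by a
# convolution over time, and the recurrence is reduced to elementwise gating.
import torch
import torch.nn as nn

class QRNNLayer(nn.Module):
    def __init__(self, d_in, d_hid, k=2):
        super().__init__()
        self.conv = nn.Conv1d(d_in, 3 * d_hid, kernel_size=k, padding=k - 1)
        self.d_hid = d_hid

    def forward(self, x):                               # x: (batch, time, d_in)
        g = self.conv(x.transpose(1, 2))[..., : x.size(1)]   # trim to keep causality
        z, f, o = g.transpose(1, 2).chunk(3, dim=-1)
        z, f, o = torch.tanh(z), torch.sigmoid(f), torch.sigmoid(o)
        c = torch.zeros(x.size(0), self.d_hid)
        hs = []
        for t in range(x.size(1)):                      # cheap elementwise recurrence
            c = f[:, t] * c + (1 - f[:, t]) * z[:, t]
            hs.append(o[:, t] * c)
        return torch.stack(hs, dim=1)

h = QRNNLayer(80, 128)(torch.randn(4, 50, 80))          # e.g. linguistic-feature frames
```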


2009 ◽  
Vol 21 (11) ◽  
pp. 3214-3227
Author(s):  
James Ting-Ho Lo

By a fundamental neural filtering theorem, a recurrent neural network with fixed weights is known to be capable of adapting to an uncertain environment. This letter reports some mathematical results on the performance of such adaptation for series-parallel identification of a dynamical system as compared with the performance of the best series-parallel identifier possible under the assumption that the precise value of the uncertain environmental process is given. In short, if an uncertain environmental process is observable (not necessarily constant) from the output of a dynamical system or constant (not necessarily observable), then a recurrent neural network exists as a series-parallel identifier of the dynamical system whose output approaches the output of an optimal series-parallel identifier using the environmental process as an additional input.
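The series-parallel structure referred to above can be sketched as follows: the identifier predicts the next plant output from the measured past output and input, here with a small recurrent cell; the toy plant and dimensions are assumptions.

```python
# Sketch of a series-parallel identifier: measured outputs (not the model's own
# predictions) are fed back as inputs to the recurrent cell at each step.
import torch
import torch.nn as nn

class SeriesParallelIdentifier(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.cell = nn.GRUCell(2, hidden)               # inputs: measured y(t), u(t)
        self.out = nn.Linear(hidden, 1)

    def forward(self, y_meas, u):                       # 1-D tensors of equal length
        h = torch.zeros(1, self.cell.hidden_size)
        preds = []
        for t in range(len(u) - 1):
            xt = torch.stack([y_meas[t], u[t]]).unsqueeze(0)   # (1, 2)
            h = self.cell(xt, h)
            preds.append(self.out(h))                   # one-step-ahead estimate of y(t+1)
        return torch.cat(preds).squeeze(-1)

u = torch.sin(torch.linspace(0, 6.28, 50))              # toy plant input
y = 0.1 * torch.cumsum(u, dim=0)                        # toy plant output
y_hat = SeriesParallelIdentifier()(y, u)                # compare against y[1:] during training
```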

