Wave physics as an analog recurrent neural network

2019 ◽  
Vol 5 (12) ◽  
pp. eaay6946 ◽  
Author(s):  
Tyler W. Hughes ◽  
Ian A. D. Williamson ◽  
Momchil Minkov ◽  
Shanhui Fan

Analog machine learning hardware platforms promise to be faster and more energy efficient than their digital counterparts. Wave physics, as found in acoustics and optics, is a natural candidate for building analog processors for time-varying signals. Here, we identify a mapping between the dynamics of wave physics and the computation in recurrent neural networks. This mapping indicates that physical wave systems can be trained to learn complex features in temporal data, using standard training techniques for neural networks. As a demonstration, we show that an inverse-designed inhomogeneous medium can perform vowel classification on raw audio signals as their waveforms scatter and propagate through it, achieving performance comparable to a standard digital implementation of a recurrent neural network. These findings pave the way for a new class of analog machine learning platforms, capable of fast and efficient processing of information in its native domain.
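To make the mapping concrete, here is a minimal NumPy sketch (not the authors' code) of a one-dimensional finite-difference wave update written as an RNN-style recurrence: the pair of field snapshots plays the role of the hidden state, the spatially varying wave speed c(x) plays the role of the trainable weights, and the injected audio sample is the input. Grid sizes, the source and probe locations, and the intensity readout are illustrative assumptions.

```python
# Illustrative sketch: a finite-difference scalar wave equation update expressed
# as an RNN-like recurrence. The hidden state is (u_t, u_{t-1}); the "weights"
# are the wave-speed distribution c(x); x_t is the injected input sample.
import numpy as np

def wave_rnn_step(u_prev, u_prev2, c, x_t, src_idx, dt=1e-3, dx=1e-2):
    """One recurrent update: u_{t+1} = 2 u_t - u_{t-1} + (c dt/dx)^2 Laplacian(u_t) + input."""
    lap = np.roll(u_prev, -1) + np.roll(u_prev, 1) - 2.0 * u_prev   # 1-D Laplacian (periodic)
    u_next = 2.0 * u_prev - u_prev2 + (c * dt / dx) ** 2 * lap
    u_next[src_idx] += x_t                                          # inject the input signal
    return u_next, u_prev

# Toy usage: propagate a random input sequence through a hypothetical medium c(x).
n, steps = 128, 200
c = np.full(n, 1.0)                  # wave-speed profile (the trainable parameters)
u, u_old = np.zeros(n), np.zeros(n)
signal = np.random.randn(steps) * 0.1
for t in range(steps):
    u, u_old = wave_rnn_step(u, u_old, c, signal[t], src_idx=10)
probe_output = u[100] ** 2           # intensity at a probe point serves as the readout
```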

Author(s):  
E. Yu. Shchetinin

The recognition of human emotions is one of the most active and rapidly developing areas of modern speech technology, and the recognition of emotions in speech (RER) is its most sought-after component. In this paper, we propose a computer model of emotion recognition based on an ensemble of a bidirectional recurrent neural network with LSTM memory cells and the deep convolutional neural network ResNet18. Computational experiments are carried out on the RAVDESS database of emotional human speech. RAVDESS is a dataset of 7,356 files whose recordings are labeled with the following emotions: 0 – neutral, 1 – calm, 2 – happiness, 3 – sadness, 4 – anger, 5 – fear, 6 – disgust, 7 – surprise. In total, the database contains 16 classes (8 emotions, split by male and female speakers) for a total of 1,440 speech-only samples. To train machine learning algorithms and deep neural networks to recognize emotions, the audio recordings must first be pre-processed to extract the characteristic features of each emotion. This was done using Mel-frequency cepstral coefficients, chroma coefficients, and characteristics of the frequency spectrum of the recordings. Various neural network models for emotion recognition are then studied on these data, with classical machine learning algorithms used for comparison. The following models were trained during the experiments: logistic regression (LR), a support vector machine classifier (SVM), a decision tree (DT), a random forest (RF), gradient boosting over trees (XGBoost), a convolutional neural network (CNN), a recurrent neural network RNN (ResNet18), and an ensemble of convolutional and recurrent networks (Stacked CNN-RNN). The results show that the neural networks recognize and classify emotions with much higher accuracy than the classical machine learning algorithms. Of the three neural network models presented, the CNN + BLSTM ensemble showed the highest accuracy.
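As a rough illustration of the feature-extraction step described above, the following Python sketch computes MFCC, chroma, and mel-spectrum summaries with librosa; the file paths, feature counts, and time-averaging are assumptions rather than the authors' exact pipeline.

```python
# Minimal sketch of audio feature extraction (MFCCs, chroma, mel spectrum),
# assuming librosa is available. Paths and feature sizes are placeholders.
import numpy as np
import librosa

def extract_features(path, n_mfcc=40):
    y, sr = librosa.load(path, sr=None)
    mfcc = np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc), axis=1)
    chroma = np.mean(librosa.feature.chroma_stft(y=y, sr=sr), axis=1)
    mel = np.mean(librosa.feature.melspectrogram(y=y, sr=sr), axis=1)
    return np.concatenate([mfcc, chroma, mel])    # one fixed-length vector per recording

# features = np.stack([extract_features(p) for p in ravdess_wav_paths])  # hypothetical path list
```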


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Idris Kharroubi ◽  
Thomas Lim ◽  
Xavier Warin

Abstract We study the approximation of backward stochastic differential equations (BSDEs for short) with a constraint on the gains process. We first discretize the constraint by applying a so-called facelift operator at the times of a grid. We show that this discretely constrained BSDE converges to the continuously constrained one as the mesh of the grid goes to zero. We then focus on the approximation of the discretely constrained BSDE. For that, we adopt a machine learning approach: we show that the facelift can be approximated by an optimization problem over a class of neural networks under constraints on the neural network and its derivative. We then derive an algorithm converging to the discretely constrained BSDE as the number of neurons goes to infinity. We conclude with numerical experiments.
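Purely as an illustration of optimizing over neural networks under a constraint on the network's derivative (and not the authors' facelift scheme), the following PyTorch sketch fits a small network to a placeholder target while penalizing derivative values outside an assumed bound K, with the derivative obtained by automatic differentiation.

```python
# Illustrative sketch: fit a neural network to a target function while
# penalizing violations of a bound on its derivative. Target, bound K,
# and penalty weight are hypothetical choices.
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
K = 1.0                                    # assumed bound on the derivative

for step in range(2000):
    x = torch.rand(256, 1, requires_grad=True)
    u = net(x)
    du = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True)[0]
    target = torch.sin(3.0 * x)            # placeholder for the quantity being approximated
    loss = ((u - target) ** 2).mean() + 10.0 * torch.relu(du.abs() - K).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```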


2014 ◽  
Vol 10 (S306) ◽  
pp. 279-287 ◽  
Author(s):  
Michael Hobson ◽  
Philip Graff ◽  
Farhan Feroz ◽  
Anthony Lasenby

Abstract Machine-learning methods may be used to perform many tasks required in the analysis of astronomical data, including: data description and interpretation, pattern recognition, prediction, classification, compression, inference and many more. An intuitive and well-established approach to machine learning is the use of artificial neural networks (NNs), which consist of a group of interconnected nodes, each of which processes information that it receives and then passes this product on to other nodes via weighted connections. In particular, I discuss the first public release of the generic neural network training algorithm, called SkyNet, and demonstrate its application to astronomical problems, focusing on its use in the BAMBI package for accelerated Bayesian inference in cosmology and on the identification of gamma-ray bursters. The SkyNet and BAMBI packages, which are fully parallelised using MPI, are available at http://www.mrao.cam.ac.uk/software/.
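The following NumPy sketch illustrates the generic feed-forward network described above, where each node forms a weighted sum of its inputs, applies an activation, and passes the result on; the layer sizes and task are arbitrary and are not SkyNet's actual configuration.

```python
# Minimal feed-forward pass: weighted sums plus nonlinearities, layer by layer.
import numpy as np

rng = np.random.default_rng(0)
sizes = [8, 16, 16, 1]                               # input, two hidden layers, output
weights = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(x @ W + b)                       # weighted sum + activation at each node
    return x @ weights[-1] + biases[-1]              # linear output layer

y = forward(rng.normal(size=(5, 8)))                 # 5 example inputs -> 5 predictions
```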


2017 ◽  
Author(s):  
Michelle J Wu ◽  
Johan OL Andreasson ◽  
Wipapat Kladwang ◽  
William J Greenleaf ◽  
Rhiju Das ◽  
...  

Abstract RNA is a functionally versatile molecule that plays key roles in genetic regulation and in emerging technologies to control biological processes. Computational models of RNA secondary structure are well developed but often fall short in making quantitative predictions of the behavior of multi-RNA complexes. Recently, large datasets characterizing hundreds of thousands of individual RNA complexes have emerged as rich sources of information about RNA energetics. Meanwhile, advances in machine learning have enabled the training of complex neural networks from large datasets. Here, we assess whether a recurrent neural network model, Ribonet, can learn from high-throughput binding data, using simulation and experimental studies both to test model accuracy and to determine whether it learned meaningful information about the biophysics of RNA folding. We began by evaluating the model on energetic values predicted by the Turner model to assess whether the neural network could learn a representation that recovered known biophysical principles. First, we trained Ribonet to predict the simulated free energy of an RNA in complex with multiple input RNAs. Our model not only accurately predicts free energies of new sequences but also shows evidence of having learned base-pairing information, as assessed by in silico double-mutant analysis. Next, we extended this model to predict the simulated affinity between an arbitrary RNA sequence and a reporter RNA. While these more indirect measurements precluded the learning of basic principles of RNA biophysics, the resulting model achieved sub-kcal/mol accuracy and enabled the design of simple RNA-input-responsive riboswitches with high activation ratios predicted by the Turner model from which the training data were generated. Finally, we compiled and trained on an experimental dataset comprising over 600,000 experimental affinity measurements published on the Eterna open laboratory. Though our tests revealed that the model likely did not learn a physically realistic representation of RNA interactions, it nevertheless achieved good performance of 0.76 kcal/mol on test sets with the application of transfer learning and novel sequence-specific data augmentation strategies. These results suggest that recurrent neural network architectures, despite being naïve to the physics of RNA folding, have the potential to capture complex biophysical information. However, more diverse datasets, ideally involving more direct free energy measurements, may be necessary to train de novo predictive models that are consistent with the fundamentals of RNA biophysics.

Author Summary: The precise design of RNA interactions is essential to gaining greater control over RNA-based biotechnology tools, including designer riboswitches and CRISPR-Cas9 gene editing. However, the classic model for the energetics governing these interactions fails to quantitatively predict the behavior of RNA molecules. We developed a recurrent neural network model, Ribonet, to quantitatively predict these values from sequence alone. Using simulated data, we show that this model is able to learn simple base-pairing rules, despite having no a priori knowledge about RNA folding encoded in the network architecture. This model also enables the design of new switching RNAs that are predicted to be effective by the "ground truth" simulated model. We applied transfer learning to retrain Ribonet using hundreds of thousands of RNA-RNA affinity measurements and demonstrate simple data augmentation techniques that improve model performance. At the same time, the diversity of currently available data limits Ribonet's accuracy. Recurrent neural networks are a promising tool for modeling nucleic acid biophysics and may enable the design of complex RNAs for novel applications.
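As a sketch of the general architecture class discussed above (not Ribonet itself), the following PyTorch model reads a one-hot-encoded RNA sequence with a recurrent layer and regresses a single free-energy value; the encoding, hidden size, and example sequence are illustrative assumptions.

```python
# Sketch: a recurrent network mapping an RNA sequence to a scalar free energy.
import torch
import torch.nn as nn

BASES = "ACGU"

def one_hot(seq):
    idx = torch.tensor([BASES.index(b) for b in seq])
    return torch.nn.functional.one_hot(idx, num_classes=4).float()

class EnergyRNN(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=4, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, length, 4)
        _, h = self.rnn(x)                 # final hidden state summarizes the sequence
        return self.head(h[-1]).squeeze(-1)

model = EnergyRNN()
dg_pred = model(one_hot("GGGAAACCC").unsqueeze(0))   # predicted free energy for a toy sequence
```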


2021 ◽  
Author(s):  
Wael Alnahari

Abstract In this paper, I propose an iris recognition system that uses deep learning via convolutional neural networks (CNN). Although CNNs are normally trained for machine learning, recognition here is achieved by building a non-trained CNN with multiple layers. The main objective of the code is to identify a test picture's category (i.e., the person's name) with a high accuracy rate after having extracted enough features from training pictures of the same category, which are obtained from a dataset that I added to the code. I used the IITD iris database, which includes 10 iris pictures for each of 223 people.
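One way to read the approach described above is as feature extraction with a randomly initialized, untrained CNN followed by nearest-neighbour matching; the following PyTorch sketch illustrates that idea under assumed image and layer sizes and is not the author's code.

```python
# Sketch: an untrained (random-weight) CNN used as a feature extractor,
# with test images assigned the label of the nearest training feature vector.
import torch
import torch.nn as nn

feature_net = nn.Sequential(                       # random weights, never trained
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
).eval()

@torch.no_grad()
def features(images):                              # images: (N, 1, 64, 64) iris crops (assumed size)
    return nn.functional.normalize(feature_net(images), dim=1)

@torch.no_grad()
def classify(test_imgs, train_imgs, train_labels):  # train_labels: (N,) tensor of person IDs
    sims = features(test_imgs) @ features(train_imgs).T      # cosine similarities
    return train_labels[sims.argmax(dim=1)]                  # nearest-neighbour label
```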


Kursor ◽  
2020 ◽  
Vol 10 (4) ◽  
Author(s):  
Felisia Handayani ◽  
Metty Mustikasari

Sentiment analysis is computational research into the opinions that many people express textually about a particular topic. Twitter is the most popular communication tool among Internet users today for expressing such opinions. Deep learning is a solution that allows computers to learn from experience and understand the world in terms of a hierarchy of concepts; its objective is to replace manual tasks with learning. The development of deep learning has produced a set of algorithms that focus on learning data representations. The recurrent neural network is one of the machine learning methods included in deep learning because the data is processed through multiple layers. An RNN can also recall previous inputs through its internal memory, which makes it suitable for machine learning problems involving sequential data. This study aims to test models built from tweets with positive, negative, and neutral sentiment to determine their accuracy. The models were created using a recurrent neural network and applied to tweet classification to label the sentiment class of individual Indonesian-language tweets. From the experiments conducted, the best results of the RNN method, evaluated with a confusion matrix, were a precision of 0.618, a recall of 0.507, and an accuracy of 0.722 on a dataset of 3,000 tweets with an 80:20 split between training and testing data.
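A minimal PyTorch sketch of an RNN tweet classifier of the kind evaluated above (positive, negative, neutral) is shown below; the vocabulary size, embedding dimension, and tokenization are placeholders rather than the authors' configuration.

```python
# Sketch: embedding + LSTM + linear head for three-class tweet sentiment.
import torch
import torch.nn as nn

class TweetRNN(nn.Module):
    def __init__(self, vocab_size=10000, embed=128, hidden=64, classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed, padding_idx=0)
        self.rnn = nn.LSTM(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, classes)

    def forward(self, token_ids):             # token_ids: (batch, seq_len)
        x = self.embed(token_ids)
        _, (h, _) = self.rnn(x)                # final hidden state keeps sequence context
        return self.out(h[-1])                 # logits over the three sentiment classes

model = TweetRNN()
logits = model(torch.randint(1, 10000, (8, 30)))   # 8 dummy tweets of 30 tokens each
```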


2008 ◽  
Vol 20 (3) ◽  
pp. 844-872 ◽  
Author(s):  
Youshen Xia ◽  
Mohamed S. Kamel

Constrained L1 estimation is an attractive alternative to both unconstrained L1 estimation and least-squares estimation. In this letter, we propose a cooperative recurrent neural network (CRNN) for solving L1 estimation problems with general linear constraints. The proposed CRNN model automatically combines four individual neural network models and is suitable for parallel implementation. As special cases, the proposed CRNN includes two existing neural networks for solving the unconstrained and constrained L1 estimation problems, respectively. Unlike existing penalty-parameter-based neural networks for the constrained L1 estimation problem, the proposed CRNN is guaranteed to converge globally to the exact optimal solution without any additional condition. Compared with conventional numerical algorithms, the proposed CRNN has low computational complexity and can deal with L1 estimation problems with degeneracy. Several applied examples show that the proposed CRNN obtains more accurate estimates than several existing algorithms.
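For reference (and distinct from the proposed CRNN), the constrained L1 estimation problem itself, minimize ||Ax - b||_1 subject to Cx <= d, can be recast as a linear program; the following SciPy sketch solves that reformulation and can serve as a baseline against which a neural solver is compared.

```python
# Baseline sketch: constrained L1 estimation via a linear-programming reformulation.
import numpy as np
from scipy.optimize import linprog

def constrained_l1(A, b, C, d):
    m, n = A.shape
    # Variables z = [x (n), t (m)]; minimize sum(t) with |Ax - b| <= t and Cx <= d.
    cost = np.concatenate([np.zeros(n), np.ones(m)])
    A_ub = np.block([
        [A, -np.eye(m)],                    #  Ax - b <= t
        [-A, -np.eye(m)],                   # -(Ax - b) <= t
        [C, np.zeros((C.shape[0], m))],     #  Cx <= d
    ])
    b_ub = np.concatenate([b, -b, d])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
    return res.x[:n]

x_hat = constrained_l1(np.random.randn(20, 3), np.random.randn(20),
                       np.eye(3), np.ones(3))    # toy example: x <= 1 elementwise
```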


2021 ◽  
Author(s):  
Ruslan Chernyshev ◽  
Mikhail Krinitskiy ◽  
Viktor Stepanenko

This work is devoted to the development of neural networks for the identification of partial differential equations (PDEs) solved in the land surface scheme of the INM RAS Earth System Model (ESM). Atmospheric and climate models are among the research applications most demanding of supercomputing resources. The spatial resolution and the multitude of physical parameterizations used in ESMs continuously increase. Most parameters are still poorly constrained, and many of them cannot be measured directly. To reduce model calibration time, the use of neural networks looks like a promising approach. Neural networks are already in wide use for satellite imagery (Lee et al., 2015; Krinitskiy et al., 2018) and for calibrating parameters of land surface models (Sawada, 2019). Neural networks have also demonstrated high efficiency in solving conventional problems of mathematical physics (Aarts and van der Veer, 2001; Raissi et al., 2020).

We develop neural networks for optimizing the parameters of a set of nonlinear soil heat and moisture transport equations. Development was carried out with Python 3 tools running on GPUs and on the Ascend platform provided by Huawei. Because we use a hybrid approach combining a neural network with classical thermodynamic equations, the main challenge was finding a way to correctly compute the backpropagated gradient of the error function: the model is trained and validated on the same temperature data, while the model output is a heat-equation parameter, which is typically not known. The neural network is trained at runtime against a reference thermodynamic model run with prescribed parameters; each subsequent thermodynamic model step is used for fitting the neural network until the loss function reaches its tolerance. A minimal sketch of this kind of hybrid training loop is given below.

Literature:
1. Aarts, L.P., van der Veer, P. Neural Network Method for Solving Partial Differential Equations. Neural Processing Letters 14, 261–271 (2001). https://doi.org/10.1023/A:1012784129883
2. Raissi, M., Perdikaris, P., Karniadakis, G. Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations. arXiv:1711.10561 (2017).
3. Lee, S.J., Ahn, M.H., Lee, Y. Application of an artificial neural network for a direct estimation of atmospheric instability from a next-generation imager. Adv. Atmos. Sci. 33, 221–232 (2016). https://doi.org/10.1007/s00376-015-5084-9
4. Krinitskiy, M., Verezemskaya, P., Grashchenkov, K., Tilinina, N., Gulev, S., Lazzara, M. Deep Convolutional Neural Networks Capabilities for Binary Classification of Polar Mesocyclones in Satellite Mosaics. Atmosphere 9(11), 426 (2018).
5. Sawada, Y. Machine learning accelerates parameter optimization and uncertainty assessment of a land surface model. arXiv:1909.04196 (2019).
6. Pan, S., et al. Evaluation of global terrestrial evapotranspiration using state-of-the-art approaches in remote sensing, machine learning and land surface modeling. Hydrol. Earth Syst. Sci. 24, 1485–1509 (2020).
7. Chaney, N., Herman, J., Ek, M., Wood, E. Deriving Global Parameter Estimates for the Noah Land Surface Model using FLUXNET and Machine Learning: Improving Noah LSM Parameters. Journal of Geophysical Research: Atmospheres 121 (2016). https://doi.org/10.1002/2016JD024821
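The hybrid training loop sketched below is purely illustrative and is not the INM RAS model code: a small network outputs a thermal diffusivity profile, one explicit finite-difference heat-equation step is taken with it, and the mismatch against reference temperatures is backpropagated through that physical step. The grid, time step, diffusivity scaling, and synthetic reference profiles are all assumptions.

```python
# Illustrative sketch: backpropagating through a finite-difference heat-equation
# step whose diffusivity profile is produced by a neural network.
import math
import torch
import torch.nn as nn

n, dt, dz = 50, 30.0, 0.1                           # grid points, time step (s), layer thickness (m)
depth = torch.linspace(0, 1, n).unsqueeze(1)         # normalized depth as the network input

kappa_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1), nn.Softplus())
opt = torch.optim.Adam(kappa_net.parameters(), lr=1e-3)

def heat_step(T, kappa):
    lap = torch.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2   # interior Laplacian, fixed boundaries
    return T + dt * kappa * lap                           # one explicit time step

# Synthetic stand-in for reference temperature profiles from the thermodynamic model.
T_obs = [torch.sin(torch.pi * depth.squeeze(1)) * math.exp(-0.01 * t) for t in range(20)]

for t in range(len(T_obs) - 1):
    kappa = kappa_net(depth).squeeze(1) * 1e-4       # scaled diffusivity (m^2/s), assumed range
    T_pred = heat_step(T_obs[t], kappa)              # physics step with network-supplied parameters
    loss = ((T_pred - T_obs[t + 1]) ** 2).mean()
    opt.zero_grad()
    loss.backward()                                  # gradient flows through the physical step
    opt.step()
```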


2020 ◽  
Author(s):  
Rahil Sarikhani ◽  
Farshid Keynia

Abstract The Cognitive Radio (CR) network was introduced as a promising approach to utilizing spectrum holes. Spectrum sensing is the first stage of this utilization, and it can be improved through cooperation, namely Cooperative Spectrum Sensing (CSS), in which several Secondary Users (SUs) collaborate to detect the presence of the Primary User (PU). In this paper, Deep Learning (DL) is used to improve the accuracy of detection. To make the approach more practical, a Recurrent Neural Network (RNN) is used, since there is some memory in the channel and in the states of the PUs in the network. The proposed RNN is compared with a Convolutional Neural Network (CNN) and shows clear advantages over it, as demonstrated by simulation.
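As a hedged sketch of the kind of RNN detector discussed above, the following PyTorch model reads a sequence of cooperative energy reports from the secondary users and outputs the probability that the primary user is active; the number of SUs, window length, and hidden size are illustrative.

```python
# Sketch: LSTM-based cooperative spectrum-sensing detector (PU present / absent).
import torch
import torch.nn as nn

class SensingRNN(nn.Module):
    def __init__(self, n_sus=4, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(input_size=n_sus, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, energy_seq):              # (batch, time, n_sus) cooperative energy reports
        _, (h, _) = self.rnn(energy_seq)        # internal memory captures channel history
        return torch.sigmoid(self.out(h[-1]))   # probability that the primary user is active

detector = SensingRNN()
p_active = detector(torch.rand(16, 50, 4))      # 16 sensing windows of 50 time steps each
```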

