Advances in photonic reservoir computing

Nanophotonics ◽  
2017 ◽  
Vol 6 (3) ◽  
pp. 561-576 ◽  
Author(s):  
Guy Van der Sande ◽  
Daniel Brunner ◽  
Miguel C. Soriano

We review a novel paradigm that has emerged in analogue neuromorphic optical computing. The goal is to implement a reservoir computer in optics, where information is encoded in the intensity and phase of the optical field. Reservoir computing is a bio-inspired approach especially suited for processing time-dependent information. The reservoir’s complex and high-dimensional transient response to the input signal is capable of universal computation. The reservoir does not need to be trained, which makes it very well suited for optics. As such, much of the promise of photonic reservoirs lies in their minimal hardware requirements, a tremendous advantage over other hardware-intensive neural network models. We review the two main approaches to optical reservoir computing: networks implemented with multiple discrete optical nodes and the continuous system of a single nonlinear device coupled to delayed feedback.
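The principle is easiest to see in software: a fixed random reservoir is driven by the input, and only a linear readout is trained. Below is a minimal echo-state-network sketch of this idea; the reservoir size, scaling, and toy task are illustrative assumptions, not the photonic implementation itself.

```python
import numpy as np

# Minimal echo-state network: only the linear readout W_out is trained;
# the reservoir weights W and input weights W_in stay fixed and random.
rng = np.random.default_rng(0)
N = 200                                      # reservoir size (illustrative)
W_in = rng.uniform(-0.5, 0.5, (N, 1))
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # scale spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u (shape [T])."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in[:, 0] * u_t)  # nonlinear transient response
        states.append(x.copy())
    return np.array(states)

# Toy task: predict u(t+1) from the reservoir state at time t.
u = np.sin(np.linspace(0, 60, 1500))
X, y = run_reservoir(u)[:-1], u[1:]
W_out = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary least squares
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```

Because training reduces to a linear least-squares fit on the readout, the physically complex reservoir itself never needs to be adjusted, which is precisely what makes optical hardware attractive here.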

2020 ◽  
Vol 25 (1) ◽  
pp. 7 ◽  
Author(s):  
Abdel-Rahman Hedar ◽  
Wael Deabes ◽  
Majid Almaraashi ◽  
Hesham H. Amin

Enhancing Evolutionary Algorithms (EAs) with mathematical elements contributes significantly to their development and controls the randomness they exhibit. Moreover, automating the primary process steps of EAs remains one of the hardest problems; in particular, EAs still lack robust automatic termination criteria. The highly random behavior of some evolutionary operations should also be controlled, and the methods should invoke advanced learning processes and elements. Accordingly, this research focuses on automating and controlling the search process of EAs using sensing and mathematical mechanisms. These mechanisms provide the search process with the memories and conditions needed to adapt to diversification and intensification opportunities. Moreover, a new quadratic coding and a quadratic search operator are introduced to increase the possibilities for local search improvement. The suggested quadratic search operator uses both regression and Radial Basis Function (RBF) neural network models. Two evolutionary-based methods, built on genetic algorithms and evolution strategies, are proposed to evaluate the performance of the suggested enhancing elements. Results show that both the regression- and RBF-based quadratic techniques can help approximate high-dimensional functions using a few adjustable parameters for each type of function. Moreover, the automatic termination criteria allow the search process to stop appropriately.
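As a rough illustration of this kind of quadratic search operator (the paper's exact operators are not reproduced here), the sketch below fits a one-dimensional quadratic surrogate by least-squares regression and proposes its minimizer, together with a simple stall-based automatic termination test; all parameters are illustrative assumptions.

```python
import numpy as np

def quadratic_proposal(xs, fs):
    """Fit a*x^2 + b*x + c to evaluated points and, if convex,
    propose the analytic minimizer as a local-search candidate."""
    A = np.column_stack([np.square(xs), xs, np.ones_like(xs)])
    a, b, c = np.linalg.lstsq(A, fs, rcond=None)[0]
    if a > 1e-12:                  # convex fit -> minimizer exists
        return -b / (2 * a)
    return None

def terminated(history, window=20, tol=1e-8):
    """Automatic stop: best fitness improved less than tol over a window."""
    return len(history) >= window and history[-window] - history[-1] < tol

f = lambda x: (x - 1.3) ** 2 + 0.1 * np.sin(5 * x)   # toy objective
xs = np.random.default_rng(1).uniform(-3, 3, 12)
x_new = quadratic_proposal(xs, f(xs))
if x_new is not None:
    print("proposed point:", x_new, "objective:", f(x_new))
```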


The process of assigning a weight to each connection is called training. A network can be subject to supervised or unsupervised training. In this chapter, supervised and unsupervised learning are explained, and supervised training algorithms such as Back Propagation (BP) for the multilayer perceptron (MLP) are introduced. Kohonen's self-organizing map (SOM), one of the most popular neural network models, is introduced as an unsupervised training algorithm. SOMs convert high-dimensional, non-linear statistical relationships into simple geometric relationships on a low-dimensional array.
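A minimal SOM update loop makes this concrete; the grid size, decay schedules, and toy data below are illustrative assumptions.

```python
import numpy as np

# Minimal 1-D Kohonen SOM: high-dimensional inputs are mapped onto a
# small array of units whose weights self-organize without labels.
rng = np.random.default_rng(0)
n_units, dim = 10, 3
W = rng.random((n_units, dim))             # codebook (weight) vectors
data = rng.random((500, dim))              # unlabeled inputs

for t in range(2000):
    x = data[rng.integers(len(data))]
    bmu = np.argmin(np.linalg.norm(W - x, axis=1))  # best-matching unit
    lr = 0.5 * np.exp(-t / 1000)                    # decaying learning rate
    sigma = 3.0 * np.exp(-t / 1000)                 # shrinking neighborhood
    dist = np.abs(np.arange(n_units) - bmu)         # grid distance to BMU
    h = np.exp(-dist ** 2 / (2 * sigma ** 2))       # neighborhood function
    W += lr * h[:, None] * (x - W)                  # pull neighbors toward x
```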


2021 ◽  
Author(s):  
Ben Sorscher ◽  
Surya Ganguli ◽  
Haim Sompolinsky

Understanding the neural basis of our remarkable cognitive capacity to accurately learn novel high-dimensional naturalistic concepts from just one or a few sensory experiences constitutes a fundamental problem. We propose a simple, biologically plausible, mathematically tractable, and computationally powerful neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts we can learn given few examples are defined by tightly circumscribed manifolds in the neural firing rate space of higher order sensory areas. We further posit that a single plastic downstream neuron can learn such concepts from few examples using a simple plasticity rule. We demonstrate the computational power of our simple proposal by showing it can achieve high few-shot learning accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network models of these representations, and can even learn novel visual concepts specified only through language descriptions. Moreover, we develop a mathematical theory of few-shot learning that links neurophysiology to behavior by delineating several fundamental and measurable geometric properties of high-dimensional neural representations that can accurately predict the few-shot learning performance of naturalistic concepts across all our experiments. We discuss several implications of our theory for past and future studies in neuroscience, psychology and machine learning.
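In its simplest reading, the proposed plasticity rule amounts to prototype learning: the downstream neuron's weights become the mean of the few example representations. The sketch below illustrates this with synthetic stand-ins for the neural or deep-network features; the dimensions, offsets, and data are assumptions, not the paper's experiments.

```python
import numpy as np

# Prototype-style readout: a single downstream unit whose weight vector
# is the difference of the two class prototypes (few-example means).
rng = np.random.default_rng(0)
D, k = 512, 5                              # feature dim, shots per concept
a = rng.normal(0.0, 1.0, (k, D)) + 2.0     # few examples of concept A
b = rng.normal(0.0, 1.0, (k, D)) - 2.0     # few examples of concept B
w = a.mean(0) - b.mean(0)                  # Hebbian-style prototype weight
bias = -w @ (a.mean(0) + b.mean(0)) / 2    # threshold between prototypes

test = rng.normal(0.0, 1.0, (200, D)) + 2.0    # held-out samples of A
print("few-shot accuracy on held-out A:", np.mean(test @ w + bias > 0))
```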


2018 ◽  
Vol 7 (3.15) ◽  
pp. 141 ◽  
Author(s):  
Nurbaity Sabri ◽  
Zalilah Abdul Aziz ◽  
Zaidah Ibrahim ◽  
Muhammad Akmal Rasydan Bin Mohd Rosni ◽  
Abdul Hafiz bin Abd Ghapul

This research compares the recognition performance of pre-trained models, GoogLeNet and AlexNet, with a basic Convolutional Neural Network (CNN) for leaf recognition. Lately, CNNs have gained a lot of interest in image processing applications, and numerous pre-trained models have been introduced, the most popular being GoogLeNet and AlexNet. Each model has its own convolutional layers and computational complexity. Great success has been achieved using these classification models in computer vision, and this research investigates their performance for leaf recognition using MalayaKew (MK), an open-access leaf dataset. GoogLeNet achieves a perfect 100% accuracy, outperforming both AlexNet and the basic CNN. On the other hand, the processing time for GoogLeNet is longer than for the other models due to the high number of layers in its architecture.
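A hedged sketch of how such a comparison is typically set up, using torchvision's pre-trained models as stand-ins; the 44-class count assumed for MalayaKew and the rest of the pipeline are illustrative, not the paper's exact setup.

```python
import torch
from torchvision import models, transforms

n_classes = 44  # assumed class count for the MalayaKew leaf dataset

# Load both pre-trained models and replace their ImageNet classifiers
# with fresh heads sized for the leaf dataset.
googlenet = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)
googlenet.fc = torch.nn.Linear(googlenet.fc.in_features, n_classes)

alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
alexnet.classifier[6] = torch.nn.Linear(
    alexnet.classifier[6].in_features, n_classes)

# Both expect 224x224 ImageNet-normalized inputs.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```

GoogLeNet's deeper Inception architecture explains both results reported above: more layers extract richer features (higher accuracy) at the cost of longer processing time.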


2020 ◽  
Vol 5 ◽  
pp. 140-147 ◽  
Author(s):  
T.N. Aleksandrova ◽  
E.K. Ushakov ◽  
A.V. Orlova ◽  
...

A series of neural network models used in the development of an aggregated digital twin of equipment as a cyber-physical system is presented. The twins of machining accuracy, chip formation and tool wear are examined in detail. On their basis, systems for stabilization of the chip-formation process during cutting and for diagnosis of cutting tool wear are developed.
Keywords: cyber-physical system; neural network model of equipment; big data; digital twin of the chip formation; digital twin of the tool wear; digital twin of nanostructured coating choice
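The paper does not specify its architectures; purely as an illustration of what a tool-wear twin component might look like, the sketch below trains a small feed-forward regressor mapping cutting parameters to wear. All features, units, and data are synthetic assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative stand-in for a tool-wear model: map (cutting speed,
# feed, depth of cut, cutting time) to flank wear on synthetic data.
rng = np.random.default_rng(0)
X = rng.uniform([50, 0.05, 0.5, 0], [300, 0.4, 3.0, 60], size=(400, 4))
wear = 1e-4 * X[:, 0] * X[:, 3] * (1 + X[:, 1]) + rng.normal(0, 0.01, 400)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                     random_state=0).fit(X, wear)
print("R^2 on training data:", model.score(X, wear))
```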


Author(s):  
Ann-Sophie Barwich

How much does stimulus input shape perception? The common-sense view is that our perceptions are representations of objects and their features and that the stimulus structures the perceptual object. The problem for this view concerns perceptual biases as responsible for distortions and the subjectivity of perceptual experience. These biases are increasingly studied as constitutive factors of brain processes in recent neuroscience. In neural network models the brain is said to cope with the plethora of sensory information by predicting stimulus regularities on the basis of previous experiences. Drawing on this development, this chapter analyses perceptions as processes. Looking at olfaction as a model system, it argues for the need to abandon a stimulus-centred perspective, where smells are thought of as stable percepts, computationally linked to external objects such as odorous molecules. Perception here is presented as a measure of changing signal ratios in an environment informed by expectancy effects from top-down processes.


Energies ◽  
2021 ◽  
Vol 14 (14) ◽  
pp. 4242 ◽  
Author(s):  
Fausto Valencia ◽  
Hugo Arcos ◽  
Franklin Quilumba

The purpose of this research is the evaluation of artificial neural network models for predicting the stresses caused by the circulation of fault currents in a 400 MVA power transformer winding conductor. The models were compared based on the behavior of their training, validation, and test errors. Different combinations of hyperparameters were analyzed by varying architectures, optimizers, and activation functions. The data for the process were created from finite element simulations performed in the FEMM software. The Artificial Neural Network was designed using the Keras framework. As a result, a model with one hidden layer, the Adam optimizer, and the ReLU activation function proved the best-suited architecture for the problem at hand. The final Artificial Neural Network model's predictions were compared with the Finite Element Method results, showing good agreement but with a much shorter solution time.
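A minimal Keras sketch consistent with the reported best configuration (one hidden layer, Adam, ReLU); the input features and hidden width are assumptions, since the abstract does not list them.

```python
from tensorflow import keras

n_features = 4  # assumed inputs, e.g. fault current and winding geometry

# One hidden layer with ReLU, trained with Adam, as the paper reports.
model = keras.Sequential([
    keras.Input(shape=(n_features,)),
    keras.layers.Dense(64, activation="relu"),   # single hidden layer
    keras.layers.Dense(1),                       # predicted stress
])
model.compile(optimizer="adam", loss="mse")

# Training would use stress values computed by the FEMM simulations:
# model.fit(X_train, y_train, validation_split=0.2, epochs=200)
```

Once trained, such a model evaluates in microseconds, which is the source of the "much shorter solution time" relative to re-running the finite element simulation.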

