Evolutionary aspects of reservoir computing

2019 ◽  
Vol 374 (1774) ◽  
pp. 20180377 ◽  
Author(s):  
Luís F. Seoane

Reservoir computing (RC) is a powerful computational paradigm that allows high versatility with cheap learning. While other artificial intelligence approaches need exhaustive resources to specify their inner workings, RC is based on a reservoir with highly nonlinear dynamics that does not require fine-tuning of its parts. These dynamics project input signals into high-dimensional spaces, where training linear readouts to extract input features is vastly simplified. Thus, inexpensive learning provides very powerful tools for decision-making, controlling dynamical systems, classification, etc. RC also facilitates solving multiple tasks in parallel, resulting in a high throughput. Existing literature focuses on applications in artificial intelligence and neuroscience. We review this literature from an evolutionary perspective. RC’s versatility makes it a great candidate to solve outstanding problems in biology, which raises relevant questions. Is RC as abundant in nature as its advantages should imply? Has it evolved? Once evolved, can it be easily sustained? Under what circumstances? (In other words, is RC an evolutionarily stable computing paradigm?) To tackle these issues, we introduce a conceptual morphospace that would map computational selective pressures that could select for or against RC and other computing paradigms. This guides a speculative discussion about the questions above and allows us to propose a solid research line that brings together computation and evolution, with RC as a test model of the proposed hypotheses. This article is part of the theme issue ‘Liquid brains, solid brains: How distributed cognitive architectures process information’.
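The mechanism the abstract describes — a fixed random nonlinear reservoir projecting inputs into a high-dimensional space, with only a linear readout trained — can be made concrete with a minimal echo state network sketch in Python. All sizes, scalings, and the toy task below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: only the readout W_out is ever trained.
N, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, (N, n_in))
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with an input sequence u and collect states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)

# "Cheap learning": a single ridge regression fits the linear readout.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```

Only W_out is fitted; W and W_in stay fixed, which is what makes the learning inexpensive relative to training the full recurrent network.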

2018 ◽  
Vol 13 (13) ◽  
pp. 09
Author(s):  
Estevao Rada Oliveira ◽  
Fernando Juliani

Reservoir computing is a randomly constructed recurrent neural network paradigm in which the hidden layer does not need to be trained. This article summarizes the main concepts, methods, and recent research on the reservoir computing paradigm, aiming to serve as theoretical support for other articles. A bibliographic review was conducted using reliable scientific knowledge bases, emphasizing research published between 2007 and 2017 and directed at the implementation and optimization of the paradigm in question. As a result, the article presents recent work that contributes broadly to the development of reservoir computing and, given the topicality of the theme, identifies a range of topics still open to research that may serve as a guide for the scientific community. Keywords: Artificial Intelligence. Machine Learning. Recurrent Neural Networks.


Energies ◽  
2021 ◽  
Vol 14 (8) ◽  
pp. 2338
Author(s):  
Sofia Agostinelli ◽  
Fabrizio Cumo ◽  
Giambattista Guidi ◽  
Claudio Tomazzoli

The research explores the potential of digital-twin-based methods and approaches aimed at achieving an intelligent optimization and automation system for the energy management of a residential district, through the use of a three-dimensional data model integrated with the Internet of Things, artificial intelligence and machine learning. The case study focuses on Rinascimento III in Rome, an area consisting of 16 eight-floor buildings with 216 apartment units powered by 70% self-produced renewable energy. The combined use of integrated dynamic analysis algorithms has allowed the evaluation of different energy-efficiency intervention scenarios aimed at achieving virtuous energy management of the complex while maintaining the current internal comfort and climate conditions. Meanwhile, the objective is also to plan and deploy a cost-effective IT (information technology) infrastructure able to provide reliable data using the edge-computing paradigm. The developed methodology therefore led to the evaluation of the effectiveness and efficiency of integrative systems for renewable energy production from solar energy, necessary to raise the threshold of self-produced energy and meet the nZEB (near zero energy building) requirements.
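As a rough illustration of the edge-computing idea invoked here — process data near the devices and forward only what the cloud layer needs — the following hypothetical Python sketch aggregates per-apartment meter readings at an edge node. All names, values, and the summary format are assumptions, not the paper's system:

```python
import random
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    apartment_id: int
    power_kw: float

def summarize(readings: list[Reading]) -> dict:
    """Reduce raw meter readings to the statistics the cloud layer needs."""
    loads = [r.power_kw for r in readings]
    return {"n": len(loads), "mean_kw": mean(loads), "peak_kw": max(loads)}

# One polling cycle over the district's 216 units (simulated here).
readings = [Reading(i, random.uniform(0.1, 3.0)) for i in range(216)]
summary = summarize(readings)
print(summary)  # only this compact summary crosses the network, not 216 raw values
```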


Author(s):  
Pranav Gupta ◽  
Anita Williams Woolley

Human society faces increasingly complex problems that require coordinated collective action. Artificial intelligence (AI) holds the potential to bring together the knowledge and associated action needed to find solutions at scale. In order to unleash the potential of human and AI systems, we need to understand the core functions of collective intelligence. To this end, we describe a socio-cognitive architecture that conceptualizes how boundedly rational individuals coordinate their cognitive resources and diverse goals to accomplish joint action. Our transactive systems framework articulates the inter-member processes underlying the emergence of collective memory, attention, and reasoning, which are fundamental to intelligence in any system. Much like the cognitive architectures that have guided the development of artificial intelligence, our transactive systems framework holds the potential to be formalized in computational terms to deepen our understanding of collective intelligence and pinpoint roles that AI can play in enhancing it.


Author(s):  
Caroline Byrne ◽  
Michael O’Grady ◽  
Gregory O’Hare

Ambient intelligence (AmI) is a relatively new and distinct interpretation of the mobile computing paradigm. What distinguishes it from other mobile usage paradigms is its recognition that embedded intelligence, whether actual or perceived, is an essential prerequisite if mobile computing is to realize its potential. Though it stresses the need for intelligence, and implicitly the adoption of artificial intelligence (AI) techniques, AmI does not formally ratify any particular approach and is thus technique-agnostic. In this article, we examine the constituent technologies of AmI and provide a brief overview of some exemplary AmI projects. In particular, the question of intelligence is considered and some strategies for incorporating intelligence into AmI applications and services are proposed. It is the authors’ hope that a mature understanding of the issues involved will aid software professionals in the design and implementation of AmI applications.


2010 ◽  
Vol 2010 ◽  
pp. 1-8 ◽  
Author(s):  
Troy D. Kelley ◽  
Lyle N. Long

Generalized intelligence is much more difficult than originally anticipated when Artificial Intelligence (AI) was first introduced in the early 1960s. Deep Blue, the chess-playing supercomputer, was developed to defeat the top-rated human chess player and successfully did so by defeating Garry Kasparov in 1997. However, Deep Blue only played chess; it did not play checkers or any other games. Other examples of AI programs which learned and played games were successful at specific tasks, but generalizing the learned behavior to other domains was not attempted. So the question remains: why is generalized intelligence so difficult? If complex tasks require a significant amount of development time, and task generalization is not easily accomplished, then a significant amount of effort is going to be required to develop an intelligent system. Such an effort will require a system-of-systems approach that uses many AI techniques: neural networks, fuzzy logic, and cognitive architectures.


2014 ◽  
Vol 2014 ◽  
pp. 1-8
Author(s):  
Yuhuan Zhang

The paper investigates observer design for a core circadian rhythm network in Drosophila and Neurospora. Based on the constructed highly nonlinear differential equation model and the recently proposed graphical approach, we design a rather simple observer for the circadian rhythm oscillator, which tracks the state of the original system well for various input signals. Numerical simulations show the effectiveness of the designed observer. Potential applications of the related investigations include real-world control and experimental design of the related biological networks.
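The observer idea can be sketched as a copy of the system dynamics corrected by the measured output error. The snippet below uses a generic Van der Pol oscillator as a stand-in for the circadian model and a simple Luenberger-style gain; both are illustrative assumptions, not the paper's graphical design method:

```python
import numpy as np

mu = 1.0
L = np.array([5.0, 10.0])  # observer gain (hypothetical choice)

def f(x):
    """Van der Pol dynamics; the measured output y is the first state."""
    return np.array([x[1], mu * (1 - x[0] ** 2) * x[1] - x[0]])

def step(x, xhat, dt=1e-3):
    y = x[0]                       # output measured from the true system
    x_new = x + dt * f(x)          # true system, forward Euler
    # Luenberger-style observer: copy of the dynamics plus an
    # output-error correction term that pulls xhat toward x.
    xhat_new = xhat + dt * (f(xhat) + L * (y - xhat[0]))
    return x_new, xhat_new

x, xhat = np.array([2.0, 0.0]), np.array([0.0, 0.0])
for _ in range(20000):
    x, xhat = step(x, xhat)
print("estimation error:", np.abs(x - xhat))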


2009 ◽  
Vol 2009 ◽  
pp. 1-7
Author(s):  
Tadashi Yamazaki ◽  
Shigeru Tanaka

Reservoir computing (RC) is a new framework for neural computation. A reservoir is usually a recurrent neural network with fixed random connections. In this article, we propose an RC model in which the connections in the reservoir are modifiable. Specifically, we consider correlation-based learning (CBL), which modifies the connection weight between a given pair of neurons according to the correlation in their activities. We demonstrate that CBL enables the reservoir to reproduce almost the same spatiotemporal activity patterns in response to an identical input stimulus in the presence of noise. This result suggests that CBL enhances the robustness of spatiotemporal pattern generation against noise in the input signals. We apply our RC model to trace eyeblink conditioning. The reservoir bridged the interstimulus-interval gap between the conditioned and unconditioned stimuli, and a readout neuron was able to learn and express the timed conditioned response.
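A minimal rate-based sketch of this idea — a reservoir whose recurrent weights are updated by a correlation-based (Hebbian) rule — might look like the following. The learning rate, the row renormalization, and the correlation measure are our assumptions, not the authors' exact rule:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
W = rng.normal(0, 1 / np.sqrt(N), (N, N))  # modifiable recurrent weights
w_in = rng.uniform(-1, 1, N)
eta = 1e-3                                  # learning rate (assumed)

def run(u_seq, learn=False):
    global W
    x = np.zeros(N)
    states = []
    for u in u_seq:
        x_new = np.tanh(W @ x + w_in * u)
        if learn:
            # Correlation-based update: strengthen the weight between
            # co-active pre/post neurons, then renormalize rows so the
            # dynamics stay bounded (the normalization is our assumption).
            W += eta * np.outer(x_new, x)
            W /= np.maximum(np.linalg.norm(W, axis=1, keepdims=True), 1)
        x = x_new
        states.append(x.copy())
    return np.array(states)

u = np.sin(np.linspace(0, 8 * np.pi, 800))
noisy = lambda: u + 0.05 * rng.normal(size=u.size)
run(noisy(), learn=True)            # CBL shapes the reservoir on a noisy trial
s1, s2 = run(noisy()), run(noisy())  # two further noisy presentations
print("state correlation across noisy trials:",
      np.corrcoef(s1.ravel(), s2.ravel())[0, 1])
```

The printed correlation gives a crude proxy for the paper's claim: after CBL, the reservoir should reproduce similar spatiotemporal patterns across noisy repetitions of the same stimulus.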


2021 ◽  
Vol 11 (24) ◽  
pp. 11585
Author(s):  
Muhammad Muneeb ◽  
Kwang-Man Ko ◽  
Young-Hoon Park

The emergence of new technologies is ushering in an era of IoT built on compute-intensive applications. These applications will increase the traffic volume on today’s network infrastructure and will place even greater demands on the emerging Fifth Generation (5G) system. Research is ongoing in many directions, such as how to automate the management and configuration of data analysis tasks across the cloud and the edge, and how to minimize latency and bandwidth consumption by optimizing task allocation. The major challenge for researchers is to push artificial intelligence to the edge in order to fully realize the potential of the fog computing paradigm. Intelligence-based fog computing frameworks for IoT applications already exist, but research on Edge Artificial Intelligence (Edge-AI) is still at an initial stage. We therefore focus on data analytics and offloading in our proposed architecture. To address these problems, we propose a prototype multi-layered architecture for data analysis between the cloud and fog computing layers, designed to perform latency-sensitive analysis with low latency. The main goal of this research is to use this multi-layer fog computing platform to enhance real-time data analysis for IoT devices. Our work follows the guidelines of the OpenFog Consortium, which provides not only good performance but also surveillance and data analysis functionality. We show through case studies that our proposed prototype architecture outperforms a cloud-only environment in delay time, network usage, and energy consumption.
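The placement logic such an architecture implies — keep latency-sensitive analysis on the fog layer near the devices, offload deadline-tolerant batch work to the cloud — can be sketched as a simple policy. Every field, threshold, and task below is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline_ms: float   # how quickly a result is needed
    cpu_demand: float    # normalized compute requirement (0..1)

FOG_CAPACITY = 0.6       # fraction of demand the fog node can absorb (assumed)

def place(task: Task, fog_load: float) -> str:
    """Latency-aware placement: tight deadlines stay at the edge."""
    if task.deadline_ms < 100 and fog_load + task.cpu_demand <= FOG_CAPACITY:
        return "fog"     # latency-sensitive and the fog node has room
    return "cloud"       # tolerant deadline, or the fog node is saturated

tasks = [Task("anomaly-detect", 50, 0.2), Task("daily-report", 60000, 0.5)]
load = 0.0
for t in tasks:
    site = place(t, load)
    load += t.cpu_demand if site == "fog" else 0.0
    print(f"{t.name} -> {site}")
```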


Author(s):  
Susmita Chennareddy ◽  
Roshini Kalagara ◽  
Stavros Matsoukas ◽  
Jacopo Scaggiante ◽  
Colton Smith ◽  
...  

Introduction : Stroke is a leading cause of morbidity and mortality worldwide, with hemorrhagic strokes accounting for 10–20% of all strokes. Patients presenting with intracerebral hemorrhage (ICH) often face higher rates of mortality and poorer prognosis than those with other stroke types. As ICH treatment relies on in‐hospital neuroimaging findings, one potential barrier to the effective management of ICH is increased time to ICH detection and treatment, particularly due to delays in imaging interpretation in busy hospitals and emergency departments. Artificial Intelligence (AI) driven software has recently been developed and become commercially available for the detection of Intracranial Hemorrhage (ICH) and Chronic Cerebral Microbleeds (CMBs). Such adjunct tools may enhance patient care by decreasing time to diagnosis and treatment, in part by helping to adjudicate difficult cases. This systematic review aims to describe the current literature on existing AI algorithms for ICH detection on non‐contrast computed tomography (CT) scans or CMB detection on magnetic resonance imaging (MRI). Methods : Following PRISMA guidelines, MEDLINE and EMBASE were searched for studies published through March 1st, 2021, and all studies investigating AI algorithms for hemorrhage detection on non‐contrast CT scans or CMB detection on MRI scans were eligible for inclusion. Any studies focusing on AI for hemorrhage segmentation only, including studies that enrolled patients with hemorrhages only as their study group, were excluded. Extracted data included development methods, training, validation and testing datasets, and accuracy metrics for each algorithm, when available. Meta‐analysis was not conducted due to heterogeneity in reported accuracy metrics and highly variant algorithmic development. The completed protocol is available for review in the PROSPERO registry. Results : After the removal of duplicates, a total of 609 studies were identified and screened. After an initial screening and full text review, 40 studies were included in this review. Of these, 18 tested a 2‐Dimensional (2D) convolutional neural network (CNN) AI algorithm, 3 used a purely 3‐Dimensional (3D) CNN, and 2 utilized a hybrid 2D‐3D CNN. Of note, one software tool was able to identify ICH in the setting of ischemic stroke using MRI scans. Included papers noted the following challenges when developing these AI algorithms: the extensive time required to create suitable datasets, the volumetric nature of the imaging exams, fine‐tuning the system, and reducing false positives. Diagnostic accuracy data were available for 21 of these studies, which reported a mean accuracy of 94.37% and a mean AUC of 0.958. Conclusions : As reported in this study, many AI‐driven software tools have been developed over the last 5 years. These tools have high diagnostic accuracy on average and have the potential to contribute to the diagnosis of ICH or CMBs with expert‐level accuracy. With time to treatment often dependent on time to diagnosis, this AI software may increase both the speed and accuracy of adjudicating diagnoses. Although the developers of these algorithms have faced several obstacles, AI‐driven software is an important frontier for the future of clinical medicine.
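For orientation, a slice-level 2D CNN of the general kind the reviewed studies describe could be sketched as follows. The architecture, layer sizes, and single-channel 256×256 CT-slice input are illustrative assumptions, not any specific reviewed model:

```python
import torch
import torch.nn as nn

class SliceICHNet(nn.Module):
    """Toy 2D CNN for per-slice hemorrhage classification (illustrative)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling over the slice
        )
        self.classifier = nn.Linear(64, 1)    # hemorrhage present / absent

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)             # logit; apply sigmoid outside

model = SliceICHNet()
dummy_ct = torch.randn(4, 1, 256, 256)        # batch of 4 single-channel slices
print(model(dummy_ct).shape)                  # torch.Size([4, 1])
```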

