Computational Neuroscience for Advancing Artificial Intelligence
Latest Publications

Total documents: 13 (last five years: 0)
H-index: 2 (last five years: 0)
Published by: IGI Global
ISBN: 9781609600211, 9781609600235

Author(s): Dómhnall J. Jennings, Eduardo Alonso, Esther Mondragón, Charlotte Bonardi

Standard associative learning theories typically fail to conceptualise the temporal properties of a stimulus, and hence cannot easily make predictions about the effects such properties might have on the magnitude of conditioning phenomena. Despite this, in intuitive terms we might expect the temporal properties of a stimulus that is paired with some outcome to be important. In particular, there is no previous research addressing the way that fixed or variable duration stimuli can affect overshadowing. In this chapter we report results which show that the degree of overshadowing depends on the distribution form (fixed or variable) of the overshadowing stimulus, and argue that conditioning is weaker under conditions of temporal uncertainty. These results are discussed in terms of models of conditioning and timing. We conclude that the temporal difference model, which has been extensively applied to the reinforcement learning problem in machine learning, accounts for the key findings of our study.
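As a concrete illustration of the temporal difference account mentioned above, the following Python sketch trains a TD(0) value predictor on a complete serial compound (CSC) stimulus representation whose offset (and hence the moment of reinforcement) is either fixed or variable. The trial structure, parameter values, and the CSC coding are assumptions made for the example; this is not the authors' simulation.

```python
import numpy as np

# Minimal TD(0) sketch of conditioning with a complete serial compound (CSC)
# stimulus representation. This is an illustrative toy, not the authors'
# simulations: the trial structure, parameters, and the fixed/variable
# duration manipulation are assumptions made for demonstration.

ALPHA, GAMMA = 0.1, 0.98            # learning rate, temporal discount
N_STEPS = 30                        # time steps per trial
rng = np.random.default_rng(0)

def run_trials(n_trials=200, variable_duration=False):
    """Pair a CS with reinforcement (US) at CS offset; return learned weights."""
    w = np.zeros(N_STEPS)                       # one weight per CSC time tag
    for _ in range(n_trials):
        # CS offset (and US delivery) fixed at t=20, or uniform over 10..29
        offset = int(rng.integers(10, N_STEPS)) if variable_duration else 20
        x = np.zeros((N_STEPS + 1, N_STEPS))    # CSC features, one per time step
        for t in range(offset):
            x[t, t] = 1.0
        r = np.zeros(N_STEPS + 1)
        r[offset] = 1.0                         # US delivered at CS offset
        for t in range(N_STEPS):
            delta = r[t + 1] + GAMMA * (w @ x[t + 1]) - (w @ x[t])   # TD error
            w += ALPHA * delta * x[t]
    return w

w_fixed = run_trials(variable_duration=False)
w_variable = run_trials(variable_duration=True)
# A fixed-duration CS supports a sharper, stronger prediction of the US than
# a variable-duration CS, whose learned values are lower and more diffuse.
print("peak predicted value, fixed CS:   ", round(float(w_fixed.max()), 3))
print("peak predicted value, variable CS:", round(float(w_variable.max()), 3))
```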


Author(s): Mohammadreza Asghari-Oskoei, Huosheng Hu

The myoelectric signal is known as an alternative human-machine interface (HMI) for people with motor disabilities interacting with assistive robots and rehabilitation devices. This chapter examines a myoelectric HMI in a real-time application and compares its performance with traditional tools. It also studies the manifestation of fatigue in long-term muscular activities and its impact on ultimate performance. The core of the applied HMI is built on a support vector machine classifier. The experiments confirm that the myoelectric HMI is a reliable alternative to traditional HMIs. Meanwhile, they show a significant decline in the dominant frequency of myoelectric signals during long-term applications.
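The shape of such a pipeline can be sketched as follows, assuming scikit-learn and SciPy are available. The synthetic signals, the feature choices (mean absolute value, waveform length) and the median-frequency fatigue index are illustrative stand-ins, not the chapter's actual data or protocol.

```python
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

FS = 1000  # sampling rate in Hz (assumed)

def features(window):
    """Two simple time-domain EMG features: mean absolute value, waveform length."""
    return [np.mean(np.abs(window)), np.sum(np.abs(np.diff(window)))]

def median_frequency(window):
    """Median frequency of the power spectrum; a decline over a long
    contraction is a standard myoelectric sign of muscular fatigue."""
    f, p = welch(window, fs=FS, nperseg=256)
    return f[np.searchsorted(np.cumsum(p), np.sum(p) / 2)]

rng = np.random.default_rng(1)

# Synthetic stand-ins for two gesture classes; real data would come from
# windowed surface-EMG recordings. Only the pipeline shape matters here.
X, y = [], []
for label, gain in [(0, 1.0), (1, 2.5)]:
    for _ in range(100):
        X.append(features(gain * rng.standard_normal(512)))
        y.append(label)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print("cross-validated accuracy:", cross_val_score(clf, np.array(X), y, cv=5).mean())

# Illustration of the fatigue marker: a lower-frequency "late" signal yields a
# lower median frequency than an otherwise similar "early" signal.
t = np.arange(512) / FS
early = np.sin(2 * np.pi * 120 * t) + 0.3 * rng.standard_normal(512)
late = np.sin(2 * np.pi * 60 * t) + 0.3 * rng.standard_normal(512)
print("median frequency early vs late (Hz):",
      round(float(median_frequency(early)), 1), round(float(median_frequency(late)), 1))
```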


Author(s): Phil Husbands, Andy Philippides, Anil K. Seth

This chapter reviews the use of neural systems in robotics, with particular emphasis on strongly biologically inspired neural networks and methods. As well as describing work at the research frontiers, the chapter provides some historical background in order to clarify the motivations and scope of work in this field. Two major sections make up the bulk of the chapter: one surveying the application of artificial neural systems to robot control, and one describing the use of robots as tools in neuroscience. The former concentrates on biologically derived neural architectures and methods used to drive robot behaviours, and the latter introduces a closely related area of research in which robotic models are used as tools to study the neural mechanisms underlying the generation of adaptive behaviour in animals and humans.


Author(s): Ulrich Nehmzow

Mobile robots can be useful tools for the life scientist in that they combine perception, computation and action, and are therefore comparable to living beings. They have, however, the distinct advantage that their behaviour can be manipulated by changing their programs and/or their hardware. In this chapter, quantitative measurements of mobile robot behaviour are presented, together with a theory of robot-environment interaction that can readily be applied to the analysis of the behaviour of mobile robots and animals. Interestingly, such an analysis is based on chaos theory.
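One common chaos-theoretic measure of behaviour of the kind referred to here is the largest Lyapunov exponent. The rough sketch below estimates it from a scalar time series using a nearest-neighbour divergence approach in the spirit of Rosenstein-style methods; the embedding parameters and the logistic-map data standing in for logged robot behaviour are assumptions made purely for illustration, not the chapter's analysis.

```python
import numpy as np

# Rough sketch: estimate the largest Lyapunov exponent from a scalar
# behavioural time series (e.g. a robot's logged heading or x-coordinate)
# via nearest-neighbour divergence. All parameters are illustrative.

def embed(x, dim=5, tau=2):
    """Time-delay embedding of a scalar series into dim-dimensional vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def largest_lyapunov(x, dim=5, tau=2, min_sep=10, horizon=20):
    Y = embed(x, dim, tau)
    n = len(Y)
    d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
    for i in range(n):                      # exclude temporally close neighbours
        d[i, max(0, i - min_sep):i + min_sep + 1] = np.inf
    nn = d.argmin(axis=1)                   # nearest neighbour of each point
    logs = []
    for k in range(1, horizon):
        valid = (np.arange(n) + k < n) & (nn + k < n)
        sep = np.linalg.norm(Y[np.arange(n)[valid] + k] - Y[nn[valid] + k], axis=1)
        logs.append(np.mean(np.log(sep[sep > 0])))
    # slope of mean log-divergence against time ~ largest Lyapunov exponent
    return np.polyfit(np.arange(1, horizon), logs, 1)[0]

# Synthetic chaotic series standing in for logged robot behaviour.
x = np.empty(1000)
x[0] = 0.4
for t in range(999):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])    # logistic map in its chaotic regime
print("estimated largest Lyapunov exponent:", round(float(largest_lyapunov(x)), 3))
```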


Author(s): David Bisset

This chapter explores the challenges presented by the introduction of robots into our everyday lives, examining technical and design issues as well as ethical and business issues. It also examines the process of designing and specifying useful robots and highlights the practical difficulties in testing and guaranteeing behaviour and function in adaptive systems. The chapter also briefly reviews the current state of robotics in Europe and the global robotic marketplace. It argues that, if a viable industry is to emerge, the academic and business sectors must work together to solve the fundamental technical and ethical problems that could impede the development and deployment of autonomous robotic systems. It details the reality and expectations in healthcare robotics, examining the demographics and deployment difficulties this domain will face. Finally, it challenges the assumption that neural computation is the technology of choice for building autonomous cognitive systems and points out the difficulties inherent in using adaptive “holistic” systems within the performance-oriented ethos of the product design engineer.


Author(s): Rosemary A. Cowell, Timothy J. Bussey, Lisa M. Saksida

The authors present a series of studies in which computational models are used as a tool to examine the organization and function of the ventral visual-perirhinal stream in the brain. The prevailing theoretical view in this area of cognitive neuroscience holds that the object-processing pathway has a modular organization, in which visual perception and visual memory are carried out independently. They use computational simulations to demonstrate that the effects of brain damage on both visual discrimination and object recognition memory may not be due to an impairment in a specific function such as memory or perception, but are more likely due to compromised object representations in a hierarchical and continuous representational system. The authors argue that examining the nature of stimulus representations and their processing in cortex is a more fruitful approach than attempting to map cognition onto functional modules.
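To make the representational argument concrete, the toy sketch below (not the authors' model) shows why removing a layer of conjunctive units harms a feature-ambiguous discrimination: in the illustrative problem AB+, BC-, CD+, DA- every elementary feature is rewarded and non-rewarded equally often, so a delta-rule learner with feature units alone cannot separate the compounds, whereas one that also has pairwise conjunction units can.

```python
import numpy as np
from itertools import combinations

# Toy illustration (not the authors' simulations) of why damage to a layer
# of conjunctive representations impairs feature-ambiguous discriminations.

FEATURES = ["A", "B", "C", "D"]
CONJ = list(combinations(FEATURES, 2))

def encode(compound, lesioned=False):
    """Activate feature units and, if intact, pairwise conjunction units."""
    x_feat = [1.0 if f in compound else 0.0 for f in FEATURES]
    x_conj = [1.0 if set(c) <= set(compound) else 0.0 for c in CONJ]
    return np.array(x_feat + ([0.0] * len(CONJ) if lesioned else x_conj))

# Ambiguous problem: each feature appears once rewarded and once non-rewarded.
problem = [("AB", 1.0), ("BC", 0.0), ("CD", 1.0), ("DA", 0.0)]

def train(lesioned, epochs=500, alpha=0.1):
    w = np.zeros(len(FEATURES) + len(CONJ))
    for _ in range(epochs):
        for compound, r in problem:
            x = encode(compound, lesioned)
            w += alpha * (r - w @ x) * x      # delta-rule update
    return [round(float(w @ encode(c, lesioned)), 2) for c, _ in problem]

print("intact   predictions:", train(lesioned=False))   # separates + from -
print("lesioned predictions:", train(lesioned=True))    # all settle near 0.5
```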


Author(s): Robert C. Honey, Christopher S. Grand

Here the authors examine the nature of the mnemonic structures that underlie the ability of animals to learn configural discriminations that are allied to the XOR problem. It has long been recognized that simple associative networks (e.g., perceptrons) fail to provide a coherent analysis of how animals learn this type of discrimination. Indeed, “The inability of single layer perceptrons to solve XOR has a significance of mythical proportions in the history of connectionism” (McLeod, Plunkett & Rolls, 1998, p. 106). In this historical context, the authors describe the results of recent experiments with animals that are inconsistent with the theoretical solution to XOR provided by some multi-layer connectionist models. The authors suggest a modification to these models that parallels the formal structure of XOR while maintaining two principles of perceptual organization and learning: contiguity and common fate.
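For readers unfamiliar with the XOR result cited above, the following standard textbook sketch (not the authors' model) shows a small multi-layer network trained by gradient descent solving XOR, which no single-layer perceptron can do because the problem is not linearly separable.

```python
import numpy as np

# Compact reminder of the XOR result: one hidden layer trained by plain
# gradient descent on squared error solves XOR; a single-layer perceptron
# cannot. Network size and learning rate are arbitrary illustrative choices.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])                # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)    # hidden layer (4 units)
W2, b2 = rng.normal(0, 1, 4), 0.0                 # single output unit

for _ in range(20000):                            # full-batch gradient descent
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)           # output-layer error signal
    d_h = np.outer(d_out, W2) * h * (1 - h)       # backpropagated hidden error
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum()
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print("multi-layer outputs:", out.round(2))       # should approach [0, 1, 1, 0]
# A single-layer fit of the same data settles at ~0.5 for every pattern,
# since no line separates {01, 10} from {00, 11}.
```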


Author(s): I.P.L. McLaren

In this chapter the author first gives an overview of the ideas behind Adaptively Parameterised Error Correcting Learning (APECS), as introduced in McLaren (1993). The chapter takes a somewhat historical perspective, tracing the development of this approach from its origins as a solution to the sequential learning problem identified by McCloskey and Cohen (1989) in the context of paired-associate learning, to its more recent application as a model of human contingency learning.
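The sequential learning problem that motivated APECS can be shown in a few lines. The sketch below is not an implementation of APECS itself; it is a plain delta-rule (LMS) network trained first on an A-B list of paired associates and then on an A-C list sharing the same cues, after which recall of the A-B list collapses. The item vectors and network size are arbitrary assumptions.

```python
import numpy as np

# Minimal demonstration of the sequential learning (catastrophic
# interference) problem of McCloskey and Cohen (1989): a standard
# error-correcting network that learns A-C after A-B largely overwrites
# the A-B associations. Not an implementation of APECS.

rng = np.random.default_rng(0)
n_items, dim = 5, 20
cues = rng.normal(size=(n_items, dim))            # the "A" terms
targets_B = rng.normal(size=(n_items, dim))       # first-list responses
targets_C = rng.normal(size=(n_items, dim))       # second-list responses

def train(W, inputs, outputs, epochs=200, alpha=0.02):
    for _ in range(epochs):
        for x, t in zip(inputs, outputs):
            W += alpha * np.outer(t - W @ x, x)   # delta-rule (LMS) update
    return W

def recall_error(W, inputs, outputs):
    return float(np.mean([np.linalg.norm(W @ x - t)
                          for x, t in zip(inputs, outputs)]))

W = train(np.zeros((dim, dim)), cues, targets_B)
print("A-B recall error after A-B training:", round(recall_error(W, cues, targets_B), 3))
W = train(W, cues, targets_C)                     # now learn the A-C list
print("A-C recall error after A-C training:", round(recall_error(W, cues, targets_C), 3))
print("A-B recall error after A-C training:", round(recall_error(W, cues, targets_B), 3))
```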


Author(s): Elliot A. Ludvig, Marc G. Bellemare, Keir G. Pearson

In the last 15 years, there has been a flourishing of research into the neural basis of reinforcement learning, drawing together insights and findings from psychology, computer science, and neuroscience. This remarkable confluence of three fields has yielded a growing framework that begins to explain how animals and humans learn to make decisions in real time. Mastering the literature in this sub-field can be quite daunting, as the task can require familiarity with at least three different disciplines, each with its own jargon, perspectives, and shared background knowledge. In this chapter, the authors attempt to make this fascinating line of research more accessible to researchers in any of the constitutive sub-disciplines. To this end, the authors develop a primer for reinforcement learning in the brain that lays out in plain language many of the key ideas and concepts that underpin research in this area. This primer is embedded in a literature review that aims not to be comprehensive, but rather representative of the types of questions and answers that have arisen in the quest to understand reinforcement learning and its neural substrates. Drawing on the basic findings in this research enterprise, the authors conclude with some speculations about how these developments in computational neuroscience may influence future developments in Artificial Intelligence.
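As a minimal, generic example of the algorithms at the heart of this literature (not material taken from the chapter), the sketch below runs tabular Q-learning on a tiny deterministic chain task; the TD error computed inside the loop is the quantity most often compared with phasic dopamine responses.

```python
import numpy as np

# Generic textbook sketch: tabular Q-learning with an epsilon-greedy policy
# on a small deterministic chain. Task layout and parameters are arbitrary
# illustrative choices, not taken from the chapter.

N_STATES, GOAL = 6, 5                  # states 0..5, reward on reaching state 5
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, 2))            # actions: 0 = move left, 1 = move right

def step(s, a):
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == GOAL)

for _ in range(500):                   # episodes
    s = 0
    while s != GOAL:
        if rng.random() < EPS or Q[s, 0] == Q[s, 1]:
            a = int(rng.integers(2))   # explore, or break ties randomly
        else:
            a = int(Q[s].argmax())
        s2, r = step(s, a)
        # TD error: the signal most often compared with phasic dopamine
        delta = r + GAMMA * Q[s2].max() * (s2 != GOAL) - Q[s, a]
        Q[s, a] += ALPHA * delta
        s = s2

print("greedy policy (1 = right):", Q.argmax(axis=1)[:GOAL])   # goal row omitted
print("state values:", Q.max(axis=1).round(2))
```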


Author(s): Edgar H. Vogel, Fernando P. Ponce

Pavlovian conditioning is a very simple and universal form of learning that has the benefit of a long and rich tradition of experimental work and quantitative theorization. With the development of interdisciplinary efforts, behavioral data and quantitative theories of conditioning have become progressively more important not just for experimental psychologists but also for broader audiences such as neurobiologists, computational neuroscientists and artificial intelligence researchers. In order to provide interdisciplinary users with an overview of the state of affairs of theoretically oriented research in this field, this chapter reviews a few key mechanisms that are currently deemed necessary for explaining several critical phenomena of Pavlovian conditioning. The chapter is divided into several sections, each referring to a particular theoretical mechanism and to the type of phenomena that it has been designed to account for. The progression of the sections reveals phenomena and mechanisms of increasing complexity, which is an indication of the theoretical sophistication that has been reached in this domain. Since there is no single theory containing all of these mechanisms, they are described separately from their originating theories, thus emphasizing the fact that they might be used in almost any theoretical implementation.
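A concrete instance of the error-correction mechanism that recurs throughout this literature is the Rescorla-Wagner rule; the short sketch below (parameter values are arbitrary) uses it to reproduce blocking, where pretraining on A+ leaves little prediction error to support learning about X in subsequent AX+ trials.

```python
# Minimal Rescorla-Wagner sketch illustrating blocking (A+ then AX+).
# Salience and asymptote values are arbitrary illustrative choices.

ALPHA, LAMBDA = 0.2, 1.0           # cue salience, US asymptote
V = {"A": 0.0, "X": 0.0}           # associative strengths

def trial(cues, us_present):
    """One conditioning trial: a shared prediction error updates every cue."""
    v_total = sum(V[c] for c in cues)
    error = (LAMBDA if us_present else 0.0) - v_total
    for c in cues:
        V[c] += ALPHA * error

for _ in range(50):                # phase 1: A+  (A alone predicts the US)
    trial(["A"], True)
for _ in range(50):                # phase 2: AX+ (X is "blocked" by A)
    trial(["A", "X"], True)

print("V(A) =", round(V["A"], 3), " V(X) =", round(V["X"], 3))
# V(X) stays near zero: A already predicts the US, so little prediction
# error remains to drive learning about X.
```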

