Biologically Inspired Artificial Intelligence for Computer Games
Latest Publications


TOTAL DOCUMENTS: 14 (five years: 0)
H-INDEX: 1 (five years: 0)

Published by IGI Global
ISBN: 9781591406464, 9781591406488

Author(s):  
Darryl Charles ◽  
Colin Fyfe ◽  
Daniel Livingstone ◽  
Stephen McGlinchey

This book centres on biologically inspired machine learning algorithms for use in computer and video game technology. An important reason for employing learning in computer games is the strong desire among developers and publishers to make games adaptive. For example, Manslow (2002) states, ‘The widespread adoption of learning in games will be one of the most important advances ever to be made in game AI. Genuinely adaptive AIs will change the way in which games are played by forcing each player to continually search for new strategies to defeat the AI, rather than perfecting a single technique.’ However, most learning techniques used in commercial games to date have employed an offline learning process: the algorithms are trained during development rather than during gameplay sessions after the game’s release. Online learning, that is, learning that occurs during actual gameplay, has been used in only a handful of commercial games, for example Black & White, but online learning within games is intrinsically linked to adaptivity, and its use in this way needs to be explored more fully.


Author(s):  
Darryl Charles ◽  
Colin Fyfe ◽  
Daniel Livingstone ◽  
Stephen McGlinchey

Just as there are many different types of supervised and unsupervised learning, so there are many different types of reinforcement learning. Reinforcement learning is appropriate for an AI or agent that actively explores both its environment and which actions are best to take in different situations. It is so called because, when an AI performs a beneficial action, it receives a reward which reinforces its tendency to perform that beneficial action again. An excellent overview of reinforcement learning (on which this brief chapter is based) is given by Sutton and Barto (1998).
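
As a concrete illustration of this reward-driven update, here is a minimal tabular Q-learning sketch (our addition, not from the chapter); the environment interface env.reset()/env.step() is a hypothetical stand-in, and all parameters are illustrative:

```python
import random

ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.1  # exploration rate

def q_learning(env, n_states, n_actions, episodes=500):
    # One row of action-values per state, all initially zero.
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Explore occasionally; otherwise exploit the best-known action.
            if random.random() < EPSILON:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: q[state][a])
            next_state, reward, done = env.step(action)
            # The reward reinforces the tendency to repeat beneficial actions.
            best_next = 0.0 if done else max(q[next_state])
            q[state][action] += ALPHA * (reward + GAMMA * best_next - q[state][action])
            state = next_state
    return q
```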


Author(s):  
Darryl Charles ◽  
Colin Fyfe ◽  
Daniel Livingstone ◽  
Stephen McGlinchey

We noted in the previous chapters that, while the multilayer perceptron is capable of approximating any continuous function, it can suffer from excessively long training times. In this chapter we investigate methods of shortening training times for artificial neural networks using supervised learning. Haykin (1999) is a particularly good reference for radial basis function (RBF) networks. We outline the theory and implementation of an RBF network before demonstrating how such a network may be used to solve one of the previously visited problems, and compare the solutions.
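
To make the idea concrete, the following is a minimal RBF network sketch (our illustration, not the chapter's implementation), assuming fixed randomly chosen Gaussian centres and a shared width; the output weights come from a single linear least-squares solve, which is the usual reason RBF training is faster than training a multilayer perceptron:

```python
import numpy as np

def rbf_design_matrix(x, centres, width):
    # Gaussian activation of every centre for every input row.
    d2 = ((x[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_rbf(x, y, n_centres=10, width=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Fix the centres at randomly chosen training points.
    centres = x[rng.choice(len(x), n_centres, replace=False)]
    phi = rbf_design_matrix(x, centres, width)
    # One linear least-squares solve gives the output weights,
    # so no lengthy iterative training is needed.
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return centres, w

def predict_rbf(x, centres, width, w):
    return rbf_design_matrix(x, centres, width) @ w

# Example: fit a noisy sine curve with 10 Gaussian bumps.
x = np.linspace(0, 2 * np.pi, 50)[:, None]
y = np.sin(x[:, 0]) + 0.1 * np.random.default_rng(1).normal(size=50)
centres, w = train_rbf(x, y)
y_hat = predict_rbf(x, centres, 1.0, w)
```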


Author(s):  
Darryl Charles ◽  
Colin Fyfe ◽  
Daniel Livingstone ◽  
Stephen McGlinchey

In this short chapter we present a case study of the use of an artificial neural network (ANN) in a video-game setting. The example is one of duelling robots, a problem which, as we will see, lends itself to a range of different solutions – and one where we can demonstrate the efficacy of a biologically inspired AI approach.


Author(s):  
Darryl Charles ◽  
Colin Fyfe ◽  
Daniel Livingstone ◽  
Stephen McGlinchey

We now consider the problem of introducing more intelligence into the artificial intelligence’s responses in real-time strategy (RTS) games. We discuss how the paradigm of artificial immune systems (AIS) gives us an effective model for improving the AI’s responses, and demonstrate with simple games how an AIS works. We further discuss how the AIS paradigm enables us to extend current games in ways that make them more sophisticated for both human and AI players. In this chapter we show how strategies may be dynamically created and utilised by an artificial intelligence in an RTS game, using RTS games that are as simple as possible in order to display the power of the method.
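
For illustration, here is a minimal clonal-selection sketch in the AIS spirit (a hedged example of ours, not the chapter's code): candidate strategies act as antibodies scored against the current game situation, the best are cloned and mutated, and the weakest are replaced; the fitness function and all parameters are hypothetical stand-ins:

```python
import random

def clonal_selection(fitness, dim, pop_size=20, generations=50,
                     n_select=5, clones_per=4, mutation=0.1):
    # Antibodies are real-valued vectors; fitness scores them
    # against the current game situation (the "antigen").
    pop = [[random.uniform(0, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        clones = []
        for antibody in pop[:n_select]:
            for _ in range(clones_per):
                # Hypermutation: clones are perturbed copies of good antibodies.
                clones.append([g + random.gauss(0, mutation) for g in antibody])
        # Keep the best of parents and clones, refill with random newcomers
        # so the repertoire never stagnates.
        pop = sorted(pop + clones, key=fitness, reverse=True)[:pop_size - 2]
        pop += [[random.uniform(0, 1) for _ in range(dim)] for _ in range(2)]
    return max(pop, key=fitness)

# Example with a hypothetical fitness: prefer vectors close to (0.3, 0.7).
best = clonal_selection(lambda v: -((v[0] - 0.3) ** 2 + (v[1] - 0.7) ** 2), dim=2)
```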


Author(s):  
Darryl Charles ◽  
Colin Fyfe ◽  
Daniel Livingstone ◽  
Stephen McGlinchey

With the artificial neural networks we have met so far, we must have a training set for which we already know the answers to the questions we are going to pose to the network. Yet humans appear to be able to learn (indeed, some would say can only learn) without explicit supervision. The aim of unsupervised learning is to mimic this aspect of human capabilities, and hence this type of learning tends to use more biologically plausible methods than the error-descent methods of the last two chapters. The network must self-organise, and to do so it must react to some aspect of the input data – typically either redundancy or clusters in the data; that is, there must be some structure in the data to which it can respond.
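
As one minimal sketch of such self-organisation (ours, not the chapter's), simple winner-take-all competitive learning moves each unit's weight vector toward the inputs it wins, so the units come to represent clusters in the data; the parameters are illustrative:

```python
import numpy as np

def competitive_learning(data, n_units=3, lr=0.1, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    # Initialise each unit at a random data point so every unit can win.
    weights = data[rng.choice(len(data), n_units, replace=False)].copy()
    for _ in range(epochs):
        for x in rng.permutation(data):
            # The winner is the unit nearest the input...
            winner = np.argmin(((weights - x) ** 2).sum(axis=1))
            # ...and only the winner moves toward that input.
            weights[winner] += lr * (x - weights[winner])
    return weights
```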


Author(s):  
Darryl Charles ◽  
Colin Fyfe ◽  
Daniel Livingstone ◽  
Stephen McGlinchey

In this chapter we will look at supervised learning in more detail, beginning with one of the simplest (and earliest) supervised neural learning algorithms – the Delta Rule. The objectives of this chapter are to provide a solid grounding in the theory and practice of problem solving with artificial neural networks – and an appreciation of some of the challenges and practicalities involved in their use.
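
For orientation, here is a minimal delta-rule sketch (our illustration; the data, learning rate and epoch count are assumptions): a single linear unit trained by gradient descent on the squared error, in the spirit of the classic Widrow-Hoff procedure:

```python
import numpy as np

def delta_rule(x, targets, lr=0.05, epochs=100):
    # One weight per input plus a bias.
    xb = np.hstack([x, np.ones((len(x), 1))])
    w = np.zeros(xb.shape[1])
    for _ in range(epochs):
        for xi, t in zip(xb, targets):
            y = w @ xi                # linear output of the unit
            w += lr * (t - y) * xi    # delta rule: change is proportional
                                      # to the error (t - y) and the input
    return w

# Example: learn y = 2a - b + 1 from a few noisy-free samples.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(40, 2))
targets = 2 * x[:, 0] - x[:, 1] + 1
print(delta_rule(x, targets))   # approaches [2, -1, 1]
```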


Author(s):  
Darryl Charles ◽  
Colin Fyfe ◽  
Daniel Livingstone ◽  
Stephen McGlinchey

It is very evident that current progress in developing realistic and believable game AI lags behind that in developing realistic graphical and physical models. For example, in the years between the development of Neverwinter Nights by BioWare and the release of its sequel, Neverwinter Nights 2, by Obsidian in collaboration with BioWare, there were obvious and significant advances in graphics. The character models in the first game are decidedly angular, the result of having limited resources to expend on the polygons required to simulate the appearance of natural curves and body shapes. The sequel has no such problems: only a few years separate the two games, and the difference is remarkable.


Author(s):  
Darryl Charles ◽  
Colin Fyfe ◽  
Daniel Livingstone ◽  
Stephen McGlinchey

Multi-objective problems (MOPs) are a class of problems in which different, competing objectives are to be satisfied and for which there is generally no single best solution, but rather a set of solutions which are all equally good. In commercial real-time strategy (RTS) games, designers put a lot of effort into creating games in which a variety of strategies and tactics can be employed and in which (ideally) no single simple optimal strategy exists. Indeed, a great deal of effort may be spent ‘balancing’ the game to ensure that the main strategies and units all have effective counters (Rollings & Morris, 1999). RTS games may then be considered MOPs: if not in terms of the overall goal of winning the game, which is clearly a single overriding objective, then in terms of the many different objectives that must be met in order to achieve victory. There may be a number of strong, potentially winning strategies, each formed from the combination of a large number of tactical and strategic decisions, and in which an improvement in one area leads to a weakness elsewhere.
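
To make the notion of ‘equally good’ solutions precise, here is a small sketch (ours, not from the chapter) of Pareto dominance and the non-dominated set, assuming higher objective scores are better; the strategy scores are invented for illustration:

```python
def dominates(a, b):
    # a dominates b: no worse on every objective, strictly better on one.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    # The non-dominated set: solutions no other solution dominates.
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Example: (attack, economy) scores for four strategies; higher is better.
print(pareto_front([(3, 1), (1, 3), (2, 2), (1, 1)]))
# -> [(3, 1), (1, 3), (2, 2)]: three incomparable strategies survive.
```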


Author(s):  
Darryl Charles ◽  
Colin Fyfe ◽  
Daniel Livingstone ◽  
Stephen McGlinchey

The last two chapters introduced the standard GA, presented an example case study, and explored some of the potential pitfalls in using evolutionary methods. This chapter focuses on a number of extensions and variations of the standard GA. Many such extensions have been proposed, usually in response to situations in which the simple GA does not perform particularly well; there is not room here to cover them all.
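
For reference before the extensions, here is a minimal sketch of the standard GA itself (our illustration; the operators and parameters are typical choices, not the book's): binary chromosomes, tournament selection, one-point crossover and bit-flip mutation:

```python
import random

def standard_ga(fitness, length=20, pop_size=30, generations=50,
                crossover_rate=0.7, mutation_rate=0.01):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # Tournament selection of two parents.
            p1 = max(random.sample(pop, 3), key=fitness)
            p2 = max(random.sample(pop, 3), key=fitness)
            # One-point crossover.
            if random.random() < crossover_rate:
                cut = random.randrange(1, length)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # Bit-flip mutation.
            child = [1 - g if random.random() < mutation_rate else g for g in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Example with a toy fitness: maximise the number of ones.
print(standard_ga(sum))
```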

