Encyclopedia of Artificial Intelligence
Published by IGI Global
ISBN 9781599048499, 9781599048505
Total documents: 237 · H-index: 6

Author(s):  
T. T. Wong ◽  
C. W. Leung

Recent advances in the applications of ANN have demonstrated successful cases in time series analysis, data mining, civil engineering, financial analysis, music creation, fishing prediction, production scheduling, intruder detection, etc., making them an important tool for research and development [1]. ANN and evolutionary computation (EC) techniques have been employed successfully in solving real-world problems, including those with a temporal component [2]. In another work [3], a hybrid method combining evolutionary computation and neural networks (NN) was used to predict time series. In the world of databases, various ANN-based strategies have been used for knowledge search and extraction [4]. Intelligent neural systems have been constructed with the aid of genetic algorithm-based EC techniques, and these systems have been applied in breast cancer diagnosis [5]. Genetic algorithms (GA) have been applied to develop a general method of selecting the most relevant subset of variables in analytical chemistry, for the classification of apple beverages [6]. New ANN methods enable civil engineers to use computing in different ways. Besides serving as a tool in urban storm drainage [7], ANN and genetic programming (GP) have been implemented in the prediction and modelling of the flow of a typical urban basin [8]. In the latter case, it was shown that these two techniques could be combined to design a real-time alarm system for flood or subsidence warning in various types of urban basins. ANN models for consistency, measured by slump, in the case of conventional concrete have also been developed [9]. In a time series prediction of the quarterly values of the medical component of the Consumer Price Index (CPI), the results obtained with neural and functional networks have been shown to be quite similar [10]. Dimensionality reduction, variable reduction, hybrid networks, fuzzy systems and ANN have been applied to predict bond ratings [11].
A recent online survey of the ISI Web of Knowledge using keywords such as “ANN” and “thermal design” reveals only ten relevant SCI publications [12]. In the area of food processing, ANN was used to predict the maximum or minimum temperature reached in a sample after pressurization and the time needed for thermal re-equilibration [13]. The accurate determination of the thermophysical properties of milk is very important for the design, simulation, optimization, and control of food processes such as evaporation, heat exchange, spray drying, and so forth. Generally, polynomial methods are used to predict these properties, based on empirical correlation to experimental data. However, it was found that ANN predicted the specific heat, thermal conductivity, and density of milk better than polynomial modelling, and it was suggested as a reasonable alternative to empirical modelling of the thermophysical properties of foods [14]. Numerical simulation of a natural-circulation boiling water reactor is important in order to study its performance for different designs and under various off-design conditions. It was found that very fast numerical simulations, useful for extensive parametric studies and for solving design optimization problems, can be achieved by using an ANN model of the system [15]. ANN models and GA were applied to develop prediction models and to optimize constant-temperature retort thermal processing of conduction-heated foods [16]. The ANN technique has also been used as a new approach to determine the exergy losses of an ejector-absorption heat transformer (EAHT) [17]. The results show that the ANN approach has the advantages of computational speed, low cost, rapid turnaround (especially important during iterative design phases), and ease of design by operators with little technical experience.
The computational fluid dynamics approach is often employed for heat transfer analysis of the ball grid array (BGA) packages widely used in the modern electronics industry. Owing to the complicated geometric configuration of the BGA package, an ANN was trained to establish the relationship between the geometry input and the thermal resistance output [18]. The results of this study provide the electronic packaging industry with a reliable and rapid method for the heat dissipation design of BGA packages. Thermal spraying is a versatile coating-manufacturing technique implementing a large variety of materials and processes. An ANN was developed to relate processing parameters to the properties of alumina-titania ceramic coatings [19]. The predicted results show good overall agreement with the experimental values. It can be seen that applications of ANN in thermal design are scarce, and this article aims to explore the application of an ANN to gas-fired cooktop burner design.
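The surrogate-modelling idea described above, training an ANN to map geometry inputs to a thermal resistance output, can be sketched as follows. This is a minimal illustration with synthetic data: the three geometry features and the target relation are invented for the example, not taken from the cited study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: 200 samples, 3 hypothetical geometry features
# (e.g. die size, ball count, substrate thickness), scaled to [0, 1].
X = rng.random((200, 3))
# Invented smooth target standing in for measured thermal resistance.
y = (1.5 - 0.8 * X[:, 1] + 0.4 * X[:, 0] * X[:, 2]).reshape(-1, 1)

def init(n_in, n_hidden, n_out, rng):
    return {
        "W1": rng.normal(0, 0.5, (n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0, 0.5, (n_hidden, n_out)),
        "b2": np.zeros(n_out),
    }

def forward(p, X):
    h = np.tanh(X @ p["W1"] + p["b1"])   # hidden layer
    return h, h @ p["W2"] + p["b2"]      # linear output

def train(p, X, y, lr=0.05, epochs=500):
    for _ in range(epochs):
        h, out = forward(p, X)
        err = out - y                          # gradient of 0.5*MSE w.r.t. out
        p["W2"] -= lr * h.T @ err / len(X)
        p["b2"] -= lr * err.mean(0)
        dh = (err @ p["W2"].T) * (1 - h ** 2)  # backprop through tanh
        p["W1"] -= lr * X.T @ dh / len(X)
        p["b1"] -= lr * dh.mean(0)
    return p

params = init(3, 16, 1, rng)
mse_before = np.mean((forward(params, X)[1] - y) ** 2)
params = train(params, X, y)
mse_after = np.mean((forward(params, X)[1] - y) ** 2)
```

Once trained, evaluating the network is a couple of matrix products, which is what makes such a surrogate so much cheaper than a full CFD run.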


Author(s):  
Angelo Loula ◽  
João Queiroz

The topic of representation acquisition, manipulation and use has been a major trend in Artificial Intelligence since its beginning and persists as an important matter in current research. Particularly, due to the initial focus on the development of symbolic systems, this topic is usually related to research on symbol grounding by artificial intelligent systems. Symbolic systems, as proposed by Newell & Simon (1976), are characterized as high-level cognition systems in which symbols are seen as “[lying] at the root of intelligent action” (Newell and Simon, 1976, p. 83). Moreover, they stated the Physical Symbol System Hypothesis (PSSH), making the strong claim that “a physical symbol system has the necessary and sufficient means for general intelligent action” (p. 87). This hypothesis therefore establishes an equivalence between symbol systems and intelligent action, such that every intelligent action would originate in a symbol system and every symbol system would be capable of intelligent action. The symbol system described by Newell and Simon (1976) is seen as a computer program capable of manipulating entities called symbols: ‘physical patterns’ combined in expressions, which can be created, modified or destroyed by syntactic processes. Two main capabilities of symbol systems were said to provide the system with the properties of closure and completeness, so that the system itself could be built upon symbols alone (Newell & Simon, 1976). These capabilities were designation (expressions designate objects) and interpretation (expressions can be processed by the system). The question, from which much of the criticism of symbol systems arose, was how these systems, built upon and manipulating just symbols, could designate something outside their own domain. Symbol systems lack ‘intentionality’, stated John Searle (1980), in an important essay in which he described a widely known thought experiment (Gedankenexperiment), the ‘Chinese Room Argument’.
In this experiment, Searle places himself in a room where he is given correlation rules that permit him to determine answers in Chinese to questions, also in Chinese, given to him, although Searle, as the interpreter, knows no Chinese. To an outside observer (who understands Chinese), the man in this room understands Chinese quite well, even though he is actually manipulating non-interpreted symbols using formal rules. For the outside observer the symbols in the questions and answers do represent something, but for the man in the room the symbols lack intentionality. The man in the room acts like a symbol system, which relies only on the manipulation of symbolic structures by formal rules. For such systems, the manipulated tokens are not about anything, and so they cannot even be regarded as representations. The only intentionality that can be attributed to these symbols belongs to whoever uses the system, sending inputs that represent something to them and interpreting the output that comes out of the system (Searle, 1980). Intentionality, therefore, is the important feature missing in symbol systems. The concept of intentionality is one of aboutness, a “feature of certain mental states by which they are directed at or about objects and states of affairs in the world” (Searle, 1980), as a thought is about a certain place. Searle (1980) points out that a ‘program’ by itself cannot achieve intentionality, because programs involve formal relations, whereas intentionality depends on causal relations. Along these lines, Searle leaves open a possibility of overcoming the limitations of mere programs: ‘machines’, physical systems causally connected to the world and having ‘causal internal powers’, could reproduce the necessary causality, an approach in the same direction as situated and embodied cognitive science and robotics. It is important to notice that these ‘machines’ should not be just robots controlled by a symbol system as described before.
If the input came not from a keyboard but from a video camera, and the output went not to a monitor but to motors, it would make no difference, since the symbol system is not aware of the change; even in this case, the robot would not have intentional states (Searle, 1980). Symbol systems should not depend on formal rules only, if symbols are to represent something to the system. This issue raised another question: how can symbols be connected to what they represent? Or, as stated by Harnad (1990) in defining the Symbol Grounding Problem: “How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols?” The Symbol Grounding Problem therefore reinforces two important points. First, symbols do not represent anything to a system, at least not what they were said to ‘designate’; only someone operating the system could recognize those symbols as referring to entities outside the system. Second, the symbol system cannot maintain its closure by relating symbols only to other symbols; something else is necessary to establish a connection between symbols and what they represent. An analogy made by Harnad (1990) is with someone who knows no Chinese but tries to learn Chinese from a Chinese/Chinese dictionary: since terms are defined by using other terms and none of them is known beforehand, the person is kept in a ‘dictionary-go-round’ without ever understanding those symbols. The great challenge for Artificial Intelligence researchers, then, is to connect symbols to what they represent, and also to identify the consequences that the implementation of such a connection would have for a symbol system, e.g.
many of the descriptions of symbols by means of other symbols would become unnecessary when descriptions through grounding are available. It is important to notice that the grounding process is not just about giving sensors to an artificial system so that it would be able to ‘see’ the world, since this ‘trivializes’ the symbol grounding problem and ignores the important issue of how the connection between symbols and objects is established (Harnad, 1990).


Author(s):  
Amanda J.C. Sharkey

Swarm Robotics is a biologically inspired approach to the organisation and control of groups of robots. Its biological inspiration is mainly drawn from social insects, but also from herding and flocking phenomena in mammals and fish. The promise of emulating some of the efficient organisational principles of biological swarms is an alluring one. In biological systems such as colonies of ants, sophisticated cooperative behaviour emerges despite the simplicity of the individual members, and the absence of centralised control and explicit directions. Such societies are able to maintain themselves as a collective, and to accomplish coordinated actions such as those required to construct and maintain nests, to find food, and to raise their young. The central idea behind swarm robotics is to find similar ways of coordinating and controlling collections of robots.
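The flocking behaviour mentioned above is commonly modelled with three local rules (cohesion, separation, alignment), each computed from an agent's neighbourhood alone, with no centralised control. The following is a minimal sketch of these rules with arbitrary parameter values; it is a generic illustration, not this chapter's controller.

```python
import numpy as np

rng = np.random.default_rng(1)
pos = rng.random((20, 2)) * 10          # 20 agents on a 10x10 plane
vel = rng.normal(0, 0.1, (20, 2))       # small random initial velocities

def step(pos, vel, r=3.0, w_coh=0.01, w_sep=0.05, w_ali=0.05):
    """One synchronous update of every agent from purely local information."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d < r) & (d > 0)        # neighbours within perception radius
        if not nbrs.any():
            continue
        # Cohesion: steer toward the local centre of mass.
        coh = pos[nbrs].mean(0) - pos[i]
        # Separation: steer away from very close neighbours.
        close = nbrs & (d < r / 3)
        sep = (pos[i] - pos[close].mean(0)) if close.any() else 0.0
        # Alignment: match the neighbours' average heading.
        ali = vel[nbrs].mean(0) - vel[i]
        new_vel[i] = 0.95 * new_vel[i] + w_coh * coh + w_sep * sep + w_ali * ali
    return pos + new_vel, new_vel

for _ in range(50):
    pos, vel = step(pos, vel)
```

Coordinated motion emerges from repeated application of these rules even though no agent sees the whole group, which is the organisational principle swarm robotics seeks to transfer to robot collectives.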


Author(s):  
Jesús Bernardino Alonso Hernández ◽  
Patricia Henríquez Rodríguez

It is possible to implement diagnostic support systems oriented to the evaluation of the phonatory system from the speech signal, by means of expert-system techniques. The application of these techniques allows, for example, the early detection of alterations in the phonatory system or the follow-up over time of patients undergoing a given treatment. The procedure for measuring the voice quality of a speaker from a digital recording consists of quantifying different acoustic characteristics of the speech, which makes it possible to compare them with certain reference patterns identified previously by a “clinical expert”. A speech quality measurement based on auditory assessment is very hard to use as a comparative reference across different voices and different human experts carrying out the evaluation. In the literature, some attempts have been made to obtain objective measures of speech quality by means of multidimensional clinical measurements based on auditory methods. Well-known examples are the GRBAS scale from Japan (Hirano, 1981) and its extension developed and applied in Europe (Dejonckere, Remacle, Fresnel-Elbaz, Woisard, Crevier-Buchman, & Millet, 1996), a set of perceptual and acoustic characteristics in Sweden (Hammarberg & Gauffin, 1995), and a set of phonetic characteristics with added information about the excitation of the vocal tract. The aim of these speech quality measurement procedures is to obtain an objective measurement from a subjective evaluation. There are different works in which objective measurements of speech quality obtained from a recording are proposed (Alonso, 2006), (Boyanov & Hadjitodorov, 1997), (Hansen, Gavidia-Ceballos, & Kaiser, 1998), (Hadjitodorov & Mitev, 2002), (Michaelis, Frohlich, & Strube, 1998), (Boyanov, Doskov, Mitev, Hadjitodorov, & Teston, 2000), (Godino-Llorente, Aguilera-Navarro, & Gomez-Vilda, 2000). In these works a sustained voiced sound (usually a vowel) is recorded and then used to compute speech quality measurements. The use of a sustained voiced sound is due to the fact that during the production of this kind of sound the speech system engages almost all of its mechanisms (constant glottal airflow, sustained vocal-fold vibration, …), enabling the detection of anomalies in any of them. In these works different sets of measurements are suggested in order to quantify speech quality objectively. All of them reveal one important fact: it is necessary to obtain several different measurements of the speech signal in order to capture the different aspects of its acoustic characteristics.
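As an illustration of such objective measurements, the sketch below computes two classic perturbation measures from per-cycle data of a sustained vowel: local jitter (cycle-to-cycle period perturbation) and local shimmer (cycle-to-cycle amplitude perturbation). The period and amplitude tracks are synthetic stand-ins for values a pitch tracker would extract; the formulas are the standard relative-perturbation definitions, not a specific method from the works cited.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated per-cycle measurements for a ~100 Hz voice (10 ms periods).
periods = 0.010 + rng.normal(0, 0.0001, 100)   # seconds
amps = 1.0 + rng.normal(0, 0.02, 100)          # arbitrary amplitude units

def jitter_local(periods):
    """Mean absolute difference of consecutive periods over the mean period."""
    return np.abs(np.diff(periods)).mean() / periods.mean()

def shimmer_local(amps):
    """Mean absolute difference of consecutive amplitudes over the mean amplitude."""
    return np.abs(np.diff(amps)).mean() / amps.mean()

jit = jitter_local(periods)
shim = shimmer_local(amps)
```

In a real system such measures would be computed from the recorded vowel itself and combined with many others, since, as noted above, no single measurement captures all aspects of voice quality.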


Author(s):  
Ioanna Roussaki ◽  
Ioannis Papaioannou ◽  
Miltiades Anagnostou

In the artificial intelligence domain, an emerging research field that is rapidly gaining momentum is Automated Negotiations (Fatima, Wooldridge, & Jennings, 2007) (Buttner, 2006). In this framework, building intelligent agents (Silva, Romão, Deugo, & da Silva, 2001) capable of participating in negotiations and acting autonomously on behalf of their owners is a very challenging research topic (Saha, 2006) (Jennings, Faratin, Lomuscio, Parsons, Sierra, & Wooldridge, 2001). In automated negotiations, three main items need to be specified (Faratin, Sierra, & Jennings, 1998) (Rosenschein, & Zlotkin, 1994): (i) the negotiation protocol and model, (ii) the negotiation issues, and (iii) the negotiation strategies that the agents will employ. According to (Walton, & Krabbe, 1995), “Negotiation is a form of interaction in which a group of agents, with conflicting interests and a desire to cooperate, try to come to a mutually acceptable agreement on the division of scarce resources”. These resources do not refer only to money; they also include other parameters over which the agents’ owners are willing to negotiate, such as product quality features, delivery conditions, guarantee, etc. (Maes, Guttman, & Moukas, 1999) (Sierra, 2004). In this framework, agents operate following predefined rules and procedures specified by the employed negotiation protocol (Rosenschein, & Zlotkin, 1994), aiming to address the requirements of their human or corporate owners as far as possible. Furthermore, the negotiating agents use a reasoning model based on which their responses to their opponents’ offers are formulated (Muller, 1996). This policy is widely known as the negotiation strategy of the agent (Li, Su, & Lam, 2006). This paper elaborates on the design of negotiation strategies for autonomous agents.
The proposed strategies are applicable in cases where the agents have strict deadlines and negotiate with a single party over the value of a single parameter (single-issue bilateral negotiations). Learning techniques based on MLP and GR Neural Networks (NNs) are employed by the client agents in order to predict their opponents’ behaviour and achieve timely detection of unsuccessful negotiations. The proposed NN-assisted strategies have been evaluated and turn out to be highly effective in reducing the duration of negotiation threads that cannot lead to agreement. The rest of the paper is structured as follows. The second section presents the basic principles of the designed negotiation framework and provides the formal problem statement. The third section elaborates on the NN-assisted strategies designed and gives the configuration details of the NNs employed. The fourth section presents the experiments conducted, while the fifth section summarizes and evaluates their results. Finally, the last section draws conclusions and outlines future research plans.
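A baseline concession strategy of the kind such single-issue, deadline-bound frameworks build on is the polynomial time-dependent tactic of Faratin, Sierra and Jennings (1998), cited above. The sketch below shows a buyer agent conceding from its ideal price toward its reservation price as its deadline approaches; the parameter values are illustrative, not taken from the paper's experiments.

```python
def offer(t, t_max, p_min, p_max, beta=1.0):
    """Buyer's offer at time t: start at p_min, reach p_max at the deadline.

    beta > 1 concedes early (conceder); beta < 1 concedes late (boulware).
    """
    alpha = (min(t, t_max) / t_max) ** (1.0 / beta)
    return p_min + alpha * (p_max - p_min)

# A boulware buyer (beta = 0.5) holds near its ideal price until late on.
offers = [offer(t, t_max=10, p_min=100.0, p_max=200.0, beta=0.5)
          for t in range(11)]
```

An opponent-modelling component such as the NN predictors described above would sit on top of a tactic like this, estimating where the opponent's concession curve is heading and abandoning threads that cannot reach agreement before the deadline.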


Author(s):  
José García-Rodríguez ◽  
Francisco Flórez-Revuelta ◽  
Juan Manuel García-Chamizo

Self-organising neural networks try to preserve the topology of an input space by means of competitive learning. This capacity has been used, among other applications, for the representation of objects and their motion. In this work we use a kind of self-organising network, the Growing Neural Gas, to represent deformations in objects along a sequence of images. As a result of an adaptive process, the objects are represented by a topology-representing graph that constitutes an induced Delaunay triangulation of their shapes. These maps adapt to changes in the objects’ topology without resetting the learning process.
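The competitive adaptation at the heart of such a network can be sketched as follows. This toy version keeps a fixed set of units and performs only the winner search, competitive-Hebbian edge creation and winner-plus-neighbour movement; the full Growing Neural Gas additionally ages edges, accumulates error and inserts or removes units, which is what lets it track deformations without restarting learning.

```python
import numpy as np

rng = np.random.default_rng(3)
units = rng.random((8, 2))              # positions of the graph's units
edges = set()                           # undirected edges between unit indices

def adapt(x, units, edges, eps_b=0.2, eps_n=0.02):
    """One adaptation step of a simplified Growing Neural Gas for input x."""
    d = np.linalg.norm(units - x, axis=1)
    s1, s2 = np.argsort(d)[:2]                  # winner and runner-up
    edges.add(frozenset((int(s1), int(s2))))    # competitive-Hebbian edge
    units[s1] += eps_b * (x - units[s1])        # move winner toward input
    for e in edges:                             # move winner's graph neighbours
        if int(s1) in e:
            (n,) = e - {int(s1)}
            units[n] += eps_n * (x - units[n])
    return units, edges

# Feed samples drawn from a ring so the graph wraps around that shape.
for _ in range(500):
    theta = rng.uniform(0, 2 * np.pi)
    x = 0.5 + 0.4 * np.array([np.cos(theta), np.sin(theta)])
    units, edges = adapt(x, units, edges)
```

With edge aging and unit insertion added, the resulting graph approximates the induced Delaunay triangulation of the input distribution mentioned in the abstract.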


Author(s):  
Javier Bajo ◽  
Dante I. Tapia ◽  
Sara Rodríguez ◽  
Juan M. Corchado

Agents and Multi-Agent Systems (MAS) have become increasingly relevant for developing distributed and dynamic intelligent environments. The ability of software agents to act somewhat autonomously links them with living animals and humans, so they seem appropriate for discussion under nature-inspired computing (Marrow, 2000). This paper presents AGALZ (Autonomous aGent for monitoring ALZheimer patients), and explains how this deliberative planning agent has been designed and implemented. A case study is then presented, with AGALZ working with complementary agents in a prototype environment-aware multi-agent system (ALZ-MAS: ALZheimer Multi-Agent System) (Bajo, Tapia, De Luis, Rodríguez & Corchado, 2007). The elderly health care problem is studied, and the possibilities of Radio Frequency Identification (RFID) (Sokymat, 2006) as a technology for constructing an intelligent environment and ascertaining patient location, in order to generate plans and maximize safety, are examined. This paper focuses on the development of nature-inspired deliberative agents using a Case-Based Reasoning (CBR) (Aamodt & Plaza, 1994) architecture, as a way to implement sensitive and adaptive systems that improve assistance and health care support for the elderly and people with disabilities, in particular Alzheimer’s patients. Agents in this context must be able to respond to events, take the initiative according to their goals, communicate with other agents, interact with users, and make use of past experiences to find the best plans to achieve their goals. We therefore propose the development of an autonomous deliberative agent that incorporates a Case-Based Planning (CBP) mechanism, derived from Case-Based Reasoning (Bajo, Corchado & Castillo, 2006) and specially designed for plan construction. CBP-BDI facilitates learning and adaptation, and therefore a greater degree of autonomy than that found in a pure BDI (Belief, Desire, Intention) architecture (Bratman, 1987).
BDI agents can be implemented with different tools, such as Jadex (Pokahr, Braubach & Lamersdorf, 2003), which deals with the concepts of beliefs, goals and plans as Java objects that can be created and handled within the agent at execution time.
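The CBR cycle on which such deliberative agents rely, retrieve, reuse, revise, retain (Aamodt & Plaza, 1994), can be sketched as follows. The case structure and similarity measure here are illustrative assumptions for the sketch, not AGALZ's actual implementation.

```python
# Each case pairs a problem description with the plan that solved it.
# The (room, time-of-day) problem encoding is a hypothetical placeholder.
case_base = [
    {"problem": (8, "morning"), "plan": ["visit room 8", "check medication"]},
    {"problem": (3, "evening"), "plan": ["visit room 3", "assist dinner"]},
]

def similarity(p, q):
    """Toy measure: room distance plus a time-of-day mismatch penalty."""
    return -(abs(p[0] - q[0]) + (0 if p[1] == q[1] else 5))

def cbr_cycle(problem, case_base):
    # Retrieve: find the most similar past case.
    best = max(case_base, key=lambda c: similarity(problem, c["problem"]))
    # Reuse: adapt its plan to the new problem (here, swap the room number).
    plan = [step.replace(str(best["problem"][0]), str(problem[0]))
            for step in best["plan"]]
    # Revise: a real agent would check the executed plan's outcome here.
    # Retain: store the new experience for future planning.
    case_base.append({"problem": problem, "plan": plan})
    return plan

plan = cbr_cycle((7, "morning"), case_base)
```

The CBP variant described above applies the same cycle to plans rather than plain solutions, which is what gives the agent its capacity to learn from each assistance episode.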


Author(s):  
Manuel Lama ◽  
Eduardo Sánchez

In recent years, the growth of the Internet has opened the door to new ways of learning and new education methodologies. Furthermore, the appearance of different tools and applications has increased the need for interoperable as well as reusable learning contents, teaching resources and educational tools (Wiley, 2000). Driven by this new environment, several metadata specifications describing learning resources, such as IEEE LOM (LTCS, 2002) and Dublin Core (DCMI, 2004), and learning design processes (Rawlings et al., 2002) have appeared. In this context, the term learning design describes the method that enables learners to achieve learning objectives after a set of activities is carried out using the resources of an environment. Among the proposed specifications, IMS (IMS, 2003) has emerged as the de facto standard, as it facilitates the representation of any learning design based on a wide range of pedagogical techniques. Metadata specifications are useful for describing educational resources in order to favour interoperability and reuse across learning software platforms. However, the majority of metadata standards focus only on determining the vocabulary for representing the different aspects of the learning process, while the meaning of the metadata elements is usually described in natural language. Although such descriptions are easy for the learning participants to understand, they are not appropriate for software programs designed to process the metadata. To solve this issue, ontologies (Gómez-Pérez, Fernández-López, and Corcho, 2004) can be used to describe formally and explicitly the structure and meaning of the metadata elements; that is, an ontology semantically describes the metadata concepts. Furthermore, both metadata and ontologies emphasize that their descriptions must be shared (or standardized) within a given community.
In this paper, we present a short review of the main ontologies developed in recent years in the education field, focusing on the uses that authors have given to them. As we will show, ontologies solve issues related to the inconsistencies of natural language descriptions and help establish a consensus on the semantics of a given specification.


Author(s):  
Jesús Bernardino Alonso Hernández ◽  
Patricia Henríquez Rodríguez

The field of nonlinear signal characterization and nonlinear signal processing has attracted a growing number of researchers in the past three decades. This comes from the fact that linear techniques have limitations in certain areas of signal processing, and numerous nonlinear techniques have been introduced to complement the classical linear methods and to serve as an alternative when the assumption of linearity is inappropriate. Two of these techniques are higher order statistics (HOS) and nonlinear dynamics theory (chaos). They have been widely applied to time series characterization and analysis in several fields, especially to biomedical signals. HOS and chaos techniques have had a similar evolution. Both were first studied around the turn of the twentieth century: the method of moments (related to HOS) was developed by Pearson, and in 1890 Henri Poincaré found sensitive dependence on initial conditions (a symptom of chaos) in a particular case of the three-body problem. Both approaches were then eclipsed by linear techniques until around 1960, when Lorenz rediscovered a chaotic system by chance while studying the behaviour of air masses. Meanwhile, a group of statisticians at the University of California began to explore the use of HOS techniques again. However, these techniques received little attention until around 1980, when Mendel (Mendel, 1991) developed system identification techniques based on HOS, and Ruelle (Ruelle, 1979), Packard (Packard, 1980), Takens (Takens, 1981) and Casdagli (Casdagli, 1989) established the methods for modelling nonlinear time series through chaos theory. But it is only recently that the application of HOS and chaos to time series has become feasible, thanks to the greater computational capacity of computers and to Digital Signal Processing (DSP) technology. This article presents the state of the art of two nonlinear techniques applied to time series analysis: higher order statistics and chaos theory.
Some measurements based on HOS and chaos techniques will be described, and the way in which these measurements characterize different behaviours of a signal will be analysed. The application of nonlinear measurements permits a more realistic characterization of signals and therefore represents an advance in the development of automatic systems.
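Two of the measurement families discussed above can be sketched directly: third- and fourth-order statistics (skewness and excess kurtosis), which vanish for a Gaussian signal and thus flag non-Gaussian behaviour, and a Takens-style delay embedding, the first step of most chaos-theoretic analyses. These are generic textbook formulas, not a specific method from the cited works.

```python
import numpy as np

def skewness(x):
    """Normalised third central moment (0 for a symmetric distribution)."""
    x = np.asarray(x, float)
    m = x.mean()
    return np.mean((x - m) ** 3) / np.std(x) ** 3

def excess_kurtosis(x):
    """Normalised fourth central moment minus 3 (0 for a Gaussian)."""
    x = np.asarray(x, float)
    m = x.mean()
    return np.mean((x - m) ** 4) / np.std(x) ** 4 - 3.0

def delay_embed(x, dim, tau):
    """Matrix of delay vectors [x(t), x(t+tau), ..., x(t+(dim-1)tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

# A deterministic (logistic-map) series is strongly non-Gaussian, so the
# higher-order statistics depart clearly from their Gaussian values.
x = np.empty(2000)
x[0] = 0.4
for i in range(1999):
    x[i + 1] = 4.0 * x[i] * (1 - x[i])

emb = delay_embed(x, dim=3, tau=1)
```

In the delay-embedded space, quantities such as correlation dimension or Lyapunov exponents can then be estimated, which is how the chaos-theoretic measurements surveyed in the article are obtained.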


Author(s):  
J. Francisco Vargas ◽  
Miguel A. Ferrer

Biometrics offers potential for automatic personal identification and verification. Unlike other means of personal verification, biometric methods are not based on the possession of something (such as a card) or the knowledge of some information (such as a password). There is considerable interest in biometric authentication based on automatic signature verification (ASV) systems, because ASV has been shown to be superior to many other biometric authentication techniques, e.g. fingerprints or retinal patterns, which are reliable but much more intrusive and expensive. An ASV system is one capable of efficiently deciding whether a signature is genuine or forged. Numerous pattern recognition methods have been applied to signature verification. Among the methods proposed for ASV, two broad categories can be identified: memory-based methods and parameter-based methods such as neural networks. The major approaches to ASV systems are the template matching, spectrum analysis, neural network, cognitive and fractal approaches. This article reviews the ASV techniques corresponding to the approaches proposed so far in the literature. An attempt is made to describe the important techniques, especially those involving ANNs, and to assess their performance on the basis of the published literature. The paper also discusses possible future areas for ASV research.
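The skeleton of a parameter-based verifier of the kind the article contrasts with memory-based methods can be sketched as follows. Each signature is reduced to a fixed-length feature vector (the four features here are invented placeholders, e.g. aspect ratio, stroke density, slant, baseline angle); a claimed identity is accepted if the vector lies close enough to the mean of that writer's enrolment samples. Real ASV systems use far richer features and learned decision rules.

```python
import numpy as np

rng = np.random.default_rng(4)

TRUE_STYLE = np.array([1.8, 0.35, 12.0, 2.0])   # the writer's "true" features
TOLERANCE = np.full(4, 0.05)                    # assumed per-feature spread

# Enrolment: average five genuine samples into a reference template.
genuine = rng.normal(TRUE_STYLE, 0.05, (5, 4))
template = genuine.mean(0)

def verify(features, template, tolerance, threshold=8.0):
    """Accept iff the tolerance-normalised distance to the template is small."""
    dist = np.linalg.norm((features - template) / tolerance)
    return bool(dist <= threshold)

test_genuine = rng.normal(TRUE_STYLE, 0.05)        # another genuine sample
forgery = rng.normal([1.2, 0.60, 9.0, 3.5], 0.05)  # a different writer's style
```

A neural-network approach replaces the fixed distance-and-threshold rule with a decision function learned from genuine and forged examples, which is the direction the article's survey emphasises.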

