Distributed Artificial Intelligence, Agent Technology, and Collaborative Applications - Advances in Intelligent Information Technologies
Latest Publications

Total documents: 19 (last five years: 0)
H-index: 1 (last five years: 0)
Published by: IGI Global
ISBN: 9781605661445, 9781605661452

Author(s):  
Xiannong Meng ◽  
Song Xing

This chapter reports the results of a project that assesses the performance of several major search engines from various perspectives. The search engines involved in the study include the Microsoft Search Engine (MSE) while it was in its beta test stage, AllTheWeb, and Yahoo. In a few comparisons, other search engines such as Google and Vivisimo are also included. The study collects statistics such as the average user response time, the average processing time per query reported by MSE, and the number of pages relevant to a query reported by all search engines involved. The project also studies the quality of the search results generated by MSE and the other search engines, using RankPower as the metric. We found that MSE performs well in speed and diversity of query results, but is weaker on other measures compared with some other leading search engines. The contribution of this chapter is to review performance evaluation techniques for search engines and to use different measures to assess and compare the quality of different search engines, especially MSE.
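
For readers unfamiliar with the metric, the sketch below shows one way a RankPower-style score can be computed for a judged result list. The formulation used here, the average rank of the relevant results divided by the number of relevant results (lower is better), is an assumption for illustration; the chapter may normalize differently.

```python
# A minimal sketch of a RankPower-style score, assuming it is defined as the
# average rank of the relevant results divided by their count (lower is better).

def rank_power(relevance, top_n=20):
    """relevance: booleans, True where the result at that (1-based) rank is relevant."""
    ranks = [i + 1 for i, rel in enumerate(relevance[:top_n]) if rel]
    if not ranks:
        return float("inf")               # no relevant result found in the top_n
    avg_rank = sum(ranks) / len(ranks)
    return avg_rank / len(ranks)          # rewards many relevant hits near the top

# Example: relevant results at ranks 1, 3, and 4 within the top 10.
judged = [True, False, True, True, False, False, False, False, False, False]
print(rank_power(judged, top_n=10))       # (1 + 3 + 4) / 3 / 3 ≈ 0.89
```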


Author(s):  
Christian Hillbrand

The motivation for this chapter is the observation that many companies build their strategy upon poorly validated hypotheses about cause and effect among certain business variables. However, the soundness of these cause-and-effect relations, as well as knowledge of the approximate shape of the functional dependencies underlying these associations, turns out to be the biggest issue for the quality of the results of decision-support procedures. Since it is sufficiently clear that mere correlation of time series is not enough to prove causality between two business concepts, there seems to be a rather dogmatic perception of the inadmissibility of empirical validation mechanisms for causal models within the fields of strategic management and management science. However, one can find proven causality techniques in other disciplines such as econometrics, mechanics, neuroscience, or philosophy. This chapter therefore presents an approach which applies a combination of well-established statistical methods for proving causality to strategy models in order to validate them. These validated causal strategy models are then used as the basis for approximating the functional form of the causal dependencies by means of artificial neural networks. This in turn can be employed to build an approximate simulation or forecasting model of the strategic system.
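
As an illustration of the two-step idea, the sketch below first applies a Granger causality test (one statistical causality technique borrowed from econometrics; the chapter may use others) and then fits a small neural network to approximate the functional form of the validated dependency. The variable names and the synthetic data are purely illustrative.

```python
# A minimal sketch, assuming Granger-style causality testing followed by an
# artificial neural network that approximates the functional form of the
# validated dependency. Variable names and data are illustrative only.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
marketing_spend = rng.normal(100, 10, 200)
revenue = 5 * np.roll(marketing_spend, 2) + rng.normal(0, 5, 200)   # effect lagged by two periods

# Step 1: test whether marketing_spend Granger-causes revenue
# (the second column is tested as a cause of the first).
data = np.column_stack([revenue, marketing_spend])
results = grangercausalitytests(data, maxlag=4)
print("p-value at lag 2:", results[2][0]["ssr_ftest"][1])

# Step 2: approximate the functional form with a small ANN on lagged inputs.
X = np.column_stack([marketing_spend[:-2], marketing_spend[1:-1]])  # spend at t-2 and t-1
y = revenue[2:]
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X, y)
print("R^2 of the approximated dependency:", model.score(X, y))
```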


Author(s):  
John M. Artz

Earlier work in the philosophical foundations of information modeling identified four key concepts for which the philosophical groundwork must be further developed. This chapter reviews that earlier work and expands on one key area, the Problem of Universals, which is at the very heart of information modeling.


Author(s):  
Salvatore T. March ◽  
Gove N. Allen

Active information systems participate in the operation and management of business organizations. They create conceptual objects that represent social constructions, such as agreements, commitments, transactions, and obligations. They determine and ascribe attributes to both conceptual and concrete objects (things) that are of interest to the organization. Active information systems infer conclusions based on the application of socially constructed and mutable rules, constituting organizational policies and procedures, that govern how conceptual and concrete objects are affected when defined and identified events occur. The ontological foundations for active information systems must include constructs that represent concrete and conceptual objects, their attributes, and the events that affect them. Events are a crucial component of conceptual models that represent active information systems. The representation of events must include ascribed attributes representing data values inherent in the event as well as rules defining how conceptual and concrete objects are affected when the event occurs. The state-history of an object can then be constructed and reconstructed from the sequence of events that have affected it. Alternate state-histories can be generated based on proposed or conjectured rule modifications, enabling a reinterpretation of history. Future states can be predicted based on proposed or conjectured events and event definitions. Such a conceptualization enables a parsimonious mapping between an active information system and the organizational system in which it participates.
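
The event-centred view lends itself to a compact sketch: an object's state is not stored directly but is reconstructed by replaying the events that affected it under a given, and possibly revised, rule set. The class and rule names below are illustrative inventions, not constructs taken from the chapter.

```python
# A minimal sketch of the event-based conceptualization: state-histories are
# reconstructed by replaying events under a (mutable) rule set. All names here
# are illustrative, not the authors' constructs.
from dataclasses import dataclass

@dataclass
class Event:
    kind: str            # e.g. "order_placed", "payment"
    attributes: dict     # data values inherent in the event

# Socially constructed, mutable rules: how each event kind affects an object's state.
RULES = {
    "order_placed": lambda state, e: {**state, "balance": e.attributes["amount"]},
    "payment":      lambda state, e: {**state, "balance": state["balance"] - e.attributes["amount"]},
}

def state_history(events, rules=RULES, initial=None):
    """Replay events under a rule set; swapping in proposed rules yields an
    alternate state-history, i.e. a reinterpretation of history."""
    state = dict(initial or {})
    history = [state]
    for e in events:
        state = rules[e.kind](state, e)
        history.append(state)
    return history

events = [Event("order_placed", {"amount": 100}), Event("payment", {"amount": 40})]
print(state_history(events)[-1])   # {'balance': 60}
```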


Author(s):  
Ben Choi

Web mining aims at searching, organizing, and extracting information on the Web, and search engines focus on searching. The next stage of Web mining is the organization of Web contents, which will then facilitate the extraction of useful information from the Web. This chapter focuses on organizing Web contents. Since the majority of Web contents are stored in the form of Web pages, the chapter concentrates on techniques for automatically organizing Web pages into categories. Various artificial intelligence techniques have been used; the most successful ones are classification and clustering, and this chapter focuses on clustering. Clustering is well suited to Web mining because it automatically organizes Web pages into categories, each of which contains Web pages with similar contents. However, one problem in clustering is the lack of general methods to automatically determine the number of categories or clusters, and until now no such method has been suitable for Web page clustering. To address this problem, this chapter describes a method to discover a constant factor that characterizes the Web domain and proposes a new method for automatically determining the number of clusters in Web page datasets. The chapter also proposes a new bi-directional hierarchical clustering algorithm, which arranges individual Web pages into clusters, then arranges the clusters into larger clusters, and so on until the average inter-cluster similarity approaches the constant factor. With the constant factor and the algorithm together, this chapter provides a new clustering system suitable for mining the Web.
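
As a rough illustration of the stopping criterion, the sketch below performs bottom-up clustering of TF-IDF page vectors and stops merging once the average inter-cluster similarity drops to a fixed threshold standing in for the chapter's constant factor. It is not the authors' algorithm; the threshold value and the sample pages are invented for the example.

```python
# A minimal sketch: bottom-up clustering of Web pages that stops when the
# average inter-cluster similarity reaches a threshold playing the role of the
# chapter's "constant factor". Threshold and pages are illustrative only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pages = ["python web mining tutorial", "web page clustering methods",
         "chocolate cake recipe", "easy chocolate dessert recipe"]
X = TfidfVectorizer().fit_transform(pages).toarray()

clusters = [[i] for i in range(len(pages))]          # start with one page per cluster
CONSTANT_FACTOR = 0.05                               # illustrative stand-in value

def centroid(c):
    return X[c].mean(axis=0)

while len(clusters) > 1:
    sims = cosine_similarity([centroid(c) for c in clusters])
    np.fill_diagonal(sims, 0.0)
    avg_sim = sims.sum() / (len(clusters) * (len(clusters) - 1))
    if avg_sim <= CONSTANT_FACTOR:                   # stopping criterion reached
        break
    i, j = sorted(np.unravel_index(sims.argmax(), sims.shape))
    clusters[i] += clusters.pop(j)                   # merge the most similar pair

print(clusters)   # e.g. [[0, 1], [2, 3]] -- web-mining pages vs. recipe pages
```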


Author(s):  
Aboul Ella Hassanien ◽  
Jafar M. Ali

This chapter presents an efficient algorithm to classify and retrieve images from large databases in the context of rough set theory. Color and texture, two well-known low-level perceptual features for describing image contents, are used in this chapter. The features are extracted, normalized, and then rough set dependency rules are generated directly from the real-valued attribute vector. The rough set reduction technique is then applied to find all reducts of the data, which contain the minimal subsets of attributes associated with a class label, for classification. We test three different popular distance measures in this work and find that quadratic distance measures provide the most accurate and perceptually relevant retrievals. The retrieval performance is measured using the recall-precision measure, as is standard in retrieval systems.
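
For concreteness, the sketch below shows a quadratic-form (cross-bin) distance of the kind reported as most accurate, applied to color histograms. The bin-similarity matrix and the example histograms are invented for illustration and are not the chapter's actual feature set.

```python
# A minimal sketch of a quadratic-form (cross-bin) distance between color
# histograms; the bin-similarity matrix A and the histograms are illustrative.
import numpy as np

def quadratic_distance(h1, h2, A):
    """Quadratic-form distance: d^2 = (h1 - h2)^T A (h1 - h2)."""
    d = np.asarray(h1, float) - np.asarray(h2, float)
    return float(d @ A @ d)

bins = 4
# Neighbouring color bins are treated as partly similar.
A = np.array([[max(0.0, 1.0 - abs(i - j) / bins) for j in range(bins)]
              for i in range(bins)])

query   = np.array([0.5, 0.3, 0.2, 0.0])             # normalized color histogram of the query image
catalog = {"img1": np.array([0.4, 0.4, 0.2, 0.0]),
           "img2": np.array([0.0, 0.1, 0.4, 0.5])}

ranked = sorted(catalog, key=lambda k: quadratic_distance(query, catalog[k], A))
print(ranked)   # images ordered by increasing distance from the query
```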


Author(s):  
Antonio Picariello

Information retrieval can benefit greatly from users' feedback. The user dimension is therefore a relevant component that must be taken into account when planning and implementing real information retrieval systems. In this chapter, we first describe several concepts related to relevance feedback methods, and then propose a novel information retrieval technique which uses relevance feedback concepts in order to improve accuracy in an ontology-based system. In particular, we combine semantic information from a general knowledge base with statistical information using relevance feedback. Several experiments and results are presented using a test set consisting of Web pages.
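
The statistical side of such feedback is often expressed with the classical Rocchio update, sketched below; whether the chapter uses Rocchio or another formulation is not stated here, so treat this as a generic illustration.

```python
# A minimal sketch of classical Rocchio relevance feedback: the query vector is
# moved toward judged-relevant documents and away from non-relevant ones.
# The weights and vectors are illustrative.
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """All vectors share the same term space (e.g. TF-IDF weights)."""
    q = alpha * np.asarray(query, float)
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q -= gamma * np.mean(nonrelevant, axis=0)
    return np.clip(q, 0, None)            # negative term weights are usually dropped

query       = np.array([1.0, 0.0, 0.5, 0.0])
relevant    = [np.array([0.9, 0.1, 0.8, 0.0])]
nonrelevant = [np.array([0.0, 0.9, 0.0, 0.7])]
print(rocchio(query, relevant, nonrelevant))   # refined query vector
```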


Author(s):  
Lars Werner

Text documents stored in information systems usually carry more information than the pure concatenation of words; in particular, they also contain typographic information. Because conventional text retrieval methods evaluate only the word frequency, they miss the information provided by typography, e.g., regarding the importance of certain terms. To overcome this weakness, we present an approach which uses the typographic information of text documents and show how it improves the efficiency of text retrieval methods. Our approach weights typographic information in addition to term frequencies in order to separate the relevant information in text documents from the noise. We have evaluated our approach on the basis of automated text classification algorithms. The results show that our weighting approach achieves very competitive classification results while using at most 30% of the terms used by conventional approaches, which makes our approach significantly more efficient.
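
A simple way to picture typography-aware weighting is sketched below: occurrences of a term in headings or bold text contribute more to its weighted frequency than plain-text occurrences. The concrete weight values are invented for illustration, not the values evaluated in the chapter.

```python
# A minimal sketch of typography-weighted term counting; the weight values are
# illustrative, not the ones evaluated in the chapter.
from collections import Counter

TYPO_WEIGHTS = {"h1": 4.0, "h2": 3.0, "bold": 2.0, "plain": 1.0}

def weighted_term_frequencies(spans):
    """spans: (text, typography) pairs extracted from a document."""
    tf = Counter()
    for text, typo in spans:
        weight = TYPO_WEIGHTS.get(typo, 1.0)
        for term in text.lower().split():
            tf[term] += weight
    return tf

doc = [("Efficient Text Retrieval", "h1"),
       ("typographic weighting", "bold"),
       ("conventional methods evaluate only the word frequency", "plain")]
print(weighted_term_frequencies(doc).most_common(3))
```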


Author(s):  
Mehdi Yousfi-Monod

The work described in this chapter tackles learning and communication between cognitive artificial agents and addresses the following question: is it possible to find an equivalence between a communicative process and a learning process, and to model and implement communication and learning as dual aspects of the same cognitive mechanism? The focus is therefore on dialog as the only way for agents to acquire and revise knowledge, as often happens in natural situations. This chapter concentrates on a learning situation where two agents, in a “teacher/student” relationship, exchange information with a learning incentive (on behalf of the student) according to a Socratic dialog. The teacher acts as the reliable knowledge source, and the student is an agent whose goal is to increase its knowledge base in an optimal way. The chapter first defines the nature of the agents addressed, the types of relation they maintain, and the structure and contents of their knowledge base. It emphasizes the symmetry between interaction and knowledge management by highlighting knowledge “repair” procedures launched through dialogic means. These procedures deal with misunderstanding, a situation in which the student is unable to integrate new knowledge directly, and with discussion, which relates to handling paradoxical information. The chapter describes learning goals and strategies, and the student and teacher roles within both dialog and knowledge handling. It also provides solutions for problems encountered by the agents. A general architecture is then established, and part of the implementation of the theory is discussed. The conclusion covers the achievements of this work and its potential improvements.
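
The teacher/student exchange can be pictured with a toy loop like the one below, in which the student tries to integrate each statement, asks about missing prerequisites (misunderstanding) and requests clarification on contradictions (discussion). All class names, predicates, and the repair policy are invented for illustration and do not reproduce the chapter's architecture.

```python
# A toy sketch of the teacher/student dialog; all names and the repair policy
# are illustrative, not the chapter's architecture.
class Teacher:
    """Reliable knowledge source."""
    def explain(self, concept):
        return concept                     # returns a statement about the concept
    def resolve(self, fact):
        return fact                        # the teacher's version wins in this sketch

class Student:
    def __init__(self):
        self.kb = set()                    # the knowledge base to be increased

    def integrate(self, fact, prerequisites, teacher):
        # Misunderstanding: ask the teacher about unknown prerequisites first.
        for p in prerequisites.get(fact, []):
            if p not in self.kb:
                self.integrate(teacher.explain(p), prerequisites, teacher)
        # Discussion: paradoxical information triggers a clarification request.
        if ("not " + fact) in self.kb:
            fact = teacher.resolve(fact)
            self.kb.discard("not " + fact)
        self.kb.add(fact)

prereq = {"penguins cannot fly": ["penguins are birds"]}
student, teacher = Student(), Teacher()
student.integrate("penguins cannot fly", prereq, teacher)
print(student.kb)   # both the prerequisite and the new fact are integrated
```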


Author(s):  
Réal Carbonneau ◽  
Rustam Vahidov ◽  
Kevin Laframboise

Managing supply chains in today’s complex, dynamic, and uncertain environment is one of the key challenges affecting the success of businesses. One of the crucial determinants of effective supply chain management is the ability to recognize customer demand patterns and to react to changes accordingly in the face of intense competition. The ability of supply chain participants to adequately predict demand is thus vital to the survival of businesses. Demand prediction is complicated by the fact that the communication patterns that emerge between participants in a supply chain tend to distort the original consumer demand and create high levels of noise. Distortion and noise negatively impact the forecast quality of the participants. This work investigates the applicability of machine learning (ML) techniques and compares their performance with more traditional methods in order to improve demand forecast accuracy in supply chains. To this end we used two data sets from particular companies (a chocolate manufacturer and a toner cartridge manufacturer), as well as data from the Statistics Canada manufacturing survey. A representative set of traditional and ML-based forecasting techniques was applied to the demand data, and the accuracy of the methods was compared. As a group, machine learning techniques outperformed traditional techniques in terms of overall average accuracy, but not in terms of overall ranking. We also found that a support vector machine (SVM) trained on multiple demand series produced the most accurate forecasts.
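
The flavour of the SVM-based forecasters compared here can be sketched as a support vector regressor trained on lagged demand values; the synthetic series, lag window, and hold-out split below are illustrative (and, unlike the study's best model, the sketch trains on a single series).

```python
# A minimal sketch of SVM-based demand forecasting on lagged demand values;
# the synthetic series, lag window, and hold-out split are illustrative only.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
demand = 100 + 10 * np.sin(np.arange(60) / 6) + rng.normal(0, 2, 60)   # synthetic monthly demand

LAGS = 4
X = np.array([demand[t - LAGS:t] for t in range(LAGS, len(demand))])   # last four observations
y = demand[LAGS:]

model = SVR(kernel="rbf", C=10.0).fit(X[:-12], y[:-12])                # hold out the last 12 periods
forecast = model.predict(X[-12:])
mae = np.mean(np.abs(forecast - y[-12:]))
print(f"Mean absolute error on the hold-out periods: {mae:.2f}")
```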

