A Fuzzy Linguistic Multi-agent Model for Information Gathering on the Web Based on Collaborative Filtering Techniques

Author(s):  
Enrique Herrera-Viedma ◽  
Carlos Porcel ◽  
Antonio Gabriel López ◽  
María Dolores Olvera ◽  
Karina Anaya
2004 ◽  
Vol 148 (1) ◽  
pp. 61-83 ◽  
Author(s):  
E. Herrera-Viedma ◽  
F. Herrera ◽  
L. Martínez ◽
J.C. Herrera ◽  
A.G. López


2009 ◽  
pp. 781-799
Author(s):  
David Camacho

The last decade has shown the e-business community and computer science researchers that serious problems and pitfalls can arise when e-companies are created. One of these problems is the need to manage knowledge (data, information, or other electronic resources) drawn from different companies. This chapter focuses on two research fields currently working on this problem: Information Gathering (IG) techniques and Web-enabled agent technologies. IG techniques address the retrieval, extraction, and integration of data from different (usually heterogeneous) sources into new forms. Agent and multi-agent technologies have been successfully applied in domains such as the Web. Using a specific IG multi-agent system called MAPWeb, this chapter shows how information-gathering techniques have been successfully combined with agent technologies to build new Web agent-based systems. These systems can be migrated to Business-to-Consumer (B2C) scenarios using several technologies related to the Semantic Web, such as SOAP, UDDI, or Web Services.
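The "integration of data from different (usually heterogeneous) sources into new forms" that the abstract describes can be sketched as follows. This is a minimal illustration only: the source formats, field names, and travel-offer domain are assumptions, not MAPWeb's actual interfaces.

```python
# Sketch of the IG integration step: two heterogeneous sources are
# normalized into one common record format so they can be compared.
# All field names and example data here are invented for illustration.

def from_flight_site(raw: dict) -> dict:
    # Hypothetical source A: city codes and a price already in euros.
    return {"origin": raw["dep"], "destination": raw["arr"],
            "price_eur": raw["eur"]}

def from_travel_agent(raw: dict) -> dict:
    # Hypothetical source B: different field names, price in cents.
    return {"origin": raw["from_city"], "destination": raw["to_city"],
            "price_eur": raw["cents"] / 100}

# Once both sources share one schema, downstream agents can reason over them.
offers = [
    from_flight_site({"dep": "MAD", "arr": "LHR", "eur": 120}),
    from_travel_agent({"from_city": "MAD", "to_city": "LHR", "cents": 9950}),
]
best = min(offers, key=lambda o: o["price_eur"])
print(best["price_eur"])  # 99.5
```

Normalizing first and comparing second is what lets the gathering agents stay source-specific while the rest of the system works with a single record shape.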


Author(s):  
Satrio Wicaksono Sudarman ◽  
Ira Vahlia

This study aims to determine the benefits of Schoology-based teaching materials in a trigonometry course. A hallmark of these materials is the Schoology app, a free, easy-to-use web-based social learning platform that offers the same kind of learning as a classroom. Through Schoology, managing learning is straightforward; like similar platforms, it lets lecturers upload materials, learning resources, and structured quizzes. This is development research, using the following methodology: (1) research and information gathering, (2) planning, (3) product development, (4) initial testing, (5) product revision, (6) field trial, (7) field-trial products, (8) operational product trials, (9) product revision, and (10) implementation. The outcome of this research is Schoology-based teaching material that can support trigonometry lecturers in the mathematics education program. The material is feasible to use, and its development improves students' learning outcomes in trigonometry courses. Keywords: development, teaching materials, Schoology, trigonometry


2008 ◽  
pp. 469-484
Author(s):  
David Camacho ◽  
Ricardo Aler ◽  
Juan Cuadrado

Building intelligent, robust applications that work with information stored on the Web is difficult for several reasons arising from the Web's essential nature: the information is highly distributed, it is dynamic (in both content and format), it is usually not correctly structured, and web sources may be unreachable at times. To build robust and adaptable web systems, it is necessary to provide a standard representation for the information (e.g., using languages such as XML and ontologies to represent the semantics of the stored knowledge). However, this is still an open research field, and most web sources do not yet provide their information in a structured way. This chapter analyzes a new approach to building robust and adaptable web systems using a multi-agent approach. Several problems, including how to retrieve, extract, and manage information stored in web sources, are analyzed from an agent perspective. Two difficult problems are addressed in this chapter: designing a general architecture for managing web information sources, and enabling these agents to work semiautomatically, adapting their behavior to the dynamic conditions of the electronic sources. To achieve the first goal, a generic web-based multi-agent system (MAS) is proposed and applied to a specific problem: retrieving and managing information from electronic newspapers. To partially solve the problem of retrieving and extracting web information, a semiautomatic web parser is designed and deployed as a reusable software component. The parser uses two sets of rules to adapt the web agent's behavior to possible changes in the web sources: the first defines the knowledge to be extracted from the HTML pages; the second represents the final structure in which the retrieved knowledge is stored. Using this parser, a specific web-based multi-agent system is implemented.
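The two-rule-set parser idea in this abstract can be sketched briefly: one rule set says what to extract from the HTML, the other says how the final record is shaped. The rule formats, field names, and example page below are assumptions for illustration, not the chapter's actual parser.

```python
import re

# Hedged sketch of a rule-based web parser with two rule sets, as the
# abstract describes. Updating EXTRACTION_RULES is how such a parser
# would adapt to changes in a source's HTML without touching the agent.

# Rule set 1: WHAT to extract from the HTML (field name -> pattern).
EXTRACTION_RULES = {
    "headline": re.compile(r"<h1[^>]*>(.*?)</h1>", re.S),
    "byline": re.compile(r'<span class="author">(.*?)</span>', re.S),
}

# Rule set 2: the final structure of the stored record (which fields, in order).
STRUCTURE_RULES = ["headline", "byline"]

def parse(html: str) -> dict:
    """Apply the extraction rules, then shape the result per the structure rules."""
    extracted = {}
    for field, pattern in EXTRACTION_RULES.items():
        match = pattern.search(html)
        extracted[field] = match.group(1).strip() if match else None
    return {field: extracted.get(field) for field in STRUCTURE_RULES}

page = '<h1>Elections today</h1><span class="author">J. Doe</span>'
print(parse(page))  # {'headline': 'Elections today', 'byline': 'J. Doe'}
```

Keeping extraction and storage rules separate is what makes the component reusable: a new newspaper source only needs new rules, not new code.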


