On the Use of Fuzzy Logic in Electronic Marketplaces

Author(s):  
Kostas Kolomvatsos ◽  
Stathes Hadjiefthymiades

Today, there is a large number of product providers on the Web. Electronic Marketplaces (EMs) enable entities to negotiate and trade products. Usually, intelligent agents assume the responsibility of representing buyers or sellers in EMs. However, uncertainty about the characteristics and intentions of the negotiating entities is inherent in these scenarios. Fuzzy Logic (FL) theory offers many advantages in environments where entities have limited or no knowledge about their peers. Hence, entities can rely on an FL knowledge base that determines the appropriate action in every possible state. FL can be used to define offers, trust, or time constraints, or to support an agent's decisions during the negotiation process. The autonomic nature of agents, in combination with FL, leads to more efficient systems. In this chapter, the authors provide a critical review of the adoption of FL in marketplace systems and present their proposal for the buyer side. Moreover, the authors describe techniques for building FL systems, focusing on clustering techniques. The aim is to show the importance of FL adoption in such settings.
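A minimal sketch of the idea of an FL knowledge base driving a buyer agent's decisions. The membership functions, rule base, and variable names here are hypothetical illustrations, not the authors' actual model: the agent fuzzifies the offer price and the remaining negotiation time, fires two simple rules, and obtains a degree of acceptance.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def accept_degree(price_ratio, time_ratio):
    """Degree to which a buyer agent should accept the current offer.
    price_ratio = offer / buyer valuation, time_ratio = elapsed / deadline."""
    # Fuzzification of the two inputs
    cheap   = tri(price_ratio, -0.5, 0.0, 0.7)   # offer well below valuation
    fair    = tri(price_ratio,  0.4, 0.8, 1.1)
    urgent  = tri(time_ratio,   0.5, 1.0, 1.5)   # deadline approaching
    # Rule base: accept if cheap; accept if fair AND the deadline is near
    rule1 = cheap
    rule2 = min(fair, urgent)
    return max(rule1, rule2)

print(accept_degree(0.3, 0.2))   # good price early on: high acceptance degree
print(accept_degree(0.95, 0.1))  # near-valuation offer, no pressure: keep negotiating
```

The agent would compare this degree against a threshold (or defuzzify a richer rule base) to choose between accepting, rejecting, or counter-offering.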

2016 ◽  
Vol 28 (2) ◽  
pp. 241-251 ◽  
Author(s):  
Luciane Lena Pessanha Monteiro ◽  
Mark Douglas de Azevedo Jacyntho

The study addresses the use of Semantic Web and Linked Data principles, proposed by the World Wide Web Consortium, for the development of a Web application for the semantic management of scanned documents. The main goal is to record scanned documents, describing them in a way that machines can understand and process, filtering content and assisting users in searching for such documents when a decision-making process is underway. To this end, machine-understandable metadata, created through the use of reference Linked Data ontologies, are associated with documents, creating a knowledge base. To further enrich the process, a (semi)automatic mashup of these metadata with data from the new Web of Linked Data is carried out, considerably increasing the scope of the knowledge base and making it possible to extract new data related to the content of stored documents from the Web and combine them, without the user making any effort or perceiving the complexity of the whole process.
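An illustrative sketch, in plain Python, of the underlying idea: describing a scanned document with machine-understandable triples drawn from a reference ontology (Dublin Core terms here) and querying the resulting knowledge base. The document URI and metadata values are hypothetical; a real system would use an RDF store and SPARQL rather than this toy triple set.

```python
DC = "http://purl.org/dc/terms/"  # Dublin Core, a common reference ontology

triples = set()  # the knowledge base: (subject, predicate, object) triples

def add(subject, predicate, obj):
    triples.add((subject, predicate, obj))

# Describe one scanned document with machine-understandable metadata
doc = "urn:doc:42"
add(doc, DC + "title", "Invoice 2015-0042")
add(doc, DC + "creator", "Acme Corp")
add(doc, DC + "subject", "procurement")

def find(predicate, obj):
    """Return all subjects linked to obj via predicate."""
    return {s for (s, p, o) in triples if p == predicate and o == obj}

print(find(DC + "subject", "procurement"))  # -> {'urn:doc:42'}
```

The mashup step described in the abstract would then link values such as `"Acme Corp"` to external Linked Data resources, pulling in further triples about them.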


2020 ◽  
Vol 1 (2) ◽  
pp. 105-110
Author(s):  
Siti Maysaroh Saragih ◽  
Ayu Lestari ◽  
Mahadir Soleh Hutasuhut

In a company, a salary is the compensation paid to employees for a month of work. However, in order to pay all employees fairly, the company must determine the criteria for setting salaries. Using fuzzy logic, wages can be determined through the following stages: fuzzification, formation of the knowledge base, fuzzy inference, and defuzzification. One fuzzy logic method that can be used is the Tsukamoto method, which produces output in the form of crisp values. To determine salaries, data are collected from the Central Statistics Agency website in accordance with the criteria to be examined. With this research, employers can use its calculations to determine salaries for their employees quickly, well, and precisely, so that the problem of determining employee wages can be resolved properly.
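The stages above can be sketched in a few lines. In the Tsukamoto method, each rule's consequent is a monotonic membership function, so the rule's firing strength maps back to one crisp output, and defuzzification is the weighted average of those outputs. The input variables, ramp parameters, and salary range below are hypothetical illustrations, not the paper's actual criteria.

```python
def up(x, a, b):
    """Increasing ramp membership: 0 at a, 1 at b."""
    return min(max((x - a) / (b - a), 0.0), 1.0)

def down(x, a, b):
    """Decreasing ramp membership: 1 at a, 0 at b."""
    return 1.0 - up(x, a, b)

def salary(experience_years, workload_hours):
    # Stage 1: fuzzification of the inputs
    senior = up(experience_years, 1, 10)
    junior = down(experience_years, 1, 10)
    heavy  = up(workload_hours, 120, 200)
    light  = down(workload_hours, 120, 200)

    # Stages 2-3: knowledge base and Tsukamoto inference. Each consequent is
    # monotonic in salary, so firing strength alpha yields a crisp value.
    lo, hi = 3.0, 8.0  # salary range (hypothetical units)
    rules = [
        (min(senior, heavy), lambda a: lo + a * (hi - lo)),  # -> HIGH salary
        (min(junior, light), lambda a: hi - a * (hi - lo)),  # -> LOW salary
    ]

    # Stage 4: defuzzification by weighted average of the crisp outputs
    num = sum(alpha * z(alpha) for alpha, z in rules)
    den = sum(alpha for alpha, _ in rules)
    return num / den if den else (lo + hi) / 2

print(salary(9, 190))  # experienced, heavy workload: toward the high end
print(salary(2, 130))  # junior, light workload: toward the low end
```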


Author(s):  
Heiko Paulheim ◽  
Christian Bizer

Linked Data on the Web is either created from structured data sources (such as relational databases), from semi-structured sources (such as Wikipedia), or from unstructured sources (such as text). In the latter two cases, the generated Linked Data will likely be noisy and incomplete. In this paper, we present two algorithms that exploit statistical distributions of properties and types to enhance the quality of incomplete and noisy Linked Data sets: SDType adds missing type statements, and SDValidate identifies faulty statements. Neither algorithm uses external knowledge, i.e., they operate only on the data itself. We evaluate the algorithms on the DBpedia and NELL knowledge bases, showing that they are both accurate and scalable. Both algorithms have been used in building the DBpedia 3.9 release: with SDType, 3.4 million missing type statements were added, while with SDValidate, 13,000 erroneous RDF statements were removed from the knowledge base.
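A condensed sketch of SDType's core heuristic: learn, from entities whose types are known, how each property's subjects are distributed over types, then assign an untyped entity the types that score highly when averaging over its properties. The toy statements, entity names, and the 0.5 threshold are illustrative assumptions, not the paper's actual data or settings (the paper also weights properties by their discriminative power, which is omitted here).

```python
from collections import Counter, defaultdict

# (subject, property) pairs; some subjects have known types, one does not
statements = [
    ("Berlin", "locatedIn"), ("Berlin", "population"),
    ("Paris", "locatedIn"), ("Paris", "population"),
    ("Einstein", "birthPlace"), ("Einstein", "knownFor"),
    ("Hamburg", "locatedIn"), ("Hamburg", "population"),  # type unknown
]
types = {"Berlin": "City", "Paris": "City", "Einstein": "Person"}

# Learn P(type | property) from the typed subjects
dist = defaultdict(Counter)
for subj, prop in statements:
    if subj in types:
        dist[prop][types[subj]] += 1

def sdtype(subject, threshold=0.5):
    """Average the per-property type distributions; keep confident types."""
    props = [p for s, p in statements if s == subject]
    score = Counter()
    for p in props:
        total = sum(dist[p].values())
        for t, n in dist[p].items():
            score[t] += n / total / len(props)
    return {t for t, s in score.items() if s >= threshold}

print(sdtype("Hamburg"))  # -> {'City'}
```

SDValidate inverts the same statistics: a statement whose subject/object types are very unlikely for its property is flagged as probably faulty.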


Author(s):  
Antonio F. L. Jacob ◽  
Eulália C. da Mata ◽  
Ádamo L. Santana ◽  
Carlos R. L. Francês ◽  
João C. W. A. Costa ◽  
...  

The Web is providing greater freedom for users to create and obtain information in a more dynamic and appropriate way. One means of obtaining information on this platform, which complements or replaces other forms, is the use of conversation robots, or Chatterbots. Several factors must be taken into account for the effective use of this technology; the first is the need to employ a team of professionals from various fields to build the system's knowledge base and provide it with a wide range of responses, i.e., interactions. Ensuring that such a system can be targeted at children is a multidisciplinary task. In this context, this chapter carries out a study of Chatterbot technology and shows some of the changes that have been implemented for its effective use with children. It also highlights the need for a shift away from traditional methods of interaction so that an affective computing model can be implemented.


Author(s):  
Anu Sharma ◽  
Aarti Singh

Intelligent semantic approaches (i.e., the Semantic Web and software agents) are very useful technologies for adding meaning to the Web. The adaptive web is a new era of the Web, aiming to provide customized and personalized views of contents and services to its users. Integrating these two technologies can further add reasoning and intelligence to the recommendation process. This chapter explores the existing work in the area of applying intelligent approaches to web personalization and highlights ample scope for applying intelligent agents in this domain to solve many existing issues, such as personalized content management, user profile learning, modelling, and adaptive interactions with users.


Author(s):  
Martha Garcia-Murillo ◽  
Paula Maxwell ◽  
Simon Boyce ◽  
Raymond St. Denis ◽  
William Bistline

This case focuses on the challenges of managing a help desk that supports computer users. There are two main technologies that the Information Center (IC) uses to provide this service: the call distributing system and the knowledge base, which is also available on the Web. The choice of technologies affected the service provided by the help desk staff. Specifically, the call distributing system was unable to provide enough information regarding the number of calls answered, dropped, and allocated among the different staff members. The hospital knowledge base, on the other hand, is created from people's documentation of problems and their selection of keywords, which has led to inconsistencies in data entry. One of the management challenges for the Information Center is to foster self-help and minimize the number of requests to the IC staff. This case presents the difficulties and some of the initiatives that the IC has considered to solve these problems.
