Personalizing web information for patients: linking patient medical data with the web via a patient personal knowledge base

2006 ◽  
Vol 12 (1) ◽  
pp. 27-39 ◽  
Author(s):  
Asma Al-Busaidi ◽  
Alex Gray ◽  
Nick Fiddian


2016 ◽  
Vol 28 (2) ◽  
pp. 241-251 ◽  
Author(s):  
Luciane Lena Pessanha Monteiro ◽  
Mark Douglas de Azevedo Jacyntho

The study addresses the use of Semantic Web and Linked Data principles, proposed by the World Wide Web Consortium, in developing a Web application for the semantic management of scanned documents. The main goal is to record scanned documents and describe them in a way that machines can understand and process, filtering content and assisting in the search for such documents when a decision-making process is under way. To this end, machine-understandable metadata, created with reference Linked Data ontologies, are associated with the documents, forming a knowledge base. To further enrich the process, a (semi)automatic mashup of these metadata with data from the emerging Web of Linked Data is carried out, considerably increasing the scope of the knowledge base and making it possible to extract new data related to the content of the stored documents from the Web and combine them, without the user making any effort or perceiving the complexity of the whole process.
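A minimal sketch of the kind of annotation described above, assuming the Python rdflib library and illustrative identifiers (the DOCS namespace and document URI are hypothetical, not taken from the paper): Dublin Core metadata is attached to a scanned document and linked to an external DBpedia resource, forming a small knowledge base.

```python
# Minimal sketch (not the authors' implementation): annotating a scanned
# document with Linked Data metadata using rdflib, then linking it to an
# external DBpedia resource. Namespace and identifiers are illustrative.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC, FOAF, RDF

DOCS = Namespace("http://example.org/docs/")   # hypothetical namespace
g = Graph()
g.bind("dc", DC)

doc = DOCS["invoice-2021-001"]                 # one scanned document
g.add((doc, RDF.type, FOAF.Document))
g.add((doc, DC.title, Literal("Supplier invoice, January 2021")))
g.add((doc, DC.subject, URIRef("http://dbpedia.org/resource/Invoice")))  # mashup link

# Serialize the knowledge base as Turtle for storage or SPARQL querying.
print(g.serialize(format="turtle"))
```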


2021 ◽  
pp. 99-110
Author(s):  
Mohammad Ali Tofigh ◽  
Zhendong Mu

With the development of society, people pay increasing attention to food safety, and relevant laws and policies are gradually being introduced and improved. The research and development of agricultural product quality and safety systems has become a research hotspot, and obtaining the Web information of such systems effectively and quickly is the focus of this research, so intelligent extraction of Web information for the agricultural product quality and safety system is essential. The purpose of this paper is to solve the problem of efficiently extracting the Web information of the agricultural product quality and safety system. By studying the Web information extraction methods of various systems, the paper analyzes in detail how to realize efficient and intelligent extraction of this information. It examines the template-based information extraction algorithms currently in use and systematically discusses a scheme that automatically extracts the Web information of the agricultural product quality and safety system according to templates. The results show that the proposed scheme is a dynamically extensible information extraction system that can configure templates for different requirements without changing the code. Compared with the general approach, the Web information extraction speed of the agricultural product quality and safety system is increased by 25%, accuracy is increased by 12%, and recall is increased by 30%.
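A hedged illustration of template-driven extraction in Python (not the paper's system; the field names and patterns are invented): the extraction rules live in a data structure, so supporting a new page layout means editing the template rather than the code.

```python
# Illustrative sketch of template-driven extraction: each field is described
# by a rule in a data-driven template, so new layouts need only a new template.
import re

# Hypothetical template: field name -> regex with one capture group.
PRODUCT_TEMPLATE = {
    "product":  r'<span class="product-name">(.*?)</span>',
    "origin":   r'<span class="origin">(.*?)</span>',
    "qc_grade": r'<span class="qc-grade">(.*?)</span>',
}

def extract(html: str, template: dict) -> dict:
    """Apply every field rule in the template to one page."""
    record = {}
    for field, pattern in template.items():
        match = re.search(pattern, html, flags=re.DOTALL)
        record[field] = match.group(1).strip() if match else None
    return record

page = '<span class="product-name">Tomato</span><span class="qc-grade">A</span>'
print(extract(page, PRODUCT_TEMPLATE))
# {'product': 'Tomato', 'origin': None, 'qc_grade': 'A'}
```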


Author(s):  
Heiko Paulheim ◽  
Christian Bizer

Linked Data on the Web is created either from structured data sources (such as relational databases), from semi-structured sources (such as Wikipedia), or from unstructured sources (such as text). In the latter two cases, the generated Linked Data will likely be noisy and incomplete. In this paper, we present two algorithms that exploit statistical distributions of properties and types to enhance the quality of incomplete and noisy Linked Data sets: SDType adds missing type statements, and SDValidate identifies faulty statements. Neither algorithm uses external knowledge, i.e., they operate only on the data itself. We evaluate the algorithms on the DBpedia and NELL knowledge bases, showing that they are both accurate and scalable. Both algorithms have been used in building the DBpedia 3.9 release: with SDType, 3.4 million missing type statements were added, while with SDValidate, 13,000 erroneous RDF statements were removed from the knowledge base.
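A hedged Python sketch of the weighted-voting idea behind SDType (not the authors' implementation; the statistics shown are invented): each property observed on a resource votes for candidate types according to that property's conditional type distribution, weighted by how discriminative the property is.

```python
# Sketch of statistical type inference in the spirit of SDType.
from collections import defaultdict

# Hypothetical statistics learned from the knowledge base itself:
# P(type | resource has this property), plus a per-property weight
# reflecting how discriminative the property is.
TYPE_DIST = {
    "dbo:birthPlace": {"dbo:Person": 0.95, "dbo:Animal": 0.05},
    "dbo:author":     {"dbo:Book": 0.70, "dbo:Software": 0.30},
}
PROP_WEIGHT = {"dbo:birthPlace": 0.9, "dbo:author": 0.6}

def sdtype_scores(properties):
    """Combine per-property votes into a confidence score per candidate type."""
    scores = defaultdict(float)
    total_weight = sum(PROP_WEIGHT[p] for p in properties) or 1.0
    for prop in properties:
        for rdf_type, prob in TYPE_DIST[prop].items():
            scores[rdf_type] += PROP_WEIGHT[prop] * prob
    return {t: s / total_weight for t, s in scores.items()}

# A resource observed as the subject of dbo:birthPlace statements:
print(sdtype_scores(["dbo:birthPlace"]))   # strongly suggests dbo:Person
```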


Author(s):  
Antonio F. L. Jacob ◽  
Eulália C. da Mata ◽  
Ádamo L. Santana ◽  
Carlos R. L. Francês ◽  
João C. W. A. Costa ◽  
...  

The Web is giving users greater freedom to create and obtain information in a more dynamic and appropriate way. One means of obtaining information on this platform, which complements or replaces other forms, is the use of conversation robots, or Chatterbots. Several factors must be taken into account for the effective use of this technology, the first of which is the need for a team of professionals from various fields to build the knowledge base of the system and provide it with a wide range of responses, i.e. interactions. Ensuring that such a system can be targeted at children is a multidisciplinary task. In this context, this chapter presents a study of Chatterbot technology and shows some of the changes that have been implemented for its effective use with children. It also highlights the need for a shift away from traditional methods of interaction so that an affective computing model can be implemented.
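A minimal, assumed sketch (not from the chapter) of how a Chatterbot knowledge base can be represented as curated pattern-response pairs, the artifact a multidisciplinary team would author and review for child-appropriate wording:

```python
# Hypothetical pattern-response knowledge base for a simple Chatterbot.
import random
import re

KNOWLEDGE_BASE = [
    (r"\bhello\b|\bhi\b", ["Hi there! What would you like to talk about?"]),
    (r"\bhomework\b",     ["Homework can be tricky. Which subject is it?"]),
    (r".*",               ["Can you tell me more about that?"]),  # fallback rule
]

def reply(utterance: str) -> str:
    """Return a response for the first matching rule; the last rule always matches."""
    for pattern, responses in KNOWLEDGE_BASE:
        if re.search(pattern, utterance.lower()):
            return random.choice(responses)

print(reply("Hi! I need help with my homework"))
```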


2004 ◽  
pp. 268-304 ◽  
Author(s):  
Grigorios Tsoumakas ◽  
Nick Bassiliades ◽  
Ioannis Vlahavas

This chapter presents the design and development of WebDisC, a knowledge-based web information system for the fusion of classifiers induced at geographically distributed databases. The main features of our system are: (i) a declarative rule language for classifier selection that allows the combination of syntactically heterogeneous distributed classifiers; (ii) a variety of standard methods for fusing the output of distributed classifiers; (iii) a new approach for clustering classifiers in order to deal with the semantic heterogeneity of distributed classifiers, detect their interesting similarities and differences, and enhance their fusion; and (iv) an architecture based on the Web services paradigm that utilizes the open and scalable standards of XML and SOAP.
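An illustrative Python sketch of one standard fusion method of the kind WebDisC supports, weighted majority voting over the outputs of distributed classifiers (the site names and weights are invented, not taken from the chapter):

```python
# Weighted majority voting over predictions from geographically distributed classifiers.
from collections import defaultdict

def fuse_predictions(predictions: dict, weights: dict = None) -> str:
    """predictions: {classifier_name: predicted_label}; weights are optional."""
    weights = weights or {name: 1.0 for name in predictions}
    votes = defaultdict(float)
    for name, label in predictions.items():
        votes[label] += weights[name]
    return max(votes, key=votes.get)

# Three classifiers induced at different sites vote on one instance.
site_outputs = {"site_A": "approve", "site_B": "reject", "site_C": "approve"}
print(fuse_predictions(site_outputs,
                       weights={"site_A": 0.9, "site_B": 0.7, "site_C": 0.6}))
# -> "approve"
```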


2004 ◽  
pp. 227-267
Author(s):  
Wee Keong Ng ◽  
Zehua Liu ◽  
Zhao Li ◽  
Ee Peng Lim

With the explosion of information on the Web, traditional ways of browsing and keyword searching over web pages no longer satisfy the demanding needs of web surfers. Web information extraction has emerged as an important research area that aims to automatically extract information from target web pages and convert it into a structured format for further processing. The main issues involved in the extraction process include: (1) the definition of a suitable extraction language; (2) the definition of a data model representing the web information source; (3) the generation of the data model, given a target source; and (4) the extraction and presentation of information according to a given data model. In this chapter, we discuss the challenges of these issues and the approaches that current research activities have taken to resolve them. We propose several classification schemes to classify existing approaches to information extraction from different perspectives. Among the existing works, we focus on the Wiccap system — a software system that enables ordinary end-users to obtain information of interest in a simple and efficient manner by constructing personalized web views of information sources.
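A hedged sketch of the idea behind a declarative, Wiccap-style data model (assumed, not the actual system): the logical view of a site is described as data, and a generic engine applies its extraction rules to produce structured records.

```python
# Hypothetical logical view: a "Newspaper" source containing "Article" records.
import re

DATA_MODEL = {
    "name": "Newspaper",
    "children": {
        "Article": {
            "locator": r"<div class='article'>(.*?)</div>",   # where instances live
            "fields": {
                "headline": r"<h2>(.*?)</h2>",
                "summary":  r"<p>(.*?)</p>",
            },
        }
    },
}

def extract(html: str, model: dict) -> list:
    """Apply the data model's locator and field rules to one page."""
    article_model = model["children"]["Article"]
    records = []
    for block in re.findall(article_model["locator"], html, flags=re.DOTALL):
        record = {}
        for field, rule in article_model["fields"].items():
            m = re.search(rule, block, flags=re.DOTALL)
            record[field] = m.group(1).strip() if m else None
        records.append(record)
    return records

page = "<div class='article'><h2>Rain expected</h2><p>Showers tonight.</p></div>"
print(extract(page, DATA_MODEL))
# [{'headline': 'Rain expected', 'summary': 'Showers tonight.'}]
```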

