In Defense of Ambiguity Redux

Author(s):  
Patrick J. Hayes ◽  
Harry Halpin

URIs, a universal identification scheme, differ from human names insofar as they can provide the ability to reliably access the thing identified. URIs can also function to refer to non-accessible things, much as names function in natural language. These are two distinctly different relationships between names and things: access and reference. Confusing the two relations leads to underlying problems in Web architecture. Reference is by nature ambiguous in any language, so any attempt by Web architecture to make reference completely unambiguous will fail on the Web. Despite popular belief otherwise, making further ontological distinctions often leads to more ambiguity, not less. Contrary to appeals to Kripke for some sort of eternal and unique identification, reference on the Web uses descriptions, and therefore there is no unambiguous resolution of reference. What is needed on the Web is not just a simple redirection, but a uniform and logically consistent manner of associating descriptions with URIs, which can be done in a number of practical ways that should be made consistent.
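As a toy illustration of the access/reference distinction drawn above, the Python sketch below contrasts dereferencing a URI (access) with looking up a description associated with it (reference). The description store and helper names are hypothetical, not part of the authors' proposal.

```python
# A toy sketch of the access/reference distinction, assuming a hypothetical
# description store; this is not the authors' formal proposal.
from urllib.request import urlopen

def access(uri: str) -> bytes:
    """Access: dereference the URI and retrieve the representation it serves."""
    with urlopen(uri) as resp:
        return resp.read()

# Reference: a URI used as a name for something that cannot itself be
# retrieved. All we can attach to it is a description, which (as argued
# above) pins down the referent only up to the ambiguity of description.
DESCRIPTIONS = {
    "http://example.org/id/EiffelTower":
        "the wrought-iron lattice tower on the Champ de Mars in Paris",
}

def refer(uri: str) -> str:
    """Reference: return a description associated with the URI, not the thing."""
    return DESCRIPTIONS.get(uri, "no associated description")

print(refer("http://example.org/id/EiffelTower"))  # a description, not the tower
```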

2015 ◽  
Vol 64 (1/2) ◽  
pp. 82-100 ◽  
Author(s):  
Michael Calaresu ◽  
Ali Shiri

Purpose – The purpose of this article is to explore and conceptualize the Semantic Web as a term that has been widely mentioned in the literature of library and information science. More specifically, its aim is to shed light on the evolution of the Web and to highlight a previously proposed means of improving the automated manipulation of Web-based data in the context of a rapidly expanding base of both users and digital content. Design/methodology/approach – The conceptual analysis presented in this paper adopts a three-dimensional model for the discussion of the Semantic Web. The first dimension focuses on the Semantic Web’s basic nature, purpose and history, as well as the current state and limitations of modern search systems and related software agents. The second dimension focuses on critical knowledge structures such as taxonomies, thesauri and ontologies, which are understood as fundamental elements in the creation of a Semantic Web architecture. In the third dimension, an alternative conceptual model is proposed, one which, unlike more commonly prevalent Semantic Web models, places greater emphasis on describing the proposed structure from an interpretive viewpoint rather than a technical one. The paper adopts an interpretive, historical and conceptual approach to the notion of the Semantic Web by reviewing the literature and by analyzing the developments associated with the Web over the past three decades. It proposes a simplified conceptual model for easy understanding. Findings – The paper provides a conceptual model of the Semantic Web that encompasses four key strata: the body of human users, the body of software applications facilitating the creation and consumption of documents, the body of documents themselves, and a proposed layer that would improve automated manipulation of Web-based data by the software applications. Research limitations/implications – This paper will facilitate a better conceptual understanding of the Semantic Web and thereby contribute, in a small way, to the larger body of discourse surrounding it. The conceptual model will provide a reference point for education and research purposes. Originality/value – This paper provides an original analysis of both conceptual and technical aspects of the Semantic Web. The proposed conceptual model provides a new perspective on this subject.
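As a reading aid only, the sketch below encodes the four strata named in the Findings as a simple ordered structure; the stratum labels paraphrase the abstract and are not the authors' terminology.

```python
# A reading aid only: the four strata from the Findings encoded as a simple
# ordered structure. Labels paraphrase the abstract, not the authors' terms.
from dataclasses import dataclass

@dataclass
class Stratum:
    name: str
    description: str

SEMANTIC_WEB_MODEL = [
    Stratum("users", "the body of human users"),
    Stratum("applications", "software creating and consuming documents"),
    Stratum("documents", "the body of Web documents themselves"),
    Stratum("semantic layer",
            "a proposed layer improving automated manipulation of Web data"),
]

for i, stratum in enumerate(SEMANTIC_WEB_MODEL, start=1):
    print(f"{i}. {stratum.name}: {stratum.description}")
```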


Designs ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 42
Author(s):  
Eric Lazarski ◽  
Mahmood Al-Khassaweneh ◽  
Cynthia Howard

In recent years, disinformation and “fake news” have been spreading throughout the internet at rates never seen before. This has spurred the worldwide growth of fact-checking organizations, groups that seek out claims and assess their veracity, in an effort to stem the tide of misinformation. However, even with the many human-powered fact-checking organizations currently in operation, disinformation continues to run rampant throughout the Web, and the existing organizations are unable to keep up. This paper discusses in detail recent advances in using natural language processing to automate fact checking. It follows the entire automated fact-checking pipeline, from detecting claims to verifying them to outputting results. In summary, automated fact checking works well in some cases, though generalized fact checking still needs improvement before widespread use.
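To make the pipeline concrete, here is a minimal sketch of the three stages the paper surveys (claim detection, verification, output). The keyword heuristic and the toy evidence store are illustrative assumptions, not any surveyed system.

```python
# A minimal sketch of the three-stage pipeline surveyed above (claim
# detection -> verification -> output). The keyword heuristic and the toy
# evidence store are illustrative assumptions, not any surveyed system.
CHECKWORTHY_CUES = ("percent", "increase", "decrease", "million", "billion")

def detect_claim(sentence: str) -> bool:
    """Claim detection: flag sentences likely to contain checkable facts."""
    s = sentence.lower()
    return any(cue in s for cue in CHECKWORTHY_CUES) or any(c.isdigit() for c in s)

EVIDENCE = {  # stand-in for evidence retrieval from a fact database or the Web
    "unemployment fell 2 percent last year": "SUPPORTED",
}

def verify(claim: str) -> str:
    """Verification: compare the claim against retrieved evidence."""
    return EVIDENCE.get(claim.lower(), "NOT ENOUGH INFO")

def fact_check(text: str) -> list[tuple[str, str]]:
    """Full pipeline: detect claims, verify each, and output (claim, verdict)."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [(s, verify(s)) for s in sentences if detect_claim(s)]

print(fact_check("Unemployment fell 2 percent last year. The sky looked lovely."))
```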


2018 ◽  
Vol 25 (2) ◽  
pp. 287-306 ◽  
Author(s):  
Cleiton Fernando Lima Sena ◽  
Daniela Barreiro Claro

Nowadays, there is an increasing amount of digital data. On the Web, a vast collection of heterogeneous data is generated daily, a significant portion of it available in natural language. Open Information Extraction (Open IE) enables the extraction of facts from large quantities of texts written in natural language. In this work, we propose an Open IE method to extract facts from texts written in Portuguese. We developed two new rules that generalize inference by transitivity and by symmetry, thereby increasing the number of implicit facts extracted from a sentence. Our novel symmetric inference approach is based on a list of symmetric features. Our results confirm that our method outperforms closely related work in both precision and number of valid extractions. Considering the number of minimal facts, our approach is equivalent to the most relevant methods in the literature.
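A minimal sketch of what inference by transitivity and symmetry over extracted triples can look like; the relation lists and facts below are illustrative assumptions (the paper's rules operate on Portuguese text and its own list of symmetric features).

```python
# A minimal sketch of inference by transitivity and symmetry over extracted
# triples. Relations and facts are illustrative assumptions, not the paper's.
TRANSITIVE = {"located in"}
SYMMETRIC = {"married to"}  # stand-in for the paper's list of symmetric features

facts = {
    ("Salvador", "located in", "Bahia"),
    ("Bahia", "located in", "Brazil"),
    ("Ana", "married to", "Bruno"),
}

def infer(facts: set[tuple[str, str, str]]) -> set[tuple[str, str, str]]:
    """Close the fact set under the two generalized inference rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        # Symmetry: (a, r, b) => (b, r, a) for symmetric relations r.
        for a, r, b in list(derived):
            if r in SYMMETRIC and (b, r, a) not in derived:
                derived.add((b, r, a))
                changed = True
        # Transitivity: (a, r, b) and (b, r, c) => (a, r, c) for transitive r.
        for a, r, b in list(derived):
            for b2, r2, c in list(derived):
                if r in TRANSITIVE and r == r2 and b == b2 and a != c \
                        and (a, r, c) not in derived:
                    derived.add((a, r, c))
                    changed = True
    return derived

for fact in sorted(infer(facts) - facts):
    print("inferred:", fact)  # e.g. ('Salvador', 'located in', 'Brazil')
```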


Author(s):  
Alison Harcourt ◽  
George Christou ◽  
Seamus Simpson

This chapter explains one of the most important components of the web: the development and standardization of Hypertext Markup Language (HTML) and the Document Object Model (DOM), which are used for creating web pages and applications. In 1994, Tim Berners-Lee established the World Wide Web Consortium (W3C) to work on HTML development. In 1995, the W3C decided to introduce a new standard, XHTML 2.0. However, it was incompatible with the older HTML/XHTML versions. This led to the establishment of the Web Hypertext Application Technology Working Group (WHATWG), which worked externally to the W3C. WHATWG developed HTML5, which was adopted by the major browser developers Google, Opera, Mozilla, IBM, Microsoft, and Apple. For this reason, the W3C decided to work on HTML5, leading to a joint WHATWG/W3C working group. This chapter explains the development of HTML and WHATWG’s Living Standard, with an explanation of ongoing splits and agreements between the two fora. It explains how this division of labour led the W3C to focus on the main areas of web architecture, the semantic web, the web of devices, payments applications, and web and television (TV) standards. This has led to the spillover of work to the W3C from the national sphere, notably in the development of copyright protection for TV streaming.


2002 ◽  
Vol 53 (5) ◽  
pp. 359-364 ◽  
Author(s):  
Dragomir R. Radev ◽  
Kelsey Libner ◽  
Weiguo Fan

2009 ◽  
Vol 34 ◽  
pp. 339-389 ◽  
Author(s):  
Y. Li ◽  
P. Musilek ◽  
M. Reformat ◽  
L. Wyard-Scott

In a significant minority of cases, certain pronouns, especially the pronoun “it”, can be used without referring to any specific entity. This phenomenon of pleonastic pronoun usage poses serious problems for systems aiming at even a shallow understanding of natural language texts. In this paper, a novel approach is proposed to identify such uses of “it”: the extrapositional cases are identified using a series of queries against the web, and the cleft cases are identified using a simple set of syntactic rules. The system is evaluated on four sets of news articles containing 679 extrapositional cases and 78 cleft constructs. The identification results are comparable to those obtained by human efforts.
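A minimal sketch of the rule-based side of such an approach appears below; the paper identifies extrapositional cases with queries against the web, so these regular expressions are simplified illustrative stand-ins, not the authors' rules.

```python
# A minimal sketch of pattern rules for cleft and extrapositional uses of
# "it". The paper handles extrapositional cases via web queries; these
# regexes are simplified illustrative stand-ins, not the authors' rules.
import re

CLEFT = re.compile(r"^It (is|was) .+ (who|that|which) ", re.IGNORECASE)
EXTRAPOSITION = re.compile(
    r"^It (is|was|seems|appears) (\w+ )?(that|to) ", re.IGNORECASE)

def is_pleonastic(sentence: str) -> bool:
    """Flag sentences whose initial 'it' does not refer to any entity."""
    return bool(CLEFT.match(sentence) or EXTRAPOSITION.match(sentence))

print(is_pleonastic("It was John who broke the window."))    # True (cleft)
print(is_pleonastic("It is likely that prices will rise."))  # True (extraposition)
print(is_pleonastic("It purred when I stroked its fur."))    # False (referential)
```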


2016 ◽  
pp. 073-088
Author(s):  
J.V. Rogushina

The paper analyzes the problems of personalizing the search of information resources and information objects, based on the construction and use of a user task thesaurus. This thesaurus enables the use of knowledge about the search domain and the structure of information objects represented by appropriate ontologies. The definitions of semantic search, its subjects and its components allow a more articulate treatment of the issues related to information retrieval in the open Web environment. A software implementation of the proposed approach confirms the effectiveness of its practical use.
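As an illustration of thesaurus-based personalization, the sketch below expands a query with task-specific terms before matching; the thesaurus content and scoring are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of personalized retrieval with a user task thesaurus:
# query terms are expanded with task-specific related terms before matching.
# The thesaurus content and scoring are illustrative assumptions.
USER_TASK_THESAURUS = {
    "ontology": {"knowledge base", "OWL", "taxonomy"},
    "retrieval": {"search", "ranking"},
}

def expand_query(terms: list[str]) -> set[str]:
    """Add thesaurus terms related to the user's current task."""
    expanded = set(terms)
    for t in terms:
        expanded |= USER_TASK_THESAURUS.get(t, set())
    return expanded

def search(docs: dict[str, str], terms: list[str]) -> list[str]:
    """Rank documents by overlap with the expanded query."""
    q = {t.lower() for t in expand_query(terms)}
    scored = [(sum(t in d.lower() for t in q), name) for name, d in docs.items()]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

docs = {"d1": "An OWL knowledge base for search", "d2": "Weather report"}
print(search(docs, ["ontology", "retrieval"]))  # ['d1']
```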

