Blind windows

2018 ◽  
Vol 31 (5) ◽  
pp. 154-182
Author(s):  
Cadence Kinsey

This article analyses Camille Henrot’s 2013 film Grosse Fatigue in relation to the histories of hypermedia and modes of interaction with the World Wide Web. It considers the development of non-hierarchical systems for the organisation of information, and uses Grosse Fatigue to draw comparisons between the Web, the natural history museum and the archive. At stake in focusing on the way in which information is organised through hypermedia is the question of subjectivity, and this article argues that such systems are made ‘user-friendly’ by appearing to accommodate intuitive processes of information retrieval, reflecting the subject back to itself as autonomous. This produces an ideology of individualism which belies the forms of heteronomy that in fact shape and structure access to information online in significant ways. At the heart of this argument is an attention to the visual, and the significance of art as an immanent mode of analysis. Through the themes of transparency and opacity, and order and chaos, the article thus proposes a defining dynamic between autonomy and automation as a model for understanding the contemporary subject.

PMLA ◽  
2011 ◽  
Vol 126 (2) ◽  
pp. 448-454
Author(s):  
David Theo Goldberg

Toward the end of his opening lecture in The Hermeneutics of the Subject (17–19), Michel Foucault draws a distinction between ancient and modern modes of philosophizing. He bases the distinction on differing conceptions of determining the grounds of the true and the false and the subject's access to them. The distinction and shift are well-trodden ground, as we'll see, even if Foucault marks them in characteristically provocative ways. I will argue, though, that with the advent of the World Wide Web the distinction is incomplete, not least in regard to the religious implications of the three modes and their underlying theological considerations. The World Wide Web, in short, signals the emergence of a new way of being and thinking to rival the ancient and the modern, even as it draws on elements of both.


1998 ◽  
Vol 162 ◽  
pp. 68-73
Author(s):  
Jay M. Pasachoff

I discuss the burgeoning World Wide Web and how it can be used to aid astronomy teaching, and I supply a list of a variety of useful Web sites. The World Wide Web was invented 5 years ago at CERN, whose name is now translated as the European Laboratory for Particle Physics, as a way of aiding access to information from remote sites. The invention of graphic interfaces, notably Mosaic by a group at the National Center for Supercomputing Applications in Illinois and then Netscape Navigator as a private development by many of the original Mosaic people, led to an explosion in the use of the Web. Millions of people around the world are now able to access information from over 100,000 Web sites. There is much astronomical information on the Web, though that information makes up only a small fraction of all the information available through this medium.


Author(s):  
Reinaldo Padilha França ◽  
Ana Carolina Borges Monteiro ◽  
Rangel Arthur ◽  
Yuzo Iano

Web 2.0 is the evolution of the Web. Seen as a new, second movement of access to information through the World Wide Web, Web 2.0 brings interactivity and collaboration as the main keys to its functioning. It is now simpler and faster for any user connected to the internet to send information at any time. The ease of uploading information, images, and videos on Web 2.0 is due to the expansion of resources and codes, allowing anyone to act naturally and take their own content to the internet. As the data and information shared daily are almost infinite, search engines act ever more intuitively and bring only results tailored to each user. Therefore, this chapter aims to provide an updated review and overview of Web 2.0, addressing its evolution and fundamental concepts, showing its relationships, and approaching its success with a concise bibliographic background, categorizing and synthesizing the potential of the technology.


Author(s):  
Anthony D. Andre

This paper provides an overview of the various human factors and ergonomics (HF/E) resources on the World Wide Web (WWW). A list of the most popular and useful HF/E sites is provided, along with several critical guidelines relevant to using the WWW. The reader will gain a clear understanding of how to find HF/E information on the Web and how to use the Web successfully in various HF/E professional consulting activities. Finally, we consider the ergonomic implications of surfing the Web.


2016 ◽  
Vol 28 (2) ◽  
pp. 241-251 ◽  
Author(s):  
Luciane Lena Pessanha Monteiro ◽  
Mark Douglas de Azevedo Jacyntho

The study addresses the use of the Semantic Web and the Linked Data principles proposed by the World Wide Web Consortium for the development of a Web application for the semantic management of scanned documents. The main goal is to record scanned documents, describing them in a way that a machine is able to understand and process, filtering content and assisting us in searching for such documents when a decision-making process is in course. To this end, machine-understandable metadata, created through the use of reference Linked Data ontologies, are associated with the documents, creating a knowledge base. To further enrich the process, a (semi)automatic mashup of these metadata with data from the new Web of Linked Data is carried out, considerably increasing the scope of the knowledge base and making it possible to extract new data related to the content of stored documents from the Web and combine them, without the user making any effort or perceiving the complexity of the whole process.
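The triple-based metadata approach described in this abstract can be illustrated with a minimal sketch in plain Python. The property names imitate Dublin Core terms, and the sample documents and query helper are invented for illustration; a real implementation would use the reference Linked Data ontologies the authors mention.

```python
# Minimal sketch: machine-understandable metadata for scanned documents,
# modeled as subject-predicate-object triples (the core Linked Data idea).
# Property names imitate Dublin Core but are illustrative only.

triples = [
    ("doc/001", "dc:title",   "Purchase contract 2015-17"),
    ("doc/001", "dc:creator", "Legal Department"),
    ("doc/001", "dc:subject", "procurement"),
    ("doc/002", "dc:title",   "Meeting minutes, March"),
    ("doc/002", "dc:subject", "governance"),
]

def find(subject=None, predicate=None, obj=None):
    """Return all triples matching the given (possibly partial) pattern."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Filter documents by subject keyword, as in a decision-support search.
procurement_docs = [s for s, p, o in find(predicate="dc:subject",
                                          obj="procurement")]
print(procurement_docs)  # ['doc/001']
```

Because every statement shares the same triple shape, metadata harvested from external Linked Data sources can be merged into the same knowledge base simply by appending more triples.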


2017 ◽  
Vol 4 (1) ◽  
pp. 95-110 ◽  
Author(s):  
Deepika Punj ◽  
Ashutosh Dixit

In order to manage the vast information available on the Web, crawlers play a significant role, and the working of a crawler should be optimized to get maximum and unique information from the World Wide Web. In this paper, an architecture for a migrating crawler is proposed which is based on URL ordering, URL scheduling, and a document redundancy elimination mechanism. The proposed ordering technique is based on URL structure, which plays a crucial role in utilizing the Web efficiently. Scheduling ensures that each URL goes to the optimum agent for downloading; to ensure this, the characteristics of both agents and URLs are taken into consideration. Duplicate documents are also removed to keep the database unique, and to reduce matching time, document matching is performed on the basis of their meta information only. The agents of the proposed migrating crawler work more efficiently than a traditional single crawler by providing ordering and scheduling of URLs.
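Two of the mechanisms named in this abstract, structure-based URL ordering and duplicate elimination over meta information only, can be sketched as follows. The depth-based priority heuristic and the hash-of-metadata comparison are illustrative assumptions, not details taken from the proposed architecture.

```python
import hashlib
import heapq
from urllib.parse import urlparse

def url_priority(url):
    """Illustrative structure-based ordering: shallower paths rank first."""
    return urlparse(url).path.rstrip("/").count("/")

def schedule(urls):
    """Yield URLs in priority order using a heap (lower depth first)."""
    heap = [(url_priority(u), u) for u in urls]
    heapq.heapify(heap)
    while heap:
        _, url = heapq.heappop(heap)
        yield url

seen_meta = set()

def is_duplicate(meta_description):
    """Duplicate elimination on meta information only: hash the metadata
    rather than comparing full document bodies, reducing matching time."""
    digest = hashlib.sha256(meta_description.encode("utf-8")).hexdigest()
    if digest in seen_meta:
        return True
    seen_meta.add(digest)
    return False

urls = ["http://example.org/a/b/c", "http://example.org/", "http://example.org/a"]
print(list(schedule(urls)))  # shallowest paths first
```

In the full architecture, the scheduler would also weigh agent characteristics (load, locality) alongside the URL priority before assigning a download to a migrating agent.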


2021 ◽  
Author(s):  
Michael Dick

Since it was first formally proposed in 1990 (and since the first website was launched in 1991), the World Wide Web has evolved from a collection of linked hypertext documents residing on the Internet, to a "meta-medium" featuring platforms that older media have leveraged to reach their publics through alternative means. However, this pathway towards the modernization of the Web has not been entirely linear, nor will it proceed as such. Accordingly, this paper problematizes the notion of "progress" as it relates to the online realm by illuminating two distinct perspectives on the realized and proposed evolution of the Web, both of which can be grounded in the broader debate concerning technological determinism versus the social construction of technology: on the one hand, the centralized and ontology-driven shift from a human-centred "Web of Documents" to a machine-understandable "Web of Data" or "Semantic Web", which is supported by the Web's inventor, Tim Berners-Lee, and the organization he heads, the World Wide Web Consortium (W3C); on the other, the decentralized and folksonomy-driven mechanisms through which individuals and collectives exert control over the online environment (e.g. through the social networking applications that have come to characterize the contemporary period of "Web 2.0"). Methodologically, the above is accomplished through a sustained exploration of theory derived from communication and cultural studies, which discursively weaves these two viewpoints together with a technical history of recent W3C projects. As a case study, it is asserted that the forward slashes contained in a Uniform Resource Identifier (URI) were a social construct that was eventually rendered extraneous by the end-user community. 
By focusing on the context of the technology itself, it is anticipated that this paper will contribute to the broader debate concerning the future of the Web and its need to move beyond a determinant "modernization paradigm" or over-arching ontology, as well as advance the potential connections that can be cultivated with cognate disciplines.


2017 ◽  
Vol 22 (1) ◽  
pp. 21-37 ◽  
Author(s):  
Matthew T. Mccarthy

The web of linked data, otherwise known as the semantic web, is a system in which information is structured and interlinked to provide meaningful content to artificial intelligence (AI) algorithms. As the complex interactions between digital personae and these algorithms mediate access to information, it becomes necessary to understand how these classification and knowledge systems are developed. What are the processes by which those systems come to represent the world, and how are the controversies that arise in their creation overcome? As a global form, the semantic web is an assemblage of many interlinked classification and knowledge systems, which are themselves assemblages. Through the perspectives of global assemblage theory, critical code studies and practice theory, I analyse netnographic data of one such assemblage. Schema.org is but one component of the larger global assemblage of the semantic web, and as such is an emergent articulation of different knowledges, interests and networks of actors. This articulation comes together to tame the profusion of things, seeking stability in representation, but in the process it faces and produces more instability. Furthermore, this production of instability contributes to the emergence of new assemblages that have similar aims.


Author(s):  
Punam Bedi ◽  
Neha Gupta ◽  
Vinita Jindal

The World Wide Web is a part of the Internet that provides a data dissemination facility to people. The contents of the Web are crawled and indexed by search engines so that they can be retrieved, ranked, and displayed in response to users' search queries. The contents that can be easily retrieved using Web browsers and search engines comprise the Surface Web. All information that cannot be crawled by search engines' crawlers falls under the Deep Web. Deep Web content never appears in the results displayed by search engines; though this part of the Web remains hidden, it can be reached using targeted search over normal Web browsers. Unlike the Deep Web, there exists a portion of the World Wide Web that cannot be accessed without special software. This is known as the Dark Web. This chapter describes how the Dark Web differs from the Deep Web and elaborates on the software commonly used to enter the Dark Web. It highlights the illegitimate and legitimate sides of the Dark Web and specifies the role played by cryptocurrencies in the expansion of the Dark Web's user base.


Web Services ◽  
2019 ◽  
pp. 1068-1076
Author(s):  
Vudattu Kiran Kumar

The World Wide Web (WWW) is a global information medium through which users can read and write via computers connected to the internet. The Web is one of the services available on the internet; it was created in 1989 by Sir Tim Berners-Lee, and since then great refinements have been made in Web usage and in the development of its applications. Semantic Web technologies enable machines to interpret data published on the Web in a machine-interpretable form. The Semantic Web is not a separate web; it is an extension of the current Web with additional semantics. Semantic technologies play a crucial role in making data understandable to machines. To achieve machine understandability, we should add semantics to existing websites. With additional semantics, we can achieve the next level of the Web, where knowledge repositories are available for a better understanding of Web data. This facilitates better search, more accurate filtering, and intelligent retrieval of data. This paper discusses the Semantic Web and the languages involved in describing documents in a machine-understandable format.
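One common way of adding semantics to an existing website, as this abstract describes, is to embed a machine-readable description alongside the human-readable HTML. The sketch below generates such an annotation as JSON-LD using Python's standard library; the vocabulary keys follow Schema.org conventions, and the page details are invented for illustration.

```python
import json

# Hypothetical page description; the "@context"/"@type" keys follow the
# JSON-LD convention for publishing machine-interpretable data on the Web.
page_semantics = {
    "@context": "https://schema.org",
    "@type": "ScholarlyArticle",
    "name": "Semantic Web Technologies: An Overview",
    "about": ["Semantic Web", "machine-understandable data"],
}

# Embedding this block in an existing page lets crawlers read its meaning
# directly, without changing anything the human visitor sees.
script_tag = (
    '<script type="application/ld+json">\n'
    + json.dumps(page_semantics, indent=2)
    + "\n</script>"
)
print(script_tag)
```

The same description could equally be expressed in RDF/XML or Turtle; JSON-LD is shown here only because it drops into an existing HTML page with no other changes.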

