Social Computing and Social Software

Author(s):  
Ben Kei Daniel

The World Wide Web is one of the most profound technological inventions of our time and is central to the development of social computing. The initial purpose of the Web was to use a networked hypertext system to facilitate communication among CERN's scientists and researchers, who were located in several countries. With the invention of the Web came three important goals. The first was to ensure the availability of different technologies to improve communication and engagement. The second was to make the Web an interactive medium that could engage individuals as well as enrich community activities. The third was to make the Web more intelligent, in addition to being a space browsable by humans. The Web was developed to be rich in data, to promote community engagement, and to encourage mass participation and information sharing. This Chapter describes general trends linked to the development of the World Wide Web and discusses its related technologies within the milieu of virtual communities. The goal is to provide the reader with a quick, concise and easy way to understand the development of the Web and its related terminologies. The Chapter does not attempt a comprehensive analysis of the historical trends associated with the development of the Web, nor does it go into a detailed technical discussion of Web technologies. Nonetheless, it is anticipated that the materials presented in the Chapter are sufficient to provide the reader with a better understanding of the past, present and future accounts of the Web and its core related technologies.

2017
Author(s):
Yi Liu
Kwangjo Kim

Since 2004, the term “Web 2.0” has generated a revolution on the World Wide Web, giving rise to new ideas, services, and applications that improve and facilitate communication through the Web. Technologies associated with this second generation of the World Wide Web enable virtually anyone to share their data, documents, observations, and opinions on the Internet. Serious applications of Web 2.0 are still sparse, and this paper assesses its use in the context of applications, reflections, and collaborative spatial decision-making across Web generations, and Web 2.0 in particular.


2006
Vol 30 (4)
Author(s):
Ganaele M. Langlois

Abstract: This paper calls for a cultural analysis of the World Wide Web through a focus on the technical layers that shape the Web as a medium. Web technologies need to be acknowledged as not only shaped by social processes, but also deployed, through standardization and automation, as agents that render representation possible by following specific technocultural logics. Thus, the focus is on the relationship between hardware, software, and culture through an analysis of the layers of encoding that translate data into discourse, and vice versa.



Author(s):  
Anthony D. Andre

This paper provides an overview of the various human factors and ergonomics (HF/E) resources on the World Wide Web (WWW). A list of the most popular and useful HF/E sites is provided, along with several critical guidelines relevant to using the WWW. The reader will gain a clear understanding of how to find HF/E information on the Web and how to use the Web successfully in various HF/E professional consulting activities. Finally, we consider the ergonomic implications of surfing the Web.


2017
Vol 4 (1)
pp. 95-110
Author(s):
Deepika Punj
Ashutosh Dixit

In order to manage the vast information available on the Web, crawlers play a significant role. The working of a crawler should be optimized to get maximum and unique information from the World Wide Web. In this paper, an architecture for a migrating crawler is proposed which is based on URL ordering, URL scheduling, and a document redundancy elimination mechanism. The proposed ordering technique is based on URL structure, which plays a crucial role in utilizing the Web efficiently. Scheduling ensures that each URL goes to the optimal agent for downloading; to ensure this, characteristics of both agents and URLs are taken into consideration. Duplicate documents are also removed to keep the database unique. To reduce matching time, documents are matched on the basis of their meta information only. The agents of the proposed migrating crawler work more efficiently than a traditional single crawler by providing ordering and scheduling of URLs.
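The paper's exact heuristics are not reproduced in the abstract, but the three mechanisms it names (structure-based URL ordering, URL-to-agent scheduling, and meta-information-based duplicate elimination) can be sketched in Python. The path-depth ordering, the host-affinity/least-load scheduling rule, and the metadata hashing below are all illustrative assumptions, not the proposed architecture itself:

```python
import hashlib
from urllib.parse import urlparse

def order_urls(urls):
    """Order URLs by structure: shallower paths first.
    (Path depth is an assumed stand-in for the paper's ordering criterion.)"""
    return sorted(urls, key=lambda u: len([s for s in urlparse(u).path.split("/") if s]))

def schedule(urls, agents):
    """Assign each URL to the agent already responsible for its host,
    falling back to the least-loaded agent. This is a simple stand-in for
    matching agent and URL characteristics."""
    assignment = {a: [] for a in agents}
    host_owner = {}
    for url in order_urls(urls):
        host = urlparse(url).netloc
        agent = host_owner.get(host) or min(assignment, key=lambda a: len(assignment[a]))
        host_owner[host] = agent
        assignment[agent].append(url)
    return assignment

def is_duplicate(meta, seen_hashes):
    """Detect duplicates from metadata alone (e.g., title + description),
    avoiding full-document comparison to reduce matching time."""
    digest = hashlib.sha1(meta.strip().lower().encode()).hexdigest()
    if digest in seen_hashes:
        return True
    seen_hashes.add(digest)
    return False
```

In this sketch, ordering and scheduling compose naturally: the scheduler consumes the ordered list, so each agent receives its URLs already prioritized.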


2021
Author(s):  
Michael Dick

Since it was first formally proposed in 1990 (and since the first website was launched in 1991), the World Wide Web has evolved from a collection of linked hypertext documents residing on the Internet, to a "meta-medium" featuring platforms that older media have leveraged to reach their publics through alternative means. However, this pathway towards the modernization of the Web has not been entirely linear, nor will it proceed as such. Accordingly, this paper problematizes the notion of "progress" as it relates to the online realm by illuminating two distinct perspectives on the realized and proposed evolution of the Web, both of which can be grounded in the broader debate concerning technological determinism versus the social construction of technology: on the one hand, the centralized and ontology-driven shift from a human-centred "Web of Documents" to a machine-understandable "Web of Data" or "Semantic Web", which is supported by the Web's inventor, Tim Berners-Lee, and the organization he heads, the World Wide Web Consortium (W3C); on the other, the decentralized and folksonomy-driven mechanisms through which individuals and collectives exert control over the online environment (e.g. through the social networking applications that have come to characterize the contemporary period of "Web 2.0"). Methodologically, the above is accomplished through a sustained exploration of theory derived from communication and cultural studies, which discursively weaves these two viewpoints together with a technical history of recent W3C projects. As a case study, it is asserted that the forward slashes contained in a Uniform Resource Identifier (URI) were a social construct that was eventually rendered extraneous by the end-user community. 
By focusing on the context of the technology itself, it is anticipated that this paper will contribute to the broader debate concerning the future of the Web and its need to move beyond a determinist "modernization paradigm" or over-arching ontology, as well as advance the potential connections that can be cultivated with cognate disciplines.


Author(s):  
Punam Bedi
Neha Gupta
Vinita Jindal

The World Wide Web is a part of the Internet that provides a data dissemination facility to people. The contents of the Web are crawled and indexed by search engines so that they can be retrieved, ranked, and displayed in response to users' search queries. The contents that can be easily retrieved using Web browsers and search engines comprise the Surface Web. All information that cannot be crawled by search engines' crawlers falls under the Deep Web. Deep Web content never appears in the results displayed by search engines; though this part of the Web remains hidden, it can be reached using targeted search over normal Web browsers. Unlike the Deep Web, there exists a portion of the World Wide Web that cannot be accessed without special software. This is known as the Dark Web. This chapter describes how the Dark Web differs from the Deep Web and elaborates on the software commonly used to enter the Dark Web. It highlights the illegitimate and legitimate sides of the Dark Web and specifies the role played by cryptocurrencies in the expansion of the Dark Web's user base.


Web Services
2019
pp. 1068-1076
Author(s):  
Vudattu Kiran Kumar

The World Wide Web (WWW) is a global information medium through which users can read and write via computers connected to the Internet. The Web is one of the services available on the Internet; it was created in 1989 by Sir Tim Berners-Lee. Since then, great refinement has taken place in Web usage and in the development of its applications. Semantic Web technologies enable machines to interpret data published in a machine-interpretable form on the Web. The Semantic Web is not a separate Web; it is an extension of the current Web with additional semantics. Semantic technologies play a crucial role in making data understandable to machines. To achieve machine understandability, we should add semantics to existing websites. With additional semantics, we can achieve the next level of the Web, where knowledge repositories are available for a better understanding of Web data. This facilitates better search, accurate filtering, and intelligent retrieval of data. This paper discusses the Semantic Web and the languages involved in describing documents in a machine-understandable format.
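RDF, the core Semantic Web data model, represents knowledge as subject–predicate–object triples. The plain-Python sketch below (the `ex:` identifiers and the `query` helper are illustrative, not part of any real vocabulary or library) shows how such triples can be stored and pattern-matched, which is the kind of lookup a SPARQL engine performs over a real RDF store:

```python
# Each fact is a (subject, predicate, object) triple, mirroring RDF's model.
triples = [
    ("ex:TimBernersLee", "rdf:type", "ex:Person"),
    ("ex:TimBernersLee", "ex:invented", "ex:WorldWideWeb"),
    ("ex:WorldWideWeb", "ex:createdIn", "1989"),
]

def query(triples, s=None, p=None, o=None):
    """Return every triple matching the given pattern; None is a wildcard
    (a toy stand-in for a SPARQL-style triple pattern)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "What did Tim Berners-Lee invent?"
print(query(triples, s="ex:TimBernersLee", p="ex:invented"))
# → [('ex:TimBernersLee', 'ex:invented', 'ex:WorldWideWeb')]
```

Because the data carries explicit relations rather than free text, the answer is retrieved by matching structure, which is precisely what makes semantically annotated data machine-understandable.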


2011
pp. 203-212
Author(s):  
Luis V. Casaló
Carlos Flavián
Miguel Guinalíu

Individuals are increasingly turning to computer-mediated communication in order to get information on which to base their decisions. For instance, many consumers are using newsgroups, chat rooms, forums, e-mail list servers, and other online formats to share ideas, build communities, and contact other consumers who are seen as more objective information sources (Kozinets, 2002). These social groups have traditionally been called virtual communities. The virtual community concept is almost as old as the Internet itself. However, the exponential development of these structures occurred during the nineties (Flavián & Guinalíu, 2004) due to the appearance of the World Wide Web and the spread of other Internet tools such as e-mail or chat. The justification for this expansion is found in the advantages that virtual communities generate for both their members and the organizations that create them.


Author(s):  
August-Wilhelm Scheer

The emergence of what we call today the World Wide Web, the WWW, or simply the Web, dates back to 1989, when Tim Berners-Lee proposed a hypertext system to manage information overload at CERN, Switzerland (Berners-Lee, 1989). This article outlines how his approach evolved into the Web that drives today's information society and explores the full potential still ahead. The initiative, formerly known as a wide-area hypertext information retrieval project, quickly gained momentum due to the fast adoption of graphical browser programs and the standardization activities of the World Wide Web Consortium (W3C). In the beginning, based only on the standards of HTML, HTTP, and URL, the sites provided by the Web were static, meaning the information stayed unchanged until the original publisher decided on an update. For a long time, the WWW, today referred to as Web 1.0, was understood as a technical means to publish information to a vast audience across time and space. Data was kept locally, and Web sites were only occasionally updated by uploading files from the client to the Web server. Application software was limited to local desktops and operated only on local data. With the advent of dynamic concepts on the server side (script languages like the hypertext preprocessor (PHP) or Perl, and Web applications with JSP or ASP) and on the client side (e.g., JavaScript), the WWW became more dynamic. Server-side content management systems (CMS) allowed editing Web sites via the browser at run-time. These systems interact with multiple users through PHP interfaces that push information into server-side databases (e.g., MySQL), which in turn feed Web sites with content. Thus, the Web became accessible and editable not only for programmers and “techies” but also for the common user. Yet technological limitations such as slow Internet connections, consumer-unfriendly Internet rates, and poor multimedia support still inhibited mass usage of the Web. It took broadband Internet access, flat rates, and digitized media processing for the Web to catch on.
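The edit-store-render loop that distinguishes a CMS from static Web 1.0 pages (browser edits pushed into a server-side store that feeds pages at request time) can be sketched in a few lines of Python. The in-memory dictionary stands in for the server-side database, and both function names are hypothetical:

```python
# A tiny in-memory "CMS": edits update a content store, and each page
# request renders the current store state. This mimics the PHP/MySQL loop
# described above, minus persistence and HTTP.
content_store = {"home": "Welcome!"}

def edit_page(page, new_text):
    """Simulate an editor pushing content into the server-side store via the browser."""
    content_store[page] = new_text

def render_page(page):
    """Simulate the server generating HTML from stored content at request time."""
    body = content_store.get(page, "Page not found")
    return f"<html><body><h1>{page}</h1><p>{body}</p></body></html>"

edit_page("home", "Updated via the browser, no file upload needed.")
print(render_page("home"))
```

The key contrast with Web 1.0 is that no file is uploaded to the server: the page's HTML is generated from the store on every request, so an edit is visible immediately.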

