Stigmergic Hyperlink

Author(s):  
Artur Sancho Marques ◽  
José Figueiredo

Inspired by patterns of behavior generated in social networks, a prototype of a new object was designed and developed for the World Wide Web – the stigmergic hyperlink, or “stigh”. In a system of stighs, such as a Web page, the objects that users actually use grow “healthier”, while the unused ones “weaken”, eventually to the extreme of “death”, being autopoietically replaced by new destinations. At the scale of a single Web page, these systems perform like recommendation systems and embody an “ecological” treatment of unappreciated links. On the much wider scale of generalized usage, because each stigh has a method to retrieve information about its destination, Web agents in general, and search engines in particular, would have the option to delegate the crawling and/or the parsing of the destination. This would be an interesting social change: after becoming not only consumers but also content producers, Web users would, just by hosting (automatic) stighs, become information service providers too.
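The reinforcement/decay dynamic the abstract describes can be sketched minimally as follows. All class names, thresholds, and rates here are illustrative assumptions, not the authors' implementation:

```python
class Stigh:
    """A hypothetical stigmergic hyperlink: health rises with use, decays otherwise."""
    def __init__(self, url, health=1.0):
        self.url = url
        self.health = health

    def click(self, boost=0.5):
        self.health += boost          # used links grow "healthier"

    def decay(self, rate=0.1):
        self.health -= rate           # unused links "weaken"

def tick(stighs, replacements):
    """One time step: decay all stighs, replace the 'dead' with new destinations."""
    for s in stighs:
        s.decay()
    return [s if s.health > 0 else Stigh(replacements.pop()) for s in stighs]

# Usage: a page with two weak stighs; only the first one gets clicked.
page = [Stigh("https://example.org/a", health=0.05),
        Stigh("https://example.org/b", health=0.05)]
page[0].click()
page = tick(page, ["https://example.org/new"])
print([s.url for s in page])   # the unused link has "died" and been replaced
```

The key design point is that the page itself, not any central service, maintains the feedback loop between usage and link survival.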



2002 ◽  
Vol 7 (1) ◽  
pp. 9-25 ◽  
Author(s):  
Moses Boudourides ◽  
Gerasimos Antypas

In this paper we present a simple simulation of the World Wide Web, in which one observes the appearance of web pages belonging to different web sites, covering a number of different thematic topics, and possessing links to other web pages. The goal of our simulation is to reproduce the form of the observed World Wide Web and of its growth using a small number of simple assumptions. In our simulation, existing web pages may generate new ones as follows: first, each web page is equipped with a topic concerning its contents; second, links between web pages are established according to common topics; next, new web pages may be randomly generated and subsequently equipped with a topic and assigned to web sites. By repeated iteration of these rules, our simulation appears to exhibit the observed structure of the World Wide Web and, in particular, a power-law type of growth. In order to visualise the network of web pages, we have followed N. Gilbert's (1997) methodology of scientometric simulation, assuming that web pages can be represented by points in the plane. Furthermore, the simulated graph is found to possess the small-world property, as is the case with a large number of other complex networks.
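The growth rules in the abstract can be sketched as a loop over three steps: spawn a page with a random topic, then add links only between pages that share a topic. This is a toy reading of the model; the topic list, probabilities, and iteration count are assumptions for illustration:

```python
import random

random.seed(0)
TOPICS = ["news", "science", "sports", "arts"]

# Each page: (topic, list of outgoing link targets by page index)
pages = [(random.choice(TOPICS), []) for _ in range(5)]

def step(pages, p_new=0.5):
    """One iteration: possibly spawn a new page, then try to link pages sharing a topic."""
    if random.random() < p_new:
        pages.append((random.choice(TOPICS), []))        # new page gets a random topic
    i, j = random.randrange(len(pages)), random.randrange(len(pages))
    if i != j and pages[i][0] == pages[j][0]:            # links follow common topics
        pages[i][1].append(j)
    return pages

for _ in range(200):
    pages = step(pages)

degrees = sorted((len(links) for _, links in pages), reverse=True)
print(len(pages), degrees[:5])   # link counts are unevenly distributed across pages
```

Repeated over many iterations, topic-constrained linking of this kind is what the paper credits with reproducing the Web's observed structure and power-law growth.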


Author(s):  
Graham Cormode ◽  
Balachander Krishnamurthy

Web 2.0 is a buzzword, introduced in 2003-04, that is commonly used to encompass various novel phenomena on the World Wide Web. Although largely a marketing term, some of the key attributes associated with Web 2.0 include the growth of social networks, bi-directional communication, various 'glue' technologies, and significant diversity in content types. We are not aware of a technical comparison between Web 1.0 and 2.0. While most of Web 2.0 runs on the same substrate as 1.0, there are some key differences. We capture those differences and their implications for technical work in this paper. Our goal is to identify the primary differences that lead to the properties of interest in Web 2.0, so that those properties can be characterized. We identify novel challenges due to the different structures of Web 2.0 sites, richer methods of user interaction, new technologies, and a fundamentally different philosophy. Although a significant amount of past work can be reapplied, some critical thinking is needed for the networking community to analyze the challenges of this new and rapidly evolving environment.


1995 ◽  
Vol 4 (4) ◽  
pp. 219-227 ◽  
Author(s):  
Mark Papiani ◽  
Anthony J. G. Hey ◽  
Roger W. Hockney

Unlike single-processor benchmarks, multiprocessor benchmarks can yield tens of numbers for each benchmark on each computer, as factors such as the number of processors and problem size are varied. A graphical display of performance surfaces therefore provides a satisfactory way of comparing results. The University of Southampton has developed the Graphical Benchmark Information Service (GBIS) on the World Wide Web (WWW) to interactively display graphs of user-selected benchmark results from the GENESIS and PARKBENCH benchmark suites.
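The "tens of numbers per benchmark" arise because each (processor count, problem size) pair yields its own measurement, forming a surface. The following toy sketch uses an Amdahl-style timing model that is entirely an illustrative assumption, not GBIS code or GENESIS/PARKBENCH data:

```python
def run_time(n_procs, problem_size, serial_frac=0.1):
    """Toy Amdahl-style timing model; serial_frac is an assumed parameter."""
    work = problem_size
    return serial_frac * work + (1 - serial_frac) * work / n_procs

procs = [1, 2, 4, 8]
sizes = [100, 200, 400]
# The full grid of results is the "performance surface" a service like GBIS plots.
surface = {(p, n): run_time(p, n) for p in procs for n in sizes}

# Each fixed problem size gives one curve; the curves together form the surface.
for n in sizes:
    row = [round(surface[(p, n)], 1) for p in procs]
    print(f"size={n}: {row}")
```

Even this toy grid shows why a single number cannot summarize a multiprocessor benchmark: the shape of the surface, not any one point, carries the comparison.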


Author(s):  
Carmine Sellitto

This chapter provides an overview of some of the criteria that are currently being used to assess medical information found on the World Wide Web (WWW). Drawing from the evaluation frameworks discussed, a simple set of easy-to-apply criteria is proposed for evaluating online medical information. The criteria cover the categories of information accuracy, objectivity, privacy, currency and authority. A checklist for web page assessment and scoring is also proposed, providing an easy-to-use tool for medical professionals, health consumers and medical web editors.
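A checklist of this kind lends itself to a simple scoring tool. The five category names below come from the chapter; the rating scale, weights, and normalization are assumptions for illustration, not the chapter's actual instrument:

```python
CRITERIA = ["accuracy", "objectivity", "privacy", "currency", "authority"]

def score_page(ratings, scale=5):
    """Score a medical web page: ratings maps each criterion to 0..scale."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"unrated criteria: {missing}")
    total = sum(ratings[c] for c in CRITERIA)
    return total / (len(CRITERIA) * scale)   # normalized to 0..1

# Usage: a hypothetical assessor's ratings for one page.
page = {"accuracy": 4, "objectivity": 3, "privacy": 5, "currency": 2, "authority": 4}
print(f"overall score: {score_page(page):.2f}")   # 18/25 = 0.72
```

Equal weighting is the simplest choice; a real instrument might weight accuracy and authority more heavily for clinical audiences.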


Author(s):  
Antonis Sidiropoulos ◽  
Dimitrios Katsaros ◽  
Yannis Manolopoulos

The World Wide Web, or simply Web, is a characteristic example of a social network (Newman, 2003; Wasserman & Faust, 1994). Other examples of social networks include the food web network, scientific collaboration networks, sexual relationship networks, metabolic networks, and air transportation networks. Social networks are usually abstracted as graphs comprising vertices, edges (directed or not), and, in some cases, weights on these edges. Social network theory is concerned with properties related to connectivity (degree, structure, centrality), distances (diameter, shortest paths), and “resilience” (geodesic edges or vertices, articulation vertices) of these graphs, as well as with models of network growth. Social networks have been studied long before the conception of the Web. Pioneering works for the characterization of the Web as a social network and for the study of its basic properties are due to Barabasi and his colleagues (Albert, Jeong & Barabasi, 1999). Later, several studies investigated other aspects, like its growth (Bianconi & Barabasi, 2001; Menczer, 2004; Pennock, Flake, Lawrence, Glover, & Giles, 2002; Watts & Strogatz, 1998), its “small-world” nature, in that pages can reach other pages with only a small number of links, and its scale-free nature (Adamic & Huberman, 2000; Barabasi & Albert, 1999; Barabasi & Bonabeau, 2003) (i.e., a feature implying that it is dominated by a relatively small number of Web pages that are connected to many others; these pages are called hubs and have a seemingly unlimited number of hyperlinks). Thus, the distribution of Web page linkages follows a power law, in that most nodes have just a few hyperlinks and some have a tremendous number of links. In that sense, the system has no “scale” (see Figure 1).
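The hub-dominated, scale-free structure described above is the signature outcome of preferential attachment, the growth mechanism behind the Barabasi & Albert (1999) model cited in the text. The following is a minimal sketch, with the seed, graph size, and degree-proportional sampling trick all being simplifying assumptions:

```python
import random

random.seed(42)

def barabasi_albert(n, m=2):
    """Grow a graph by preferential attachment: each new node links to m existing
    nodes chosen with probability roughly proportional to their current degree."""
    targets = list(range(m))    # small seed of m initial nodes
    repeated = []               # node ids repeated once per unit of degree
    edges = []
    for new in range(m, n):
        for t in set(targets):
            edges.append((new, t))
        repeated.extend(targets)
        repeated.extend([new] * m)
        # sampling from the degree-weighted list approximates preferential attachment
        targets = random.sample(repeated, m)
    return edges

edges = barabasi_albert(500)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
top = sorted(degree.values(), reverse=True)
print(top[:3], top[-3:])   # a few hubs dominate; most nodes stay near degree m
```

Because early nodes keep accumulating links, the resulting degree distribution is heavy-tailed, which is exactly the power-law behavior the text attributes to Web page linkages.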


Percurso ◽  
2019 ◽  
Vol 1 (28) ◽  
pp. 156
Author(s):  
Carina PESCAROLO ◽  
Marina ZAGONEL

ABSTRACT: Starting from the notable advancement of technology and the increasing use of social networks for the most varied purposes in the information society, this research seeks to analyze to what extent information referring to an individual, or the data provided by that individual across the most varied virtual domains, can be freely disseminated on the world wide web. To that end, the concepts of human dignity and personality rights are analyzed as guiding elements of virtual conduct, both on the Internet and in social networks. KEYWORDS: Information Society; Social Networks; Human Dignity; Rights of the Personality.


2005 ◽  
Vol 15 (4) ◽  
pp. 378-399 ◽  
Author(s):  
Yuval Elovici ◽  
Chanan Glezer ◽  
Bracha Shapira

Purpose: To propose a model of a privacy-enhanced catalogue search system (PECSS) in an attempt to address privacy threats to consumers who search for products and services on the world wide web.
Design/methodology/approach: The model extends an agent-based architecture for electronic catalogue mediation by supplementing it with a privacy enhancement mechanism. This mechanism introduces fake queries into the original stream of user queries, in an attempt to reduce the similarity between the actual interests of users (the “internal user profile”) and the interests as observed by potential eavesdroppers on the web (the “external user profile”). A prototype was constructed to demonstrate the feasibility and effectiveness of the model.
Findings: The evaluation of the model indicates that, by generating five fake queries for each original user query, the user's profile is hidden most effectively from any potential eavesdropper. Future research is needed to identify the optimal glossary of fake queries for various clients. The model should also be tested against various attacks perpetrated against the mixed stream of original and fake queries (e.g. statistical clustering).
Research limitations/implications: The model's feasibility was evaluated through a prototype. It was not empirically tested against the various statistical methods intruders might use to reveal the original queries.
Practical implications: A useful architecture for electronic commerce providers, internet service providers (ISPs) and individual clients who are concerned with their privacy and wish to minimize their dependence on third-party security providers.
Originality/value: The contribution of the PECSS model stems from the fact that, as the internet gradually transforms into a non-free service, anonymous browsing can no longer be employed to protect consumers' privacy, and therefore other approaches should be explored. Moreover, unlike other approaches, our model does not rely on the honesty of third-party mediators and proxies, which are also exposed to the interests of the client. In addition, the proposed model is scalable, as it is installed on the user's computer.
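The core obfuscation step, mixing each real query with decoys so an eavesdropper cannot separate the internal from the external profile, can be sketched as follows. The glossary contents are hypothetical; only the 1:5 real-to-fake ratio comes from the abstract's findings:

```python
import random

random.seed(1)

# Hypothetical glossary of decoy topics (the abstract leaves its contents open).
GLOSSARY = ["cameras", "mortgages", "gardening", "laptops", "flights",
            "insurance", "sneakers", "cookware", "telescopes", "guitars"]

def obfuscate(real_query, n_fakes=5):
    """Mix one real query with n_fakes decoys; an eavesdropper observes all of them."""
    stream = [real_query] + random.sample(GLOSSARY, n_fakes)
    random.shuffle(stream)               # hide the real query's position in the stream
    return stream

stream = obfuscate("diabetes treatment")
print(stream)   # the observed "external profile" blends real and fake interests
```

Note that running this client-side is what makes the scheme independent of third-party mediators: the decoys are injected before any query leaves the user's computer.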

