Predicting Macroscopic Dynamics in Large Distributed Systems: Part II

Author(s):  
K. L. Mills ◽  
J. J. Filliben ◽  
D.-Y. Cho ◽  
E. J. Schwartz

Society increasingly depends on large distributed systems, such as the Internet and Web-based service-oriented architectures deployed over the Internet. Such systems constantly evolve as new software components are injected to provide increased functionality, better performance and enhanced security. Unfortunately, designers lack effective methods to predict how new components might influence macroscopic behavior. Lacking effective methods, designers rely on engineering techniques, such as: analysis of critical algorithms at small scale and under limiting assumptions; factor-at-a-time simulations conducted at modest scale; and empirical measurements in small test beds. Such engineering techniques enable designers to characterize selected properties of new components but reveal little about likely dynamics at global scale. In this paper, we outline an approach that can be used to predict macroscopic dynamics when new components are deployed in a large distributed system. Our approach combines two main methods: scale reduction and multidimensional data analysis techniques. Combining these methods, we can search a wide parameter space to identify factors likely to drive global system response and we can predict the resulting macroscopic dynamics of key system behaviors. We demonstrate our approach in the context of the Internet, where researchers, motivated by a desire to increase user performance, have proposed new algorithms to replace the standard congestion control mechanism. Previously, the proposed algorithms were studied in three ways: using analytical models of single data flows, using empirical measurements in test beds where a few data flows compete for bandwidth, and using simulations at modest scale with a few sequentially varied parameters. In contrast, by applying our approach, we simulated configurations covering four-tier network topologies, spanning continental and global distances, comprising routers operating at state-of-the-art speeds and transporting more than 10⁵ simultaneous data flows with varying traffic patterns and temporary spatiotemporal congestion. Our findings identify the main factors influencing macroscopic dynamics of Internet congestion control, and define the specific combination of factors that must hold for users to realize improved performance. We also uncover potential for one proposed algorithm to cause widespread performance degradation. Previous engineering studies of the proposed congestion control algorithms were unable to reveal such essential information.
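
The screening idea at the heart of this approach can be sketched compactly. The Python snippet below is a minimal illustration, not the authors' code: it runs a two-level factorial design over a few hypothetical simulation parameters, with a stand-in simulate() function in place of a real network simulator, and ranks factors by main effect, the kind of analysis used to identify which factors drive global system response.

    # Minimal sketch of factor screening via a two-level factorial design.
    # Factor names and simulate() are hypothetical stand-ins, not the
    # authors' simulator or parameters.
    import itertools
    import random

    FACTORS = ["propagation_delay", "router_speed", "flow_count", "buffer_size"]

    def simulate(setting):
        # Stand-in for an expensive network simulation; returns one
        # macroscopic response (e.g., mean per-flow goodput).
        random.seed(hash(setting))
        return sum(level * (i + 1) for i, level in enumerate(setting)) \
            + random.gauss(0, 0.5)

    # Full 2^k design for brevity; an orthogonal fractional design would
    # subsample these rows to cut the number of simulation runs.
    design = list(itertools.product([-1, +1], repeat=len(FACTORS)))
    responses = [simulate(row) for row in design]

    # Main effect of a factor: mean response at its high level minus
    # mean response at its low level.
    for i, name in enumerate(FACTORS):
        hi = [r for row, r in zip(design, responses) if row[i] == +1]
        lo = [r for row, r in zip(design, responses) if row[i] == -1]
        print(f"{name:18s} main effect = {sum(hi)/len(hi) - sum(lo)/len(lo):+.2f}")

With this toy response the printed effects recover the ordering built into simulate(); with a real simulator, the same analysis flags the parameters that dominate macroscopic behavior.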



Water Policy ◽  
2003 ◽  
Vol 5 (3) ◽  
pp. 203-212
Author(s):  
J. Lisa Jorgenson

This paper discusses how web sites now report international water project information, and maps the combined donor investment in more than 6000 water projects active since 1995. The maps show donor investment:
• has addressed water scarcity,
• has improved access to improved water sources,
• correlates with growth in GDP,
• appears to correlate with growth in net private capital flow,
• does NOT appear to correlate with growth in GNI.
Evaluation indicates problems in the combined water project portfolios of major donor organizations:
• difficulties in grouping projects across differing sector classifications; food security and agriculture/irrigation are the most difficult,
• inability to map donor projects at the country or river basin level, because 60% of the donor projects include no location data (town, province, watershed) in the titles or abstracts available on the web sites,
• no means to distinguish donor projects that directly utilize water resources from those providing only training or technical assistance,
• no information on the source of water (river, aquifer, rainwater catchment),
• no identifiable quantity of water (withdrawal amounts, or increased water efficiency),
• no differentiation between large-scale and small-scale projects.
Recommendation: major donors need to look at how the web harvests and combines their information, and should agree on a standard template for project titles and abstracts that includes this essential information. The Japanese aid agency (JICA) and the Asian Development Bank provide good models.
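
The recommended standard template can be made concrete with a small sketch. The record below is one possible shape, not any donor's actual standard; field names and example values are hypothetical, and each field answers a gap identified above (location, water source, quantity, scale).

    # One possible shape for a standard donor project record; every field
    # addresses a gap identified in the evaluation. Names and example
    # values are hypothetical.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class WaterProjectRecord:
        donor: str                 # e.g., "JICA", "Asian Development Bank"
        title: str
        sector: str                # one agreed classification, e.g., "irrigation"
        country: str
        river_basin: str           # enables basin-level mapping
        locality: str              # town/province, enables country-level mapping
        water_source: str          # "river", "aquifer", "rainwater catchment", ...
        quantity_m3_per_year: Optional[float]  # withdrawal or efficiency gain
        scale: str                 # "large" or "small"

    example = WaterProjectRecord(
        donor="JICA", title="Irrigation rehabilitation (illustrative)",
        sector="irrigation", country="Indonesia", river_basin="Brantas",
        locality="East Java", water_source="river",
        quantity_m3_per_year=None, scale="small")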


2019 ◽  
Vol 6 (1) ◽  
pp. 47-63 ◽  
Author(s):  
Bettina Nissen ◽  
Ella Tallyn ◽  
Kate Symons

Abstract New digital technologies such as Blockchain and smart contracting are rapidly changing the face of value exchange, and present new opportunities and challenges for designers. Designers and data specialists are at the forefront of exploring new ways of exchanging value, using Blockchain, cryptocurrencies, smart contracting and the direct exchanges between things made possible by the Internet of Things (Tallyn et al. 2018; Pschetz et al. 2019). To help researchers and designers in Human Computer Interaction (HCI) and Interaction Design better understand and explore the implications of emerging and future technologies such as Distributed Autonomous Organisations (DAOs), we delivered a workshop at the ACM conference Designing Interactive Systems (DIS) in Edinburgh in 2017 (Nissen et al. 2017). The workshop used the lens of DAOs to introduce the principle that products and services may soon be owned and managed collectively, rather than by one person or authority, challenging traditional concepts of ownership and power. The workshop builds on established HCI research exploring the role of technology in financial interactions and designing for the rapidly changing world of technology and value exchange (Kaye et al. 2014; Malmborg et al. 2015; Millen et al. 2015; Vines et al. 2014). Beyond this, the HCI community has started to explore these technologies beyond issues of finance, money and collaborative practice, focusing on the implications of these emerging but rapidly ascending distributed systems in more applied contexts (Elsden et al. 2018a). By bringing together designers and researchers with different experiences and knowledge of distributed systems, the aim of the workshop was two-fold: first, to further understand, develop and critique these new forms of distributed power and ownership; and second, to practically explore how to design interactive products and services that enable, challenge or disrupt existing and emerging models.


2015 ◽  
Vol 47 (1) ◽  
pp. 5-17
Author(s):  
Jolanta Korycka-Skorupa

Abstract The author discusses the effectiveness of cartographic presentations. The article includes opinions of cartographers regarding the effectiveness, readability and efficiency of a map, and recalls the principles of map graphic design in order to verify them using examples of small-scale thematic maps. The following questions are asked: Is the map effective? Why is the map effective? How do cartographic presentation methods affect the effectiveness of the cartographic message? What else can influence the effectiveness of a map? Every graphic presentation should be effective, as its purpose is to complement the written word, draw the recipients' attention, make the text more readable and expose the most important information. This significant role means that graphic presentations (maps, diagrams) require proper preparation. Users need a chance to understand the language of graphics in order to draw correct conclusions about the presented phenomenon. Graphics should demonstrate the most important elements, tendencies and directions of change; they should generalize and present a given subject from a slightly different perspective. There are numerous examples of well-edited and poorly edited small-scale thematic maps, including maps that are impossible to interpret correctly: burdened with methodological defects, they cannot fulfill their task. Cartographic practice indicates that the principles of graphic design for cartographic presentation are frequently disregarded in the development of small-scale thematic maps used, among others, in the press and on the Internet, even though such presentations are meant to be interpreted quickly. On such maps, editors' problems with selecting an appropriate symbol and graphic variable are visible (fig. 1A, 9B). Sometimes the symbols used are not sufficiently distinguishable or demonstrative (fig. 11), which does not improve readability. Sometimes authors try too hard to reflect the presented phenomenon, and the map becomes more difficult to interpret (fig. 4A, B). A lack of graphic sense, resulting in a lack of graphic balance and aesthetics, is a weak point of numerous cartographic presentations (fig. 13). The effectiveness of a cartographic presentation rests on the knowledge and skills of the map editor, as well as on the recipients' perceptual capabilities and their readiness to read and interpret maps. The map editor's qualifications should include methodological competence, supported by knowledge of the principles of cartographic symbol design, and the technical skill to use map-editing tools properly. Maps facilitate the understanding of the texts they accompany, and they present relationships between phenomena better than text alone, appealing to the senses.


Author(s):  
Raju Ahmed Shetu ◽  
Tarik Toha ◽  
Mohammad Mosiur Rahman Lunar ◽  
Novia Nurain ◽  
A. B. M. Alim Al Islam

2015 ◽  
Vol 14 (04) ◽  
pp. 671-700 ◽  
Author(s):  
SUSAN AARONSON

Abstract Herein, we examine how the United States and the European Union use trade agreements to advance the free flow of information and to promote digital rights online. In the 1980s and 1990s, after US policymakers tried to include language governing the free flow of information in trade agreements, other nations feared a threat to their sovereignty and their ability to restrict cross-border data flows in the interest of privacy or national security. In the twenty-first century, again many states have not responded positively to US and EU efforts to facilitate the free flow of information. They worry that the US dominates both the Internet economy and Internet governance in ways that benefit its interests. After the Snowden allegations, many states adopted strategies that restricted rather than enhanced the free flow of information. Without deliberate intent, efforts to set information free through trade liberalization may be making the Internet less free. Finally, the two trade giants are not fully in agreement on Internet freedom, but neither has linked policies to promote the free flow of information with policies to advance digital rights. Moreover, they do not agree as to when restrictions on information are necessary and when they are protectionist.


2022 ◽  
Vol 16 (1) ◽  
pp. 1-27
Author(s):  
Kyle Crichton ◽  
Nicolas Christin ◽  
Lorrie Faith Cranor

With the ubiquity of web tracking, information on how people navigate the internet is abundantly collected yet, due to its proprietary nature, rarely distributed. As a result, our understanding of user browsing derives primarily from small-scale studies conducted more than a decade ago. To provide a broader, updated perspective, we analyze data from 257 participants who consented to have their home computer and browsing behavior monitored through the Security Behavior Observatory. Compared to previous work, we find a substantial increase in tabbed browsing and demonstrate the need to include tab information for accurate web measurements. Our results confirm that user browsing is highly centralized, with 50% of internet use spent on 1% of visited websites. However, we also find that users spend a disproportionate amount of time on infrequently visited websites, where risky content is more likely to be encountered. We then identify the primary gateways to these sites and discuss implications for future research.
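
The centralization finding is straightforward to compute from any browsing log. The sketch below uses invented data, not the Security Behavior Observatory dataset: it aggregates time per site and reports the share of total browsing time spent on the top 1% of visited sites.

    # Share of browsing time spent on the top `fraction` of visited sites.
    # The sample log is invented for illustration.
    from collections import Counter

    def top_share(visits, fraction=0.01):
        """visits: iterable of (site, seconds_spent) pairs."""
        time_per_site = Counter()
        for site, seconds in visits:
            time_per_site[site] += seconds
        totals = sorted(time_per_site.values(), reverse=True)
        k = max(1, int(len(totals) * fraction))   # at least one site
        return sum(totals[:k]) / sum(totals)

    log = [("a.example", 2500), ("b.example", 40), ("c.example", 25),
           ("d.example", 10)] * 3
    print(f"Top 1% of sites: {top_share(log):.0%} of time online")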


2003 ◽  
Vol 4 (4) ◽  
pp. 251-263 ◽  
Author(s):  
Stein Kristiansen ◽  
Bjørn Furuholt ◽  
Fathul Wahid

Internet cafés represent a potential means of bridging the information gap between social groups and geographical areas. This study examines the spread of Internet cafés in Indonesia. The main objectives are to identify characteristics of Internet café entrepreneurs and to enhance the understanding of preconditions for the provision of Internet access by small-scale private enterprises. A survey methodology is used, and the data reveal clear statistical associations between entrepreneurial adaptations, such as connection types and service variety, and success variables. The authors' policy recommendations include government intervention, primarily in infrastructure development and awareness creation, for a more equitable spread of access to information through the Internet.


Author(s):  
Bahman Aboulhasanzadeh ◽  
Siju Thomas ◽  
Jiacai Lu ◽  
Gretar Tryggvason

In direct numerical simulations (DNS) of multiphase flows, features much smaller than the "dominant" flow scales frequently emerge. These features consist of thin films, filaments, drops, and boundary layers; usually surface tension is strong, so their geometry is simple, inertia effects are small, so the local flow is simple, and often there is a clear separation of scales between these features and the rest of the flow. Thus it is often possible to describe their evolution with analytical models. Here we discuss two examples of using analytical models to account for small-scale features in DNS of multiphase flows. For the flow in the film beneath a drop sliding down a sloping wall, a thin-film model captures the evolution of films too thin to be resolved accurately on a grid that suffices for the rest of the flow. The other example is mass transfer from a gas bubble rising in a liquid: since mass diffuses much more slowly than momentum, the mass transfer boundary layer is very thin and can be captured by a simple boundary-layer model.
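
To make the character of such embedded models concrete, the display below sketches two textbook forms on which models of this kind typically build (standard results, not necessarily the authors' exact closures): the lubrication-theory equation governing a thin film of thickness h, and the high-Péclet-number scaling for the concentration boundary layer on a rising bubble.

    \frac{\partial h}{\partial t}
      = \nabla \cdot \left( \frac{h^{3}}{3\mu}\, \nabla p \right),
    \qquad
    \mathrm{Sh} \sim \mathrm{Pe}^{1/2}
    \;\Longrightarrow\;
    \frac{\delta_c}{d} \sim \mathrm{Pe}^{-1/2},
    \qquad
    \mathrm{Pe} = \frac{U d}{D}.

Here \mu is the liquid viscosity, p the film pressure set by gravity and surface tension, U and d the bubble rise speed and diameter, D the species diffusivity, and \delta_c the concentration boundary-layer thickness. Because D is much smaller than the momentum diffusivity, \mathrm{Pe} \gg 1 and \delta_c is far thinner than a grid adequate for the outer flow can affordably resolve, which is precisely why an analytical boundary-layer model is substituted.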

