Architectural Foundations of WSRF.NET

Author(s):  
Glenn Wasson ◽  
Marty Humphrey

State management has always been an underlying issue for large-scale distributed systems, but it has only recently been brought to the forefront of Grid computing with the introduction of the Web Services Resource Framework (WSRF) and its companion WS-Notification. WSRF advocates standardized approaches for client exposure to, and potential manipulation of, stateful services for Grid computing; however, these arguments and their long-term implications have been difficult to assess without a concrete implementation of the WSRF specifications. This chapter describes the architectural foundations of WSRF.NET, an implementation of the full set of WSRF and WS-Notification specifications on the Microsoft .NET Framework. To our knowledge, the observations and lessons learned from the design and implementation of WSRF.NET provide the first evaluation of the WSRF approach. A concrete example of the design, implementation and deployment of a WSRF-compliant service and its accompanying WSRF-compliant client is used to guide the discussion. While the potential of WSRF and WS-Notification remains strong, our initial observations are that many challenges remain to be solved, most notably in the programming model implied by the specifications: the complexity of service-side and client-side code and the complexity of WS-Notification.
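The central idea of the WS-Resource pattern can be illustrated with a small sketch: the service logic itself stays stateless, while state lives in "resources" addressed by a key carried in an endpoint reference (EPR), with property access modeled after WSRF's GetResourceProperty/SetResourceProperties operations. All class and method names below are illustrative assumptions, not the actual WSRF.NET API.

```python
# Hypothetical sketch of the WS-Resource pattern: stateless service logic,
# stateful resources looked up by an endpoint-reference (EPR) key.
# Names are illustrative only; they do not reflect the WSRF.NET API.

class Resource:
    """A stateful resource holding WSRF-style resource properties."""
    def __init__(self, properties):
        self.properties = dict(properties)

class ResourceHome:
    """Maps endpoint-reference keys to resource instances."""
    def __init__(self):
        self._resources = {}
        self._next_key = 0

    def create(self, properties):
        key = str(self._next_key)
        self._next_key += 1
        self._resources[key] = Resource(properties)
        return key  # a real WSRF stack would embed this key in an EPR

    def get_property(self, key, name):
        # Analogous to the GetResourceProperty operation
        return self._resources[key].properties[name]

    def set_property(self, key, name, value):
        # Analogous to SetResourceProperties
        self._resources[key].properties[name] = value

home = ResourceHome()
epr_key = home.create({"count": 0})
home.set_property(epr_key, "count", 41)
home.set_property(epr_key, "count", home.get_property(epr_key, "count") + 1)
print(home.get_property(epr_key, "count"))  # 42
```

The sketch shows why the implied programming model is heavier than a plain Web service: every operation must first resolve an EPR key to its resource before any state can be read or written.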

2006 ◽  
Vol 1 (1) ◽  
pp. 46-71 ◽  
Author(s):  
Itsuki Nakabayashi

This treatise outlines developments in disaster management, focusing on earthquake disaster measures taken by the Japanese and Tokyo Metropolitan Governments since the 1980s. The 1978 Large-Scale Earthquake Measures Special Act, on conditions for predicting the Tokai Earthquake, significantly changed the direction of earthquake disaster measures in Japan. The Tokyo Metropolitan Government undertook its own earthquake disaster measures based on lessons learned from the 1964 Niigata Earthquake. In the 1980s, it began planning urban development disaster management programs for upgrading areas with high concentrations of wooden houses - still a major problem in many urban areas of Japan - which are the most vulnerable to earthquake disasters. The 1995 Great Hanshin-Awaji Earthquake in Kobe brought meaningful insight to the earthquake disaster measures of both the Japanese Government and the Tokyo Metropolitan Government, as well as other local governments nationwide. Long-term predictions of possible earthquake occurrence have since been made throughout Japan, and new earthquake disaster measures have been adopted based on these predictions. The Tokyo Metropolitan Government has also completely revised its own earthquake disaster measures. As a review of measures against foreseeable earthquake disasters based on developments in disaster management, this treatise provides invaluable insights, emphasizing the urban earthquake disaster prevention developed in Japan over the last 30 years, that readers are sure to find both interesting and informative in their own work.


2010 ◽  
Vol 26 (1) ◽  
pp. 1-3 ◽  
Author(s):  
William F. Schillinger

Abstract
Many lessons in long-term cropping systems experiments are learned from practical experience. I have conducted large-scale, long-term, multidisciplinary dryland and irrigated cropping systems experiments with numerous colleagues at university and government research stations and in farmers' fields in the USA and in developing countries for 25 years. Several practical lessons learned through the years are outlined in this short commentary. While some of these lessons may be intrinsically obvious, the results of many cropping systems experiments have not been published in scientific journals due to fatal flaws in experimental design, improper transitioning between phases of the experiment and many other reasons. Ongoing active support by stakeholders is critical to maintaining funding for long-term cropping systems studies. Problems and unexpected challenges will occur, but scientists can often parlay these into opportunities for discovery and for testing new hypotheses. Better understanding and advancement of stable, profitable and sustainable cropping systems will be critical for feeding the world's projected 10 billion people by the mid-21st century.


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-16 ◽  
Author(s):  
Yimin Zhou ◽  
Zhifei Li ◽  
Xinyu Wu

In this paper, the effect of the charging behaviours of electric vehicles (EVs) on the grid load is discussed. The residential traveling historical data of EVs are analyzed and fitted to predict their probability distributions, so that models of the traveling patterns can be established. A nonlinear stochastic programming model with a maximized comprehensive index is developed to analyze the charging schemes, and a heuristic search algorithm is used to find the optimal parameter configuration. Comparison of the evaluation criteria shows that the multiobjective strategy is more appropriate for charging than a single-objective strategy based on electricity price alone. Furthermore, considering the characteristics of typical batteries and charging piles, user behaviour and EV scale, a Monte Carlo simulation process is designed to simulate large-scale EV traveling behaviours over long-term periods. The obtained simulation results can provide predictions for analyzing the growth tendency of energy demand under future EV regulation. As a precedent for an open-source simulation system, this paper provides a stand-alone strategy and architecture to regulate EV charging behaviours without unified monitoring or management by the grid.
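The Monte Carlo process described above can be sketched in a few lines: each EV's plug-in hour and daily mileage are drawn from fitted distributions (normal distributions here as placeholder assumptions), and per-hour charging load is accumulated over the fleet. All distribution parameters, charger power and consumption figures below are invented for illustration, not taken from the paper.

```python
import random

def simulate_fleet_load(n_evs=1000, hours=24, seed=42):
    """Aggregate hourly charging load (kW) for a simulated EV fleet."""
    rng = random.Random(seed)
    load = [0.0] * hours          # aggregate load per hour of day
    charger_kw = 7.0              # assumed charging-pile power
    kwh_per_km = 0.15             # assumed energy consumption
    for _ in range(n_evs):
        arrival = int(rng.gauss(18, 2)) % hours   # plug-in hour, peaks ~18:00
        km = max(5.0, rng.gauss(40, 15))          # daily mileage (km)
        energy = km * kwh_per_km                  # energy to recharge (kWh)
        h = arrival
        while energy > 0:                         # charge until battery is full
            step = min(charger_kw, energy)
            load[h % hours] += step
            energy -= step
            h += 1
    return load

load = simulate_fleet_load()
peak_hour = max(range(24), key=lambda h: load[h])
print(f"peak load {load[peak_hour]:.0f} kW at hour {peak_hour}")
```

Uncoordinated plug-in-and-charge behaviour like this concentrates load in the early evening, which is precisely the effect a multiobjective charging strategy would try to flatten.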


2015 ◽  
Vol 3 (1) ◽  
Author(s):  
Jessie Lissenden ◽  
Siri Maley ◽  
Khanjan Mehta

As we develop practical, innovative and sustainable technology solutions for resource-constrained settings, what can we learn from the Appropriate Technology (AT) movement? Based on a review of academic literature over the past 35 years, this article identifies, and chronologically maps, the defining tenets and metrics of success advocated by scholars. The literature has gradually evolved from general musings into concrete lessons learned, while the definitions of “success” have transitioned from laboratory success into practical application and long-term usefulness. Nonetheless, juxtaposing this scholastic history with actual projects reveals three major gaps in AT philosophy related to a lack of 1) bilateral knowledge exchange, 2) emphasis on venture scalability, and 3) integration of implementation strategy through the project lifecycle. This article argues that rethinking and repositioning AT with a human-centric narrative emphasizing sustainability and scalability is imperative in order to revitalize and accelerate the AT movement and to achieve the large-scale impact it was expected to deliver.


Author(s):  
Lynne Siemens ◽  
The INKE Research Group

Many academic teams and granting agencies undergo a process of reflection at the completion of research projects to understand lessons learned and develop best-practice guidelines. Generally completed at the project's end, these reviews focus on the actual research work accomplished, with little discussion of the working relationships and processes involved. As a result, some hard-earned lessons are forgotten or minimized through the passage of time. Additional learning about the nature of collaboration may be gained if this type of reflection occurs during the project's life. Building on earlier examinations of INKE, this paper contributes to that discussion with an exploration of the seventh and final year of a large-scale research project. Implementing New Knowledge Environments (INKE) serves as a case study for this research. Members of the administrative team, researchers, postdoctoral fellows, graduate research assistants, and others are asked about their experiences collaborating within INKE on an annual basis in order to understand the nature of collaboration and the ways it may change over the life of a long-term grant. Interviewees continue to outline benefits of collaboration within INKE while admitting that challenges remain. They also outline several lessons learned which will be applied to the next project.


Author(s):  
Nane Kratzke

The data from social networks like Twitter is a valuable source for research but full of redundancy, making it hard to provide large-scale, self-contained, and small datasets. Data recording is a common task in social media-based studies and could be standardized; sadly, this is hardly done. This paper reports on lessons learned from a long-term evaluation study recording the complete public sample of the German and English Twitter stream. It presents a recording solution proposal that merely chunks a linear stream of events to reduce redundancy. If an event is observed multiple times within the time-span of a chunk, only the latest observation is written to the chunk. A 10 Gigabyte Twitter raw dataset covering 1.2 million Tweets from 120,000 users recorded between June and September 2017 was used to analyze the expectable compression rates. It turned out that the resulting datasets need only between 10% and 20% of the original data size without losing any event, metadata, or the relationships between single events. This kind of redundancy-reduction recording makes it possible to curate large-scale (even nation-wide), self-contained, and small datasets of social networks for research in a standardized and reproducible manner.
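The chunking rule described above - cut the linear event stream into chunks and keep only the latest observation of each event per chunk - can be sketched as follows. Event IDs, payloads and the chunk size are illustrative assumptions, not the paper's actual recording format.

```python
# Sketch of chunked redundancy-reduction recording: within each chunk, a
# later observation of the same event overwrites the earlier one, so no
# event is lost but per-chunk duplicates disappear.

def chunked_record(stream, chunk_size=4):
    """Yield deduplicated chunks; `stream` yields (event_id, payload) pairs."""
    chunk = {}
    for i, (event_id, payload) in enumerate(stream, start=1):
        chunk[event_id] = payload      # latest observation wins
        if i % chunk_size == 0:
            yield list(chunk.items())  # flush chunk; duplicates collapsed
            chunk = {}
    if chunk:                          # flush a final partial chunk
        yield list(chunk.items())

# A tweet observed repeatedly (e.g. via retweets) appears at most once per
# chunk, in its latest observed version.
stream = [("t1", "v1"), ("t2", "v1"), ("t1", "v2"), ("t3", "v1"),
          ("t4", "v1"), ("t1", "v3")]
chunks = list(chunked_record(stream))
print(chunks)
```

Compression then comes for free: the more often an event recurs within a chunk's time-span, the fewer records the chunk holds, while cross-chunk observations still preserve the event's history.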


Author(s):  
Frederik Schulze Spüntrup ◽  
Giancarlo Dalle Ave ◽  
Lars Imsland ◽  
Iiro Harjunkoski

Abstract
Large fleets of engineering assets subject to ongoing degradation pose the challenge of how and when to perform maintenance. For a given case study, this paper proposes a formulation for combined scheduling and planning of maintenance actions. A hierarchical approach and a two-stage approach (with either a uniform or a non-uniform time grid) are considered and compared to each other. The resulting discrete-time linear programming model follows the Resource Task Network framework. Asset deterioration is modeled linearly and tackled with an enumerator-based formulation. Advantages of the model are its computational efficiency, scalability, extensibility and adaptability. The results indicate that combined maintenance planning and scheduling can be solved in reasonable time and with appropriate accuracy. The delivered decision support helps choose the specific maintenance action to perform and proposes when to conduct it. The paper makes a case for the benefits of optimally combining long-term planning and short-term scheduling of industrial-sized problems into one system.
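The structure of the problem - linear asset deterioration on a discrete time grid, with a shared maintenance resource - can be illustrated with a toy sketch. Note this greedy threshold heuristic only mimics the problem shape; the paper's actual model is a Resource Task Network MILP, and every parameter below is invented for illustration.

```python
# Toy discrete-time sketch: each asset degrades linearly per period, and a
# single maintenance crew restores the worst asset whenever its condition
# drops below a threshold. This is a heuristic illustration of the problem
# structure, NOT the paper's Resource Task Network MILP formulation.

def plan_maintenance(n_assets=3, horizon=12, rate=0.15, threshold=0.3):
    condition = [1.0] * n_assets       # 1.0 = as-new, 0.0 = fully degraded
    schedule = []                      # (period, asset) maintenance actions
    for t in range(horizon):
        # linear deterioration of every asset, clamped at zero
        for a in range(n_assets):
            condition[a] = max(0.0, condition[a] - rate)
        # one crew per period: maintain the worst asset if below threshold
        worst = min(range(n_assets), key=lambda a: condition[a])
        if condition[worst] < threshold:
            condition[worst] = 1.0     # maintenance restores full condition
            schedule.append((t, worst))
    return schedule, condition

schedule, final = plan_maintenance()
print(schedule)
```

Even this toy version shows why a combined planning/scheduling model pays off: with identical degradation rates, assets hit the threshold in the same periods and compete for the single crew, so the order of maintenance actions matters.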


1997 ◽  
Vol 12 (2) ◽  
pp. 49-60 ◽  
Author(s):  
Victor S. Koscheyev ◽  
Gloria R. Leon ◽  
Ian A. Greaves

Abstract
Background: This paper examines the considerable medical and psychological problems that ensue after disasters in which massive populations are affected for extended and sometimes unknown periods of time. The discussion of organizing disaster response teams after large-scale disasters is based on experience gained as a medical specialist at Chernobyl immediately after that catastrophe. Optimal ways of dealing with the immediate medical and logistical demands, as well as long-term public health problems, are explored with a particular focus on radiation disasters. Other lessons learned from Chernobyl are explained.
Issues: Current concerns involve the constant threat of a disaster posed by aging nuclear facilities and by nuclear and chemical disarmament activities. The strategies that various groups have used in responding to a disaster and in dealing with medical and psychological health effects at different disaster stages are evaluated. The emergence of specialized centers in the former Soviet Union to study long-term health effects after radiation accidents is described. Worldwide, relatively little attention has been paid to mid- and long-term health effects, particularly psychological stress effects. Problems in conducting longitudinal health research are explored.
Recommendations: The use of a mobile diagnostic and continuously operating prehospital triage system for rapid health screening of large populations at different stages after a large-scale disaster is advisable. The functional systems of the body to be observed at different stages after a radiation disaster are specified. There is a particularly strong need for continued medical and psychosocial evaluation of radiation-exposed populations over an extended time, and for international collaboration among investigators.

