The Parallel Problems Server: A Client-Server Model for Interactive Large Scale Scientific Computation

Author(s):  
Parry Husbands ◽  
Charles Isbell
Author(s):  
Steve Sawyer ◽  
William Gibbons

This teaching case describes the efforts of one department in a large organization to migrate from an internally developed, mainframe-based computing system to a system based on purchased software running on a client/server architecture. The case highlights issues with large-scale software implementations such as those demanded by enterprise resource planning (ERP) installations. Often, the ERP selected by an organization does not have all the required functionality, which demands purchasing and installing additional packages (known colloquially as “bolt-ons”) to provide the needed functionality. These implementations lead to issues regarding oversight of the technical architecture, both project and technology governance, and the user department's capability for managing the installation of new systems.


2008 ◽  
Vol 41 (5) ◽  
pp. 913-917 ◽  
Author(s):  
A. R. Round ◽  
D. Franke ◽  
S. Moritz ◽  
R. Huchler ◽  
M. Fritsche ◽  
...  

There is rapidly increasing interest in the use of synchrotron small-angle X-ray scattering (SAXS) for large-scale studies of biological macromolecules in solution, and this requires an adequate means of automating the experiment. A prototype of an automated sample changer for solution SAXS has been developed, in which the solutions are kept in thermostatically controlled well plates, allowing operation with up to 192 samples. The measuring protocol involves controlled loading of protein solutions and matching buffers, followed by cleaning and drying of the cell between measurements. The system was installed and tested at the X33 beamline of the EMBL, at the storage ring DORIS-III (DESY, Hamburg), where it was used by over 50 external groups during 2007. At X33, a throughput of approximately 12 samples per hour was observed, with a sample-loading failure rate of less than 0.5%. Feedback from users indicates that the ease of use and reliability of operation at the beamline were greatly improved compared with the manual filling mode. The changer is controlled both locally and remotely via a client–server-based network protocol. During the testing phase, the changer was operated in an attended mode to assess its reliability and convenience. Full integration with the beamline control software, allowing automated data collection of all samples loaded into the machine with remote control by the user, is presently being implemented. The approach reported is not limited to synchrotron-based SAXS but can also be used on laboratory and neutron sources.
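The measuring protocol above (load a sample from a numbered well, measure, then clean and dry the cell) can be pictured as a simple command sequence a client might send to the changer's control server. The command names, the `SampleChanger` class, and its behavior below are invented for illustration; they are not the actual X33 control protocol.

```python
# Hedged sketch of a sample-changer command protocol: a server-side handler
# accepts text commands ("LOAD <well>", "MEASURE", "CLEAN", "DRY") and
# validates well numbers against the 192-sample capacity mentioned above.
# All names are illustrative assumptions, not the real beamline API.

class SampleChanger:
    def __init__(self, wells=192):
        self.wells = wells        # thermostatted well-plate capacity
        self.history = []         # executed operations, for auditing

    def handle(self, command):
        op, *args = command.split()
        if op == "LOAD":
            well = int(args[0])
            if not 1 <= well <= self.wells:
                return "ERR bad well"
            self.history.append(("LOAD", well))
            return "OK"
        if op in ("MEASURE", "CLEAN", "DRY"):
            self.history.append((op,))
            return "OK"
        return "ERR unknown"


changer = SampleChanger()
replies = [changer.handle(c) for c in
           ["LOAD 5", "MEASURE", "CLEAN", "DRY", "LOAD 500"]]
```

A remote client would send the same commands over the network protocol; the attended-mode testing described above corresponds to an operator reviewing `history` between runs.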


1986 ◽  
Vol 46 (174) ◽  
pp. 766 ◽  
Author(s):  
L. B. W. ◽  
Seymour V. Parter

2003 ◽  
Vol 12 (04) ◽  
pp. 411-440 ◽  
Author(s):  
Roberto Silveira Silva Filho ◽  
Jacques Wainer ◽  
Edmundo R. M. Madeira

Standard workflow management systems are usually designed as client-server systems. The central server is responsible for coordinating workflow execution and, in some cases, may manage the activities database. This centralized control architecture may represent a single point of failure, compromising the availability of the system. We propose a fully distributed and configurable architecture for workflow management systems. It is based on the idea that the activities of a case (an instance of the process) migrate from host to host, executing the workflow tasks while following a process plan. This core architecture is augmented with other distributed components so that requirements for workflow management systems beyond scalability are also addressed. The components of the architecture were tested in different distributed and centralized configurations. The ability to configure the location of components and the use of dynamic allocation of tasks proved effective for implementing load balancing policies.
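The migrating-case idea above can be sketched minimally: a case carries its own process plan and state and visits one host after another, so no central coordinator is needed. In this toy sketch, hosts are plain Python callables standing in for networked nodes; the `Case` class, `make_host`, and all field names are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a migrating workflow case: the instance executes its
# ordered process plan by "visiting" each host in turn, carrying its state
# with it instead of reporting back to a central server.

from dataclasses import dataclass, field


@dataclass
class Case:
    """A workflow instance that migrates along its process plan."""
    plan: list                                  # ordered (host, task) pairs
    state: dict = field(default_factory=dict)   # data carried between hosts
    log: list = field(default_factory=list)     # visited (host, task) trail

    def run(self, hosts):
        for host_name, task in self.plan:
            handler = hosts[host_name]          # "migrate" to the next host
            self.state = handler(task, self.state)
            self.log.append((host_name, task))
        return self.state


def make_host(name):
    def handle(task, state):
        # Each host executes the task locally and returns updated state,
        # so there is no single point of failure at a central server.
        state = dict(state)
        state[task] = f"done@{name}"
        return state
    return handle


hosts = {"A": make_host("A"), "B": make_host("B")}
case = Case(plan=[("A", "review"), ("B", "approve")])
final_state = case.run(hosts)
```

Because the plan travels with the case, relocating a component is just a change to the `hosts` mapping, which is the kind of configurability the load-balancing experiments above exploit.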


1999 ◽  
Vol 9 (3) ◽  
pp. 277-281 ◽  
Author(s):  
Jeremy D. Parsons ◽  
Eugen Buehler ◽  
LaDeana Hillier

DNA sequence chromatograms (traces) are the primary data source for all large-scale genomic and expressed sequence tag (EST) sequencing projects. Access to the sequencing trace assists many later analyses, for example contig assembly and polymorphism detection, but obtaining and using traces is problematic. Traces are not collected and published centrally, they are much larger than the base calls derived from them, and viewing them requires the interactivity of a local graphical client with local data. To provide efficient global access to DNA traces, we developed a client/server system based on flexible Java components integrated into other applications, including an applet for use in a WWW browser and a stand-alone trace viewer. Client/server interaction is facilitated by CORBA middleware, which provides a well-defined interface, a naming service, and location independence. [The software is packaged as a Jar file available from the following URL: http://www.ebi.ac.uk/~jparsons. Links to working examples of the trace viewers can be found at http://corba.ebi.ac.uk/EST. All the Washington University mouse EST traces are available for browsing at the same URL.]


Author(s):  
Jeffrey L. Adler ◽  
Eknauth Persaud

One of the greatest challenges in building an expert system is obtaining, representing, and programming the knowledge base. As the size and scope of the problem domain increase, knowledge acquisition and knowledge engineering become more challenging. Methods for knowledge acquisition and engineering for large-scale projects are investigated in this paper. The objective is to provide new insights into how knowledge engineers play a role in defining the scope and purpose of expert systems, and how traditional knowledge acquisition and engineering methods might be recast in cases where the expert system is a component within a larger-scale client-server application targeting multiple users.


1994 ◽  
Vol 3 (3) ◽  
pp. 201-225 ◽  
Author(s):  
Can Özturan ◽  
Balaram Sinharoy ◽  
Boleslaw K. Szymanski

There is a need for compiler technology that, given the source program, will generate efficient parallel code for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering, yet its use is limited by the high cost of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable, architecture-independent software for scientific computation based on our experience with the equational programming language (EPL). Our approach is based on program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.
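The idea of user annotations guiding program decomposition can be reduced to a toy sketch (deliberately not EPL): an "annotation" attached to a function tells a tiny scheduler how to block-partition a data-parallel loop across workers. The `annotate` decorator, `run_decomposed`, and the `"block"` hint are all invented for illustration under that assumption.

```python
# Illustrative sketch of annotation-guided decomposition: the annotation
# plays the role of the source-program hints described above, and the
# scheduler plays the role of the compiler-generated run-time support.

from concurrent.futures import ThreadPoolExecutor


def annotate(partition):
    """Attach a decomposition hint to a function, mimicking a source annotation."""
    def wrap(fn):
        fn.partition = partition
        return fn
    return wrap


@annotate(partition="block")
def square_all(chunk):
    return [x * x for x in chunk]


def run_decomposed(fn, data, workers=4):
    if getattr(fn, "partition", None) == "block":
        # Block decomposition: contiguous chunks, one per worker.
        size = -(-len(data) // workers)      # ceiling division
        chunks = [data[i:i + size] for i in range(0, len(data), size)]
    else:
        chunks = [data]                      # no hint: run sequentially
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(fn, chunks)         # results come back in order
    return [y for part in parts for y in part]


result = run_decomposed(square_all, list(range(10)))
```

A real system would, as the abstract notes, also redistribute data at run time and optimize placement; this sketch only shows the annotation-to-decomposition step.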


1977 ◽  
Vol 1 (2) ◽  
pp. 85-90 ◽  
Author(s):  
Henry F. Schaefer ◽  
William H. Miller

Author(s):  
Valentin Cristea ◽  
Ciprian Dobre ◽  
Corina Stratan ◽  
Florin Pop

This chapter introduces the macroscopic views on distributed systems’ components and their inter-relations. The importance of architecture for understanding, designing, implementing, and maintaining distributed systems is presented first. Then the currently used architectures and their derivatives are analyzed. The presentation covers client-server (with details about multi-tiered, REST, remote evaluation, and code-on-demand architectures), hierarchical (with insights into the protocol-oriented Grid architecture), service-oriented architectures including OGSA (Open Grid Services Architecture), cloud, cluster, and peer-to-peer (with its versions: hierarchical, decentralized, distributed, and event-based integration architectures). Due to the relation between architecture and the application categories supported, the chapter’s structure is similar to that of Chapter 1; nevertheless, the focus is different. In the current chapter, the model, advantages, disadvantages, and areas of applicability are presented for each architecture. The chapter also includes concrete cases of use (namely actual distributed systems and platforms) and clarifies the relation between an architecture and the enabling technology used in its instantiation. Finally, Chapter 2 frames the discussion in the other chapters, which refer to specific components and services for large-scale distributed systems.
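The client-server model the chapter starts from can be shown as a minimal, self-contained interaction: one server thread answers requests over a local TCP socket. This is purely illustrative; real systems add the tiers, naming, discovery, and fault handling the chapter goes on to discuss.

```python
# Minimal client-server sketch: a server thread echoes one request over a
# loopback TCP socket bound to an OS-assigned free port.

import socket
import threading


def serve_once(sock):
    conn, _ = sock.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"echo:{request}".encode())   # trivial "service"


server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))                      # port 0 = pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"ping")
reply = client.recv(1024).decode()
client.close()
t.join()
server.close()
```

Every richer architecture in the chapter (multi-tiered, peer-to-peer, service-oriented) can be read as a variation on where this request/response responsibility is placed and replicated.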


