Large-Scale Software Testing Environment Using Cloud Computing Technology for Dependable Parallel and Distributed Systems

Author(s):  
Toshihiro Hanawa ◽  
Takayuki Banzai ◽  
Hitoshi Koizumi ◽  
Ryo Kanbayashi ◽  
Takayuki Imada ◽  
...  
Author(s):  
Taj Alam ◽  
Paritosh Dubey ◽  
Ankit Kumar

Distributed systems are an efficient means of realizing high-performance computing (HPC). They are used to meet the demand for executing large-scale high-performance computational jobs. Scheduling tasks on such computational resources is one of the prime concerns in heterogeneous distributed systems. Scheduling jobs on distributed systems is NP-complete in nature, so it requires a heuristic or metaheuristic approach to obtain sub-optimal but acceptable solutions. An adaptive threshold-based scheduler is one such heuristic approach. This work proposes an adaptive threshold-based scheduler for batches of independent jobs (ATSBIJ) with the objective of optimizing the makespan of the jobs submitted for execution on cloud computing systems. ATSBIJ exploits interval estimation to calculate the threshold values used to generate an efficient schedule for the batch. Simulation studies on CloudSim confirm that the ATSBIJ approach works effectively in real-life scenarios.
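The abstract does not give ATSBIJ's threshold formula or placement rule, so the following is only a minimal Python sketch of the idea: a confidence interval on job lengths supplies a threshold that separates "long" jobs from the rest, and jobs are then placed greedily on the VM that finishes them earliest. All names and the LPT-style greedy rule are assumptions for illustration, not the paper's algorithm.

```python
import math
import statistics

def interval_thresholds(job_lengths, z=1.96):
    # Confidence-interval bounds on the mean job length (hypothetical
    # form; the paper's exact threshold formula is not in the abstract).
    # Requires at least two jobs so the sample stdev is defined.
    mean = statistics.mean(job_lengths)
    half = z * statistics.stdev(job_lengths) / math.sqrt(len(job_lengths))
    return mean - half, mean + half

def schedule_batch(job_lengths, vm_speeds):
    # Jobs above the upper threshold are treated as "long" and placed
    # first; every job goes to the VM that would finish it earliest
    # (an LPT-style greedy heuristic, not the paper's exact rule).
    _, hi = interval_thresholds(job_lengths)
    long_jobs = sorted((j for j in job_lengths if j > hi), reverse=True)
    short_jobs = sorted((j for j in job_lengths if j <= hi), reverse=True)
    finish = [0.0] * len(vm_speeds)
    plan = []
    for length in long_jobs + short_jobs:
        best = min(range(len(vm_speeds)),
                   key=lambda v: finish[v] + length / vm_speeds[v])
        finish[best] += length / vm_speeds[best]
        plan.append((length, best))
    return plan, max(finish)  # schedule and its makespan
```

For example, `schedule_batch([10, 2, 2, 2], [2, 1])` isolates the length-10 job as "long", assigns it to the fast VM first, and reports the resulting makespan.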


2014 ◽  
Vol 19 (4) ◽  
pp. 5-20 ◽  
Author(s):  
Rocío Pérez De Prado ◽  
Sebastián García-Galán ◽  
José Enrique Muñoz Expósito ◽  
Luis Ramón López López ◽  
Rafael Rodríguez Reche

Abstract The Montage image engine is an astronomical tool created by NASA's Earth Sciences Technology Office to obtain mosaics of the sky by processing multiple images from diverse regions. The associated computational processes involve recalculating the image geometry, re-projecting the rotation and scale, homogenizing the background emission, and combining all images in a standardized format to produce a final mosaic. These processes are computationally demanding and are structured in the form of workflows. A workflow is a set of individual jobs that allows the workload to be parallelized and executed on distributed systems, thus reducing its completion time. Cloud computing is a distributed computing platform based on the provision of computing resources in the form of services, and it is increasingly required to perform large-scale simulations in many scientific applications. Nevertheless, a computational cloud is a dynamic environment where resource capabilities can change on the fly depending on network demands. Therefore, flexible strategies for distributing the workload among the different resources are necessary. In this work, fuzzy rule-based systems are proposed as local brokers in cloud computing to speed up the execution of Montage workflows. Simulations of the expert broker are conducted using synthetic workflows obtained from real systems and considering diverse sets of jobs. Results show that the proposal significantly reduces makespan in comparison to well-known scheduling strategies in distributed systems, and in this way offers an efficient solution for accelerating the processing of astronomical image mosaic workflows.
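The paper's actual rule base is not reproduced in the abstract. As a rough illustration of what a fuzzy rule-based broker does, the sketch below scores each resource with two Mamdani-style rules and picks the best-scoring one; the membership functions, rules, and centroids are invented for illustration only.

```python
def tri(x, a, b, c):
    # Triangular membership function with peak at b (assumes a < b < c).
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def suitability(load, speed):
    # Tiny Mamdani-style rule base (illustrative, not the paper's rules):
    #   R1: IF load is low  AND speed is high THEN suitability is high
    #   R2: IF load is high OR  speed is low  THEN suitability is low
    low_load = tri(load, -0.5, 0.0, 0.6)
    high_load = tri(load, 0.4, 1.0, 1.5)
    high_speed = tri(speed, 0.4, 1.0, 1.5)
    low_speed = tri(speed, -0.5, 0.0, 0.6)
    r1 = min(low_load, high_speed)   # firing strength of "high" (centroid 1.0)
    r2 = max(high_load, low_speed)   # firing strength of "low"  (centroid 0.0)
    den = r1 + r2
    return (r1 * 1.0 + r2 * 0.0) / den if den else 0.5  # weighted-average defuzzification

def pick_resource(resources):
    # Broker dispatches the next workflow job to the most suitable resource.
    return max(resources, key=lambda r: suitability(r["load"], r["speed"]))
```

An idle, fast resource scores near 1.0 and a saturated, slow one near 0.0, so the broker naturally shifts workflow jobs as load and capacity drift, which is the flexibility the dynamic cloud setting calls for.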


2014 ◽  
Vol 1070-1072 ◽  
pp. 759-764
Author(s):  
Yu Jia Li ◽  
Qing Bo Yang ◽  
Jing Hua ◽  
Fang Chun Di ◽  
Li Xin Li

Problems such as the high cost of building a testing environment, a low degree of automation, and low rates of resource utilization are commonly encountered during software testing of dispatching automation master systems. Cloud computing technologies were introduced to solve these problems. Test methods based on virtualization and other key technologies were studied, and a testing platform comprising a user management module, a testing resource management module, a testing management module, and a man-machine interaction interface module was built. A method composed of three testing modes was presented, which can be usefully applied to the static and dynamic testing of dispatching automation master systems. Future research directions are outlined at the end of the paper.


2011 ◽  
Vol 21 (02) ◽  
pp. 133-154 ◽  
Author(s):  
Anne-Cécile Orgerie ◽  
Laurent Lefèvre

In the age of petascale machines, cloud computing and peer-to-peer systems, large-scale distributed systems need an ever-increasing amount of energy. These systems urgently require effective and scalable solutions to manage and limit their electrical consumption. As of now, most efforts are focused on energy-efficient hardware designs. Thus, the challenge is to coordinate all these low-level improvements at the middleware level to improve the energy efficiency of the overall system. Resource-management solutions can indeed benefit from a broader view to pool resources and share them according to the needs of each user. In this paper, we propose ERIDIS, an Energy-efficient Reservation Infrastructure for large-scale DIstributed Systems. It provides a unified and generic framework to manage resources from Grids, Clouds and dedicated networks in an energy-efficient way.


Author(s):  
Toshihiro Hanawa ◽  
Mitsuhisa Sato

Various information systems are widely used in the information society era, and the demand for highly dependable systems is increasing year after year. However, software testing for such systems becomes more difficult due to their growing size and complexity. In particular, it is often difficult to test parallel and distributed systems in the real world after deployment, even though reliable systems, such as high-availability servers, are parallel and distributed systems. To solve these problems, the authors propose a software testing environment for dependable parallel and distributed systems using cloud computing technology, named D-Cloud. D-Cloud consists of cloud management software in the role of resource management, together with many virtual machine monitors with a fault injection facility to simulate hardware faults. In addition, D-Cloud introduces a scenario manager, which performs a number of different tests automatically. Currently, D-Cloud is realized using Eucalyptus as the cloud management software. Furthermore, the authors introduce FaultVM, based on QEMU, as the virtualization software, and the D-Cloud frontend, which interprets test scenarios, constructs the test environment, and dispatches commands. D-Cloud enables automation of the system configuration and the test procedure, as well as performing a number of test cases simultaneously and emulating hardware faults flexibly. This chapter presents the concept and design of D-Cloud and describes how to specify the system configuration and the test scenario. Furthermore, a preliminary example of software testing using D-Cloud is presented. As a result, the authors show that D-Cloud allows easy setup of the test environment and simplifies software testing for distributed systems.
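D-Cloud's actual scenario language and APIs are not shown here, so the sketch below is only a schematic Python rendering of the described flow: a scenario manager replays timed fault events against guest VMs and then collects the test result. The event fields and callbacks are hypothetical stand-ins for the roles that the scenario manager, the FaultVM/QEMU fault injector, and the frontend play in D-Cloud.

```python
from dataclasses import dataclass

@dataclass
class FaultEvent:
    at_sec: float   # injection time relative to test start
    target: str     # guest VM name
    fault: str      # fault label, e.g. "disk-error" (illustrative, not D-Cloud syntax)

def run_scenario(events, inject, run_test):
    # Replay fault events in time order, then run the test workload and
    # return its verdict. `inject` and `run_test` are callbacks supplied
    # by the harness; real fault injection would drive the VMM here.
    for event in sorted(events, key=lambda e: e.at_sec):
        inject(event)
    return run_test()
```

Expressing a test as data this way is what lets many scenario variants (different faults, targets, and timings) be generated and dispatched automatically over a pool of virtual machines.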


2013 ◽  
Vol 385-386 ◽  
pp. 1708-1712
Author(s):  
Xiao Ping Jiang ◽  
Teng Jiang ◽  
Tao Zhang ◽  
Cheng Hua Li

By combining the LVS cluster architecture and cloud computing technology, a system architecture for a cloud computing service platform is proposed. Cloud computing technology is well suited to supporting large-scale applications with flash crowds, since it provides elastic amounts of bandwidth, storage, and other resources. However, the traditional load balancing algorithms provided by LVS are unsuitable for the proposed service platform, because these algorithms are designed for the static server resources provided by traditional cluster technology. Taking both the overall utilization rate of server resources and the active connections of each server into account, an adaptive, adjustable load balancing algorithm (the Least Comprehensive Utilization and Connection Scheduling algorithm, called LUCU) is proposed in this paper. According to the utilization of cloud resources and user demand, automatic switching between the Round Robin (RR) algorithm and the LUCU algorithm is achieved: when the cloud's capacity cannot meet the instantaneous demand, LUCU is chosen instead of RR. The proposed platform and algorithm are verified and evaluated using large-scale simulation experiments. The test results show that load equilibrium is nearly achieved by adopting the proposed algorithm.
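The abstract names the RR/LUCU switching scheme but not its exact score weights or trigger, so the sketch below assumes an equal weighting of utilization and connection ratio and a simple average-utilization threshold; both are assumptions for illustration, not the paper's formula.

```python
import itertools

def lucu_pick(servers):
    # Least Comprehensive Utilization and Connections: favor the server
    # with the lowest combined score (equal weights are an assumption).
    return min(servers, key=lambda s: 0.5 * s["util"] + 0.5 * s["conn_ratio"])

class AdaptiveBalancer:
    # Switch from Round Robin to LUCU once average utilization suggests
    # the cloud can no longer absorb demand uniformly (hypothetical trigger).
    def __init__(self, servers, overload=0.7):
        self.servers = servers
        self.overload = overload
        self.rr = itertools.cycle(servers)

    def pick(self):
        avg_util = sum(s["util"] for s in self.servers) / len(self.servers)
        if avg_util < self.overload:
            return next(self.rr)        # light load: plain Round Robin
        return lucu_pick(self.servers)  # under pressure: utilization-aware pick
```

Under light load this degenerates to RR's even spread; under pressure it steers new connections toward the least-loaded, least-connected server, which is the adaptive behavior the paper evaluates.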


2020 ◽  
Vol 17 (9) ◽  
pp. 4411-4418
Author(s):  
S. Jagannatha ◽  
B. N. Tulasimala

In the world of information communication technology (ICT), the term cloud computing has been the buzzword. The definition of cloud computing keeps shifting with the way technocrats use it in different environments, and as a definition it remains very contentious: definitions are usually tied to a particular application, and with no unanimous definition the term stays altogether elusive. In spite of this, it is this technology that is revolutionizing the traditional use of computer hardware, software, data storage media, and processing mechanisms, with many benefits to the stakeholders. In the past, the use of autonomous computers and interconnected nodes forming computer networks with shared software resources minimized the cost of hardware, and to a certain extent of software as well. Evolutionary changes in computing technology over a few decades have thus brought platform and environment changes in machine architecture, operating systems, network connectivity, and application workloads, making the commercial use of the technology more predominant. Instead of centralized systems, parallel and distributed systems are increasingly preferred for solving computational problems in the business domain. Such hardware is ideal for solving large-scale problems over the Internet, and this computing model is data-intensive and network-centric. Most organizations using ICT have found storing huge volumes of data, maintaining and processing it, and communicating over the Internet to automate the entire process to be a challenge. In this paper we explore the growth of cloud computing technology over several years: how high-performance computing systems and high-throughput computing systems enhance computational performance, and how cloud computing technology, according to various experts, the scientific community, and the service providers, is going to be more cost-effective across different dimensions of the business.

