Processing Astronomical Image Mosaic Workflows With An Expert Broker In Cloud Computing

2014 ◽  
Vol 19 (4) ◽  
pp. 5-20 ◽  
Author(s):  
Rocío Pérez De Prado ◽  
Sebastián García-Galán ◽  
José Enrique Munoz Expósito ◽  
Luis Ramón López López ◽  
Rafael Rodríguez Reche

Abstract The Montage image engine is an astronomical tool created by NASA's Earth Science Technology Office to build mosaics of the sky from multiple images of different regions. The associated computational processes involve recalculating the images' geometry, re-projecting rotation and scale, homogenizing the background emission, and combining all images in a standardized format into a final mosaic. These processes are highly computationally demanding and are structured as workflows. A workflow is a set of individual jobs that allows the workload to be parallelized and executed on distributed systems, thereby reducing its completion time. Cloud computing is a distributed computing platform based on the provision of computing resources as services, and it is increasingly required to perform large-scale simulations in many scientific applications. Nevertheless, a computational cloud is a dynamic environment in which resource capabilities can change on the fly depending on network demands, so flexible strategies to distribute the workload among the different resources are necessary. In this work, fuzzy rule-based systems are proposed as local brokers in cloud computing to speed up the execution of Montage workflows. Simulations of the expert broker are conducted using synthetic workflows derived from real systems and considering diverse sets of jobs. Results show that the proposal significantly reduces makespan compared with well-known scheduling strategies for distributed systems and thus offers an efficient way to accelerate the processing of astronomical image mosaic workflows.
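
A minimal sketch of how a fuzzy rule-based local broker could rank cloud resources when dispatching workflow jobs. The membership functions, rule base, resource attributes and job names below are assumptions for illustration, not the paper's actual rule base.

```python
# Illustrative fuzzy rule-based broker (assumed design): each resource is
# scored by fuzzy rules over its current load and CPU capacity, and the next
# workflow job is dispatched to the highest-scoring resource.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def suitability(load, capacity):
    """Fuzzy inference with two example rules, defuzzified as a weighted score."""
    low_load = tri(load, -0.5, 0.0, 0.6)
    high_load = tri(load, 0.4, 1.0, 1.5)
    fast_cpu = tri(capacity, 0.4, 1.0, 1.5)
    slow_cpu = tri(capacity, -0.5, 0.0, 0.6)
    # Rule 1: IF load is low AND cpu is fast THEN suitability is high (1.0)
    r1 = min(low_load, fast_cpu)
    # Rule 2: IF load is high OR cpu is slow THEN suitability is low (0.2)
    r2 = max(high_load, slow_cpu)
    return (r1 * 1.0 + r2 * 0.2) / (r1 + r2 + 1e-9)

def dispatch(job, resources):
    """Send the job to the resource the fuzzy rules rank highest."""
    best = max(resources, key=lambda r: suitability(r["load"], r["capacity"]))
    best["queue"].append(job)
    best["load"] = min(1.0, best["load"] + job["length"] / 100.0)
    return best["name"]

if __name__ == "__main__":
    resources = [
        {"name": "vm-a", "load": 0.7, "capacity": 0.5, "queue": []},
        {"name": "vm-b", "load": 0.2, "capacity": 0.9, "queue": []},
    ]
    for i in range(3):
        job = {"id": f"mProjectPP-{i}", "length": 10}  # hypothetical Montage job
        print(job["id"], "->", dispatch(job, resources))
```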

2020 ◽  
Vol 29 (2) ◽  
pp. 1-24
Author(s):  
Yangguang Li ◽  
Zhen Ming (Jack) Jiang ◽  
Heng Li ◽  
Ahmed E. Hassan ◽  
Cheng He ◽  
...  

2014 ◽  
Vol 687-691 ◽  
pp. 3733-3737
Author(s):  
Dan Wu ◽  
Ming Quan Zhou ◽  
Rong Fang Bie

Massive image processing places high demands on processor and memory, requiring high-performance processors and large-capacity memory; single-core processing and traditional memory cannot satisfy these needs. This paper introduces cloud computing into a massive image processing system. Cloud computing expands the virtual space of the system, saves computing resources, and improves the efficiency of image processing. The system uses a multi-core DSP parallel processor, and a visualization window for parameter setting and result output is developed with VC software. Through simulation we obtain the image processing speed curve and the system's image adaptation curve. The work provides a technical reference for the design of large-scale image processing systems.
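
A toy sketch of the tile-level parallelism such a system relies on, assuming a simple threshold operation as the per-tile workload. This is only an illustration of how the image can be split across parallel workers; the paper's system uses multi-core DSP hardware and a VC-based front end, not Python.

```python
# Tile-level parallel image processing (illustration of the workload split
# only): the image is cut into row bands and each band is processed by a
# separate worker, standing in for cloud/DSP cores.
from multiprocessing import Pool

def process_tile(tile):
    """Placeholder per-tile operation, e.g. a simple threshold."""
    rows, threshold = tile
    return [[255 if px > threshold else 0 for px in row] for row in rows]

def split_rows(image, n_parts):
    """Cut the image into roughly equal bands of rows."""
    step = max(1, len(image) // n_parts)
    return [image[i:i + step] for i in range(0, len(image), step)]

if __name__ == "__main__":
    # Tiny synthetic "image": an 8x8 grid of gray values.
    image = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
    tiles = [(band, 128) for band in split_rows(image, 4)]
    with Pool(4) as pool:
        processed = pool.map(process_tile, tiles)
    result = [row for band in processed for row in band]
    print(len(result), "rows processed")
```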


Author(s):  
TAJ ALAM ◽  
PARITOSH DUBEY ◽  
ANKIT KUMAR

Distributed systems are an efficient means of realizing high-performance computing (HPC). They are used to meet the demand for executing large-scale, high-performance computational jobs. Scheduling tasks on such computational resources is one of the prime concerns in heterogeneous distributed systems. Scheduling jobs on distributed systems is NP-complete in nature, so it requires a heuristic or metaheuristic approach to obtain sub-optimal but acceptable solutions. An adaptive threshold-based scheduler is one such heuristic approach. This work proposes an adaptive threshold-based scheduler for batches of independent jobs (ATSBIJ) with the objective of optimizing the makespan of the jobs submitted for execution on cloud computing systems. ATSBIJ exploits interval estimation to calculate the threshold values used to generate an efficient schedule for the batch. Simulation studies on CloudSim show that the ATSBIJ approach works effectively in realistic scenarios.
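
A minimal sketch of the idea of using interval estimation to set a scheduling threshold for a batch of independent jobs. The statistic chosen (an upper confidence bound on the mean job length) and the short/long assignment policy are assumptions for illustration; the exact ATSBIJ procedure may differ.

```python
# Threshold-based batch scheduling in the spirit of ATSBIJ (assumed details):
# a confidence interval over job lengths yields a threshold separating
# "short" from "long" jobs, which are then placed on slower and faster VMs
# respectively to reduce makespan.
import math
import statistics

def interval_threshold(job_lengths, z=1.96):
    """Upper bound of a z-based confidence interval on the mean job length,
    used here as the short/long cut-off (an assumption for illustration)."""
    mean = statistics.mean(job_lengths)
    sd = statistics.pstdev(job_lengths)
    return mean + z * sd / math.sqrt(len(job_lengths))

def schedule(jobs, vms):
    """Assign long jobs to the faster half of the VMs, short jobs to the rest."""
    threshold = interval_threshold([j["mi"] for j in jobs])
    vms = sorted(vms, key=lambda v: v["mips"], reverse=True)
    plan, finish = [], {v["name"]: 0.0 for v in vms}
    for job in sorted(jobs, key=lambda j: j["mi"], reverse=True):
        pool = vms[: len(vms) // 2] if job["mi"] > threshold else vms[len(vms) // 2:]
        vm = min(pool, key=lambda v: finish[v["name"]])
        finish[vm["name"]] += job["mi"] / vm["mips"]
        plan.append((job["id"], vm["name"]))
    return plan, max(finish.values())  # schedule and resulting makespan

if __name__ == "__main__":
    jobs = [{"id": i, "mi": mi} for i, mi in enumerate([400, 1200, 300, 2500, 800, 950])]
    vms = [{"name": "vm-fast", "mips": 1000}, {"name": "vm-slow", "mips": 400}]
    plan, makespan = schedule(jobs, vms)
    print(plan, round(makespan, 2))
```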


2011 ◽  
Vol 21 (02) ◽  
pp. 133-154 ◽  
Author(s):  
ANNE-CECILE ORGERIE ◽  
LAURENT LEFEVRE

In the age of petascale machines, cloud computing and peer-to-peer systems, large-scale distributed systems need an ever-increasing amount of energy. These systems urgently require effective and scalable solutions to manage and limit their electrical consumption. As of now, most efforts are focused on energy-efficient hardware designs. The challenge is thus to coordinate all these low-level improvements at the middleware level to improve the energy efficiency of the overall system. Resource-management solutions can indeed benefit from a broader view in order to pool resources and share them according to the needs of each user. In this paper, we propose ERIDIS, an Energy-efficient Reservation Infrastructure for large-scale DIstributed Systems. It provides a unified and generic framework to manage resources from Grids, Clouds and dedicated networks in an energy-efficient way.
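
A toy sketch of the core idea behind energy-efficient reservation placement: put each reservation on an already-powered resource when possible so idle machines can stay switched off. This is an illustration only, with made-up machine attributes; ERIDIS itself covers Grids, Clouds and dedicated networks with a much richer reservation model.

```python
# Energy-aware reservation placement (illustration, not the ERIDIS
# algorithms): reservations are packed onto already-powered machines so
# that unused machines can remain off.

def place(reservation, machines):
    """Prefer a powered-on machine with enough free cores; wake one otherwise."""
    start, end, cores = reservation
    candidates = [m for m in machines if m["free"] >= cores]
    # Machines that are already on come first, to avoid switch-on energy.
    candidates.sort(key=lambda m: (not m["on"], -m["free"]))
    if not candidates:
        return None
    chosen = candidates[0]
    chosen["on"] = True
    chosen["free"] -= cores
    chosen["agenda"].append((start, end, cores))
    return chosen["name"]

if __name__ == "__main__":
    machines = [
        {"name": "node-1", "on": True, "free": 4, "agenda": []},
        {"name": "node-2", "on": False, "free": 8, "agenda": []},
    ]
    for res in [(0, 10, 2), (0, 10, 2), (5, 15, 6)]:
        print(res, "->", place(res, machines))
```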


2011 ◽  
Vol 121-126 ◽  
pp. 4023-4027 ◽  
Author(s):  
Guang Ming Li ◽  
Wen Hua Zeng ◽  
Jian Feng Zhao ◽  
Min Liu

Implementation platforms for parallel genetic algorithms (PGAs) include high-performance computers, clusters and Grids. In contrast with these traditional platforms, a master-slave PGA based on MapReduce (MMRPGA) on a cloud computing platform is proposed. Cloud computing is a new computing platform, suited to large-scale computation at low cost. The paper first describes the design of MMRPGA, in which the master controls the whole evolution and the fitness computation is assigned to the slaves; it then derives the theoretical speed-up of MMRPGA; finally, MMRPGA is implemented on Hadoop and its speed-up is compared with a traditional genetic algorithm. The experimental results show that MMRPGA achieves a speed-up slightly below linear in the number of Mappers.
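
A minimal in-memory sketch of the master-slave split described above, with the OneMax problem and multiprocessing workers standing in for Hadoop Mappers. The GA operators and parameters are assumptions for illustration, not the paper's configuration.

```python
# Master-slave parallel GA in the MapReduce style of MMRPGA (illustrative
# only; the paper runs on Hadoop): the master keeps the population and the
# evolution loop, while fitness evaluation, the expensive part, is mapped
# over parallel workers.
import random
from multiprocessing import Pool

def fitness(individual):
    """Example fitness: maximise the number of 1-bits (OneMax)."""
    return sum(individual)

def evolve(pop, fits, mutation_rate=0.05):
    """Master-side step: tournament selection, one-point crossover, mutation."""
    def pick():
        a, b = random.sample(range(len(pop)), 2)
        return pop[a] if fits[a] >= fits[b] else pop[b]
    nxt = []
    while len(nxt) < len(pop):
        p1, p2 = pick(), pick()
        cut = random.randrange(1, len(p1))
        child = p1[:cut] + p2[cut:]
        child = [1 - g if random.random() < mutation_rate else g for g in child]
        nxt.append(child)
    return nxt

if __name__ == "__main__":
    random.seed(0)
    population = [[random.randint(0, 1) for _ in range(32)] for _ in range(40)]
    with Pool(4) as mappers:                          # "Mappers" evaluate fitness
        for gen in range(20):
            fits = mappers.map(fitness, population)   # map phase (slaves)
            population = evolve(population, fits)     # master phase
    print("best fitness:", max(map(fitness, population)))
```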


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Fengxia Li ◽  
Zhi Qu ◽  
Ruiling Li

In recent years, cloud computing technology has matured as it has grown. Hadoop originated from Apache Nutch and is an open-source cloud computing platform characterized by large scale, virtualization, strong stability, strong versatility, and support for scalability. Given the unstructured nature of medical images, it is both necessary and far-reaching to combine content-based medical image retrieval with the Hadoop cloud platform. This study combines research on the impact mechanism of vascular endothelial cells in senile dementia with cloud computing to construct a corresponding cloud-based image-set retrieval platform. The study uses Hadoop's core distributed file system, HDFS, to upload images, stores the images in HDFS and the image feature vectors in HBase, and uses the MapReduce programming model to perform parallel retrieval, with the nodes cooperating with one another. The results show that the proposed method is effective and can be applied to medical research.
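
A minimal sketch of the map/reduce split for content-based similarity search described above, using an in-memory index and made-up feature vectors. The study itself stores images in HDFS and feature vectors in HBase and runs on Hadoop MapReduce; this stand-in only illustrates the parallel distance computation and the top-k reduction.

```python
# MapReduce-style content-based retrieval (illustration only): mappers
# compute the distance between the query feature vector and each stored
# vector; the reduce step keeps the k closest images.
import heapq
import math
from multiprocessing import Pool

QUERY = [0.2, 0.7, 0.1, 0.5]          # assumed query feature vector

def map_distance(record):
    """Map phase: (image_id, feature_vector) -> (distance, image_id)."""
    image_id, vector = record
    return (math.dist(QUERY, vector), image_id)

def reduce_top_k(pairs, k=3):
    """Reduce phase: keep the k most similar images."""
    return heapq.nsmallest(k, pairs)

if __name__ == "__main__":
    index = [                          # stands in for HBase feature rows
        ("img-001", [0.1, 0.8, 0.2, 0.4]),
        ("img-002", [0.9, 0.1, 0.7, 0.3]),
        ("img-003", [0.2, 0.6, 0.1, 0.6]),
        ("img-004", [0.5, 0.5, 0.5, 0.5]),
    ]
    with Pool(2) as pool:
        distances = pool.map(map_distance, index)
    for dist, image_id in reduce_top_k(distances):
        print(image_id, round(dist, 3))
```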


Author(s):  
Wagner Al Alam ◽  
Francisco Carvalho Junior

The efforts to make cloud computing suitable for the requirements of HPC applications have motivated us to design HPC Shelf, a cloud computing platform of services for building and deploying parallel computing systems for large-scale parallel processing. We introduce Alite, the system of contextual contracts of HPC Shelf, aimed at selecting component implementations according to the requirements of applications, the features of the targeted parallel computing platforms (e.g. clusters), QoS (Quality-of-Service) properties and cost restrictions. It is evaluated through a small-scale case study employing a component-based framework for matrix multiplication based on the BLAS library.
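
A toy sketch of contract-based component selection in the spirit described above. The attribute names, candidate components and selection rule (cheapest feasible candidate) are assumptions for illustration, not the Alite resolution algorithm.

```python
# Contextual-contract-style component selection (assumed attributes):
# candidate matrix-multiplication components advertise a target platform,
# a QoS figure and a cost; the cheapest candidate satisfying the
# application contract is chosen.

CANDIDATES = [
    {"name": "blas-cpu",  "platform": "cluster", "gflops": 200,  "cost": 1.0},
    {"name": "blas-gpu",  "platform": "gpu",     "gflops": 2000, "cost": 4.0},
    {"name": "blas-dist", "platform": "cluster", "gflops": 800,  "cost": 2.5},
]

def select(contract, candidates):
    """Keep candidates matching the platform and QoS floor, then minimise cost."""
    feasible = [
        c for c in candidates
        if c["platform"] == contract["platform"]
        and c["gflops"] >= contract["min_gflops"]
        and c["cost"] <= contract["max_cost"]
    ]
    return min(feasible, key=lambda c: c["cost"], default=None)

if __name__ == "__main__":
    contract = {"platform": "cluster", "min_gflops": 500, "max_cost": 3.0}
    chosen = select(contract, CANDIDATES)
    print(chosen["name"] if chosen else "no component satisfies the contract")
```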

