High performance SAR processing on a heterogeneous distributed environment

Author(s):  
F. P. Lovergine ◽  
N. Veneziani
2019 ◽  
Vol 17 (2) ◽  
pp. 207-214
Author(s):  
Raju Bhukya ◽  
Sumit Deshmuk

The indispensable knowledge of deoxyribonucleic acid (DNA) sequences and the sharply falling cost of DNA sequencing techniques have attracted numerous researchers to the field of genetics. These sequences are becoming available at an exponential rate, swelling the size of molecular biology databases and making large disk arrays and compute clusters inevitable for analysis. In this paper, we propose referential DNA data compression using the Hadoop MapReduce framework to process huge amounts of genetic data in a distributed environment on high-performance compute clusters. Our method achieves a better balance between compression ratio and the time required for DNA data compression than other referential DNA data compression methods.
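To make the referential idea concrete, here is a minimal, hypothetical sketch of a Hadoop Streaming mapper in Python. The chunk input format, the greedy match-encoding, and all names are illustrative assumptions, not the method proposed in the paper.

```python
#!/usr/bin/env python3
# Hypothetical Hadoop Streaming mapper sketching referential compression, NOT
# the paper's algorithm: each stdin line is "<chunk_id>\t<dna_chunk>" and the
# mapper greedily re-encodes the chunk as (reference_offset, length) matches
# against a shared reference, falling back to literal bases. In a real job the
# reference would be shipped to every node (e.g. via the -files option).
import sys

REFERENCE = "ACGTACGTTGCAACGTTGCAACGTACGTAACCGGTT"  # toy stand-in reference

def encode(chunk, ref, min_match=4):
    """Greedy referential encoding: longest reference match at each position."""
    out, i = [], 0
    while i < len(chunk):
        best_len, best_pos = 0, -1
        # naive O(n*m) scan; a real compressor would index ref with a suffix
        # array or k-mer hash table to keep the mapper fast
        for j in range(len(ref)):
            k = 0
            while i + k < len(chunk) and j + k < len(ref) and chunk[i + k] == ref[j + k]:
                k += 1
            if k > best_len:
                best_len, best_pos = k, j
        if best_len >= min_match:
            out.append("M%d,%d" % (best_pos, best_len))  # copy from reference
            i += best_len
        else:
            out.append("L" + chunk[i])                   # literal base
            i += 1
    return ";".join(out)

for line in sys.stdin:
    chunk_id, chunk = line.rstrip("\n").split("\t", 1)
    # emit chunk_id as key so a reducer can reassemble chunks in order
    print("%s\t%s" % (chunk_id, encode(chunk, REFERENCE)))
```

Emitting the chunk identifier as the key lets MapReduce shuffle the encoded pieces back into sequence order at the reducer, which is what makes the compression parallelizable across a cluster.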


2021 ◽  
Vol 1 (1) ◽  
pp. 17-30
Author(s):  
Jamal Kh-madhloom

Fog computing is a segment of cloud computing in which a vast number of peripheral devices connect to the Internet. The term "fog" refers to the edges of a cloud, where high performance can be achieved. Many of these devices generate voluminous raw data, for example from sensors, and rather than forwarding all of it to cloud-based servers for processing, the idea behind fog computing is to do as much processing as possible on computing units co-located with the data-generating devices, so that processed rather than raw data is forwarded and bandwidth requirements are reduced. A major advantage of processing locally is that data is often consumed on the same machine that produced it, and the latency between data production and data consumption is reduced. The idea is not entirely original, since specially programmed hardware has long been used for signal processing. The work presents the integration of software-defined networking with a fog environment to obtain deep implementation patterns in the healthcare industry with a higher degree of accuracy.
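As a self-contained illustration of the forward-processed-rather-than-raw idea, the sketch below shows a fog gateway aggregating sensor readings before they leave the edge. The window size, summary fields, and mock vitals sensor are invented for illustration and are not from the paper.

```python
# Minimal fog-gateway sketch: a node co-located with sensors reduces each
# window of raw samples to one compact summary record, so only processed
# data crosses the uplink to the cloud. All names are illustrative.
import random
import statistics
from typing import Iterable, List

def summarize(window: List[float]) -> dict:
    """Reduce a window of raw samples to a compact summary record."""
    return {
        "n": len(window),
        "mean": statistics.fmean(window),
        "min": min(window),
        "max": max(window),
    }

def fog_node(stream: Iterable[float], window_size: int = 100):
    """Consume raw sensor samples; yield one summary per window."""
    window: List[float] = []
    for sample in stream:
        window.append(sample)
        if len(window) == window_size:
            yield summarize(window)  # forwarded upstream instead of 100 samples
            window.clear()

if __name__ == "__main__":
    raw = (random.gauss(36.6, 0.2) for _ in range(1000))  # mock vitals sensor
    for record in fog_node(raw):
        print(record)  # 1000 raw samples become 10 records: ~100x less traffic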


2019 ◽  
Vol 12 (1) ◽  
pp. 29 ◽  
Author(s):  
Maria Daniela Graziano ◽  
Alfredo Renga ◽  
Marco Grasso ◽  
Antonio Moccia

Formation-flying synthetic aperture radar (FF-SAR) enables new working modes and can achieve very high performance with a series of very compact, low-weight satellite platforms, thanks to the passive operation of conveniently distributed formation-flying receivers. System timing is a crucial aspect of FF-SAR. The manuscript presents a novel approach to pulse repetition frequency (PRF) selection aimed at obtaining a uniform distribution of samples at given platform positions. A digital beamforming algorithm is applied to a stack of monostatic repeat-pass images collected by the Sentinel-1 system to test the validity of the PRF selection method. Processed images were selected to achieve the best merit index measuring the quality of the sample distribution. The results show that: (a) the image resulting from beamforming features a better azimuth ambiguity-to-signal ratio, and (b) the proposed approach to PRF selection allows one to identify a subset of the available images leading to a uniform distribution of samples that can be used to support FF-SAR processing.
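The following is a hedged sketch of the sampling-uniformity idea behind PRF selection: for each candidate PRF, merge the along-track sample positions contributed by the distributed receivers and score how uniform the merged set is. The merit index here (worst gap over mean gap), the baseline values, and the half-baseline phase-center approximation are illustrative stand-ins; the paper's actual merit index and geometry are not reproduced.

```python
# Illustrative PRF-selection sketch: score candidate PRFs by the uniformity
# of the merged along-track sample grid from all receivers. Not the paper's
# actual merit index; parameter values are invented for the example.
import numpy as np

def sample_positions(baselines_m, v_sat, prf, n_pulses=64):
    """Along-track phase-center positions contributed by each receiver."""
    t = np.arange(n_pulses) / prf
    # effective phase center of each bistatic pair approximated at half baseline
    return np.sort(np.concatenate([v_sat * t + b / 2.0 for b in baselines_m]))

def uniformity_merit(positions):
    """1.0 = perfectly uniform spacing; larger = worse worst-case gap."""
    gaps = np.diff(positions)
    return gaps.max() / gaps.mean()

def select_prf(baselines_m, v_sat, candidate_prfs):
    """Pick the candidate PRF whose merged samples are most uniform."""
    scores = {prf: uniformity_merit(sample_positions(baselines_m, v_sat, prf))
              for prf in candidate_prfs}
    return min(scores, key=scores.get), scores

if __name__ == "__main__":
    best, scores = select_prf(baselines_m=[0.0, 3.5, 7.8],   # receiver spacing
                              v_sat=7600.0,                  # ~orbital speed, m/s
                              candidate_prfs=np.linspace(1000.0, 2000.0, 51))
    print("best PRF [Hz]:", float(best), "merit:", float(scores[best]))
```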


Author(s):  
Anju Shukla ◽  
Shishir Kumar ◽  
Harikesh Singh

Computational approaches play a significant role in fields such as medical applications, astronomy, and weather science, where they perform complex calculations at speed. Today's personal computers are very powerful but underutilized: most computer resources sit idle as much as 75% of the time, and servers are often unproductive. This motivates distributed computing, whose idea is to use geographically distributed resources to meet the demand for high-performance computing. The Internet enables users to access heterogeneous services and run applications over a distributed environment. Owing to the openness and heterogeneity of distributed computing, developers must deal with several issues, such as load balancing, interoperability, fault occurrence, resource selection, and task scheduling. Load balancing is the mechanism for distributing load among resources optimally. The objective of this chapter is to discuss the need for load balancing and the issues that shape the research scope. Various load balancing algorithms and scheduling methods used for the performance optimization of web resources are analyzed, and a systematic literature review with solutions and limitations is presented. The chapter provides a concise narrative of the problems encountered and directions for future extension.
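As a baseline for the load-balancing objective described above, here is a minimal sketch of greedy least-loaded dispatch, the classic "least connections" heuristic. It is shown only as a reference point; the chapter surveys far more sophisticated algorithms, and the task list and cost values below are invented.

```python
# Greedy least-loaded dispatch: every incoming task goes to the resource
# with the smallest accumulated load. Illustrative baseline only.
import heapq

def balance(tasks, n_resources):
    """Assign (task_id, cost) pairs; return assignment and final loads."""
    heap = [(0.0, r) for r in range(n_resources)]  # (accumulated_load, resource)
    heapq.heapify(heap)
    assignment = {r: [] for r in range(n_resources)}
    for task_id, cost in tasks:
        load, r = heapq.heappop(heap)       # currently least-loaded resource
        assignment[r].append(task_id)
        heapq.heappush(heap, (load + cost, r))
    return assignment, {r: load for load, r in heap}

tasks = [("t1", 4.0), ("t2", 2.0), ("t3", 7.0), ("t4", 1.0), ("t5", 3.0)]
plan, loads = balance(tasks, n_resources=3)
print(plan, loads)  # load spread so no resource sits far above the mean
```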


Author(s):  
Manoj Himmatrao Devare

Scientists, engineers, and researchers need high-performance computing (HPC) services to execute simulations in energy, engineering, the environmental sciences, weather, and the life sciences. A virtual machine (VM)- or Docker-enabled HPC cloud service offers the advantages of consolidation and support for multiple users in a public cloud environment. Adding a hypervisor on top of bare-metal hardware brings challenges, such as computational overhead due to virtualization, especially in an HPC environment. This chapter discusses the challenges, solutions, and opportunities arising from input-output and VMM overheads, interconnection overheads, VM migration problems, and scalability problems in the HPC cloud. It portrays the HPC cloud as a highly complex distributed environment of heterogeneous architectures: different processor architectures and interconnection techniques; shared-memory, distributed-memory, and hybrid architectures; and distributed computing problems such as resilience, scalability, check-pointing, and fault tolerance.


2012 ◽  
Vol 532-533 ◽  
pp. 677-681
Author(s):  
Li Qun Luo ◽  
Si Jin He

The advent of the cloud is drastically changing high-performance computing (HPC) application scenarios. Current virtual machine-based IaaS architectures are not designed for HPC applications. This paper presents a new cloud-oriented storage system that constructs a large-scale memory grid in a distributed environment to support low-latency data access for HPC applications. This Cloud Memory model is built by implementing a private virtual file system (PVFS) on top of a virtual operating system (OS), allowing HPC applications to access data in Cloud Memory the same way they would access local disks.
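To illustrate the shape of such a model, here is a toy sketch of a file-like interface whose blocks actually live in a distributed in-memory grid, so code written against local-disk semantics gets memory-speed access. The node and block layout, hashing scheme, and all names are invented for illustration and do not reproduce the paper's PVFS design.

```python
# Toy "memory grid" sketch: file-style read/write whose 4 KiB blocks are
# sharded across node-local dictionaries (stand-ins for RAM servers) by a
# hash of (path, block index). Illustrative only, not the paper's design.
import hashlib

class MemoryGrid:
    def __init__(self, nodes=4, block_size=4096):
        self.block_size = block_size
        self.nodes = [dict() for _ in range(nodes)]  # mock remote RAM stores

    def _node(self, path, blk):
        """Pick the node holding block 'blk' of 'path' by consistent hashing."""
        h = hashlib.md5(("%s:%d" % (path, blk)).encode()).digest()
        return self.nodes[h[0] % len(self.nodes)]

    def write(self, path, data: bytes):
        """Split data into blocks and scatter them across the grid."""
        for off in range(0, len(data), self.block_size):
            blk = off // self.block_size
            self._node(path, blk)[(path, blk)] = data[off:off + self.block_size]

    def read(self, path) -> bytes:
        """Gather consecutive blocks until one is missing."""
        out, blk = [], 0
        while (chunk := self._node(path, blk).get((path, blk))) is not None:
            out.append(chunk)
            blk += 1
        return b"".join(out)

grid = MemoryGrid()
grid.write("/scratch/field.dat", b"\x00" * 10_000)       # looks like a file write
assert len(grid.read("/scratch/field.dat")) == 10_000    # served from memory
```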


2006 ◽  
Vol 14 (3) ◽  
pp. 157-170
Author(s):  
Aggelos Androulidakis ◽  
Anders Dencker Nielsen ◽  
Andriana Prentza ◽  
Dimitris Koutsouris

1996 ◽  
Vol 30 (3) ◽  
pp. 52-58 ◽  
Author(s):  
Rajmohan Panadiwal ◽  
Andrzej M. Goscinski

2017 ◽  
Vol 31 (19-21) ◽  
pp. 1740050 ◽  
Author(s):  
Wenzheng Zhai ◽  
Yue-Li Hu ◽  
Feng Ran

Efficient task scheduling is critical to achieving high performance in a heterogeneous multi-core computing environment. The paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task priority scheduling list is built, and the processor with the minimum cumulative earliest finish time (EFT) is taken as the target of each task assignment, so that task precedence relationships are satisfied and the total execution time of all tasks is minimized. The experimental results show that the proposed algorithm offers strong optimization ability, simplicity and feasibility, and fast convergence, and can be applied to task scheduling optimization in other heterogeneous and distributed environments.
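Below is a simplified sketch of the list-scheduling substrate such methods build on: tasks are taken in a priority order and each goes to the processor giving the minimum earliest finish time (EFT). The CQPSO layer that searches over priority orderings is not reproduced; the DAG, costs, and names are invented, and communication costs are omitted for brevity.

```python
# EFT list scheduling on a heterogeneous DAG (illustrative baseline, not the
# paper's CQPSO method): assign each task, in priority order, to the processor
# that finishes it earliest, respecting precedence constraints.
def eft_schedule(order, deps, cost, n_procs):
    """Return (task -> processor) mapping and the resulting makespan."""
    proc_free = [0.0] * n_procs          # when each processor becomes free
    finish, where = {}, {}
    for t in order:
        # a task may start only after all its predecessors have finished
        ready = max((finish[d] for d in deps[t]), default=0.0)
        best = min(range(n_procs),
                   key=lambda p: max(proc_free[p], ready) + cost[t][p])
        start = max(proc_free[best], ready)
        finish[t] = start + cost[t][best]
        where[t] = best
        proc_free[best] = finish[t]
    return where, max(finish.values())

# tiny DAG: t0 -> {t1, t2} -> t3, with heterogeneous per-processor costs
deps = {"t0": [], "t1": ["t0"], "t2": ["t0"], "t3": ["t1", "t2"]}
cost = {"t0": [3, 5], "t1": [4, 2], "t2": [6, 3], "t3": [2, 4]}
where, makespan = eft_schedule(["t0", "t1", "t2", "t3"], deps, cost, n_procs=2)
print(where, makespan)  # {'t0': 0, 't1': 1, 't2': 1, 't3': 0} 10.0
```

A PSO-style optimizer would then treat the priority order itself as the search variable and keep the EFT assignment as its inner evaluation loop.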

