Grid Computing Based Middleware for Labor and Social Security Data Storage Resources Discovery

2011 ◽  
Vol 148-149 ◽  
pp. 1425-1428
Author(s):  
Jian Yu

Labor and social security services, covering social insurance and employment, involve very large groups of people. Labor and social security information departments therefore often combine their systems with computational grids. This paper presents a new kind of data grid middleware for data storage resource discovery and dynamic management in a labor and social security environment. An architecture for grid storage resource discovery and dynamic management is presented that discovers data storage resources across different organizational structures. The middleware realizes the functions necessary for ultra-large-scale applications in a data grid environment, and it could be applied to ultra-large-scale data storage management in next-generation labor and social security grid environments.
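As an illustration of the kind of discovery service such middleware might expose, here is a minimal sketch; the class and method names are hypothetical, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class StorageResource:
    """A storage node advertised by one organizational unit of the grid."""
    org: str        # owning organization (e.g., a provincial bureau)
    host: str
    free_gb: float

class StorageDiscovery:
    """Toy registry: organizations register resources; jobs discover them."""
    def __init__(self):
        self._resources: list[StorageResource] = []

    def register(self, res: StorageResource) -> None:
        self._resources.append(res)

    def unregister(self, host: str) -> None:
        # Dynamic management: drop resources that leave the grid.
        self._resources = [r for r in self._resources if r.host != host]

    def discover(self, min_free_gb: float) -> list[StorageResource]:
        # Return candidates with enough free space, best first.
        hits = [r for r in self._resources if r.free_gb >= min_free_gb]
        return sorted(hits, key=lambda r: r.free_gb, reverse=True)

registry = StorageDiscovery()
registry.register(StorageResource("bureau-a", "node1.example.org", 500.0))
registry.register(StorageResource("bureau-b", "node2.example.org", 120.0))
print([r.host for r in registry.discover(min_free_gb=200.0)])
```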

Author(s):  
Anthony E. Solomonides

Grid computing is a new technology that enhances services already offered by the Internet, providing rapid computation, large-scale data storage, and flexible collaboration by harnessing together the power of a large number of commodity computers or clusters of basic machines. The grid has been used in a number of ambitious medical and healthcare applications. While these have been restricted to the research domain, there is a great deal of interest in real applications. There is some tension between the spirit of the grid paradigm and the requirements of healthcare applications. The grid maximises its flexibility and minimises its overheads by requesting computations to be carried out at the most appropriate node in the network; it stores data at the most convenient node according to performance criteria. A healthcare organization, by contrast, is required to maintain control of its patient data and to be accountable for its use at all times. Despite this apparent conflict, certain characteristics of grids help to resolve the problem: “grid services” may provide a solution by negotiating ethical, legal, and regulatory compliance according to agreed policy.
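To make the idea concrete, a grid service mediating placement might check an agreed policy before any node is used for patient data. The sketch below is hypothetical; none of these names come from the chapter:

```python
# Hypothetical policy check a healthcare "grid service" might apply before
# placing patient data on a node, keeping control with the data owner.
ALLOWED_JURISDICTIONS = {"trust-a": {"UK"}, "trust-b": {"UK", "EU"}}

def may_place(owner: str, node_jurisdiction: str, node_certified: bool) -> bool:
    """Return True only if placement complies with the owner's agreed policy."""
    permitted = ALLOWED_JURISDICTIONS.get(owner, set())
    return node_certified and node_jurisdiction in permitted

def choose_node(owner: str, candidates: list[dict]) -> dict | None:
    # Filter by compliance first, then fall back to performance criteria.
    compliant = [n for n in candidates
                 if may_place(owner, n["jurisdiction"], n["certified"])]
    return min(compliant, key=lambda n: n["latency_ms"], default=None)

nodes = [
    {"host": "n1", "jurisdiction": "US", "certified": True, "latency_ms": 5},
    {"host": "n2", "jurisdiction": "UK", "certified": True, "latency_ms": 12},
]
print(choose_node("trust-a", nodes))  # -> the UK node, despite higher latency
```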


2011 ◽  
Vol 3 (2) ◽  
pp. 44-58 ◽  
Author(s):  
Meriem Meddeber ◽  
Belabbas Yagoubi

A computational grid is a widespread computing environment that provides huge computational power for large-scale distributed applications. One of the most important issues in such an environment is resource management. Task assignment, as a part of resource management, has a considerable effect on grid middleware performance. In grid computing, a task's execution time depends on the machine to which it is assigned, and task precedence constraints are represented by a directed acyclic graph. This paper proposes a hybrid assignment strategy for dependent tasks in grids that integrates static and dynamic assignment techniques. The grid is modeled as a set of clusters, each formed by a set of computing elements and a cluster manager. The main objective is to arrive at a task assignment method that achieves minimum response time and reduces the transfer cost induced by task transfers, while respecting the dependency constraints.
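A minimal sketch of list-scheduling dependent tasks onto clusters follows; the cost model and all names here are illustrative assumptions, not the authors' strategy:

```python
from graphlib import TopologicalSorter

# tasks: name -> (work units, set of predecessor tasks)
tasks = {"t1": (4, set()), "t2": (2, {"t1"}),
         "t3": (6, {"t1"}), "t4": (3, {"t2", "t3"})}
cluster_speed = {"c1": 2.0, "c2": 1.0}  # work units per second
transfer_cost = 1.5                     # penalty when a predecessor ran elsewhere

ready_time = {c: 0.0 for c in cluster_speed}
placement, finish = {}, {}

# Process tasks in an order compatible with the precedence DAG.
for t in TopologicalSorter({k: v[1] for k, v in tasks.items()}).static_order():
    work, preds = tasks[t]
    best = None
    for c, speed in cluster_speed.items():
        # A task may start only after its predecessors' results are available,
        # paying a transfer cost for results produced on another cluster.
        avail = max([finish[p] + (transfer_cost if placement[p] != c else 0.0)
                     for p in preds], default=0.0)
        end = max(avail, ready_time[c]) + work / speed
        if best is None or end < best[0]:
            best = (end, c)
    finish[t], placement[t] = best
    ready_time[best[1]] = best[0]

print(placement, finish)
```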


2020 ◽  
Vol 17 (1) ◽  
pp. 43-63
Author(s):  
A. Sathish ◽  
S. Ravimaran ◽  
S. Jerald Nirmal Kumar

With the rapid developments occurring in cloud computing and services, there has been a growing trend of using the cloud for large-scale data storage. This has led to a major security dispute over data handling. The problem can be addressed by an efficient shielded access on key propagation (ESAKP) technique together with an adaptive optimization algorithm for password generation and a double permutation. Password generation is performed by adaptive ant lion optimization (AALO), which tackles the problem of inefficiency; it achieves stronger security through an efficient selection property, eliminating the worst fit in each iteration. The optimized password is used by an adaptive Vigenère cipher for efficient key generation, where the adaptiveness avoids the dilemma of always choosing the first letter of the alphabet, which in turn reduces computation time and improves security. Additionally, the symmetric key must be encrypted asymmetrically with an Elliptic Curve Diffie-Hellman (EC-DH) algorithm, with a double-stage permutation that produces a scrambled form of the data, adding further security.
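The symmetric stage can be illustrated with a classic Vigenère cipher. This is a toy sketch: the paper's adaptive variant and the AALO-derived password are replaced here by a plain Vigenère cipher and a random key:

```python
import secrets

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Classic Vigenère over A-Z; the key stands in for the optimized password."""
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text):
        t = ALPHABET.index(ch)
        k = ALPHABET.index(key[i % len(key)])
        out.append(ALPHABET[(t + sign * k) % 26])
    return "".join(out)

# Stand-in for AALO: here the "optimized password" is just random letters.
password = "".join(secrets.choice(ALPHABET) for _ in range(8))
cipher = vigenere("STOREDATASECURELY", password)
assert vigenere(cipher, password, decrypt=True) == "STOREDATASECURELY"
print(password, cipher)
```

In the scheme described above, this symmetric key would then itself be protected asymmetrically, e.g. by deriving a shared secret via EC-DH before transmission.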


2015 ◽  
Vol 4 (1) ◽  
pp. 163 ◽  
Author(s):  
Alireza Saleh ◽  
Reza Javidan ◽  
Mohammad Taghi FatehiKhajeh

Nowadays, scientific applications generate huge amounts of data, in terabytes or petabytes. Data grids have been proposed as a solution to large-scale data management problems, including efficient file transfer and replication. Data is typically replicated in a data grid to improve job response time and data availability. Choosing a reasonable number of replicas and the right locations for them has become a challenge in the data grid. In this paper, a four-phase dynamic data replication algorithm based on temporal and geographical locality is proposed. It includes: 1) evaluating and identifying popular data, and triggering a replication operation when a file's popularity passes a dynamic threshold; 2) analyzing and modeling the relationship between system availability and the number of replicas, and calculating a suitable number of new replicas; 3) evaluating and identifying the popular data in each site, and placing replicas among them; 4) removing the files with the least cost of average access time when there is insufficient space for replication. The algorithm was tested using OptorSim, a grid simulator developed by the European DataGrid project. The simulation results show that the proposed algorithm performs better than other algorithms in terms of job execution time, effective network usage, and percentage of storage filled.
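A compressed sketch of the trigger and evict phases is shown below; the threshold and the eviction cost model are simplifying assumptions, not the published algorithm:

```python
from collections import Counter

class ReplicaManager:
    """Toy popularity-triggered replication with cost-based eviction."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store: dict[str, int] = {}   # file -> size
        self.accesses = Counter()         # file -> access count (popularity)

    def record_access(self, f: str) -> None:
        self.accesses[f] += 1

    def dynamic_threshold(self) -> float:
        # Phase 1: a simple dynamic threshold -- the mean access count.
        return sum(self.accesses.values()) / max(len(self.accesses), 1)

    def maybe_replicate(self, f: str, size: int) -> bool:
        if f in self.store or self.accesses[f] <= self.dynamic_threshold():
            return False
        # Phase 4: evict cheapest-to-lose files (popularity as a cost proxy)
        # until the new replica fits.
        while sum(self.store.values()) + size > self.capacity and self.store:
            victim = min(self.store, key=lambda g: self.accesses[g])
            del self.store[victim]
        self.store[f] = size
        return True

mgr = ReplicaManager(capacity=10)
for _ in range(5):
    mgr.record_access("f1")
mgr.record_access("f2")
print(mgr.maybe_replicate("f1", size=4))  # True: f1 is above the mean popularity
```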


Author(s):  
Oshin Sharma ◽  
Anusha S.

Emerging trends in fog computing have increased interest and focus in both industry and academia. Fog computing extends cloud computing facilities like storage, networking, and computation towards the edge of networks, where it offloads the cloud data centres and reduces the latency of providing services to users. The paradigm resembles the cloud in terms of data, storage, application, and computation services, with one fundamental difference: it is decentralized. Furthermore, fog systems can process huge amounts of data locally and can be installed on hardware of different types. These characteristics make fog suitable for time- and location-sensitive applications such as Internet of Things (IoT) devices, which produce large amounts of data. In this chapter, the authors present fog data streaming, its architecture, and various applications.
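The offloading trade-off can be caricatured in a few lines; this is a hypothetical placement rule, not a design from the chapter:

```python
def place_workload(latency_budget_ms: float, data_mb: float,
                   fog_capacity_mb: float, cloud_rtt_ms: float = 80.0) -> str:
    """Send latency-critical, fog-sized work to the edge; the rest to the cloud."""
    if latency_budget_ms < cloud_rtt_ms and data_mb <= fog_capacity_mb:
        return "fog"
    return "cloud"

print(place_workload(latency_budget_ms=20, data_mb=5, fog_capacity_mb=100))     # fog
print(place_workload(latency_budget_ms=500, data_mb=5000, fog_capacity_mb=100)) # cloud
```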


1989 ◽  
Vol 103 (1) ◽  
pp. 165-171 ◽  
Author(s):  
A. W. Hill ◽  
J. A. Leigh

SUMMARY
A simple and reproducible typing system based on the restriction fragment sizes of chromosomal DNA was developed to compare isolates of Streptococcus uberis obtained from the bovine mammary gland. The endonuclease giving the most useful restriction patterns was HindIII, although seven other endonucleases (BglI, EcoRI, NotI, PstI, SfiI, SmaI, XbaI) were also tested in the system. An image analyser was used to obtain a densitometric scan and a graphic display of the restriction patterns. Such a system will allow large-scale data storage for future computer-aided comparison.
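Such computer-aided comparison might, for example, score two isolates by matching restriction-fragment sizes within a tolerance. The following is an illustrative sketch, not the authors' image-analysis system:

```python
def band_similarity(a: list[float], b: list[float], tol: float = 0.05) -> float:
    """Dice-style similarity of two fragment-size patterns (sizes in kb).
    Two bands match if they differ by less than `tol` as a fraction of size."""
    unmatched_b = sorted(b)
    matches = 0
    for size in sorted(a):
        for i, other in enumerate(unmatched_b):
            if abs(size - other) <= tol * size:
                matches += 1
                del unmatched_b[i]   # each band may match at most once
                break
    return 2 * matches / (len(a) + len(b))

isolate1 = [23.1, 9.4, 6.6, 4.4, 2.3, 2.0]
isolate2 = [23.0, 9.5, 6.6, 4.3, 2.0]
print(f"{band_similarity(isolate1, isolate2):.2f}")
```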

