MapReduce framework based gridlet allocation technique in computational grid

2021 ◽  
Vol 92 ◽  
pp. 107131
Author(s):  
Rajeswari D. ◽  
Prakash M. ◽  
Ramamoorthy S. ◽  
Sudhakar S.
Author(s):  
Fedor Gippius ◽  
Stanislav Myslenkov ◽  
Elena Stoliarova ◽  
...  

This study focuses on the changes and typical features of the wind wave climate in the coastal waters of the Black Sea from 1979 to the present. Wind wave parameters were calculated with the third-generation numerical spectral wind wave model SWAN, which is widely used on various spatial scales, in both coastal waters and open seas. Wind speed and direction data from the NCEP CFSR reanalysis were used as forcing. The computations were performed on an unstructured computational grid whose cell size depends on the distance from the shoreline. The modeling results were applied to evaluate the main wind wave characteristics in various coastal areas of the sea.


2013 ◽  
Vol 311 ◽  
pp. 158-163 ◽  
Author(s):  
Li Qin Huang ◽  
Li Qun Lin ◽  
Yan Huang Liu

The MapReduce framework of cloud computing provides an effective way to achieve massive text categorization. In this paper, a distributed parallel text training algorithm based on multi-class Support Vector Machines (SVM) is designed for the cloud computing environment. Map tasks distribute the various classes of samples, and Reduce tasks perform the actual SVM training. Experimental results show that the execution time of text training decreases as the number of Reduce tasks increases. A parallel text classifier based on cloud computing, which classifies texts of unknown type, is also designed and implemented. Experimental results show that the speed of text classification increases as the number of Map tasks increases.
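The map/reduce split described in this abstract can be sketched roughly as follows. This is a minimal, self-contained illustration, not the paper's implementation: a pure-Python Pegasos-style linear SVM stands in for the SVM trainer, map tasks emit one (class, signed sample) record per one-vs-rest model, and each reduce task trains one binary model; all function names and hyperparameters are hypothetical.

```python
import random
from collections import defaultdict

def map_task(samples, classes):
    # Map: for each labelled sample, emit one (class, (features, sign))
    # record per one-vs-rest model, so each reducer sees the full
    # dataset relabelled for its own binary problem.
    for label, x in samples:
        for c in classes:
            yield c, (x, 1.0 if label == c else -1.0)

def shuffle_stage(mapped):
    # Shuffle: group mapped records by class key, one group per reducer.
    groups = defaultdict(list)
    for c, record in mapped:
        groups[c].append(record)
    return groups

def reduce_task(data, epochs=50, lr=0.1, lam=0.01, seed=0):
    # Reduce: train one binary linear SVM by Pegasos-style
    # subgradient descent on the regularised hinge loss.
    rng = random.Random(seed)
    dim = len(data[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1.0:  # hinge loss active: step toward y*x
                w = [wi - lr * (lam * wi - y * xi) for wi, xi in zip(w, x)]
                b += lr * y
            else:             # only the L2 regulariser applies
                w = [wi - lr * lam * wi for wi in w]
    return w, b

def classify(models, x):
    # One-vs-rest decision: the class whose SVM yields the
    # largest decision value wins.
    return max(models, key=lambda c: sum(wi * xi for wi, xi
               in zip(models[c][0], x)) + models[c][1])

# Toy two-class corpus of 2-D feature vectors (hypothetical data).
samples = [("a", [2.0, 0.0]), ("a", [3.0, 0.5]),
           ("b", [0.0, 2.0]), ("b", [0.5, 3.0])]
groups = shuffle_stage(map_task(samples, ["a", "b"]))
models = {c: reduce_task(d) for c, d in groups.items()}
```

Because each reducer trains one independent one-vs-rest model, adding Reduce tasks parallelises training across classes, which is consistent with the reported drop in training time as Reduce tasks increase.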


2021 ◽  
pp. 016555152110137
Author(s):  
N.R. Gladiss Merlin ◽  
Vigilson Prem. M

Large and complex data have become a valuable resource in biomedical discovery, greatly expanding the scientific resources available for retrieving helpful information. However, indexing and retrieving patient information from disparate sources of big data is challenging in biomedical research. Here, indexing and retrieval of patient information from big data are performed using the MapReduce framework. In this research, indexing and retrieval are carried out using the proposed Jaya-Sine Cosine Algorithm (Jaya–SCA)-based MapReduce framework. Initially, the input big data are forwarded to the mappers at random. The average of each mapper's data is calculated, and these averages are forwarded to the reducer, where the representative data are stored. For each user query, the query is first matched against the reducer and then switches over to the corresponding mapper to retrieve the best matching result. This bi-level matching retrieves data from the mapper based on the distance between the query and the stored data. The similarity measure is computed using the parametric-enabled similarity measure (PESM), cosine similarity and the proposed Jaya–SCA, which integrates the Jaya algorithm and the SCA. The proposed Jaya–SCA algorithm attained maximum F-measure, recall and precision values of 0.5323, 0.4400 and 0.6867, respectively, on the StatLog Heart Disease dataset.
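The indexing and bi-level retrieval scheme described in this abstract can be sketched as follows. This is an assumed, simplified illustration, not the authors' method: only plain cosine similarity is used (the PESM and the Jaya–SCA optimisation are omitted), and all function names, the partition count and the toy records are hypothetical.

```python
import math
import random

def cosine(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def build_index(records, num_mappers, seed=0):
    # Map: forward each record to a randomly chosen mapper,
    # as in the random distribution step described above.
    rng = random.Random(seed)
    parts = [[] for _ in range(num_mappers)]
    for r in records:
        parts[rng.randrange(num_mappers)].append(r)
    parts = [p for p in parts if p]  # drop any empty partitions
    # Reduce: store one representative (the mean vector) per mapper.
    reps = [[sum(r[i] for r in p) / len(p) for i in range(len(p[0]))]
            for p in parts]
    return parts, reps

def retrieve(query, parts, reps):
    # Level 1: match the query against the reducer's stored
    # representatives to pick the most promising mapper.
    best = max(range(len(reps)), key=lambda i: cosine(query, reps[i]))
    # Level 2: match the query against that mapper's records.
    return max(parts[best], key=lambda r: cosine(query, r))

# Toy record set of 2-D feature vectors (hypothetical data).
records = [[1.0, 0.0], [0.9, 0.1], [0.1, 0.9], [0.0, 1.0]]
parts, reps = build_index(records, num_mappers=1)  # one mapper: exact search
result = retrieve([1.0, 0.0], parts, reps)
```

With more mappers, level 1 prunes the search to one partition at the cost of possibly missing the globally best record, which is the usual trade-off in this kind of two-stage index.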

