A big data MapReduce framework for fault diagnosis in cloud-based manufacturing

2016 ◽  
Vol 54 (23) ◽  
pp. 7060-7073 ◽  
Author(s):  
Ajay Kumar ◽  
Ravi Shankar ◽  
Alok Choudhary ◽  
Lakshman S. Thakur
2021 ◽  
pp. 016555152110137
Author(s):  
N.R. Gladiss Merlin ◽  
Vigilson Prem. M

Large and complex data have become a valuable resource in biomedical discovery, greatly expanding the scientific resources available for retrieving useful information. However, indexing and retrieving patient information from disparate big data sources remains challenging in biomedical research. Indexing and retrieval of patient information from big data are performed using the MapReduce framework. In this research, indexing and retrieval are carried out with the proposed Jaya-Sine Cosine Algorithm (Jaya–SCA)-based MapReduce framework. Initially, the input big data are distributed randomly across the mappers. The average of each mapper's data is computed and forwarded to the reducer, where the representative data are stored. For each user query, the query is first matched against the reducer and then passed to the corresponding mapper to retrieve the best-matching result. This bilevel matching retrieves data from the mapper based on the distance between the query and the stored records. The similarity measure is computed using the parametric-enabled similarity measure (PESM), cosine similarity, and the proposed Jaya–SCA, an integration of the Jaya algorithm and the sine cosine algorithm (SCA). On the StatLog Heart Disease dataset, the proposed Jaya–SCA algorithm attained maximum F-measure, recall, and precision values of 0.5323, 0.4400, and 0.6867, respectively.
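The mapper-average/reducer-representative scheme and the bilevel query matching described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions: the function names, the random record assignment, and the use of plain cosine similarity (in place of the full PESM/Jaya–SCA measure) are assumptions, not the authors' implementation.

```python
import random

def map_phase(records, num_mappers):
    """Distribute records randomly across mappers; compute each mapper's
    mean vector as the representative forwarded to the reducer."""
    mappers = [[] for _ in range(num_mappers)]
    for rec in records:
        mappers[random.randrange(num_mappers)].append(rec)
    reps = []
    for bucket in mappers:
        if bucket:
            dim = len(bucket[0])
            reps.append([sum(r[i] for r in bucket) / len(bucket)
                         for i in range(dim)])
        else:
            reps.append(None)  # empty mapper has no representative
    return mappers, reps

def cosine(a, b):
    """Plain cosine similarity (stand-in for the paper's PESM/Jaya–SCA measure)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, mappers, reps):
    """Bilevel matching: first pick the mapper whose reducer-side
    representative is closest to the query, then the closest record
    inside that mapper's bucket."""
    candidates = [i for i in range(len(reps)) if reps[i] is not None]
    best_m = max(candidates, key=lambda i: cosine(query, reps[i]))
    return max(mappers[best_m], key=lambda r: cosine(query, r))
```

With a single mapper the scheme degenerates to a nearest-record search, which makes the two-level structure easy to check: the reducer-level match selects the bucket, the mapper-level match selects the record.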


2018 ◽  
Vol 394 (4) ◽  
pp. 042116 ◽  
Author(s):  
Lei Wang ◽  
Lingling Shang ◽  
Mengchao Ma ◽  
Zhiguang Ma

2019 ◽  
Vol 8 (2S11) ◽  
pp. 3606-3611

Big data privacy has assumed importance as cloud computing has become a phenomenal success in providing a remote platform for sharing computing resources without geographical or time restrictions. However, privacy concerns over big data outsourced to public cloud storage still exist. Different anonymization and sanitization techniques have emerged to protect big data from privacy attacks. In our prior work, we proposed a misusability-probability-based metric to estimate the probable percentage of misuse. We additionally designed a system, based on misusability probability, that suggests a level of sanitization before privacy protection is actually applied to big data. In this paper, we focus on further evaluating our misuse-probability-based sanitization approach by defining an algorithm that analyses the trade-offs between misuse probability and the level of sanitization. The paper details the proposed framework and misusability measure, and evaluates the framework with an empirical study conducted in a public cloud environment with Amazon EC2 (compute engine), S3 (storage service), and EMR (MapReduce framework). The experimental results reveal the dynamics of the trade-offs between misuse probability and sanitization level. These insights support well-informed decisions when sanitizing big data, ensuring it is protected without losing the required utility.
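The trade-off the abstract describes, more sanitization lowers misuse probability but also lowers utility, can be sketched with a simple sweep. The decay and utility models below are illustrative assumptions only; the paper's actual misusability metric and sanitization levels are not specified here.

```python
def misuse_probability(level, base=0.9, decay=0.5):
    """Assumed model: misuse probability falls geometrically as the
    sanitization level increases (not the authors' metric)."""
    return base * (decay ** level)

def utility(level, loss_per_level=0.15):
    """Assumed model: each sanitization level removes a fixed share of
    the data's utility, floored at zero."""
    return max(0.0, 1.0 - loss_per_level * level)

def choose_level(max_misuse, max_level=10):
    """Return the smallest sanitization level whose misuse probability
    is acceptable, together with the utility retained at that level."""
    for level in range(max_level + 1):
        if misuse_probability(level) <= max_misuse:
            return level, utility(level)
    return max_level, utility(max_level)
```

Sweeping `choose_level` over a range of misuse thresholds exposes the trade-off curve: tighter thresholds force higher sanitization levels and correspondingly lower retained utility, which is the kind of dynamic the empirical study examines.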


2018 ◽  
Vol 90 (8-9) ◽  
pp. 1221-1233 ◽  
Author(s):  
Jia Si ◽  
Yibin Li ◽  
Sile Ma

2018 ◽  
Vol 153 ◽  
pp. 176-192 ◽  
Author(s):  
D. Martín ◽  
M. Martínez-Ballesteros ◽  
D. García-Gil ◽  
J. Alcalá-Fdez ◽  
F. Herrera ◽  
...  
