Cloud Computing Model for Big Geological Data Processing

2013 ◽  
Vol 475-476 ◽  
pp. 306-311 ◽  
Author(s):  
Miao Miao Song ◽  
Zhe Li ◽  
Bin Zhou ◽  
Chao Ling Li

Geological data are heterogeneous in type, huge in volume, and complex in format. The analysis of geological data is mainly divided into three parts: mine forecasting, mine evaluation, and mine positioning. The traditional geological data analysis model is constrained by limited storage space and computational efficiency and cannot meet the need for fast operations on large volumes of geological data. "Big data technology" provides an ideal solution for the management, information extraction, and comprehensive analysis of vast amounts of geological data. To supply the mass storage capacity and high-speed computing power that big data technology requires, we built an intelligent system for geological data analysis based on a cloud computing model with MapReduce and GPU double parallel processing. A Hadoop cluster system solves the problem of storing large amounts of geological data, and an efficient parallel processing method based on the GPU (Graphics Processing Unit) is designed and applied within the MapReduce framework, completing the MapReduce-and-GPU double parallel processing cloud computing model and improving the operation speed of the system. Theoretical modeling and experimental verification indicate that the system meets the requirements of geological data analysis in operation precision, operation data volume, and operation speed.
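The abstract does not detail the Hadoop/GPU integration, but the "double parallel" idea can be sketched: data is first partitioned MapReduce-style across nodes, and each partition is then processed by a data-parallel kernel standing in for a GPU routine. Below is a minimal Python sketch; the chunk count, the anomaly-score kernel, and all names are illustrative assumptions, not the paper's implementation.

```python
# Sketch of "double parallel" processing: MapReduce-level partitioning
# plus a vectorized per-chunk kernel standing in for a GPU routine.
import numpy as np
from functools import reduce

def gpu_style_kernel(chunk: np.ndarray) -> np.ndarray:
    """Stand-in for a GPU kernel: element-wise anomaly scores
    (z-scores) over one block of geological measurements."""
    return (chunk - chunk.mean()) / (chunk.std() + 1e-9)

def map_phase(chunks):
    # Each mapper would run on a Hadoop node and offload its block to a GPU.
    return [gpu_style_kernel(c) for c in chunks]

def reduce_phase(mapped):
    # The reducer merges per-block results into one score array.
    return reduce(lambda a, b: np.concatenate([a, b]), mapped)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    data = rng.normal(size=1_000_000)     # mock geological samples
    chunks = np.array_split(data, 8)      # HDFS-style partitioning
    scores = reduce_phase(map_phase(chunks))
    print("cells above 3 sigma:", int((scores > 3).sum()))
```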

Mathematics ◽  
2020 ◽  
Vol 8 (10) ◽  
pp. 1781
Author(s):  
SangWoo An ◽  
Seog Chung Seo

With the development of the Internet of Things (IoT) and cloud computing technology, various cryptographic systems have been proposed to protect growing volumes of personal information. Recently, Post-Quantum Cryptography (PQC) algorithms have been proposed to counter quantum algorithms that threaten public key cryptography. To use PQC efficiently in a server environment dealing with large amounts of data, optimization studies are required. In this paper, we present optimization methods for FrodoKEM and NewHope, two NIST PQC standardization round 2 candidate algorithms, on the Graphics Processing Unit (GPU) platform. For each algorithm, we identify the major computationally heavy operations that can be parallelized by exploiting the characteristics of the GPU. In the case of FrodoKEM, we introduce parallel optimization techniques for matrix generation and matrix arithmetic operations such as addition and multiplication. In the case of NewHope, we present a parallel processing technique for polynomial-based operations. In the encryption process of FrodoKEM, performance improvements of up to 5.2, 5.75, and 6.47 times over the CPU implementation have been confirmed for FrodoKEM-640, FrodoKEM-976, and FrodoKEM-1344, respectively. In the encryption process of NewHope, performance improvements of up to 3.33 and 4.04 times over the CPU implementation have been shown for NewHope-512 and NewHope-1024, respectively. The results of this study can be used in IoT device servers or cloud computing service servers. In addition, they can be utilized in image processing technologies such as facial recognition.
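The matrix arithmetic that the paper offloads to the GPU can be illustrated on the CPU. The sketch below assumes FrodoKEM-640-like parameters (n = 640, n̄ = 8, q = 2^15) and replaces FrodoKEM's table-based error sampler with a mock; it shows only the B = AS + E computation, whose independent output elements map naturally onto GPU threads.

```python
# CPU illustration of the FrodoKEM-style matrix step B = A*S + E mod q.
# In the paper this multiply-accumulate is parallelized on the GPU,
# e.g. one thread per output element.
import numpy as np

N, NBAR, Q = 640, 8, 2 ** 15              # FrodoKEM-640-like parameters

def mock_error(shape, rng):
    # Placeholder for FrodoKEM's table-based discrete error sampler.
    return rng.integers(-5, 6, size=shape)

rng = np.random.default_rng(0)
A = rng.integers(0, Q, size=(N, N))       # public matrix (seed-expanded in the real scheme)
S = mock_error((N, NBAR), rng)            # secret matrix
E = mock_error((N, NBAR), rng)            # error matrix

B = (A @ S + E) % Q                       # n x nbar result, entries in [0, q)
print(B.shape)                            # (640, 8)
```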


2019 ◽  
Vol 3 (2) ◽  
pp. 152
Author(s):  
Xianglan Wu

In today's society, the rise and rapid development of the Internet produce a huge amount of data every day, and the traditional data processing and storage models cannot fully analyze and mine these data. More and more new information technologies (such as cloud computing, virtualization, and big data) have emerged and been applied, the network has turned from informatization to intelligence, and campus construction has entered the stage of smart campus construction. Smart campus construction draws on big data and cloud computing technology to improve the quality of information services in colleges and universities by integrating, storing, and mining huge amounts of data.


Author(s):  
Ganesh Chandra Deka

NoSQL databases are designed to meet the huge data storage requirements of cloud computing and big data processing. NoSQL databases have many advanced features in addition to the conventional RDBMS features; hence, "NoSQL" databases are popularly known as "Not only SQL" databases. A variety of NoSQL databases, with different features for dealing with exponentially growing data-intensive applications, are available in both open source and proprietary options. This chapter discusses some of the popular NoSQL databases and their features in the light of the CAP theorem.
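The consistency-versus-availability tension that the CAP theorem formalizes can be shown with a toy quorum-replicated key-value store. Everything below (the Replica class, the quorum parameters) is a hypothetical illustration, not the API of any real NoSQL database.

```python
# Toy quorum replication: with W + R > N, reads observe the latest write
# (consistency), but the store refuses requests when too few replicas
# respond (availability is sacrificed under partition).
class Replica:
    def __init__(self):
        self.store, self.alive = {}, True

class QuorumKV:
    def __init__(self, n=3, w=2, r=2):
        self.replicas = [Replica() for _ in range(n)]
        self.w, self.r = w, r

    def put(self, key, value, version):
        alive = [rep for rep in self.replicas if rep.alive]
        if len(alive) < self.w:
            raise RuntimeError("unavailable: write quorum not met")
        for rep in alive:
            rep.store[key] = (version, value)

    def get(self, key):
        alive = [rep for rep in self.replicas if rep.alive]
        if len(alive) < self.r:
            raise RuntimeError("unavailable: read quorum not met")
        # Newest version among the responding quorum wins.
        return max(rep.store.get(key, (0, None)) for rep in alive)[1]

kv = QuorumKV()
kv.put("user:1", "alice", version=1)
kv.replicas[0].alive = False   # one node partitioned away
print(kv.get("user:1"))        # "alice": 2 of 3 replicas still answer
kv.replicas[1].alive = False
# kv.get("user:1") would now raise: consistency kept, availability lost
```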


Author(s):  
Rajganesh Nagarajan ◽  
Ramkumar Thirunavukarasu

In this chapter, the authors consider the different categories of data processed by big data analytics tools. The challenges of big data processing are identified, and a solution with the help of cloud computing is highlighted. Since cloud computing is highly advocated for its pay-per-use model, data processing tools can be effectively deployed within the cloud, substantially reducing investment costs. In addition, this chapter discusses big data platforms, tools, and applications, together with the concept of data visualization. Finally, the applications of data analytics are discussed for future research.


Author(s):  
Forest Jay Handford

The number of tools available for big data processing has grown exponentially as cloud providers have introduced solutions for businesses that have little or no money for capital expenditures. The chapter starts by discussing historic data tools and their evolution into those of today. With cloud computing, the need for upfront costs has been removed; costs continue to fall and can be negotiated. This chapter reviews the current types of big data tools and how they evolved. To give readers an idea of costs, the chapter shows example costs (in today's market) for a sampling of the tools, along with relative cost comparisons for other tools such as the grid tools used by government, scientific, and academic communities. Readers will take away an understanding of which tools work best for several scenarios and how to select cost-effective tools (even tools that are unknown today).


Author(s):  
Amitava Choudhury ◽  
Kalpana Rangra

The type and amount of data in human society are growing at an amazing speed, driven by emerging services such as cloud computing, the Internet of Things, and location-based services. The era of big data has arrived. As data has become a fundamental resource, how to manage and utilize big data better has attracted much attention. Especially with the development of the Internet of Things, processing large amounts of real-time data has become a great challenge in research and applications. Recently, cloud computing technology has attracted much attention for its high performance, but how to use it for large-scale real-time data processing remains insufficiently studied. In this chapter, various big data processing techniques are discussed.
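One representative technique in this space is windowed aggregation over an unbounded event stream. The sketch below shows tumbling-window counting in Python; the event stream, the 60-second window, and the assumption of time-ordered events are all illustrative.

```python
# Tumbling-window counting over a time-ordered stream of (timestamp, key)
# events: each fixed-size window's counts are emitted when it closes.
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    current, counts = None, defaultdict(int)
    for ts, key in events:                # events assumed time-ordered
        window = ts - (ts % window_secs)  # start of this event's window
        if current is not None and window != current:
            yield current, dict(counts)   # previous window closed: emit
            counts.clear()
        current = window
        counts[key] += 1
    if current is not None:
        yield current, dict(counts)       # flush the final window

stream = [(1, "sensor-a"), (30, "sensor-b"), (65, "sensor-a"), (70, "sensor-a")]
for win, result in tumbling_window_counts(stream):
    print(win, result)  # 0 {'sensor-a': 1, 'sensor-b': 1} then 60 {'sensor-a': 2}
```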

