Development of user-customized service for seasonal forecast data processing and extracting based on Cloud Computing

2021, Vol 22 (10), pp. 1543-1550
Author(s): Jeong-Min Han, Chang-Mook Lim

2015, Vol 51 (5), pp. 1041-1048
Author(s): V. P. Potapov, V. N. Oparin, O. L. Giniyatullina, I. E. Kharlampenkov

2014, Vol 543-547, pp. 3573-3576
Author(s): Yuan Jun Zou

Cloud computing, networking, and other high-end computer data processing technologies are important components of China's Eleventh Five-Year development plan, and in recent years they have advanced rapidly in the engineering field. In this paper, we combine parallel computing with the principle of collaborative simulation, design a cloud computing platform, establish a mathematical model of cloud data processing together with a parallel computing algorithm, and verify the applicability of the algorithm through numerical simulation. The numerical results show that the cloud computing platform can partition complex grids and transfers data quickly, roughly eight times faster than the finite difference method. The mesh is fine, reaching millions of elements; the convergence error is as low as 0.001; and the calculation accuracy reaches 98.36%.
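
The abstract does not give the solver itself, but the workflow it describes (partition a grid, iterate until a small convergence error is reached) can be sketched as below. This is a minimal illustrative sketch assuming a Jacobi-style relaxation; the 1e-3 tolerance echoes the reported convergence error, while the grid size, boundary values, and update rule are assumptions of this example, not taken from the paper.

```python
# Minimal sketch of an iterative grid solver of the kind the abstract alludes to.
# The 1e-3 stopping tolerance mirrors the reported convergence error; grid size,
# boundary values, and the update rule are illustrative assumptions.
import numpy as np

def jacobi(grid, tol=1e-3, max_iter=20_000):
    """Relax the interior of a 2D grid until the largest update falls below tol."""
    err = float("inf")
    for it in range(max_iter):
        new = grid.copy()
        # Average of the four neighbours (vectorised, so the work maps naturally
        # onto data-parallel hardware or a partitioned cluster of workers).
        new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                                  grid[1:-1, :-2] + grid[1:-1, 2:])
        err = np.max(np.abs(new - grid))
        grid = new
        if err < tol:
            break
    return grid, it, err

# Hypothetical problem: 64 x 64 grid, hot top edge, cold elsewhere.
g = np.zeros((64, 64))
g[0, :] = 100.0
solution, iterations, final_err = jacobi(g)
print(f"stopped after {iterations} iterations, max update {final_err:.4f}")
```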


Author(s): Daniel Warneke

In recent years, so-called Infrastructure as a Service (IaaS) clouds have become increasingly popular as a flexible and inexpensive platform for ad-hoc parallel data processing. Major players in the cloud computing space, such as Amazon EC2, have already recognized this trend and started to create special offers which bundle their compute platform with existing software frameworks for these kinds of applications. However, the data processing frameworks currently used in these offers were designed for static, homogeneous cluster systems and do not support the new features that distinguish the cloud platform. This chapter examines the characteristics of IaaS clouds with special regard to massively parallel data processing. The author highlights use cases that are currently poorly supported by existing parallel data processing frameworks and explains how a tighter integration between the processing framework and the underlying cloud system can help lower the monetary processing cost for the cloud customer. As a proof of concept, the author presents the parallel data processing framework Nephele and compares its cost efficiency against that of the well-known Hadoop framework.
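
Nephele's own API is not reproduced in this listing; as a hedged illustration of the programming pattern that frameworks such as Hadoop (and Nephele) target, the sketch below runs a local, in-process MapReduce-style word count. The worker count and input splits are placeholder assumptions.

```python
# Generic MapReduce-style word count, the programming pattern that parallel
# data processing frameworks such as Hadoop target. This is a local sketch,
# not the API of Hadoop or Nephele; the input documents are illustrative.
from collections import Counter
from multiprocessing import Pool

def map_phase(document: str) -> Counter:
    # Emit per-word counts for one input split.
    return Counter(document.lower().split())

def reduce_phase(partials: list) -> Counter:
    # Merge the per-split counts into a global result.
    total = Counter()
    for partial in partials:
        total += partial
    return total

if __name__ == "__main__":
    splits = [
        "cloud computing makes parallel data processing elastic",
        "parallel data processing frameworks were built for static clusters",
    ]
    with Pool(processes=2) as pool:   # stand-in for the framework's worker nodes
        partial_counts = pool.map(map_phase, splits)
    print(reduce_phase(partial_counts))
```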


Author(s): Ganesh Chandra Deka

NoSQL databases are designed to meet the huge data storage requirements of cloud computing and big data processing. They offer many advanced features in addition to conventional RDBMS features, which is why "NoSQL" databases are popularly read as "Not only SQL" databases. A variety of NoSQL databases with different features for dealing with exponentially growing data-intensive applications are available, in both open source and proprietary options. This chapter discusses some of the popular NoSQL databases and their features in light of the CAP theorem.
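
As a rough illustration of the consistency/availability trade-off that the CAP theorem describes, the toy replicated key-value store below uses read and write quorums. It is not the API of any real NoSQL database; the replica count, quorum sizes, and data are illustrative assumptions.

```python
# Toy in-memory key-value store with N replicas, sketching the quorum idea
# behind the CAP trade-off. Real NoSQL systems (e.g. Cassandra, MongoDB)
# implement replication very differently; all parameters here are illustrative.

class ReplicatedKV:
    def __init__(self, replicas=3):
        self.replicas = [dict() for _ in range(replicas)]

    def write(self, key, value, w=2):
        # A write succeeds once w replicas acknowledge it (the rest may lag).
        for node in self.replicas[:w]:
            node[key] = value

    def read(self, key, r=2):
        # Read from r replicas and return the value the majority holds;
        # with r + w > N (here 2 + 2 > 3) the read overlaps the latest write.
        votes = [node.get(key) for node in self.replicas[:r]]
        return max(set(votes), key=votes.count)

store = ReplicatedKV()
store.write("user:42", "Ada Lovelace")
print(store.read("user:42"))
```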


Author(s): Rajganesh Nagarajan, Ramkumar Thirunavukarasu

In this chapter, the authors consider the different categories of data processed by big data analytics tools. The challenges of big data processing are identified, and a solution based on cloud computing is highlighted. Because cloud computing is widely advocated for its pay-per-use model, data processing tools can be deployed effectively in the cloud, substantially reducing investment cost. In addition, the chapter covers big data platforms, tools, and applications, together with data visualization concepts. Finally, applications of data analytics are discussed as directions for future research.
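
The pay-per-use argument can be made concrete with simple arithmetic. The sketch below compares a pay-per-use cloud bill against an owned cluster; all rates, durations, and hardware prices are hypothetical placeholders, not figures from the chapter.

```python
# Back-of-the-envelope pay-per-use comparison. Every number below is a
# hypothetical placeholder, not a quoted price.

def cloud_cost(instances: int, hours: float, hourly_rate: float) -> float:
    """Pay-per-use: pay only for the instance-hours actually consumed."""
    return instances * hours * hourly_rate

def owned_cost(servers: int, server_price: float, yearly_opex: float,
               amortize_years: float = 3.0) -> float:
    """Owned cluster, per year: amortized purchase plus operation, used or not."""
    return servers * server_price / amortize_years + yearly_opex

# Hypothetical scenario: a 10-node analytics job running 40 hours per month.
per_year_cloud = cloud_cost(instances=10, hours=40 * 12, hourly_rate=0.50)
per_year_owned = owned_cost(servers=10, server_price=4000, yearly_opex=3000)
print(f"cloud, per year: ${per_year_cloud:,.0f}")
print(f"owned, per year: ${per_year_owned:,.0f}")
```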


Author(s): Forest Jay Handford

The number of tools available for big data processing has grown exponentially as cloud providers have introduced solutions for businesses that have little or no money for capital expenditures. The chapter starts by discussing historic data tools and their evolution into those of today. With cloud computing, the need for upfront costs has been removed; costs continue to fall and can be negotiated. This chapter reviews the current types of big data tools and how they evolved. To give readers an idea of costs, the chapter shows example costs (in today's market) for a sampling of the tools, along with relative cost comparisons against other tools such as the grid tools used by government, scientific, and academic communities. Readers will take away an understanding of which tools work best for several scenarios and how to select cost-effective tools (even tools that are unknown today).
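
The chapter's own cost figures are not reproduced in this listing; as a sketch of how such a relative cost comparison and tool selection might be set up, the snippet below picks the cheapest of a few hypothetical deployment options for a given monthly workload. Option names, pricing models, and numbers are illustrative assumptions.

```python
# Sketch of a relative cost comparison across deployment options. The option
# names, pricing models, and figures are hypothetical placeholders, not
# current market prices.

# Cost model: fixed monthly charge + per-compute-hour charge.
OPTIONS = {
    "managed-cloud-service":  {"monthly": 0.0,    "per_hour": 4.00},
    "self-managed-cluster":   {"monthly": 1200.0, "per_hour": 0.25},
    "shared-grid-allocation": {"monthly": 400.0,  "per_hour": 0.80},
}

def monthly_cost(option: str, compute_hours: float) -> float:
    pricing = OPTIONS[option]
    return pricing["monthly"] + pricing["per_hour"] * compute_hours

def cheapest(compute_hours: float) -> str:
    """Pick the lowest-cost option for the expected monthly compute hours."""
    return min(OPTIONS, key=lambda opt: monthly_cost(opt, compute_hours))

for hours in (20, 200, 2000):
    best = cheapest(hours)
    print(f"{hours:>5} h/month -> {best} (${monthly_cost(best, hours):,.0f})")
```

Under these made-up rates the cheapest option shifts from the managed service at low utilization to the shared grid and then the self-managed cluster as utilization grows, which is the kind of scenario-dependent trade-off the chapter walks through.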


Author(s): Amitava Choudhury, Kalpana Rangra

The types and amount of data in human society are growing at an astonishing speed, driven by emerging services such as cloud computing, the internet of things, and location-based services. The era of big data has arrived. As data has become a fundamental resource, how to better manage and utilize big data has attracted much attention. Especially with the development of the internet of things, processing large amounts of real-time data has become a great challenge in research and applications. Cloud computing technology has recently attracted much attention for its high performance, but how to use it for large-scale real-time data processing has not been thoroughly studied. In this chapter, various big data processing techniques are discussed.
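
As a small example of the real-time processing pattern this chapter surveys, the sketch below maintains a sliding-window average over a stream of readings; distributed stream frameworks provide scaled-out versions of the same idea. The window length and event values are illustrative assumptions.

```python
# Minimal sliding-window aggregation, the core pattern behind real-time data
# processing. Distributed stream engines (e.g. Spark Streaming, Flink) offer
# scaled-out equivalents; the window length and readings here are illustrative.
from collections import deque
from typing import Optional
import time

class SlidingWindowAverage:
    """Average of the readings seen in the last `window_seconds`."""
    def __init__(self, window_seconds: float = 10.0):
        self.window = window_seconds
        self.events = deque()          # (timestamp, value) pairs

    def add(self, value: float, now: Optional[float] = None) -> float:
        now = time.time() if now is None else now
        self.events.append((now, value))
        # Evict events that have fallen out of the window.
        while self.events and now - self.events[0][0] > self.window:
            self.events.popleft()
        return sum(v for _, v in self.events) / len(self.events)

# Simulated stream: one sensor reading per second with fixed timestamps.
win = SlidingWindowAverage(window_seconds=5)
for t, reading in enumerate([21.0, 21.5, 22.0, 30.0, 22.5, 21.0, 21.2]):
    print(f"t={t:>2}s rolling avg = {win.add(reading, now=float(t)):.2f}")
```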

