MITIGATION OF LARGE-SCALE RDF DATA LOADING WITH THE EMPLOYMENT OF A CLOUD COMPUTING SERVICE

2021 ◽  
Vol 12 (1) ◽  
pp. 74-83
Author(s):  
Manjunatha S. ◽  
Suresh L.

A data center is a cost-effective infrastructure for storing large volumes of data and hosting large-scale service applications. Cloud computing service providers are rapidly deploying data centers across the world with huge numbers of servers and switches. These data centers consume significant amounts of energy, contributing to high operational costs, so optimizing the energy consumption of servers and networks in data centers can reduce those costs. In a data center, power is consumed mainly by servers, networking devices, and cooling systems. An effective energy-saving strategy is therefore to consolidate computation and communication onto a smaller number of servers and network devices, and then power off as many of the unneeded servers and network devices as possible.
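
As a rough illustration of the consolidation idea described above, the sketch below packs virtual-machine demands onto as few servers as possible with a first-fit-decreasing heuristic; the `Server` class, the capacity numbers, and the heuristic itself are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Server:
    capacity: float          # normalized capacity (1.0 = fully available)
    load: float = 0.0        # load already placed on this server
    vms: list = field(default_factory=list)

    def fits(self, demand: float) -> bool:
        return self.load + demand <= self.capacity

def consolidate(demands, pool):
    """First-fit-decreasing packing: place VM demands on as few servers as
    possible so the servers left in `pool` can be powered off."""
    active = []
    for vm_id, demand in sorted(enumerate(demands), key=lambda x: -x[1]):
        target = next((s for s in active if s.fits(demand)), None)
        if target is None:
            if not pool:
                raise RuntimeError("not enough capacity for all VMs")
            target = pool.pop()
            active.append(target)
        target.load += demand
        target.vms.append(vm_id)
    return active, pool      # servers remaining in `pool` can be switched off

# Example: six VM demands packed onto a pool of four identical servers.
active, idle = consolidate([0.6, 0.3, 0.5, 0.2, 0.2, 0.1],
                           [Server(capacity=1.0) for _ in range(4)])
print(f"{len(active)} servers stay on, {len(idle)} can be powered off")
```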


2014 ◽  
Vol 556-562 ◽  
pp. 6262-6265 ◽  
Author(s):  
Yuan Zhang ◽  
Lei Huang

Cloud computing has been developing rapidly in recent years, and it is widely discussed as a way to provide large-scale services that replace local computers and software. It has already become a major development trend in the IT industry. This paper explains the basics of cloud computing, analyzes the differences among cloud computing services, and points out the characteristics of these services.


2013 ◽  
Vol 756-759 ◽  
pp. 2386-2390
Author(s):  
Yuan Yuan Guo ◽  
Jing Li ◽  
Xin Chun Liu ◽  
Wei Wei Wang

With the rapid development of information science, it has become much harder to deal with data at large scale. Cloud computing has therefore become a hot topic as a new computing model because of its good scalability: it enables customers to acquire computing resources from, and release them back to, cloud service providers according to the current workload. Scaling is performed automatically by the system according to auto-scaling policies defined by customers in advance, which greatly reduces users' operational burden. In this paper, we propose a new architecture for an auto-scaling system, apply auto-scaling to a batch-job-based system, and consider task deadlines and VM setup time, in addition to substrate resource utilization, as factors affecting the auto-scaling policy.
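
The following minimal sketch illustrates the kind of policy the abstract describes, estimating how many VMs to request when task deadlines and VM setup time are taken into account; the function name, inputs, and thresholds are assumptions for illustration, not the authors' actual policy.

```python
import math

def vms_needed(pending_jobs, running_vms, vm_setup_time, service_rate, utilization):
    """Estimate how many VMs to add (negative = release) for a batch-job queue.

    pending_jobs : list of (work_seconds, deadline_seconds_from_now) tuples
    running_vms  : number of VMs currently serving jobs
    vm_setup_time: seconds before a newly requested VM can take work
    service_rate : work-seconds one VM completes per wall-clock second
    utilization  : current average utilization of the running VMs (0..1)
    """
    if not pending_jobs:
        # Scale in when the pool is clearly under-used.
        return -(running_vms // 2) if utilization < 0.3 else 0

    total_work = sum(w for w, _ in pending_jobs)
    tightest_deadline = min(d for _, d in pending_jobs)

    # A VM requested now only contributes after its setup time has elapsed.
    usable_window = max(tightest_deadline - vm_setup_time, 1e-6)
    required = math.ceil(total_work / (service_rate * usable_window))
    return max(required - running_vms, 0)

# Example: three jobs, tightest deadline 120 s, new VMs take 60 s to boot.
delta = vms_needed([(50, 120), (80, 300), (40, 200)],
                   running_vms=1, vm_setup_time=60,
                   service_rate=1.0, utilization=0.9)
print(f"request {delta} additional VM(s)")   # -> request 2 additional VM(s)
```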


2013 ◽  
Vol 385-386 ◽  
pp. 1708-1712
Author(s):  
Xiao Ping Jiang ◽  
Teng Jiang ◽  
Tao Zhang ◽  
Cheng Hua Li

By combining the LVS cluster architecture with cloud computing technology, a system architecture for a cloud computing service platform is proposed. Cloud computing is well suited to supporting large-scale applications subject to flash crowds because it supplies elastic amounts of bandwidth, storage, and other resources. However, the traditional load-balancing algorithms provided by LVS are unsuitable for the proposed service platform, because they are designed for the static server resources of traditional cluster technology. Taking both the overall utilization rate of server resources and the number of active connections per server into account, an adaptive, adjustable load-balancing algorithm (the Least Comprehensive Utilization and Connection Scheduling algorithm, called LUCU) is proposed in this paper. According to the utilization of cloud resources and user demand, the platform switches automatically between the Round Robin (RR) algorithm and the LUCU algorithm: when cloud capacity cannot meet the instantaneous demand, LUCU is chosen instead of RR. The proposed platform and algorithm are verified and evaluated using large-scale simulation experiments. The test results show that near load equilibrium is achieved by adopting the proposed algorithm.
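
The sketch below captures the selection rule as the abstract describes it, switching from Round Robin to a least-utilization-and-connections choice when average utilization crosses a threshold; the 0.7 threshold and the equal weighting of utilization and connections are assumptions, since the paper's exact LUCU formula is not given here.

```python
import itertools

class Backend:
    def __init__(self, name):
        self.name = name
        self.utilization = 0.0       # overall resource utilization (0..1)
        self.active_connections = 0

class Scheduler:
    """Switch between Round Robin and a least-utilization-and-connections
    rule (LUCU-style) depending on whether capacity still meets demand."""

    def __init__(self, backends, capacity_threshold=0.7):
        self.backends = backends
        self.capacity_threshold = capacity_threshold
        self._rr = itertools.cycle(backends)

    def pick(self):
        avg_util = sum(b.utilization for b in self.backends) / len(self.backends)
        if avg_util < self.capacity_threshold:
            return next(self._rr)                # enough headroom: plain RR
        # Under pressure: pick the backend with the lowest combined score of
        # resource utilization and normalized active connections.
        max_conn = max(b.active_connections for b in self.backends) or 1
        return min(self.backends,
                   key=lambda b: b.utilization + b.active_connections / max_conn)

# Example: three busy backends; the scheduler falls back to the LUCU-style rule.
pool = [Backend("s1"), Backend("s2"), Backend("s3")]
pool[0].utilization, pool[0].active_connections = 0.90, 120
pool[1].utilization, pool[1].active_connections = 0.80, 40
pool[2].utilization, pool[2].active_connections = 0.75, 60
print(Scheduler(pool).pick().name)               # -> s2 (lowest combined score)
```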


Author(s):  
Jinn-Shing Cheng ◽  
Echo Huang ◽  
Chuan-Lang Lin

Due to the constant performance upgrades and regular price reductions of mobile devices in recent years, users are able to take advantage of a variety of devices to obtain digital content regardless of the limitations of time and place. The increasing use of e-books has stimulated new e-learning approaches. This research project developed an e-book hub service on a cloud computing platform in order to overcome the limitations of computing capability and storage capacity that are inherent in many mobile devices. The e-book hub service also allows users to automatically adjust the rendering of multimedia pages at different resolutions on terminal units such as smartphones, tablets, PCs, and so forth. We implemented the e-book hub service on OpenStack, a free and open-source cloud computing platform supported by multiple large firms. The OpenStack platform provides a large-scale distributed computing environment that allows users to build their own cloud systems in a public, private, or hybrid environment. Our e-book hub system offers content providers an easy-to-use cloud computing service with unlimited storage capacity, fluent playback, high usability and scalability, and strong security characteristics for producing, converting, and managing their e-books. The integration of information and communication technologies has led the traditional publishing industry to new horizons with abundant digital content publications. Results from this study may help content providers create a new service model with increased profitability and enable mobile device users to easily access digital content, thereby supporting the goal of e-learning.
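
As a purely hypothetical illustration of the resolution-aware rendering mentioned above, the snippet below picks an e-book rendition by device width; the breakpoints and package names are invented for the example and do not come from the described system.

```python
# Hypothetical rendition table: the actual e-book hub's formats and
# breakpoints are not described in the abstract.
RENDITIONS = [
    (480,   "mobile-low.epub"),    # max device width in px -> package to serve
    (1080,  "tablet-hd.epub"),
    (10**9, "desktop-full.epub"),
]

def pick_rendition(device_width_px: int) -> str:
    """Return the first rendition whose breakpoint covers the device width."""
    for max_width, package in RENDITIONS:
        if device_width_px <= max_width:
            return package
    return RENDITIONS[-1][1]

print(pick_rendition(414))    # -> mobile-low.epub
print(pick_rendition(1920))   # -> desktop-full.epub
```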


2021 ◽  
Vol 13 (1) ◽  
pp. 9-17
Author(s):  
Mohamad Iqbal Suriansyah ◽  
Iyan Mulyana ◽  
Junaidy Budi Sanger ◽  
Sandi Winata

Analyzing compute functions using the IaaS model for private cloud computing services built with Packstack is one approach to large-scale data storage. Problems that often occur when deploying various applications include the growing need for server resources, the burden of monitoring, performance efficiency, and the time required to build servers and upgrade hardware; these problems lead to long server downtime. The development of private cloud computing technology can be a solution to them. This research employed OpenStack and Packstack with one controller node and two compute nodes. Server administration with IaaS and a self-service approach made scalability testing simpler and time-efficient. Resizing a running virtual server (instance) showed that the overhead in the private cloud is acceptable, with a measured downtime of 16 seconds.
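
A minimal sketch of how the reported 16-second downtime could be observed: poll a TCP port on the instance while it is being resized and record the longest run of failed probes. The host address, port, and intervals below are placeholders, not details from the study.

```python
import socket
import time

def measure_downtime(host, port=22, probe_interval=0.5, duration=300):
    """Poll a TCP port on the instance and return the longest stretch of
    failed probes, e.g. while an `openstack server resize` is in progress."""
    longest, outage_start = 0.0, None
    end = time.time() + duration
    while time.time() < end:
        try:
            with socket.create_connection((host, port), timeout=1):
                if outage_start is not None:
                    longest = max(longest, time.time() - outage_start)
                    outage_start = None
        except OSError:
            if outage_start is None:
                outage_start = time.time()
        time.sleep(probe_interval)
    if outage_start is not None:
        longest = max(longest, time.time() - outage_start)
    return longest

# Example (placeholder floating IP): run this while the instance is resized.
# print(f"observed downtime: {measure_downtime('203.0.113.10'):.1f} s")
```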


Symmetry ◽  
2021 ◽  
Vol 13 (2) ◽  
pp. 177
Author(s):  
Amin Jula ◽  
Elankovan A. Sundararajan ◽  
Zalinda Othman ◽  
Narjes Khatoon Naseri

In this paper, a novel high-performance and low-cost operator is proposed for the imperialist competitive algorithm (ICA). The operator, inspired by a sociopolitical movement called the color revolution that has recently arisen in some countries, is referred to as the color revolution operator (CRO). The improved ICA with CRO, denoted as ICACRO, is significantly more efficient than the ICA. On the other hand, cloud computing service composition is a high-dimensional optimization problem that has become more prominent in recent years due to the unprecedented increase in both the number of services in the service pool and the number of service providers. In this study, two different types of ICACRO, one that applies the CRO to all countries of the world (ICACRO-C) and one that applies the CRO solely to imperialist countries (ICACRO-I), were used for service time-cost optimization in cloud computing service composition. The ICACRO was evaluated using a large-scale dataset and five service time-cost optimization problems with different difficulty levels. Compared to the basic ICA and niching PSO, the experimental and statistical tests demonstrate that the ability of the ICACRO to approach an optimal solution is considerably higher and that the ICACRO can be considered an efficient and scalable approach. Furthermore, the ICACRO-C is stronger than the ICACRO-I in terms of the solution quality with respect to execution time. However, the differences are negligible when solving large-scale problems.
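
Because the abstract does not define the mechanics of the color revolution operator, the skeleton below only shows where such an operator could plug into a heavily simplified ICA loop (empire competition and collapse are omitted); the random re-initialization used as the CRO body is an illustrative placeholder, not the authors' operator.

```python
import random

def ica_with_cro(cost, dim, bounds, n_countries=50, n_imperialists=5,
                 iterations=200, cro_rate=0.1, apply_to_all=True):
    """Simplified imperialist competitive algorithm with a hook for a
    color-revolution-style operator (CRO). The CRO body is only an
    illustrative perturbation; the paper's actual operator is not
    specified in the abstract."""
    lo, hi = bounds
    countries = [[random.uniform(lo, hi) for _ in range(dim)]
                 for _ in range(n_countries)]

    def cro(country):
        # Hypothetical "revolution": re-draw one coordinate at random.
        j = random.randrange(dim)
        country[j] = random.uniform(lo, hi)

    for _ in range(iterations):
        countries.sort(key=cost)
        imperialists = countries[:n_imperialists]
        colonies = countries[n_imperialists:]

        # Assimilation: each colony drifts toward an assigned imperialist.
        for i, colony in enumerate(colonies):
            imp = imperialists[i % n_imperialists]
            for j in range(dim):
                colony[j] += random.uniform(0, 1) * (imp[j] - colony[j])

        # CRO applied to every country (ICACRO-C) or imperialists only (ICACRO-I).
        targets = countries if apply_to_all else imperialists
        for c in targets:
            if random.random() < cro_rate:
                cro(c)

    return min(countries, key=cost)

# Toy usage: minimize a sphere function in 10 dimensions.
best = ica_with_cro(lambda x: sum(v * v for v in x), dim=10, bounds=(-5, 5))
print(round(sum(v * v for v in best), 4))
```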


2018 ◽  
Vol 31 (5-6) ◽  
pp. 227-233
Author(s):  
Weitao Wang ◽  
Baoshan Wang ◽  
Xiufen Zheng