GATECloud.net: a platform for large-scale, open-source text processing on the cloud

Author(s):  
Valentin Tablan ◽  
Ian Roberts ◽  
Hamish Cunningham ◽  
Kalina Bontcheva

Cloud computing is increasingly regarded as a key enabler of the ‘democratization of science’, because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research: GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. The platform handles the important infrastructural issues, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security, and fault tolerance. We also include a cost-benefit analysis and usage evaluation.

2015 ◽  
Vol 2015 ◽  
pp. 1-11 ◽  
Author(s):  
Xiao Song ◽  
Yaofei Ma ◽  
Da Teng

As a maturing and promising technology, cloud computing can benefit large-scale simulations by providing on-demand, anywhere simulation services to users. To enable multitask and multiuser simulation systems with cloud computing, the Cloud simulation platform (CSP) was proposed and developed. To apply key cloud computing techniques such as virtualization to improve the running efficiency of large-scale military HLA systems, this paper proposes a new type of federate container, the virtual machine (VM), and a dynamic migration algorithm for it that considers both computation and communication cost. Experiments show that the migration scheme effectively improves the running efficiency of an HLA system when the distributed system is not saturated.
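The migration algorithm itself is not given in the abstract; the following is a minimal sketch of the underlying trade-off, scoring a candidate migration by computation relief against communication (state-transfer) cost. All names, weights, and thresholds are illustrative assumptions, not the authors' actual algorithm.

```python
# Illustrative sketch: decide whether to migrate a federate VM from an
# overloaded host to a candidate host, weighing computation gain against
# the one-off communication (state-transfer) cost. Weights and thresholds
# are assumptions for demonstration, not the paper's actual parameters.

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_load: float        # fraction of CPU in use, 0.0-1.0
    bandwidth_mbps: float  # available network bandwidth to peers

def migration_benefit(src: Host, dst: Host, vm_cpu: float,
                      vm_state_mb: float, alpha: float = 0.7) -> float:
    """Positive score favours migration.

    alpha trades off computation relief against communication cost;
    both terms are normalised heuristically.
    """
    # Computation term: how much the move evens out CPU load.
    load_gain = (src.cpu_load - dst.cpu_load) - vm_cpu
    # Communication term: time to ship the VM state, in seconds
    # (megabytes -> megabits, divided by available megabits per second).
    transfer_s = (vm_state_mb * 8) / max(dst.bandwidth_mbps, 1e-6)
    return alpha * load_gain - (1 - alpha) * (transfer_s / 60.0)

if __name__ == "__main__":
    src = Host("overloaded", cpu_load=0.92, bandwidth_mbps=1000)
    dst = Host("idle", cpu_load=0.15, bandwidth_mbps=1000)
    score = migration_benefit(src, dst, vm_cpu=0.25, vm_state_mb=2048)
    print(f"migrate: {score > 0} (score={score:.3f})")
```

A positive score suggests the load relief outweighs the transfer cost; a real scheme would also account for HLA time synchronization and the ongoing communication pattern between federates.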


Author(s):  
Shruthi P. ◽  
Nagaraj G. Cholli

Cloud computing is an environment in which several virtual machines (VMs) run concurrently on physical machines. The cloud computing infrastructure hosts multiple cloud service segments that communicate with each other through interfaces, creating a distributed computing environment. During operation, software systems accumulate errors or garbage that can lead to system failure and other hazardous consequences; this condition is called software aging. Software aging happens because of memory fragmentation, large-scale resource consumption, and the accumulation of numerical errors. It degrades performance and may result in system failure through premature resource exhaustion. The issue cannot be detected during the software testing phase because of the dynamic nature of operation. The errors that cause software aging are of a special type: they do not disturb the software's functionality but affect its response time and environment. The issue can therefore be resolved only at run time. To alleviate the impact of software aging, software rejuvenation is used. Rejuvenation reboots the system or re-initializes the software, avoiding faults or failures. Software rejuvenation removes accumulated error conditions, frees up deadlocks, and defragments operating system resources such as memory; hence, it avoids future system failures that may happen due to software aging. As service availability is crucial, software rejuvenation should be carried out at defined schedules without disrupting the service. Software rejuvenation techniques can make software systems more trustworthy, and software designers use the concept to improve the quality and reliability of software. Software aging and rejuvenation have generated a lot of research interest in recent years. This work reviews research on the detection of software aging and identifies research gaps.
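As a rough illustration of one common rejuvenation strategy discussed in this literature, threshold-triggered restarts, the sketch below monitors an aging indicator (resident memory) and restarts the service when it crosses a limit. The threshold, polling period, and restart mechanism are assumptions for the example, and psutil is a third-party library.

```python
# Minimal sketch of threshold-triggered software rejuvenation: when a
# monitored aging indicator (here, resident memory) crosses a limit,
# restart the service. The threshold, polling period, and the restart
# placeholder are illustrative assumptions, not a production design.

import os
import time

import psutil  # third-party: pip install psutil

MEMORY_LIMIT_MB = 1500   # assumed aging threshold
CHECK_INTERVAL_S = 60    # polling period

def memory_mb(pid: int) -> float:
    """Resident set size of the monitored process, in megabytes."""
    return psutil.Process(pid).memory_info().rss / (1024 * 1024)

def rejuvenate(pid: int) -> None:
    """Placeholder restart: a real deployment would drain connections,
    restart via the init system, and verify health before resuming."""
    print(f"rejuvenating process {pid} (graceful restart)")

def monitor(pid: int) -> None:
    while True:
        if memory_mb(pid) > MEMORY_LIMIT_MB:
            rejuvenate(pid)
        time.sleep(CHECK_INTERVAL_S)

if __name__ == "__main__":
    # Demo: monitor this very process.
    monitor(os.getpid())
```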


Cloud computing technologies and service models are attractive to scientific computing users because of the ability to get on-demand access to resources and to control the software environment. Scientific computing researchers and the resource providers serving these users are considering the impact of these new models and technologies. SaaS solutions such as Globus Online and IaaS solutions such as Nimbus Infrastructure and OpenNebula accelerate discovery by helping scientists conduct advanced and large-scale science. This chapter describes how the cloud helps researchers accelerate scientific discovery by moving manual and difficult tasks into the cloud.


Author(s):  
Linda Little ◽  
Pam Briggs

Certain privacy principles have been established by industry (e.g., USCAM, 2006). Over the past two years, we have been trying to understand whether such principles reflect the concerns of the ordinary citizen. We have developed a method of enquiry which displays a rich context to the user in order to elicit more detailed information about the privacy factors that underpin our acceptance of ubiquitous computing. To investigate use and acceptance, we used Videotaped Activity Scenarios specifically related to the exchange of health, financial, shopping, and e-voting information, together with a large-scale survey. We present a detailed analysis of user concerns: firstly, in terms of a set of constructs that might reflect user-generated privacy principles; secondly, the factors likely to play a key role in an individual’s cost-benefit analysis; and thirdly, the longer-term concerns of the citizen regarding the impact of new technologies on social engagement and human values.


2017 ◽  
Vol 10 (13) ◽  
pp. 162
Author(s):  
Amey Rivankar ◽  
Anusooya G

Cloud computing is the latest trend in large-scale distributed computing. It provides diverse services on demand through distributed resources such as servers, software, and databases. One of the challenging problems in cloud data centers is managing the load of different reconfigurable virtual machines. Thus, in the near future of the cloud computing field, providing a mechanism for efficient resource management will be very significant. Many load balancing algorithms have already been implemented and executed to manage resources efficiently and adequately. The objective of this paper is to analyze the shortcomings of existing algorithms and to implement a new algorithm that gives optimized load balancing results.
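The paper's optimized algorithm is not reproduced in the abstract. For orientation, a minimal least-loaded (greedy) dispatcher, a common baseline that improved load balancers are measured against, can be sketched as follows; the names and cost model are illustrative.

```python
# Sketch of a least-loaded (greedy) dispatcher: each incoming task goes
# to the VM with the lowest current load. This is a common baseline that
# improved algorithms are measured against; names are illustrative.

import heapq

def assign_tasks(task_costs, num_vms):
    """Return a list mapping each task index to a VM index."""
    # Min-heap of (current_load, vm_index); the least-loaded VM is on top.
    heap = [(0.0, vm) for vm in range(num_vms)]
    heapq.heapify(heap)
    assignment = []
    for cost in task_costs:
        load, vm = heapq.heappop(heap)
        assignment.append(vm)
        heapq.heappush(heap, (load + cost, vm))
    return assignment

if __name__ == "__main__":
    tasks = [5.0, 3.0, 8.0, 2.0, 7.0, 4.0]
    print(assign_tasks(tasks, num_vms=3))  # [0, 1, 2, 1, 0, 1]
```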


2020 ◽  
Vol 2020 ◽  
pp. 1-16
Author(s):  
Ali Rahim Taleqani ◽  
Chrysafis Vogiatzis ◽  
Jill Hough

In this work, we investigate a new paradigm for dock-less bike sharing. Recently, it has become essential to accommodate connected and free-floating bicycles in modern bike-sharing operations. This change comes with an increase in coordination cost, as bicycles are no longer checked in and out from bike-sharing stations that are fully equipped to handle the volume of requests; instead, bicycles can be checked in and out from virtually anywhere. In this paper, we propose a new framework for combining traditional bike stations with locations that can serve as free-floating bike-sharing stations. The framework focuses on identifying highly centralized k-clubs (i.e., connected subgraphs of restricted diameter). The restricted diameter reduces coordination costs, as dock-less bicycles can only be found in specific locations. In addition, we use closeness centrality, as this metric allows for quick access to dock-less bike sharing while, at the same time, optimizing the reach of service to bikers/customers. For the proposed problem, we first derive its computational complexity and show that it is NP-hard (by reduction from the 3-SATISFIABILITY problem), and then provide an integer programming formulation. Due to its computational complexity, the problem cannot be solved exactly in a large-scale setting, such as that of an urban area. Hence, we provide a greedy heuristic approach that is shown to run in reasonable computational time. We also present and analyze a case study in two cities in the state of North Dakota: Casselton and Fargo. Our work concludes with a cost-benefit analysis of both models (docked vs. dock-less) to suggest the potential advantages of the proposed model.
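A minimal sketch of a greedy heuristic in this spirit, growing a k-club (connected subgraph of diameter at most k) around a seed node, is shown below. It is not the authors' algorithm; the scoring rule and the use of networkx are assumptions for illustration.

```python
# Greedy sketch for growing a k-club (connected subgraph of diameter <= k)
# around a seed node, preferring candidates with many ties into the club.
# This mirrors the flavour of a greedy k-club heuristic but is not the
# paper's algorithm; the scoring rule and networkx usage are assumptions.

import networkx as nx

def grow_k_club(G: nx.Graph, seed, k: int) -> set:
    club = {seed}
    improved = True
    while improved:
        improved = False
        # Candidates: nodes adjacent to the current club.
        frontier = {v for u in club for v in G.neighbors(u)} - club
        # Try the best-connected candidate first.
        for v in sorted(frontier,
                        key=lambda c: sum(1 for u in G.neighbors(c) if u in club),
                        reverse=True):
            candidate = club | {v}
            sub = G.subgraph(candidate)
            # Keep the addition only if the diameter bound still holds.
            if nx.is_connected(sub) and nx.diameter(sub) <= k:
                club.add(v)
                improved = True
                break
    return club

if __name__ == "__main__":
    G = nx.karate_club_graph()
    print(sorted(grow_k_club(G, seed=0, k=2)))
```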


2013 ◽  
Vol 662 ◽  
pp. 957-960 ◽  
Author(s):  
Jing Liu ◽  
Xing Guo Luo ◽  
Xing Ming Zhang ◽  
Fan Zhang

Cloud computing is an emerging high-performance computing environment with a large-scale, heterogeneous collection of autonomous systems and a flexible computational architecture. The performance of the scheduling system influences the cost benefit of this computing paradigm. To reduce energy consumption and improve profit, a job scheduling model based on the particle swarm optimization (PSO) algorithm is established for cloud computing. Experiments on the open-source cloud computing simulation platform CloudSim show that, compared with GA and random scheduling algorithms, the proposed algorithm obtains a better solution in terms of energy cost and profit.
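A minimal PSO sketch for mapping jobs to VMs is given below. The paper's fitness function combines energy cost and profit; here makespan stands in as a simpler objective, and all PSO parameters are conventional defaults rather than the authors' settings.

```python
# Minimal PSO sketch for mapping jobs to VMs. Makespan is used as a
# stand-in objective (the paper optimizes energy cost and profit), and
# continuous particle positions are rounded to VM indices. Parameters
# are conventional defaults, not the authors' settings.

import random

def makespan(assignment, job_lengths, vm_speeds):
    """Completion time of the busiest VM under a job->VM assignment."""
    loads = [0.0] * len(vm_speeds)
    for job, vm in enumerate(assignment):
        loads[vm] += job_lengths[job] / vm_speeds[vm]
    return max(loads)

def pso_schedule(job_lengths, vm_speeds, particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5):
    n, m = len(job_lengths), len(vm_speeds)
    decode = lambda x: [min(m - 1, max(0, int(round(v)))) for v in x]
    fitness = lambda x: makespan(decode(x), job_lengths, vm_speeds)

    # Initialise positions uniformly over the VM index range.
    X = [[random.uniform(0, m - 1) for _ in range(n)] for _ in range(particles)]
    V = [[0.0] * n for _ in range(particles)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=fitness)

    for _ in range(iters):
        for i in range(particles):
            for d in range(n):
                r1, r2 = random.random(), random.random()
                # Standard velocity update: inertia + cognitive + social.
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(m - 1, max(0.0, X[i][d] + V[i][d]))
            if fitness(X[i]) < fitness(pbest[i]):
                pbest[i] = X[i][:]
        gbest = min(pbest, key=fitness)
    return decode(gbest), fitness(gbest)

if __name__ == "__main__":
    jobs = [random.uniform(10, 100) for _ in range(30)]
    vms = [1.0, 1.5, 2.0, 2.5]  # relative processing speeds
    plan, cost = pso_schedule(jobs, vms)
    print("makespan:", round(cost, 2))
```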

