A novel approach to cloud computing: Infrastructure as a service security

Author(s):  
Sarika Jain ◽  
Prachi Tyagi ◽  
Siddharth Kalra


2014 ◽  
Vol 496-500 ◽  
pp. 2003-2006 ◽  
Author(s):  
Yu Qing Shi ◽  
Yue Long Zhu

The concept of Cloud Computing dates from the 1960s, when computer scientist John McCarthy proposed the idea of computation delivered as a public utility. Infrastructure as a Service is one of the most important modules of Cloud Computing. In this paper, we propose a hierarchical architecture for Cloud Computing Infrastructure as a Service frameworks. Our hierarchical architecture consists of five main layers: the resource layer, service layer, middleware layer, management layer, and control layer. We study various Cloud Computing systems, then provide a Cloud Computing Infrastructure as a Service architectural framework based on the hierarchical architecture. Finally, we give a detailed description of each layer and define the dependencies between the layers and components.
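The five-layer hierarchy described above can be sketched as a simple dependency mapping. The layer names come from the abstract; the assumption that each layer depends only on the layer directly beneath it is illustrative, not taken from the paper.

```python
# Minimal sketch of the five-layer IaaS hierarchy; the strict
# bottom-up dependency ordering is an illustrative assumption.
LAYERS = ["resource", "service", "middleware", "management", "control"]

# Each layer is assumed to depend on the layer directly beneath it.
DEPENDS_ON = {LAYERS[i]: LAYERS[i - 1] for i in range(1, len(LAYERS))}

def stack(layer: str) -> list[str]:
    """Return the chain of layers a given layer transitively depends on."""
    chain = []
    while layer in DEPENDS_ON:
        layer = DEPENDS_ON[layer]
        chain.append(layer)
    return chain

print(stack("control"))  # every layer beneath the control layer, top-down
```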


2014 ◽  
Vol 10 (1) ◽  
pp. 165
Author(s):  
Wikranta Arsa ◽  
Khabib Mustofa

Abstract — A server machine is one of the main components required to support and develop web-based scientific work. The high price of servers is a major obstacle for students producing scholarly work. Server configuration that can be done anywhere and at any time is a fundamental need, and machine provisioning that is easy, fast, and flexible is also highly desirable. A system is therefore needed to handle these problems. Cloud computing with the Infrastructure-as-a-Service (IaaS) model can provide a reliable infrastructure. To determine the performance of the system, a performance analysis comparing cloud servers (instances) with conventional servers was carried out. The results of this performance analysis of private cloud computing with IaaS show that the performance of a single cloud (virtual) server and a single conventional server does not differ greatly, but a significant performance difference appears when a single server node hosts more than one virtual server; the system also achieves fuller use of server resources. Keywords — Cloud Computing, Infrastructure-as-a-Service (IaaS), Performance Analysis.
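The side-by-side comparison the study describes amounts to timing the same workload on a cloud instance and on a conventional server. A hedged sketch of such a benchmark harness follows; the workload is a stand-in, and a real test would also exercise disk and network.

```python
# Illustrative benchmark harness: average wall-clock time of a workload
# over several runs, to be executed on each server under comparison.
import time

def benchmark(workload, runs: int = 5) -> float:
    """Average wall-clock runtime of `workload` over `runs` repetitions."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)

def sample_workload():
    # Stand-in CPU-bound task (hypothetical; not the study's workload).
    sum(i * i for i in range(100_000))

print(f"average runtime: {benchmark(sample_workload):.4f} s")
```

Running the same harness on a virtual and a physical host gives directly comparable averages.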


2018 ◽  
Vol 3 (1) ◽  
pp. 19 ◽  
Author(s):  
Matheus Alvian Wikanargo ◽  
Novian Adi Prasetyo ◽  
Angelina Pramana Thenata

Abstract — Cloud computing technology is currently developing rapidly. Its adoption has spread across industries, from large enterprises to small and medium-sized enterprises (SMEs), typically through implementation in ERP systems. However, penetration of this technology among SMEs is still not as strong as among large enterprises. Cloud-based ERP, being relatively new, carries both benefits and obstacles that affect company performance, which is one reason SMEs remain reluctant to adopt it. This study analyses which framework best suits SMEs implementing a cloud-based ERP system. The frameworks analysed are Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS). The three frameworks are compared using a literature-study method, with Compatibility, Cost, Flexibility, Human Resource, Implementation, Maintenance, Security, and Usability as the benchmarks; the benefits and obstacles of each factor are measured for SMEs. The result of this study is that the SaaS framework is the most suitable for small and medium-sized enterprises. Keywords — Cloud Computing, SMEs, SaaS, IaaS, PaaS
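The comparison method above can be sketched as a simple criteria-scoring table: score each framework on the eight benchmarks and rank by total. The 1-to-5 scores below are placeholders for illustration, not the paper's actual findings.

```python
# Hypothetical scoring sketch of the SaaS/IaaS/PaaS comparison;
# the numeric scores are illustrative placeholders.
CRITERIA = ["Compatibility", "Cost", "Flexibility", "Human Resource",
            "Implementation", "Maintenance", "Security", "Usability"]

scores = {  # one illustrative 1-5 score per criterion, in CRITERIA order
    "SaaS": [4, 5, 3, 5, 5, 5, 3, 5],
    "IaaS": [3, 3, 5, 2, 2, 2, 4, 3],
    "PaaS": [4, 4, 4, 3, 3, 3, 4, 4],
}

ranking = sorted(scores, key=lambda f: sum(scores[f]), reverse=True)
print(ranking)  # frameworks ordered by total score, highest first
```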


Compiler ◽  
2015 ◽  
Vol 4 (2) ◽  
Author(s):  
Hero Wintolo ◽  
Lalu Septian Dwi Paradita

Cloud computing is a form of information technology widely used in the field of computer networks and the Internet. It consists of computer hardware, computer networking devices, and computer software, and offers three service models: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). The cloud computing service built in this work is an Infrastructure-as-a-Service data-storage service that uses an Android smartphone as the storage medium, utilising the FTP server already available on the smartphone. This supports easy storage of data across the various types of internal and external storage on the smartphones that act as storage servers. In addition to storage, the service supports streaming of .mp3 files. The system can be implemented on a local network using a wireless LAN, and user testing with the Likert method shows that the application runs and functions properly.
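A client interacting with such a phone-hosted FTP storage service could look like the following sketch, using Python's standard `ftplib`. The host address, credentials, and file names are placeholders; the paper's actual client implementation is not described.

```python
# Hedged sketch of storing a file on a smartphone's FTP server over a
# local wireless LAN. Host, credentials, and file names are placeholders.
from ftplib import FTP

def upload(host: str, user: str, password: str,
           local_path: str, remote_name: str) -> None:
    """Store a local file on the phone's FTP server."""
    with FTP(host) as ftp:  # phone's LAN address, e.g. "192.168.1.50"
        ftp.login(user, password)
        with open(local_path, "rb") as f:
            ftp.storbinary(f"STOR {remote_name}", f)

# Example call (requires a reachable FTP server on the phone):
# upload("192.168.1.50", "user", "secret", "song.mp3", "song.mp3")
```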


2021 ◽  
Author(s):  
Kyle Chard

<p>The computational landscape is littered with islands of disjoint resource providers including commercial Clouds, private Clouds, national Grids, institutional Grids, clusters, and data centers. These providers are independent and isolated due to a lack of communication and coordination; they are also often proprietary, without standardised interfaces, protocols, or execution environments. The lack of standardisation and global transparency has the effect of binding consumers to individual providers. With the increasing ubiquity of computation providers there is an opportunity to create federated architectures that span both Grid and Cloud computing providers, effectively creating a global computing infrastructure. In order to realise this vision, secure and scalable mechanisms to coordinate resource access are required. This thesis proposes a generic meta-scheduling architecture to facilitate federated resource allocation in which users can provision resources from a range of heterogeneous (service) providers. Efficient resource allocation is difficult in large scale distributed environments due to the inherent lack of centralised control. In a Grid model, local resource managers govern access to a pool of resources within a single administrative domain but have only a local view of the Grid and are unable to collaborate when allocating jobs. Meta-schedulers act at a higher level, able to submit jobs to multiple resource managers; however, they are most often deployed on a per-client basis and are therefore concerned with only their own allocations, essentially competing against one another. In a federated environment the widespread adoption of utility computing models seen in commercial Cloud providers has re-motivated the need for economically aware meta-schedulers. Economies provide a way to represent the different goals and strategies that exist in a competitive distributed environment. 
The use of economic allocation principles effectively creates an open service market that provides efficient allocation and incentives for participation. The major contributions of this thesis are the architecture and prototype implementation of the DRIVE meta-scheduler. DRIVE is a Virtual Organisation (VO) based distributed economic meta-scheduler in which members of the VO collaboratively allocate services or resources. Providers joining the VO contribute obligation services to the VO. These contributed services are in effect membership “dues” and are used in the running of the VO's operations – for example, allocation, advertising, and general management. DRIVE is independent of a particular class of provider (Service, Grid, or Cloud) and of any specific economic protocol. This independence enables allocation in federated environments composed of heterogeneous providers in vastly different scenarios. Protocol independence facilitates the use of arbitrary protocols based on specific requirements and infrastructural availability. For instance, within a single organisation where internal trust exists, users can achieve maximum allocation performance by choosing a simple economic protocol. In a global utility Grid no such trust exists; the same meta-scheduler architecture can be used with a secure protocol which ensures the allocation is carried out fairly in the absence of trust. DRIVE establishes contracts between participants as the result of allocation. A contract describes the individual requirements and obligations of each party. A unique two-stage contract negotiation protocol is used to minimise the effect of allocation latency. In addition, due to the cooperative nature of the architecture and the use of secure privacy-preserving protocols, DRIVE can be deployed in a distributed environment without requiring large-scale dedicated resources. This thesis presents several other contributions related to meta-scheduling and open service markets. 
To overcome the perceived performance limitations of economic systems, four high-utilisation strategies have been developed and evaluated. Each strategy is shown to improve occupancy, utilisation, and profit using synthetic workloads based on a production Grid trace. The gRAVI service wrapping toolkit is presented to address the difficulty of web-enabling existing applications. The gRAVI toolkit has been extended for this thesis such that it creates economically aware (DRIVE-enabled) services that can be transparently traded in a DRIVE market without requiring developer input. The final contribution of this thesis is the definition and architecture of a Social Cloud – a dynamic Cloud computing infrastructure composed of virtualised resources contributed by members of a Social network. The Social Cloud prototype is based on DRIVE and highlights the ease with which dynamic DRIVE markets can be created and used in different domains.</p>
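The economically aware allocation at the heart of the meta-scheduler can be illustrated with a minimal sketch (not the DRIVE implementation): providers in a virtual organisation submit sealed bids for a job, and the scheduler awards it to the cheapest bidder. Provider names and prices below are illustrative.

```python
# Minimal sealed-bid allocation sketch; not the DRIVE protocol itself.
def allocate(bids: dict[str, float]) -> tuple[str, float]:
    """Award the job to the provider with the lowest bid."""
    winner = min(bids, key=bids.get)
    return winner, bids[winner]

# Hypothetical per-CPU-hour prices from three federated providers.
bids = {"grid-a": 0.12, "cloud-b": 0.09, "cluster-c": 0.15}
print(allocate(bids))
```

A real deployment would wrap this decision in the contract-negotiation protocol the abstract describes, so that the winning provider's obligations are recorded before the job is dispatched.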


2022 ◽  
Vol 14 (2) ◽  
pp. 398
Author(s):  
Pieter Kempeneers ◽  
Tomas Kliment ◽  
Luca Marletta ◽  
Pierre Soille

This paper addresses the optimization of computing resources to process geospatial image data in a cloud computing infrastructure. Parallelization was tested by combining two different strategies: image tiling and multi-threading. The objective was to gain insight into the optimal use of available processing resources in order to minimize the processing time. Maximum speedup was obtained when combining tiling and multi-threading techniques. The two techniques are complementary, but a trade-off also exists. Speedup improves with tiling, as parts of the image can be processed in parallel. But reading part of the image introduces an overhead and increases the fraction of the program that can only run serially, which limits the speedup achievable via multi-threading. The optimal combination of tiling and multi-threading that maximizes speedup depends on the scale of the application (global or local processing area), the implementation of the algorithm (processing libraries), and the available computing resources (amount of memory and cores). A medium-sized virtual server obtained from a cloud service provider has rather limited computing resources. Tiling will not only improve speedup but can be necessary to reduce the memory footprint. However, a tiling scheme with many small tiles increases overhead and can introduce extra latency due to queued tiles waiting to be processed. In a high-throughput computing cluster with hundreds of physical processing cores, more tiles can be processed in parallel, and the optimal strategy will be different. A quantitative assessment of the speedup was performed in this study, based on a number of experiments in different computing environments, assessing the potential and limitations of parallel processing by tiling and multi-threading. 
Experiments were based on an implementation that relies on an application programming interface (API) abstracting any platform-specific details, such as those related to data access.
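The tiling/multi-threading trade-off described above can be modelled with Amdahl's law: per-tile read overhead grows the serial fraction of the program, which caps the speedup extra threads can deliver. The overhead figures below are illustrative, not the paper's measurements.

```python
# Amdahl's-law sketch of the trade-off; serial fractions are illustrative.
def speedup(serial_fraction: float, threads: int) -> float:
    """Speedup for a given serial fraction and thread count (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / threads)

# More tiles expose more parallelism, but per-tile I/O overhead is assumed
# to raise the serial fraction, limiting what 8 threads can achieve.
for tiles, serial in [(1, 0.05), (16, 0.10), (256, 0.25)]:
    print(f"{tiles:>4} tiles, serial={serial:.2f}: "
          f"speedup ≈ {speedup(serial, threads=8):.2f}x")
```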


2018 ◽  
Vol 7 (1.7) ◽  
pp. 156
Author(s):  
S Ravikumar ◽  
E Kannan

One of the greatest threats to service availability in cloud computing is Distributed Denial of Service (DDoS). Here a novel approach is proposed to mitigate DDoS attacks by means of an intelligent fast-flux swarm network. An intelligent swarm network is required to ensure autonomous coordination and allocation of swarm nodes to perform their relaying tasks. The Intelligent Water Drop algorithm has been adapted for distributed and parallel optimization. The fast-flux mechanism is used to maintain connectivity between swarm nodes, clients, and servers. We plan to realise this as software consisting of various client nodes and swarm nodes.

