machine image
Recently Published Documents

Total documents: 66 (last five years: 25)
H-index: 7 (last five years: 2)
2022, Vol 16 (1), pp. 0-0

The Virtual Machine Image (VMI) is the building block of cloud infrastructure: it encapsulates the applications and data deployed at the Cloud Service Provider (CSP) end. As cloud computing advances, so does concern over its security, and securing the cloud infrastructure as a whole rests on the security of the underlying VMIs. This paper highlights the risks faced by the CSP and the Cloud Service Consumer (CSC) in VMI-related operations, then proposes a formal model of the cloud infrastructure. Finally, the Ethereum blockchain is incorporated to secure, track, and manage all vital VMI operations. The immutable and decentralized nature of the blockchain not only makes the proposed scheme more reliable but also guarantees auditability by maintaining the entire VMI history on chain.
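As a hedged illustration of the kind of VMI bookkeeping the abstract describes (not the authors' implementation), the Python sketch below hashes an image file and submits the digest to a hypothetical Ethereum contract via web3.py; the contract address, ABI, and registerImage function are assumptions introduced here for illustration.

# Sketch: record a VMI's content hash on Ethereum via web3.py.
# The contract address, ABI, and registerImage() function are hypothetical.
import hashlib
from web3 import Web3

REGISTRY_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder contract address
REGISTRY_ABI = [{
    "name": "registerImage",
    "type": "function",
    "inputs": [{"name": "digest", "type": "bytes32"}],
    "outputs": [],
    "stateMutability": "nonpayable",
}]

def vmi_digest(path):
    """Hash the image file in chunks so large VMIs do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # a local node with an unlocked account is assumed
registry = w3.eth.contract(address=REGISTRY_ADDRESS, abi=REGISTRY_ABI)

digest = vmi_digest("ubuntu-base.qcow2")  # placeholder image file
tx = registry.functions.registerImage(digest).transact({"from": w3.eth.accounts[0]})
w3.eth.wait_for_transaction_receipt(tx)   # once mined, the digest is part of the immutable history

Because every registration lands in a transaction, later audits can replay the contract's event log to reconstruct the full history of image creation and modification.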


2021, Vol 17 (3), pp. 1-24
Author(s): Jiachen Zhang, Lixiao Cui, Peng Li, Xiaoguang Liu, Gang Wang

Persistent memory's (PM) byte-addressability and high capacity make it attractive for virtualized environments as well. Modern virtual machine monitors virtualize PM using either I/O virtualization or memory virtualization. However, I/O virtualization sacrifices PM's byte-addressability, while memory virtualization offers no opportunity for PM image management. In this article, we enhance QEMU's memory virtualization mechanism so that it achieves both PM byte-addressability inside virtual machines and PM image management outside them. We also design pcow, a virtual machine image format for PM that is compatible with our enhanced memory virtualization and supports storage virtualization features including thin provisioning, base images, snapshots, and striping. Address translation is performed with the help of the Extended Page Table and is therefore much faster than in image formats implemented via I/O virtualization. We further optimize pcow for PM's characteristics. We perform exhaustive performance evaluations on an x86 server equipped with Intel's Optane DC persistent memory. The evaluation demonstrates that our scheme boosts overall performance by up to 50× compared with qcow2, an image format implemented via I/O virtualization, and brings almost no overhead compared with native memory virtualization. The striping feature can also scale out the virtual PM's bandwidth.
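To make the copy-on-write idea concrete, here is a purely illustrative Python sketch of how a pcow-style format might map guest blocks onto a thin-provisioned image layered on a read-only base image; the class and field names are invented for illustration, and the actual pcow on-disk layout and EPT-based translation path are not shown.

# Illustrative sketch of copy-on-write block mapping with thin provisioning
# and a read-only base image. Names and layout are invented; this is not
# the pcow on-disk format.
BLOCK = 4096

class CowImage:
    def __init__(self, base=None):
        self.base = base or {}    # block index -> bytes in the read-only parent
        self.blocks = {}          # blocks allocated in this image (thin-provisioned)

    def read(self, idx):
        if idx in self.blocks:    # written in this layer
            return self.blocks[idx]
        if idx in self.base:      # fall through to the base image
            return self.base[idx]
        return b"\x00" * BLOCK    # unallocated block: reads as zeros

    def write(self, idx, data):
        assert len(data) == BLOCK
        self.blocks[idx] = data   # allocate on first write (copy-on-write)

    def snapshot(self):
        # Freeze the current state as a new base; further writes go to the child.
        merged = {**self.base, **self.blocks}
        return CowImage(base=merged)

In a real PM-backed format, these logical blocks would correspond to offsets in a file on the PM device, and the resolved translations would be installed in the Extended Page Table so that guest loads and stores bypass the hypervisor entirely.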


Author(s): Gangadhara Rao Kommu

TeraSort is one of Hadoop's most widely used benchmarks. Hadoop's distribution contains both the input generator and the sorting implementation: TeraGen generates the input and TeraSort performs the sorting. We compare the TeraSort algorithm across distributed platforms with different resource configurations, using Compute Time, Data Read, Data Write, and Speedup as efficiency measures. We conducted experiments with Hadoop MapReduce and Spark (Java). We empirically evaluate the performance of the TeraSort algorithm on Amazon EC2 Machine Images and demonstrate speedups of 2.4× to 3.95× compared with baseline TeraSort for typical settings of interest.
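For orientation, a minimal PySpark sketch of the sort itself (not the paper's tuned Java implementation) is shown below: TeraGen rows are 100-byte records whose first 10 bytes form the key, so the job reduces to key extraction followed by a global sortByKey. Paths and partition counts are placeholders, and records are treated as text lines for simplicity rather than the binary TeraGen format.

# Minimal PySpark sketch of a TeraSort-style job: extract the 10-byte key
# from each 100-byte record and sort globally by key.
# Input/output paths and the partition count are placeholders.
from pyspark import SparkContext

sc = SparkContext(appName="terasort-sketch")

records = sc.textFile("s3://bucket/teragen-input")       # one record per line
pairs = records.map(lambda rec: (rec[:10], rec[10:]))     # (key, payload)
ordered = pairs.sortByKey(numPartitions=64)               # range-partitioned global sort
ordered.map(lambda kv: kv[0] + kv[1]).saveAsTextFile("s3://bucket/terasort-output")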


2021
Author(s): Shishir Reddy, Ling-Hong Hung, Olga Sala-Torra, Jerald Radich, Cecilia CS Yeung, ...

We present a graphical, cloud-enabled workflow for fast, interactive analysis of nanopore sequencing data using GPUs. Users customize parameters, monitor execution, and visualize results through an accessible graphical interface. To facilitate reproducible deployment, we use Docker containers and provide an Amazon Machine Image (AMI) with all software and drivers pre-installed for GPU computing on the cloud. We observe a 34× speedup and a 109× reduction in cost for the rate-limiting basecalling step in the analysis of blood cancer cell line data. The graphical interface and greatly simplified deployment facilitate the adoption of GPUs for rapid, cost-effective analysis of long-read sequencing data.
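As a hedged illustration of how such a pre-built AMI might be launched programmatically, the boto3 snippet below starts a GPU instance from an image ID; the AMI ID, region, instance type, and key pair are placeholders, not values from the paper.

# Sketch: launch a GPU instance from a pre-built AMI with boto3.
# The AMI ID, region, instance type, and key pair below are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-west-2")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder for the published AMI
    InstanceType="g4dn.xlarge",        # any NVIDIA GPU instance type would do
    KeyName="my-keypair",              # placeholder SSH key pair
    MinCount=1,
    MaxCount=1,
)
instances[0].wait_until_running()
print("GPU instance running:", instances[0].id)

With the drivers and containers baked into the image, the basecalling pipeline can start as soon as the instance is reachable, which is what keeps deployment simple for non-specialist users.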


Telecom IT, 2021, Vol 9 (1), pp. 47-58
Author(s): S. Shterenberg, A. Moskalchuk, A. Krasov

The article demonstrates the concept of building a penetration testing laboratory using a special program. The program is a set of scripts that configure the system according to a user-defined scenario. Thanks to elements of script randomization, this solution makes it possible to deploy several educational tasks at once for a group of students using only one virtual machine image. The basic idea is that the setup and creation of the vulnerable target happens just before the learning task itself is carried out. That is, the virtual machine starts out as a basic Ubuntu Linux image without any particular set of vulnerabilities. The main feature of the proposed solution is that the scripts describe not one variant of the system configuration but several at once, with elements of randomization. In other words, with a basic Ubuntu Linux image and a set of scripts, different tasks can be created for a dozen students.
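A minimal Python sketch of the randomization idea (not the authors' scripts) is given below: each student's target is built by picking one configuration variant per slot, so a single base image yields distinct vulnerable systems. The variant names here are invented for illustration.

# Illustrative sketch of script randomization: each student's target is built
# by picking one variant per configuration slot. The variants below are
# invented examples, not the authors' actual scripts.
import random

VARIANTS = {
    "web_service": ["setup_old_apache.sh", "setup_misconfigured_nginx.sh"],
    "credentials": ["weak_password.sh", "reused_ssh_key.sh"],
    "privilege":   ["suid_binary.sh", "writable_cron_job.sh"],
}

def build_task(seed):
    """Deterministically pick one variant per slot for a given student seed."""
    rng = random.Random(seed)
    return [rng.choice(options) for options in VARIANTS.values()]

for student_id in range(3):
    print(student_id, build_task(student_id))

Seeding the generator per student keeps each assignment reproducible while still giving every student a different combination of vulnerabilities on the same base image.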

