A Secure Architecture for Data Storage in Cloud Environments

Author(s):  
Chuan Fu ◽  
Jun Yang ◽  
Zheli Liu ◽  
Chunfu Jia
2018 ◽  
Vol 50 (6) ◽  
pp. 1-51

Author(s):  
Yaser Mansouri ◽  
Adel Nadjaran Toosi ◽  
Rajkumar Buyya

2018 ◽  
Vol 11 (2) ◽  
pp. 30-42
Author(s):  
Vinicius Da Silveira Segalin ◽  
Carina Friedrich Dorneles ◽  
Mario Antonio Ribeiro Dantas

Cloud computing is a paradigm that presents many advantages to both customers and service providers, such as low upfront investment, pay-per-use pricing, and ease of use, delivering scalable services over Internet technologies. Among the many types of services available today, Database as a Service (DBaaS) is the one in which a database is provided in the cloud in all its aspects. Aspects related to DBaaS utilization include data storage, resource management, and SLA maintenance. In this context, an important concern is resource management and performance, which can be addressed in many different ways and for several reasons, such as saving money and time and meeting the requirements agreed between client and provider, as defined in the Service Level Agreement (SLA). An SLA usually aims to protect the customer from not receiving the contracted service and to ensure that the provider reaches the intended profit. This paper presents a classification based on three main parameters that aims to manage resources, enhance performance in DBaaS, and guarantee that the SLA is respected, for the benefit of both user and provider. The proposal is based upon a survey of existing research efforts.
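As a loose illustration of the kind of SLA parameters such classifications revolve around, the sketch below checks a measured response-time percentile and availability against contracted thresholds; all names, fields, and numbers are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: checking two DBaaS SLA parameters (response time,
# availability). Names and thresholds are illustrative only.
from dataclasses import dataclass
from statistics import quantiles

@dataclass
class SLA:
    max_p95_latency_ms: float   # contracted 95th-percentile response time
    min_availability: float     # contracted fraction of successful requests

def sla_violated(latencies_ms: list[float], successes: int, total: int, sla: SLA) -> bool:
    """Return True if either contracted parameter is breached."""
    p95 = quantiles(latencies_ms, n=20)[18]          # 95th percentile cut point
    availability = successes / total if total else 1.0
    return p95 > sla.max_p95_latency_ms or availability < sla.min_availability

# Example: a provider-side check that could trigger resource scaling.
sla = SLA(max_p95_latency_ms=200.0, min_availability=0.999)
if sla_violated([120.0, 180.0, 250.0, 90.0] * 10, successes=998, total=1000, sla=sla):
    print("SLA at risk: allocate more resources or renegotiate")
```

A monitor failing such a check is the typical trigger for the resource-management actions the survey classifies.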


2021 ◽  
Vol 11 (13) ◽  
pp. 6200
Author(s):  
Jin-young Choi ◽  
Minkyoung Cho ◽  
Jik-Soo Kim

Recently, “Big Data” platform technologies have become crucial for the distributed processing of diverse unstructured or semi-structured data as the amount of generated data increases rapidly. To manage such Big Data effectively, Cloud Computing has been playing an important role by providing scalable data storage and computing resources for competitive and economical Big Data processing. Accordingly, server virtualization technologies, the cornerstone of Cloud Computing, have attracted considerable research interest. However, conventional hypervisor-based virtualization can cause performance degradation due to its heavily loaded guest operating systems and rigid resource allocations. On the other hand, container-based virtualization can provide the same level of service faster and with a lighter footprint by eliminating the guest OS layer. In addition, container-based virtualization enables efficient cloud resource management by dynamically adjusting the allocated computing resources (e.g., CPU and memory) at runtime through “Vertical Elasticity”. In this paper, we present our practice and experience of employing an adaptive resource utilization scheme for Big Data workloads in container-based cloud environments by leveraging the vertical elasticity of Docker, a representative container-based virtualization technique. We perform extensive experiments running several Big Data workloads on representative Big Data platforms: Apache Hadoop and Spark. During workload execution, our adaptive resource utilization scheme periodically monitors the resource usage patterns of running containers and dynamically adjusts the allocated computing resources, which can result in substantial improvements in overall system throughput.
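As a minimal sketch of the vertical-elasticity mechanism described above, the loop below uses the Docker SDK for Python to poll a container's CPU usage and adjust its CPU quota at runtime; the container name, thresholds, and scaling rule are illustrative assumptions, not the paper's actual adaptive scheme.

```python
# Minimal vertical-elasticity sketch with the Docker SDK for Python
# (docker-py). Container name and thresholds are hypothetical.
import time
import docker

client = docker.from_env()
container = client.containers.get("spark-worker")    # hypothetical container name

def cpu_percent(stats: dict) -> float:
    """Approximate CPU utilization from the two samples in one stats frame."""
    cpu = stats["cpu_stats"]["cpu_usage"]["total_usage"] - \
          stats["precpu_stats"]["cpu_usage"]["total_usage"]
    system = stats["cpu_stats"]["system_cpu_usage"] - \
             stats["precpu_stats"]["system_cpu_usage"]
    online = stats["cpu_stats"].get("online_cpus", 1)
    return (cpu / system) * online * 100.0 if system > 0 else 0.0

while True:
    stats = container.stats(stream=False)            # one-shot resource snapshot
    usage = cpu_percent(stats)
    if usage > 90.0:
        # Scale up: raise the CPU quota at runtime, no container restart needed.
        container.update(cpu_quota=200_000, cpu_period=100_000)   # ~2 CPUs
    elif usage < 20.0:
        # Scale down to free capacity for co-located workloads.
        container.update(cpu_quota=50_000, cpu_period=100_000)    # ~0.5 CPU
    time.sleep(5)
```

The key point is that `container.update()` changes the cgroup limits of a running container, which is what makes vertical elasticity cheaper than restarting or migrating a VM.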


Author(s):  
R. Rajan ◽  
Dr. C. Sunitha Ram

Cloud computing is a technology with the capability to profoundly change how computing and storage resources will be accessed in the near future. User identification serves to detect which user is using a system or website. In information technology, the protection of information has consistently been a major issue to handle, and it becomes particularly serious because data may be placed in various locations around the world. The two main concerns regarding cloud technology are information protection and security: cloud operators can easily reach sensitive information, which undermines data security and protection measures. Therefore, this research mainly focuses on secure data storage, which has always been a significant aspect of quality of service. To guarantee the correctness of users' information in a cloud storage system, a Protection Aware User Identity and Data Storage (PAUIDS) algorithm is proposed that separates each document and independently stores the user's information on the cloud storage servers. The proposed algorithm reduces encryption and decryption time in a cloud storage system while providing secure and efficient data storage in cloud environments.
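The abstract gives no protocol details, but the core idea of separating a document and storing its parts independently can be sketched as follows; the splitting rule, the per-chunk keys, and the use of Fernet encryption are assumptions made for illustration, not the PAUIDS protocol itself.

```python
# Loose sketch of the idea in the abstract: split a document into parts
# and encrypt/store each part independently, so no single storage server
# holds the whole plaintext. All details here are illustrative assumptions.
from cryptography.fernet import Fernet

def split_document(data: bytes, parts: int) -> list[bytes]:
    """Split data into roughly equal-sized chunks."""
    size = -(-len(data) // parts)                    # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

document = b"sensitive user record ..." * 100
chunks = split_document(document, parts=3)

# One key per chunk: compromising one storage server exposes at most one
# fragment, and smaller ciphertexts keep per-server crypto time low.
keys = [Fernet.generate_key() for _ in chunks]
ciphertexts = [Fernet(k).encrypt(c) for k, c in zip(keys, chunks)]
# Each ciphertext would now be dispatched to a different cloud server.

recovered = b"".join(Fernet(k).decrypt(ct) for k, ct in zip(keys, ciphertexts))
assert recovered == document
```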


Author(s):  
Renata Jordao ◽  
Valerio Aymore Martins ◽  
Fabio Buiati ◽  
Rafael Timoteo de Sousa ◽  
Flavio Elias de Deus

Author(s):  
Richard S. Chemock

One of the most common tasks in a typical analysis lab is the recording of images. Many analytical techniques (TEM, SEM, and metallography, for example) produce images as their primary output. Until recently, the most common method of recording images was by using film. Current PS/2® systems offer very large capacity data storage devices and high resolution displays, making it practical to work with analytical images on PS/2s, thereby sidestepping the traditional film and darkroom steps. This change in operational mode offers many benefits: cost savings, throughput, archiving and searching capabilities, as well as direct incorporation of the image data into reports.

The conventional way to record images involves film, either sheet film (with its associated wet chemistry) for TEM or Polaroid® film for SEM and light microscopy. Although film is inconvenient, it does have the highest quality of all available image recording techniques. The fine-grained film used for TEM has a resolution that would exceed a 4096×4096×16-bit digital image.
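A quick back-of-the-envelope calculation makes the storage-capacity point concrete: assuming a single-channel, uncompressed image at the 4096×4096×16-bit resolution cited above, each frame occupies 32 MiB.

```python
# Raw storage cost of one uncompressed, single-channel digital image
# at the film-comparison resolution cited above (4096 x 4096, 16-bit).
width, height, bits_per_pixel = 4096, 4096, 16
bytes_per_image = width * height * bits_per_pixel // 8
print(f"{bytes_per_image / 2**20:.0f} MiB per image")   # -> 32 MiB
```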


Author(s):  
T. A. Dodson ◽  
E. Völkl ◽  
L. F. Allard ◽  
T. A. Nolan

The process of moving to a fully digital microscopy laboratory requires changes in instrumentation, computing hardware, computing software, data storage systems, and data networks, as well as in the operating procedures of each facility. Moving from analog to digital systems in the microscopy laboratory is similar to the instrumentation projects being undertaken in many scientific labs. A central problem of any of these projects is to create the best combination of hardware and software to effectively control the parameters of data collection and then to actually acquire data from the instrument. This problem is particularly acute for the microscopist who wishes to "digitize" the operation of a transmission or scanning electron microscope. Although the basic physics of each type of instrument and the type of data (images & spectra) generated by each are very similar, each manufacturer approaches automation differently. The communications interfaces vary, as does the command language used to control the instrument.
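A common way to cope with the per-vendor differences in interfaces and command languages described above is a thin abstraction layer that hides each manufacturer's protocol behind one API. The sketch below illustrates the pattern; every class, method, and command string is hypothetical.

```python
# Hypothetical sketch of a vendor-neutral control layer for digital
# microscopy: each manufacturer's command language hides behind one
# common API. All names and command strings are illustrative only.
from abc import ABC, abstractmethod

class Microscope(ABC):
    """Common operations every instrument driver must provide."""

    @abstractmethod
    def set_magnification(self, factor: int) -> None: ...

    @abstractmethod
    def acquire_image(self) -> bytes: ...

class VendorATEM(Microscope):
    """Driver translating the common API into vendor A's command language."""

    def set_magnification(self, factor: int) -> None:
        self._send(f"MAG {factor}")                 # vendor-specific syntax

    def acquire_image(self) -> bytes:
        return self._send("ACQ")                    # returns raw frame data

    def _send(self, command: str) -> bytes:
        # Placeholder for the serial/network link to the instrument.
        raise NotImplementedError

def collect(scope: Microscope, magnifications: list[int]) -> list[bytes]:
    """Acquisition procedure written once, independent of the vendor."""
    images = []
    for m in magnifications:
        scope.set_magnification(m)
        images.append(scope.acquire_image())
    return images
```

Writing the acquisition procedure against the abstract interface means only the driver, not the experiment logic, changes when the lab adds an instrument from a different manufacturer.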

