User-Centric Cyber Disaster Recovery as a Service

2019 ◽  
Vol 1 ◽  
pp. 238-246
Author(s):  
U Karim ◽  
H C Inyiama ◽  
R Karim

In a world of interdependent economies and online transactions, a large volume of data is hosted in cyberspace on a daily basis. Cyber threats and attacks are steadily increasing. Most of the time, these threats and attacks are targeted at service providers, but service users are greatly affected because of their vulnerability level. When a disaster knocks down the infrastructure of a single service provider, it has ripple effects on thousands of innocent service users. Therefore, service users need more than ever to prepare for major crises targeted at their service providers. To cope with these trends, every service user requires an independent business continuity plan (BCP) or disaster recovery plan (DRP) and a data backup policy that falls within their cost constraints while achieving the target recovery requirements in terms of recovery time objective (RTO) and recovery point objective (RPO). The aim of this paper is to develop a model for a user-centric disaster recovery system that enables service users to independently develop the data backup policies that best suit their remote databases, and to host it as a cloud service deployable on a public cloud, to which users can subscribe on a pay-as-you-go billing model. The system developed is highly compatible with MySQL, MSSQL and Oracle databases. A combination of the Dynamic System Development Methodology (DSDM) and the Object-Oriented Analysis and Design Methodology (OOADM) was used to design the system, while Java Enterprise Edition (JEE) was used to implement it. The encryption and compression mechanisms of the system were tested with backup files ranging from 64 KB to 20 MB, and performance metrics such as (1) encryption time, (2) compression size, and (3) CPU clock cycles and battery power were compared and analysed against some well-known encryption and compression algorithms.
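The kind of benchmark described above can be sketched as follows: compressing backup payloads of increasing size and recording compressed size and processing time. The choice of algorithms (zlib, bz2, lzma) is an illustrative assumption, not the paper's actual mechanism, and the sketch omits the encryption step.

```python
# Sketch: compare compression size and time for backup payloads,
# in the spirit of the metrics named in the abstract. Algorithm
# choices are assumptions for illustration only.
import bz2
import lzma
import time
import zlib

def benchmark(payload: bytes) -> dict:
    """Return {algorithm: (compressed_size, seconds)} for one payload."""
    algorithms = {
        "zlib": zlib.compress,
        "bz2": bz2.compress,
        "lzma": lzma.compress,
    }
    results = {}
    for name, compress in algorithms.items():
        start = time.perf_counter()
        compressed = compress(payload)
        results[name] = (len(compressed), time.perf_counter() - start)
    return results

if __name__ == "__main__":
    # Backup files from 64 KB upward (the paper's range goes to 20 MB).
    for size in (64 * 1024, 1024 * 1024):
        payload = b"sample database row\n" * (size // 20)
        for name, (csize, secs) in benchmark(payload).items():
            print(f"{size:>8} B  {name:<4} -> {csize:>8} B in {secs:.4f}s")
```

A real backup pipeline would feed the compressed output into an encryption step and time that separately, which is how the paper's per-metric comparison is framed.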

2018 ◽  
Vol 7 (2.32) ◽  
pp. 100
Author(s):  
Dr K.Ravindranath ◽  
N Raghupriya ◽  
P Krishna Vamsi ◽  
D Sharath Kumar

In today's world, information is produced in huge amounts, which creates a need for data recovery assistance. Cloud service providers offer security to the client even when systems are down because of a disaster. A lot of private information is generated and stored in the cloud. The need for data recovery services is therefore growing, and well-organized, powerful data rescue strategies are required for when information is lost in a disaster. The purpose of a recovery strategy is to help the client retrieve data from an alternate server whenever the primary server loses information and is unable to serve the client. Many diverse procedures have been proposed to accomplish this. In circumstances like floods, fires, earthquakes, hardware failures or accidental deletion, information may no longer remain accessible. The objective of this work is to summarize the notable data recovery procedures used in the cloud computing area. It additionally describes cloud-based disaster recovery platforms and identifies open issues related to disaster recovery.


Author(s):  
Remigijus Gustas

This chapter presents a pragmatic-driven approach for service-oriented information system analysis and design. Its uniqueness lies in exploiting a design foundation for the graphical description of the semantic and pragmatic aspects of business processes that is based on service-oriented principles. Services are viewed as dynamic subsystems: their outputs depend not only on inputs but on the service state as well. The intentions of business process experts are represented as a set of pragmatic dependencies, which drive the overall system engineering process. It is demonstrated how pragmatic aspects are mapped to conceptual representations, which define the semantics of business design. In contrast to traditional system development methodologies, the main difference of the service-oriented approach is that it integrates the static and dynamic aspects into one type of diagram. The semantics of computation-independent models are expressed by graphical specifications of interactions between service providers and service consumers. Semantic integrity control between the static and dynamic dependencies of business processes is one of the major benefits of the service-oriented analysis and design process. It is driven by pragmatic descriptions, which are defined in terms of goals, problems and opportunities.


2022 ◽  
Author(s):  
Zhiheng Zhong ◽  
Minxian Xu ◽  
Maria Alejandra Rodriguez ◽  
Chengzhong Xu ◽  
Rajkumar Buyya

Containerization is a lightweight application virtualization technology, providing high environmental consistency, operating system distribution portability, and resource isolation. Mainstream cloud service providers have widely adopted container technologies in their distributed system infrastructures for automated application management. To handle the automated deployment, maintenance, autoscaling, and networking of containerized applications, container orchestration has emerged as an essential research problem. However, the highly dynamic and diverse nature of cloud workloads and environments considerably raises the complexity of orchestration mechanisms. Machine learning algorithms are accordingly employed by container orchestration systems for behaviour modelling and the prediction of multi-dimensional performance metrics. Such insights can further improve the quality of resource provisioning decisions in response to changing workloads in complex environments. In this paper, we present a comprehensive literature review of existing machine learning-based container orchestration approaches. Detailed taxonomies are proposed to classify current research by its common features. Moreover, the evolution of machine learning-based container orchestration technologies from 2016 to 2021 is traced based on objectives and metrics. A comparative analysis of the reviewed techniques is conducted according to the proposed taxonomies, with emphasis on their key characteristics. Finally, various open research challenges and potential future directions are highlighted.


2014 ◽  
Vol 7 (4) ◽  
pp. 39 ◽  
Author(s):  
Mohammad Ali Khoshkholghi ◽  
Azizol Abdullah ◽  
Rohaya Latip ◽  
Shamala Subramaniam ◽  
Mohamed Othman

Disaster recovery is a persistent problem in IT platforms. The problem is even more crucial in cloud computing, because Cloud Service Providers (CSPs) have to provide services to their customers even if a data center is down due to a disaster. In the past few years, researchers have shown interest in disaster recovery using cloud computing, and a considerable amount of literature has been published in this area. However, to the best of our knowledge, there is no precise survey offering a detailed analysis of cloud-based disaster recovery. To fill this gap, this paper provides an extensive survey of disaster recovery concepts and research in cloud environments. We present a taxonomy of disaster recovery mechanisms, the main challenges, and proposed solutions. We also describe cloud-based disaster recovery platforms and identify open issues related to disaster recovery.


Author(s):  
K. S. Sakunthala Prabha et al.

Disaster recovery is a persistent issue in the IT business. The issue is increasingly significant in cloud computing, since Cloud Service Providers (CSPs) are bound to provide all facilities to their clients even when the server farm is down because of a disaster. During a disaster, data may be lost. To overcome this problem, a replica is generated for each piece of input data. The main objective of this paper is to place data at optimal locations in the cloud. The proposed system consists of three modules: replica generation, optimal location selection, and the recovery process. Initially, to avoid data loss, the input data are replicated. After the replication process, the data are stored in the cloud with the help of an oppositional gravitational search algorithm (OGSA), which then retrieves only the requested data. Hence, data loss due to disaster can be avoided. The performance of the proposed methodology is analysed using different metrics and compared with various existing methods.
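The "oppositional" ingredient that distinguishes OGSA from a plain gravitational search can be sketched in isolation: for every candidate solution x in [low, high], the opposite point low + high − x is also evaluated, and the better of the two survives. The cost function and the scalar location encoding below are hypothetical illustrations, not the paper's actual model.

```python
# Sketch of an opposition-based selection step, the idea behind the
# "O" in OGSA. The cost function and location encoding are invented
# for illustration only.
import random

def opposition_step(population, cost, low, high):
    """Keep the better of each candidate and its opposite point."""
    survivors = []
    for x in population:
        opposite = low + high - x
        survivors.append(min(x, opposite, key=cost))
    return survivors

if __name__ == "__main__":
    random.seed(42)
    # Hypothetical placement cost: distance from an ideal location 0.7.
    cost = lambda x: abs(x - 0.7)
    pop = [random.uniform(0.0, 1.0) for _ in range(5)]
    better = opposition_step(pop, cost, 0.0, 1.0)
    assert all(cost(b) <= cost(x) for b, x in zip(better, pop))
```

In the full algorithm this step is combined with the gravitational attraction updates; opposition-based evaluation simply accelerates convergence by doubling the effective coverage of the search space per iteration.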


Author(s):  
Jānis Kampars ◽  
Krišjānis Pinka

For customers of cloud-computing platforms it is important to minimize the infrastructure footprint and associated costs while providing the levels of Quality of Service (QoS) and Quality of Experience (QoE) dictated by the Service Level Agreement (SLA). To assist with that, cloud service providers offer (1) horizontal resource scaling through the provisioning and destruction of virtual machines and containers, and (2) vertical scaling through changing the capacity of individual cloud nodes. Existing scaling solutions mostly concentrate on low-level metrics like CPU load and memory consumption, which do not always correlate with the level of SLA conformity. Such technical measures should be preprocessed and viewed from a higher level of abstraction, and application-level metrics should also be considered when deciding whether to scale a cloud-based solution. Existing scaling platforms are mostly proprietary technologies owned by cloud service providers themselves or by third parties and offered as Software as a Service. Enterprise applications can span the infrastructures of multiple public and private clouds, dictating that an auto-scaling solution should not be isolated inside a single cloud infrastructure. The goal of this paper is to address these challenges by presenting the architecture of the Auto-scaling and Adjustment Platform for Cloud-based Systems (ASAPCS). It is based on open-source technologies and supports the integration of various low- and high-level performance metrics, providing higher levels of abstraction for the design of scaling algorithms. ASAPCS can be used with any cloud service provider and guarantees that a move from one cloud platform to another will not require a complete redesign of the scaling algorithm. ASAPCS itself is horizontally scalable and can process large amounts of real-time data, which is particularly important for applications developed following the microservices architectural style.
ASAPCS approaches the scaling problem in a nonstandard way by considering real-time adjustments of the application logic to be part of the scalability strategy when they can result in performance improvements.
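The core argument above, that scaling decisions should combine low-level resource metrics with application-level SLA metrics, can be sketched as a simple decision rule. The metric names and thresholds are assumptions for illustration, not part of ASAPCS itself.

```python
# Sketch: a scaling decision that weighs an application-level SLA
# signal (p95 latency) alongside a low-level one (CPU load).
# Thresholds and metric names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Metrics:
    cpu_load: float        # 0.0 - 1.0, averaged across replicas
    p95_latency_ms: float  # application-level latency percentile

def scaling_decision(m: Metrics, sla_latency_ms: float = 200.0) -> int:
    """Return the change in replica count: +1 scale out, -1 in, 0 hold."""
    sla_breached = m.p95_latency_ms > sla_latency_ms
    if sla_breached or m.cpu_load > 0.8:
        return +1   # SLA at risk or CPU saturated: add capacity
    if m.cpu_load < 0.3:
        return -1   # comfortably within SLA and idle: shrink footprint
    return 0        # moderate load, SLA healthy: hold
```

Note that a purely CPU-driven autoscaler would never scale out in the second test case below, where CPU is moderate but latency already violates the SLA; that is exactly the gap the paper attributes to low-level-only scaling solutions.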


Author(s):  
Talent Mhangwa ◽  
Madhu Kasiram ◽  
Sibonsile Zibane

The number of female drug users has been on the rise in South Africa, with statistics reflecting a rise in the number of women who attend treatment centres annually. This article presents empirical data from a broader qualitative study which aimed to explore perceptions concerning the effectiveness of aftercare programmes for female recovering drug users. The main data source was transcripts of in-depth interviews and focus groups with both service users and service providers from a designated rehabilitation centre in Gauteng, South Africa. Framed within a biopsychosocial-spiritual model, this article explores the perceptions and meanings which the female recovering drug users and the service providers attach to aftercare programmes. The findings of the research outlined the range of factors promoting recovery, alongside noteworthy suggestions for improvement in aftercare services. While acknowledging multiple influences on behaviour, this article highlights the significance of these findings in planning and implementing holistic aftercare programmes.


Author(s):  
Jin Han ◽  
Jing Zhan ◽  
Xiaoqing Xia ◽  
Xue Fan

Background: Currently, the Cloud Service Provider (CSP) or a third party usually proposes the principles and methods for cloud security risk evaluation, while cloud users have no choice but to accept them. However, since cloud users and cloud service providers have conflicting interests, cloud users may not trust the results of a security evaluation performed by the CSP. Also, different cloud users may have different security risk preferences, which makes it difficult for a third party to consider all users' needs during evaluation. In addition, current security evaluation indexes for the cloud are too impractical to test (e.g., indexes like interoperability, transparency and portability are not easy to evaluate). Methods: To solve the above problems, this paper proposes a practical cloud security risk evaluation method for decision-making based on conflicting roles, using the Analytic Hierarchy Process (AHP) with Aggregation of Individual Priorities (AIP). Results: Our method not only brings forward a new index system for cloud security based on risk sources, with corresponding practical testing methods, but also obtains an evaluation result that reflects the risk preferences of the conflicting roles, namely the CSP and cloud users, which can lay a foundation for improving mutual trust between the CSP and cloud users. The experiments show that the method can effectively assess the security risk of cloud platforms; when the number of clouds increased by 100% and 200%, the evaluation time using our methodology increased by only 12% and 30%, respectively. Conclusion: Our method achieves consistent decisions based on conflicting roles, high scalability and practicability for cloud security risk evaluation.
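The two named building blocks can be sketched in a few lines: deriving a priority vector from one role's pairwise-comparison matrix (AHP, using the common geometric-mean approximation of the principal eigenvector) and then aggregating the priority vectors of the conflicting roles (AIP, via a geometric mean across roles). The example matrices are invented for illustration and are not the paper's index system.

```python
# Sketch: AHP priority derivation plus AIP aggregation across roles.
# Matrices and criteria are hypothetical illustrations.
import math

def ahp_priorities(matrix):
    """Priority vector of a reciprocal pairwise-comparison matrix,
    via the geometric-mean-of-rows approximation."""
    geo_means = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

def aggregate_priorities(vectors):
    """AIP: geometric mean of individual priority vectors, renormalized."""
    n = len(vectors[0])
    agg = [math.prod(v[i] for v in vectors) ** (1.0 / len(vectors))
           for i in range(n)]
    total = sum(agg)
    return [a / total for a in agg]

if __name__ == "__main__":
    # Hypothetical 3-criterion risk comparison: CSP view vs. user view.
    csp_view = [[1, 3, 5], [1 / 3, 1, 2], [1 / 5, 1 / 2, 1]]
    user_view = [[1, 1 / 2, 2], [2, 1, 3], [1 / 2, 1 / 3, 1]]
    combined = aggregate_priorities(
        [ahp_priorities(csp_view), ahp_priorities(user_view)])
    print([round(w, 3) for w in combined])
```

Aggregating the individual priority vectors (rather than the judgment matrices) is what lets each role keep its own, possibly conflicting, pairwise judgments while still yielding a single consistent ranking.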


2020 ◽  
Vol 63 (8) ◽  
pp. 1216-1230 ◽  
Author(s):  
Wei Guo ◽  
Sujuan Qin ◽  
Jun Lu ◽  
Fei Gao ◽  
Zhengping Jin ◽  
...  

Abstract For a high level of data availability and reliability, a common strategy for cloud service providers is to rely on replication, i.e. storing several replicas on different servers. To give cloud users a strong guarantee that all the replicas they require are actually stored, many multi-replica integrity auditing schemes have been proposed. However, most existing solutions are not resource economical, since users need to create and upload the replicas of their files themselves. A multi-replica solution called Mirror was presented to overcome these problems, but we find that it is vulnerable to a storage-saving attack, by which a dishonest provider can considerably reduce storage costs compared with honestly storing all the replicas, while still passing any challenge successfully. In addition, we find that Mirror is also subject to substitution and forgery attacks, which pose new security risks for cloud users. To address these problems, we propose some simple yet effective countermeasures and an improved proofs of retrievability and replication scheme, which resists the aforesaid attacks while maintaining the advantages of Mirror, such as economical bandwidth and efficient verification. Experimental results show that our scheme exhibits performance comparable to Mirror while achieving high security.

