The Impact of DDoS Attacks on Application Containers, System Containers, and Virtual Machines

Author(s):  
Austin White ◽  
Patrick O’Boyle ◽  
Sierra Wyllie ◽  
Micheal Galloway
2017 ◽  
Author(s):  
Roshan Lal Neupane

Cloud-hosted services are being increasingly used in online businesses such as retail, healthcare, manufacturing, and entertainment due to benefits such as scalability and reliability. These benefits are fueled by innovations in the orchestration of cloud platforms that make them fully programmable as Software Defined everything Infrastructures (SDxI). At the same time, sophisticated targeted attacks such as Distributed Denial-of-Service (DDoS) are growing on an unprecedented scale, threatening the availability of online businesses. In this thesis, we present a novel defense system called Dolus to mitigate the impact of DDoS attacks launched against high-value services hosted in SDxI-based cloud platforms. Our Dolus system is able to initiate a pretense in a scalable and collaborative manner to deter the attacker based on threat intelligence obtained from attack feature analysis in a two-stage ensemble learning scheme. Using foundations from pretense theory in child play, Dolus takes advantage of elastic capacity provisioning via quarantine virtual machines and SDxI policy coordination across multiple network domains. To maintain the attacker's false sense of success after attack identification, Dolus uses two strategies: (i) dummy traffic pressure in a quarantine to mimic the target response time profiles that were present before legitimate users were migrated away, and (ii) Scapy-based packet manipulation to generate responses with spoofed IP addresses of the original target before the attack traffic started being quarantined. From the time gained through pretense initiation, Dolus enables cloud service providers to decide on a variety of policies to mitigate the attack impact without disrupting the cloud services experience for legitimate users. We evaluate the efficacy of Dolus using a GENI Cloud testbed and demonstrate its real-time capabilities to: (a) detect DDoS attacks and redirect attack traffic to quarantine resources to engage the attacker under pretense, and (b) coordinate SDxI policies to possibly block DDoS attacks closer to the attack source(s).
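
To illustrate the kind of Scapy-based packet manipulation the abstract refers to, the sketch below forges SYN-ACK replies whose source IP is spoofed to the original target's address, so quarantined attack traffic still appears to reach a live service. This is a minimal illustration, not the Dolus implementation; the target address and quarantine interface name are assumptions.

```python
# Minimal pretense sketch (assumed names, not the Dolus code): answer SYNs
# arriving at the quarantine with SYN-ACKs spoofed from the original target.
from scapy.all import IP, TCP, send, sniff

ORIGINAL_TARGET_IP = "10.0.0.10"   # hypothetical address of the real service
QUARANTINE_IFACE = "eth1"          # hypothetical quarantine-side interface

def fake_reply(pkt):
    """Reply to an incoming SYN with a SYN-ACK spoofed from the original target."""
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].flags == "S":
        reply = (
            IP(src=ORIGINAL_TARGET_IP, dst=pkt[IP].src)
            / TCP(sport=pkt[TCP].dport, dport=pkt[TCP].sport,
                  flags="SA", seq=1000, ack=pkt[TCP].seq + 1)
        )
        send(reply, iface=QUARANTINE_IFACE, verbose=False)

# Keep up the pretense for every new connection hitting the quarantine.
sniff(iface=QUARANTINE_IFACE, filter="tcp", prn=fake_reply, store=False)
```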


2021 ◽  
Vol 11 (11) ◽  
pp. 5213
Author(s):  
Chin-Shiuh Shieh ◽  
Wan-Wei Lin ◽  
Thanh-Tuan Nguyen ◽  
Chi-Hong Chen ◽  
Mong-Fong Horng ◽  
...  

DDoS (Distributed Denial of Service) attacks have become a pressing threat to the security and integrity of computer networks and information systems, which are indispensable infrastructures of modern times. The detection of DDoS attacks is a challenging issue before any mitigation measures can be taken. ML/DL (Machine Learning/Deep Learning) has been applied to the detection of DDoS attacks with satisfactory results. However, full-scale success is still beyond reach due to an inherent problem with ML/DL-based systems: the so-called Open Set Recognition (OSR) problem. This is a problem where an ML/DL-based system fails to deal with new instances not drawn from the distribution of the training data. The problem is particularly profound in detecting DDoS attacks, since DDoS attack technology keeps evolving and traffic characteristics keep changing. This study investigates the impact of the OSR problem on the detection of DDoS attacks. In response to this problem, we propose a new DDoS detection framework featuring Bi-Directional Long Short-Term Memory (BI-LSTM), a Gaussian Mixture Model (GMM), and incremental learning. Unknown traffic captured by the GMM is subject to discrimination and labeling by traffic engineers, and is then fed back to the framework as additional training samples. Using the CIC-IDS2017 and CIC-DDoS2019 data sets for training, testing, and evaluation, experimental results show that the proposed BI-LSTM-GMM can achieve recall, precision, and accuracy of up to 94%. Experiments reveal that the proposed framework can be a promising solution to the detection of unknown DDoS attacks.
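
As a rough illustration of how a BI-LSTM classifier can be paired with a GMM-based novelty check and an incremental-learning loop, the sketch below flags low-likelihood flows as unknown so traffic engineers can label them and feed them back as new training samples. The layer sizes, feature shape, and likelihood threshold are illustrative assumptions, not the architecture reported in the paper.

```python
# Sketch only: a BiLSTM flow classifier plus a GMM novelty check; sizes,
# shapes, and the log-likelihood threshold are assumptions, not the paper's.
from sklearn.mixture import GaussianMixture
from tensorflow.keras import layers, models

TIMESTEPS, FEATURES, N_CLASSES = 10, 78, 2   # assumed CIC-style flow-feature shape

def build_bilstm():
    model = models.Sequential([
        layers.Input(shape=(TIMESTEPS, FEATURES)),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dense(32, activation="relu"),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def fit_gmm(x_train, n_components=4):
    """Density model of known traffic, fitted on flattened flow features."""
    return GaussianMixture(n_components=n_components).fit(
        x_train.reshape(len(x_train), -1))

def detect(model, gmm, x, log_likelihood_threshold=-50.0):
    """Classify flows; mark low-likelihood ones as unknown (-1) so they can be
    labeled by engineers and used for incremental retraining."""
    preds = model.predict(x, verbose=0).argmax(axis=1)
    scores = gmm.score_samples(x.reshape(len(x), -1))
    preds[scores < log_likelihood_threshold] = -1
    return preds
```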


Author(s):  
Pritam Patange

Abstract: Cloud computing has experienced significant growth in recent years owing to the various advantages it provides, such as 24/7 availability, quick provisioning of resources, and easy scalability, to name a few. Virtualization is the backbone of cloud computing. Virtual Machines (VMs) are created and executed by software called a Virtual Machine Monitor (VMM) or hypervisor, which separates compute environments from the actual physical infrastructure. A disk image file representing a single virtual machine is created on the hypervisor’s file system. In this paper, we analysed the runtime performance of multiple disk image file formats. The analysis comprises four performance parameters, namely bandwidth, latency, input/output operations per second (IOPS), and power consumption. The impact of the hypervisor’s block and file sizes is also analysed for the different file formats. The paper aims to act as a reference for the reader in choosing the most appropriate disk image file format for their use case, based on performance comparisons made between different disk image file formats on two different hypervisors, KVM and VirtualBox. Keywords: Virtualization, Virtual disk formats, Cloud computing, fio, KVM, virt-manager, powerstat, VirtualBox.
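
A minimal sketch of how such measurements could be gathered with fio (one of the tools listed in the keywords): run a job with JSON output and extract bandwidth, IOPS, and mean latency. The job parameters and target devices are assumptions rather than the paper's actual test configuration.

```python
# Sketch of collecting bandwidth / IOPS / latency with fio's JSON output;
# the targets, block size, and runtime below are illustrative assumptions.
import json
import subprocess

def run_fio(target, block_size="4k", runtime_s=30):
    """Run a random-read fio job against `target` and return key metrics."""
    cmd = [
        "fio", "--name=randread", f"--filename={target}",
        "--rw=randread", f"--bs={block_size}", "--size=1G",
        "--direct=1", f"--runtime={runtime_s}", "--time_based",
        "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    read = json.loads(out)["jobs"][0]["read"]
    return {"bandwidth_KiB_s": read["bw"],
            "iops": read["iops"],
            "mean_latency_ns": read["lat_ns"]["mean"]}

# e.g. compare guest disks backed by a raw image and a qcow2 image
for disk in ("/dev/vdb", "/dev/vdc"):       # hypothetical raw vs qcow2 backing
    print(disk, run_fio(disk))
```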


Author(s):  
Shruthi P. ◽  
Nagaraj G. Cholli

Cloud computing is an environment in which several virtual machines (VMs) run concurrently on physical machines. The cloud computing infrastructure hosts multiple cloud service segments that communicate with each other using interfaces, which creates a distributed computing environment. During operation, software systems accumulate errors or garbage, leading to system failure and other hazardous consequences. This condition is called software aging. Software aging happens because of memory fragmentation, large-scale resource consumption, and the accumulation of numerical errors. It degrades performance and may result in system failure owing to premature resource exhaustion. The issue cannot be detected during the software testing phase because of the dynamic nature of operation. The errors that cause software aging are of a special type: they do not disturb the software's functionality but affect its response time and environment. The issue can therefore be resolved only at run time. To alleviate the impact of software aging, the software rejuvenation technique is used. The rejuvenation process reboots the system or restarts the software, thereby avoiding faults and failures. Software rejuvenation removes accumulated error conditions, frees up deadlocks, and defragments operating system resources such as memory. Hence, it avoids future system failures that may happen due to software aging. As service availability is crucial, software rejuvenation should be carried out at defined schedules without disrupting the service. The presence of software rejuvenation techniques can make software systems more trustworthy, and software designers are using this concept to improve the quality and reliability of software. Software aging and rejuvenation have generated a lot of research interest in recent years. This work reviews some of the research related to the detection of software aging and identifies research gaps.
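
As a toy example of threshold-triggered rejuvenation (not drawn from any of the surveyed works), the sketch below watches a service's resident memory and restarts it when aging-related growth crosses a limit. The service name, memory limit, and check interval are assumptions.

```python
# Toy rejuvenation trigger: restart a service once its resident memory,
# taken as a simple aging indicator, exceeds a limit. All names/limits are
# assumptions for illustration only.
import subprocess
import time

import psutil

SERVICE = "myservice"            # hypothetical systemd unit / process name
RSS_LIMIT_MB = 1024              # rejuvenate once resident memory exceeds this
CHECK_INTERVAL_S = 60

def rss_of(name):
    """Total resident memory (MB) of all processes matching `name`."""
    total = 0
    for proc in psutil.process_iter(["name", "memory_info"]):
        if proc.info["name"] == name:
            total += proc.info["memory_info"].rss
    return total / (1024 * 1024)

while True:
    if rss_of(SERVICE) > RSS_LIMIT_MB:
        # Rejuvenation: restarting releases fragmented memory, leaked
        # resources, and accumulated error state before a failure occurs.
        subprocess.run(["systemctl", "restart", SERVICE], check=True)
    time.sleep(CHECK_INTERVAL_S)
```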


TEM Journal ◽  
2020 ◽  
pp. 899-906

One of the most notorious security issues in the IoT is the Distributed Denial of Service (DDoS) attack. Using a large number of agents, a DDoS attack floods the host server with a huge number of requests, interrupting and blocking legitimate user requests. This paper proposes a detection and prevention algorithm for DDoS attacks. It is divided into two parts: one detects the DDoS attack at the IoT end devices, and the other, placed on the border router, mitigates the impact of the attack. The algorithm can also accurately differentiate high-rate from low-rate DDoS attacks and defend against both types. It is implemented and tested against different scenarios to assess its efficiency in detecting and mitigating DDoS attacks.
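
A simple way to picture the high-rate versus low-rate distinction is a per-source sliding-window rate check, as sketched below. This is only an illustration of the idea, not the paper's algorithm; the window length and both thresholds are assumptions.

```python
# Sketch: classify each source's behaviour from its request rate over a
# sliding window. Window length and thresholds are illustrative assumptions.
import time
from collections import defaultdict, deque

WINDOW_S = 10
HIGH_RATE_RPS = 100      # sustained rate typical of a high-rate flood
LOW_RATE_RPS = 10        # slow but persistent rate typical of a low-rate flood

arrivals = defaultdict(deque)   # source IP -> request timestamps in the window

def classify(src_ip, now=None):
    """Record one request from src_ip and classify its current behaviour."""
    now = now or time.time()
    q = arrivals[src_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_S:     # drop timestamps outside the window
        q.popleft()
    rate = len(q) / WINDOW_S
    if rate >= HIGH_RATE_RPS:
        return "high-rate attack"
    if rate >= LOW_RATE_RPS:
        return "low-rate attack"
    return "benign"
```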


The Internet of Things (IoT) and the Internet of Mobile Things (IoMT) have acquired widespread popularity owing to their ease of deployment and support for innovative applications. The sensed and aggregated data from IoT and IoMT are transferred to the cloud through the Internet for analysis, interpretation, and decision making. In order to generate a timely response and send the decisions back to end users or administrators, it is important to select appropriate cloud data centers that can process requests and produce responses in a shorter time. Besides the several factors that determine the performance of integrated 6LoWPAN and cloud data centers, we analyze the available bandwidth between various user bases (IoT and IoMT networks) and the cloud data centers. Among the various services offered in the cloud, problems such as congestion, delay, and poor response time arise as the number of user requests increases. Load balancing/sharing algorithms are popular techniques for improving the performance of the cloud system. Load refers to the number of user requests (data) from different types of networks, such as IoT and IoMT, which are IPv6 compliant. In this paper, we investigate the impact of homogeneous and heterogeneous bandwidth between different regions in load balancing algorithms for mapping user requests (data) to various virtual machines in the cloud. We investigate the influence of bandwidth across different regions in determining the response time for the corresponding data collected from data harvesting networks. We simulated the cloud environment with various bandwidth values between user bases and data centers and present the average response time for individual user bases. We used CloudAnalyst, an open-source tool, to simulate the proposed work. The obtained results can be used as a reference to map the mass data generated by various networks to appropriate data centers to produce the response in an optimal time.
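
To make the bandwidth-aware mapping idea concrete, the back-of-the-envelope sketch below estimates each data center's response time from region-to-region bandwidth, base latency, and pending load, and routes a request to the lowest estimate. It is not CloudAnalyst's internal model; all the numbers and the simple additive cost function are assumptions.

```python
# Back-of-the-envelope routing sketch (not the CloudAnalyst model): pick the
# data center with the lowest estimated response time. All values assumed.
REQUEST_SIZE_KB = 100

data_centers = {
    "DC-1": {"bandwidth_mbps": 1000, "latency_ms": 20, "pending_requests": 40},
    "DC-2": {"bandwidth_mbps": 200,  "latency_ms": 5,  "pending_requests": 10},
    "DC-3": {"bandwidth_mbps": 500,  "latency_ms": 60, "pending_requests": 5},
}

def estimated_response_ms(dc, service_time_ms=2.0):
    # kilobits divided by Mbit/s yields milliseconds
    transfer_ms = (REQUEST_SIZE_KB * 8) / dc["bandwidth_mbps"]
    queueing_ms = dc["pending_requests"] * service_time_ms
    return dc["latency_ms"] + transfer_ms + queueing_ms

best = min(data_centers, key=lambda name: estimated_response_ms(data_centers[name]))
print("route request to", best)
```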


Author(s):  
Ouidad Achahbar ◽  
Mohamed Riduan Abid

The ongoing pervasiveness of Internet access is intensively increasing Big Data production. This, in turn, increases the demand for compute power to process this massive data, rendering High Performance Computing (HPC) a highly solicited service. Based on the paradigm of providing computing as a utility, the Cloud offers user-friendly infrastructures for processing Big Data, e.g., High Performance Computing as a Service (HPCaaS). Still, HPCaaS performance is tightly coupled with the underlying virtualization technique, since the latter is responsible for the creation of the virtual machines that carry out data processing jobs. In this paper, the authors evaluate the impact of virtualization on HPCaaS. They track HPC performance under different Cloud virtualization platforms, namely KVM and VMware-ESXi, and compare it against physical clusters. Each tested cluster exhibited different performance trends. Yet, the overall analysis of the findings proved that the selection of virtualization technology can lead to significant improvements when handling HPCaaS.


2020 ◽  
Vol 26 (1) ◽  
Author(s):  
Oluwatosin O.P. Ogunbodede ◽  
Yinka S.O Adelanwa ◽  
Oyinkansola Adewumi

Cloud computing solutions are now being adopted by organizations for their economic and technical benefits to deliver an array of services over the Internet. Service provisioning has increased, and so have the subscribers that access these services, as well as the complexity of cloud infrastructures. However, cloud benefits to providers and clients may erode if there are no adequate cloud management/monitoring frameworks built on the peculiar characteristics of this mission-critical environment. A plethora of literature has been written on cloud architecture, implementation, services, virtual machines, energy efficiency, security, privacy, data loss, etc., but not much on how the cloud has changed our perception of network management with respect to the five functional areas of the International Organization for Standardization (ISO): Fault, Configuration, Accounting, Performance and Security (FCAPS). Noting that not much work has been done in this direction, this paper presents a detailed analysis of the cloud and cloud network management with emphasis on the ISO functional areas. It uses key cloud-centric issues of concern to discuss and critically analyse the impact of cloud computing on network management and proffers significant ways forward within the capabilities of existing technology. Keywords: Cloud-based virtual network (CVN), Cloud computing, Data Centre, Fault, Configuration, Accounting, Performance and Security (FCAPS), Virtual Machine (VM)


Author(s):  
Zinchenko Olha ◽  

In conditions of high business competition, IT organizations need to respond quickly to the needs of their users, who require resources to support business applications. This has driven the rapid spread of the cloud computing model, in which resources can be deployed independently and on demand. Cloud computing provides tools to automate the deployment of resources, so IT organizations do not have to spend as much time performing this process manually. When deploying new applications, moving virtual servers, or commissioning new instances for dynamic applications, the network must respond quickly and provide the required type of connection. There has been a significant breakthrough in software-defined networking and network function virtualization (SDN/NFV) over the past few years, and organizations need SDN/NFV to increase network adaptability by automating the network on cloud computing platforms. However, the new challenges posed by the combination of cloud computing and SDN/NFV, especially in the area of enterprise network security, are still poorly understood. This article addresses this important problem. It examines the impact on protection mechanisms against network attacks in a corporate network that uses both technologies, and simulates DDoS attacks on cloud computing systems. It is shown that SDN/NFV technology can indeed help protect against DDoS attacks if the security architecture is designed correctly.
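
A conceptual sketch of the SDN-based mitigation pattern described above: a monitor flags sources whose packet rate exceeds a threshold and asks the controller to install a temporary drop rule at the ingress switch. The SdnController class and its install_drop_rule method are hypothetical placeholders rather than the API of any particular controller (Ryu, ONOS, OpenDaylight, etc.).

```python
# Conceptual sketch of flow-rule-based DDoS mitigation in an SDN; the
# controller interface below is a hypothetical placeholder, not a real API.
THRESHOLD_PPS = 5000          # assumed per-source packet-rate limit
RULE_TIMEOUT_S = 300          # drop rule expires automatically

class SdnController:
    """Placeholder for whatever northbound API the deployment exposes."""
    def install_drop_rule(self, switch, src_ip, timeout):
        print(f"[{switch}] drop traffic from {src_ip} for {timeout}s")

def mitigate(controller, flow_stats, edge_switch="edge-sw1"):
    """flow_stats: mapping of source IP -> observed packets per second."""
    for src_ip, pps in flow_stats.items():
        if pps > THRESHOLD_PPS:
            controller.install_drop_rule(edge_switch, src_ip, RULE_TIMEOUT_S)

# Example: stats as they might be reported by the switch's flow counters.
mitigate(SdnController(), {"203.0.113.7": 12000, "198.51.100.4": 300})
```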

