Container Orchestration
Recently Published Documents

TOTAL DOCUMENTS: 94 (five years: 76)
H-INDEX: 7 (five years: 4)

2022 ◽ Author(s): Zhiheng Zhong, Minxian Xu, Maria Alejandra Rodriguez, Chengzhong Xu, Rajkumar Buyya

Containerization is a lightweight application virtualization technology that provides high environmental consistency, portability across operating system distributions, and resource isolation. Mainstream cloud service providers have widely adopted container technologies in their distributed system infrastructures for automated application management. To automate the deployment, maintenance, autoscaling, and networking of containerized applications, container orchestration has emerged as an essential research problem. However, the highly dynamic and diverse nature of cloud workloads and environments considerably raises the complexity of orchestration mechanisms. Container orchestration systems accordingly employ machine learning algorithms for behavior modelling and prediction of multi-dimensional performance metrics. Such insights can further improve the quality of resource provisioning decisions in response to changing workloads in complex environments. In this paper, we present a comprehensive literature review of existing machine learning-based container orchestration approaches. Detailed taxonomies are proposed to classify current studies by their common features. Moreover, the evolution of machine learning-based container orchestration technologies from 2016 to 2021 is traced according to objectives and metrics. A comparative analysis of the reviewed techniques is conducted along the proposed taxonomies, with emphasis on their key characteristics. Finally, various open research challenges and potential future directions are highlighted.
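As a deliberately simplified illustration of the idea this survey covers, a predictive autoscaler can fit a trend to recent CPU-utilization samples and provision replicas against the predicted load rather than the current one. The linear-trend model and the 50% utilization target below are illustrative assumptions, not taken from any surveyed system:

```python
import math

def predict_next_cpu(history, window=5):
    """Predict the next CPU-utilization sample with a simple
    least-squares linear trend over the last `window` samples
    (a stand-in for the richer ML models surveyed in the paper)."""
    pts = history[-window:]
    n = len(pts)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(pts) / n
    denom = sum((x - mean_x) ** 2 for x in xs) or 1.0
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, pts)) / denom
    # extrapolate the fitted line one step past the window
    return mean_y + slope * (n - mean_x)

def replicas_for(predicted_cpu, current_replicas, target_cpu=50.0):
    """Proportional scaling applied to the *predicted* load."""
    return max(1, math.ceil(current_replicas * predicted_cpu / target_cpu))
```

Provisioning on the prediction rather than the last observation gives the orchestrator lead time to start containers before the load actually arrives, which is the core benefit the surveyed approaches pursue.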


2021 ◽ Vol 12 (1) ◽ pp. 140 ◽ Author(s): Seunghwan Lee, Linh-An Phan, Dae-Heon Park, Sehan Kim, Taehong Kim

With the exponential growth of the Internet of Things (IoT), edge computing is in the limelight for its ability to quickly and efficiently process the large volumes of data generated by IoT devices. EdgeX Foundry is a representative open-source IoT gateway platform, providing various IoT protocol services and interoperability between them. However, lacking container orchestration capabilities such as automated deployment and dynamic resource management for application services, EdgeX Foundry has fundamental limitations as a potential edge computing platform. In this paper, we propose EdgeX over Kubernetes, which enables remote deployment and autoscaling of application services by running EdgeX Foundry on Kubernetes, a production-grade container orchestration tool. Experimental results show that the proposed platform increases manageability through remote deployment of application services and improves system throughput and service quality through real-time monitoring and autoscaling.
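For reference, the scaling rule that Kubernetes' Horizontal Pod Autoscaler applies (and that a platform running on Kubernetes inherits) is proportional: desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), with no action while the ratio stays inside a tolerance band. A minimal sketch, where the replica bounds are illustrative defaults:

```python
import math

def hpa_desired_replicas(current_replicas, current_metric, target_metric,
                         tolerance=0.1, min_replicas=1, max_replicas=10):
    """Kubernetes-HPA-style rule: scale proportionally to the ratio of
    observed to target metric, skipping changes within the tolerance band."""
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas          # close enough to target: no change
    desired = math.ceil(current_replicas * ratio)
    return min(max(desired, min_replicas), max_replicas)
```

The tolerance band is what prevents the autoscaler from thrashing when the observed metric hovers near its target.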


Electronics ◽ 2021 ◽ Vol 10 (24) ◽ pp. 3085 ◽ Author(s): János Harmatos, Markosz Maliosz

Digitalization and networking are taking on an increasingly important role in manufacturing. Fifth-generation mobile networks (5G) allow us to wirelessly connect multiple assets in factories with guaranteed quality of service (QoS). A 5G non-public network (5G-NPN) realizes a dedicated network with secure communication within the factory. Time-sensitive networking (TSN) provides deterministic connectivity and reliability in local networks. Edge computing moves computing power near factory locations, reducing the latency of edge applications. Making production processes more flexible, robust, and resilient poses a great challenge for integrating these technologies. This paper presents the benefits of the joint use of 5G-NPN, TSN, and edge computing in manufacturing. To that end, the characteristics of the technologies are first analyzed. Then, the integration of different 5G-NPN deployment options with edge (and cloud) computing is presented to provide end-to-end services. For enhanced reliability, ways of interworking between TSN and edge computing domains are proposed. Afterward, as an example realization of edge computing, an investigation of the capabilities of the Kubernetes container orchestration platform is presented, together with a gap analysis against smart manufacturing requirements. Finally, the different integration options, interworking models, and Kubernetes-based edge computing are evaluated to assist smart factories in combining these new technologies in the future.


2021 ◽ Vol 2021 ◽ pp. 1-10 ◽ Author(s): Chunmao Jiang, Peng Wu

The container scaling mechanism, or elastic scaling, allows a cluster to be adjusted dynamically based on the workload. As a typical container orchestration tool in cloud computing, the Horizontal Pod Autoscaler (HPA) automatically adjusts the number of pods in a replication controller, deployment, replica set, or stateful set based on observed CPU utilization. There are several concerns with the current HPA technology. The first is that it can easily lead to untimely and insufficient scaling for burst traffic. The second is that the anti-jitter mechanism of HPA may cause an inadequate number of one-time scale-outs and, thus, an inability to satisfy subsequent service requests. The third is that the fixed data sampling interval means that data are reported at the same rate under average and high loads, leading to untimely and insufficient scaling at high load. In this study, we propose a Double Threshold Horizontal Pod Autoscaler (DHPA) algorithm, which divides scaling events at a fine granularity into three categories: scale-out, no scale, and scale-in. For scaling strength, we further employ two thresholds that subdivide each of the three cases into no scaling (anti-jitter), regular scaling, and fast scaling. The DHPA algorithm determines the scaling strategy from the average growth rate of CPU utilization, so that different scheduling policies are adopted. We compare DHPA with the HPA algorithm under low, medium, and high loads. The experiments show that the DHPA algorithm exhibits better anti-jitter and load-handling characteristics when increasing and reducing containers, while ensuring service and cluster security.
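The double-threshold decision structure described above can be sketched as follows. The concrete threshold values are illustrative placeholders, since the paper's actual thresholds are not given here:

```python
def dhpa_decision(growth_rate, low=0.05, high=0.20):
    """Map the average CPU-utilization growth rate to a scaling action.
    Two thresholds split each direction into an anti-jitter band,
    regular scaling, and fast scaling (threshold values are assumed)."""
    magnitude = abs(growth_rate)
    if magnitude <= low:
        return "no-scale"                # anti-jitter band: ignore small drift
    direction = "scale-out" if growth_rate > 0 else "scale-in"
    strength = "fast" if magnitude > high else "regular"
    return f"{direction}:{strength}"
```

Using the growth rate rather than the absolute utilization is what lets the policy react faster to bursts while still damping small fluctuations.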


2021 ◽ Author(s): Vaclav Struhar, Silviu S. Craciunas, Mohammad Ashjaei, Moris Behnam, Alessandro V. Papadopoulos

Sensors ◽ 2021 ◽ Vol 21 (17) ◽ pp. 5797 ◽ Author(s): Briytone Mutichiro, Minh-Ngoc Tran, Young-Han Kim

In edge computing, scheduling heterogeneous workloads with diverse resource requirements is challenging. Besides having limited resources, the servers may be overwhelmed with computational tasks, resulting in lengthy task queues and congestion caused by unusual network traffic patterns. Additionally, Internet of Things (IoT)/edge applications have different characteristics and performance requirements, which determine whether an edge application can satisfy both its deadlines and each user's QoS requirements. This study addresses these restrictions by proposing a mechanism that improves cluster resource utilization and Quality of Service (QoS), measured as service time, in an edge cloud cluster. Containerization can improve the performance of the IoT-edge cloud by factoring in task dependencies and heterogeneous application resource demands. In this paper, we propose STaSA, a service time aware scheduler for the edge environment. The algorithm automatically assigns requests to different processing nodes and then schedules their execution under real-time constraints, thus minimizing the number of QoS violations. The effectiveness of our scheduling model is demonstrated through an implementation on KubeEdge, a container orchestration platform based on Kubernetes. Experimental results show significantly fewer QoS violations during scheduling and improved performance compared to the state of the art.
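A minimal sketch of service-time-aware placement in the same spirit (not the STaSA algorithm itself): each request goes to the node with the earliest estimated completion time, and deadline misses are counted as QoS violations. Task and node models here are assumptions for illustration:

```python
def schedule_service_time_aware(tasks, node_speeds):
    """Greedy service-time-aware placement over heterogeneous nodes.
    `tasks` are (work_units, deadline) pairs; `node_speeds` gives each
    node's processing rate in work units per unit time."""
    finish = [0.0] * len(node_speeds)   # when each node becomes free
    violations = 0
    placement = []
    for work, deadline in tasks:
        # pick the node with the earliest completion time for this task
        best = min(range(len(node_speeds)),
                   key=lambda i: finish[i] + work / node_speeds[i])
        finish[best] += work / node_speeds[best]
        placement.append(best)
        if finish[best] > deadline:
            violations += 1             # QoS violation: deadline missed
    return placement, violations
```

Minimizing estimated completion time per request is a simple proxy for the service-time objective; a full scheduler would also account for task dependencies and per-request resource demands as the abstract describes.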


Author(s): Bharti Sharma, Poonam Bansal, Mohak Chugh, Adisakshya Chauhan, Prateek Anand, ...

Kubernetes is an open-source container orchestration system for automating container application operations and is used to deploy various kinds of container workloads. Traditional geo-databases face frequent scalability issues when dealing with dense and complex spatial data. Despite plenty of research comparing relational and NoSQL databases in handling geospatial data, little is known about the performance of geo-databases in a clustered environment such as Kubernetes. This paper presents a benchmark of PostgreSQL/PostGIS geospatial databases operating in a clustered environment against non-clustered environments. The benchmarking process compares the environments by the average execution times of geospatial structured query language (SQL) queries on multiple hardware configurations, focusing on computationally expensive queries involving SQL operations and PostGIS functions. The geospatial queries operate on data imported from OpenStreetMap into PostgreSQL/PostGIS. The clustered environment powered by Kubernetes demonstrated promising improvements in the average execution times of computationally expensive geospatial SQL queries on all considered hardware configurations, compared to their average execution times in non-clustered environments.
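The benchmark metric here, average query execution time, can be measured with a small harness like the one below. Wiring `run_query` to a real PostGIS query (e.g. via a psycopg2 cursor executing an `ST_`-function query) is an assumption, not part of the paper:

```python
import statistics
import time

def avg_execution_time(run_query, repetitions=5, warmup=1):
    """Average wall-clock time of a query callable.
    `run_query` would typically wrap a database round-trip, e.g. a
    psycopg2 cursor.execute() of a geospatial SQL query (hypothetical)."""
    for _ in range(warmup):
        run_query()                      # discard cold-cache runs
    samples = []
    for _ in range(repetitions):
        start = time.perf_counter()
        run_query()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples)
```

Discarding warmup runs matters for database benchmarks, since the first execution often pays planning and cache-fill costs that would skew the average.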


Entropy ◽ 2021 ◽ Vol 23 (7) ◽ pp. 914 ◽ Author(s): Adi Farshteindiker, Rami Puzis

With the advent of microservice-based software architectures, an increasing number of modern cloud environments and enterprises use operating-system-level virtualization, often referred to as container infrastructure. Docker Swarm is one of the most popular container orchestration infrastructures, providing high availability and fault tolerance. Occasionally, discovered container escape vulnerabilities allow adversaries to execute code on the host operating system and operate within the cloud infrastructure. We show that Docker Swarm is currently not secured against misbehaving manager nodes. This allows a high-impact, high-probability privilege escalation attack, which we refer to as leadership hijacking, a possibility neglected by the current cloud security literature. Cloud lateral movement and defense evasion payloads allow an adversary to leverage Docker Swarm functionality to control each and every host in the underlying cluster. We demonstrate an end-to-end attack in which an adversary with access to an application running on the cluster achieves full control of the cluster. To reduce the probability of a successful high-impact attack, container orchestration infrastructures must reduce the trust level of participating nodes and, in particular, incorporate adversary-immune leader election algorithms.


Author(s): Pranava Bhat

The architectural style of developing a software application as loosely coupled and highly cohesive services is termed microservices architecture. Microservices allow agile software development and enable businesses to build and deliver applications quickly. To realize the benefits of microservices, an underlying infrastructure that supports them must exist. This includes CI/CD pipelines, execution environments such as virtual machines and containers, logging and monitoring, communication mechanisms, and so on. Containers are lightweight, enable multiple execution environments to coexist on a single operating system instance, and provide isolation. Container orchestration engines such as Docker Swarm or Kubernetes automate deployment, scaling, fault tolerance, and container networking. Many organizations use containers to spawn resources in public or private clouds. Different engineering teams perform various kinds of tests by bundling test code and dependencies into containers. However, cleaning up these containers is necessary for efficient utilization of hardware resources. This paper discusses the need for and benefits of a centralized cleanup service for Kubernetes and cloud resources, and analyzes the value such a service can add to the software development process of large organizations.
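One natural policy for such a cleanup service is TTL-based garbage collection: list resources, compute each one's age, and select those older than a threshold for deletion. The sketch below is a hypothetical illustration; the field names and the TTL policy itself are assumptions, not taken from the paper:

```python
import time

def stale_resources(resources, ttl_seconds, now=None):
    """Select resources whose age exceeds `ttl_seconds` for deletion,
    e.g. containers left behind by finished test runs.
    Each resource is a dict with hypothetical `name`/`created_at` keys."""
    now = time.time() if now is None else now
    return [r["name"] for r in resources
            if now - r["created_at"] > ttl_seconds]
```

A centralized service would run this selection on a schedule against the cluster's resource inventory and then issue the corresponding delete calls, freeing hardware for new workloads.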

