Empirical Performance and Energy Consumption Evaluation of Container Solutions on Resource Constrained IoT Gateways

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1378
Author(s):  
Syed M. Raza ◽  
Jaeyeop Jeong ◽  
Moonseong Kim ◽  
Byungseok Kang ◽  
Hyunseung Choo

Containers virtually package a piece of software and share the host Operating System (OS) upon deployment. This makes them notably lightweight and suitable for dynamic service deployment at the network edge and on Internet of Things (IoT) devices for reduced latency and energy consumption. Data collection, computation, and now intelligence are embedded in a variety of IoT devices that operate under very tight latency and energy constraints. Recent studies satisfy the latency constraint by deploying containerized services on IoT devices and gateways, but they fail to account for the limited energy and computing resources of these devices, which restrict scalability and concurrent service deployment. This paper aims to establish guidelines and identify critical factors for containerized service deployment on resource constrained IoT devices. For this purpose, two container orchestration tools (i.e., Docker Swarm and Kubernetes) are tested and compared on a baseline IoT gateway testbed. Experiments use Deep Learning driven data analytics and Intrusion Detection System services, and evaluate the time it takes to prepare and deploy a container (creation time), Central Processing Unit (CPU) utilization during concurrent container deployment, memory usage under different traffic loads, and energy consumption. The results indicate that container creation time and memory usage are decisive factors for a containerized microservice architecture.
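The creation-time metric above can be reproduced with standard tooling. The following minimal Python sketch (assuming a local Docker daemon and the docker CLI on the PATH; the image name is a placeholder, not one of the paper's services) times container creation and shows a one-shot CPU/memory sample via docker stats:

    import subprocess
    import time

    IMAGE = "alpine:latest"  # placeholder image; the paper's services are not public

    def container_creation_time(image: str) -> float:
        """Time from 'docker run' invocation until Docker reports the container running."""
        start = time.monotonic()
        cid = subprocess.check_output(
            ["docker", "run", "-d", image, "sleep", "60"], text=True).strip()
        # Poll until the container state switches to running.
        while subprocess.check_output(
                ["docker", "inspect", "-f", "{{.State.Running}}", cid],
                text=True).strip() != "true":
            time.sleep(0.01)
        elapsed = time.monotonic() - start
        subprocess.run(["docker", "rm", "-f", cid], capture_output=True)
        return elapsed

    def sample_stats(cid: str) -> str:
        """One-shot CPU%% / memory sample for a running container."""
        return subprocess.check_output(
            ["docker", "stats", "--no-stream", "--format",
             "{{.CPUPerc}} {{.MemUsage}}", cid], text=True).strip()

    if __name__ == "__main__":
        print(f"creation time: {container_creation_time(IMAGE):.3f} s")

Sampling the same statistics for several containers started concurrently reproduces the paper's CPU-utilization experiment in outline.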

Sensors ◽  
2020 ◽  
Vol 20 (2) ◽  
pp. 534 ◽  
Author(s):  
Yuan He ◽  
Shunyi Zheng ◽  
Fengbo Zhu ◽  
Xia Huang

The truncated signed distance field (TSDF) has been applied as a fast, accurate, and flexible geometric fusion method in the 3D reconstruction of industrial products based on a hand-held laser line scanner. However, this method has problems with the surface reconstruction of thin products: the surface mesh can collapse into the interior of the model, producing topological errors such as overlaps, intersections, or gaps. Meanwhile, the existing TSDF method ensures real-time performance through significant graphics processing unit (GPU) memory usage, which limits the scale of the reconstruction scene. In this work, we propose three improvements to existing TSDF methods: (i) a real-time thin-surface attribution judgment that resolves interference between the opposite sides of a thin surface; measurements originating from different sides are distinguished by the angle between the surface normal and the observation line of sight; (ii) a post-processing method that automatically detects and repairs topological errors in areas where thin-surface attribution may have been misjudged; (iii) a framework that integrates central processing unit (CPU) and GPU resources to implement our 3D reconstruction approach, which ensures real-time performance and reduces GPU memory usage. The results show that this method provides more accurate 3D reconstruction of thin surfaces, comparable to state-of-the-art laser line scanners with 0.02 mm accuracy. In terms of performance, the algorithm sustains a frame rate of more than 60 frames per second (FPS) with a GPU memory footprint under 500 MB. In summary, the proposed method achieves real-time, high-precision 3D reconstruction of thin surfaces.
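The attribution test in improvement (i) reduces to checking the angle between the estimated surface normal and the line of sight to the sensor. A minimal sketch of that check (numpy; the rejection threshold is an assumed placeholder, not the paper's value):

    import numpy as np

    def attribute_side(normal, point, sensor_origin, max_angle_deg=75.0):
        """Attribute a measurement to the front or back side of a thin surface.

        The side whose normal faces the sensor wins; grazing observations
        beyond max_angle_deg are rejected as unreliable.
        """
        view = sensor_origin - point              # line of sight, surface -> sensor
        view = view / np.linalg.norm(view)
        n = normal / np.linalg.norm(normal)
        cos_a = float(np.dot(n, view))
        angle = np.degrees(np.arccos(np.clip(abs(cos_a), -1.0, 1.0)))
        if angle > max_angle_deg:
            return None                           # grazing view: do not attribute
        return "front" if cos_a > 0.0 else "back"

Measurements attributed to opposite sides then update separate TSDF values, which is what prevents the two sides of a thin wall from cancelling each other.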


2013 ◽  
Vol 3 (4) ◽  
pp. 81-91 ◽  
Author(s):  
Sanjay P. Ahuja ◽  
Thomas F. Furman ◽  
Kerwin E. Roslie ◽  
Jared T. Wheeler

Amazon's Elastic Compute Cloud (EC2) service is one of the leading public cloud offerings and provides many different levels of service. This paper evaluates the memory, central processing unit (CPU), and input/output (I/O) performance of two different tiers of hardware offered through Amazon's EC2. Using three distinct types of system benchmarks, the performance of the micro spot instance and the M1 small instance is measured and compared. To examine the performance and scalability of the hardware, the virtual machines are set up in cluster formations ranging from two to eight nodes. The results show that cloud scalability is achieved by adding resources where the workload can exploit them. This paper also examines the economic model and other cloud services offered by Amazon's EC2, Microsoft's Azure, and Google's App Engine.
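Scalability results of this kind are usually summarized as speedup and parallel efficiency relative to the smallest cluster. A small sketch of that computation (the runtimes are hypothetical placeholders, not the paper's data):

    # Speedup and parallel efficiency relative to the smallest cluster size.
    # The runtimes below are hypothetical placeholders, not measured values.
    runtimes = {2: 100.0, 4: 55.0, 8: 32.0}   # nodes -> benchmark runtime (s)

    base_nodes = min(runtimes)
    base_time = runtimes[base_nodes]
    for nodes, t in sorted(runtimes.items()):
        speedup = base_time / t
        efficiency = speedup / (nodes / base_nodes)
        print(f"{nodes} nodes: speedup {speedup:.2f}, efficiency {efficiency:.2f}")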


2015 ◽  
Vol 137 (3) ◽  
Author(s):  
Soochan Lee ◽  
Patrick E. Phelan ◽  
Carole-Jean Wu

The increasing integration of high-performance processors and dense circuits in current computing devices produces high heat flux in localized areas (hot spots), which limits performance and reliability. To control the hot spots on a central processing unit (CPU), many researchers have focused on active cooling methods such as thermoelectric coolers (TECs) to avoid thermal emergencies. This paper presents optimized thermoelectric modules on top of the CPU, combined with a conventional air-cooling device, to reduce the core temperature while harvesting waste heat energy generated by the CPU. To control the temperature of the cores, we attach small TECs to the CPU and use thermoelectric generators (TEGs) placed on the rest of the CPU to convert waste heat into electricity. This study investigates design alternatives with an analytical model that accounts for the nonuniform temperature distribution based on two-node thermal networks. The results indicate that the TEGs harvest more energy than the TECs consume; in other words, the harvested heat energy can be reused to power other components while simultaneously reducing core temperatures. Overall, simultaneous core cooling and waste-heat harvesting using thermoelectric modules on a CPU is a promising method to control heat generation and reduce energy consumption in a computing device.
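The break-even condition is simply that total TEG output exceeds total TEC input. A back-of-the-envelope sketch using the standard matched-load TEG power expression P_max = (alpha * dT)^2 / (4 * R_int); all module parameters below are hypothetical placeholders, not values from the paper:

    # Net power of combined cooling/harvesting: positive means the TEGs
    # harvest more than the TECs consume. All parameter values are hypothetical.
    ALPHA = 0.05      # module Seebeck coefficient (V/K)
    R_INT = 2.0       # module internal resistance (ohm)
    DELTA_T = 40.0    # hot-to-cold temperature difference across a TEG (K)
    N_TEG = 4         # TEG modules covering the low-heat-flux CPU area
    P_TEC = 0.8       # assumed electrical input per TEC (W)
    N_TEC = 2         # TEC modules over the hot spots

    p_teg = (ALPHA * DELTA_T) ** 2 / (4.0 * R_INT)   # matched-load output per TEG
    net = N_TEG * p_teg - N_TEC * P_TEC
    print(f"per-TEG output: {p_teg:.2f} W, net system power: {net:+.2f} W")

A positive net value corresponds to the paper's claim that the harvested energy can power other components.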


Author(s):  
Raman Singh ◽  
Maninder Singh ◽  
Sheetal Garg ◽  
Ivan Perl ◽  
Olga Kalyonova ◽  
...  

In the popular field of cloud computing, millions of job requests arrive at the data centre for execution. The job of the data centre is to optimally allocate virtual machines (VMs) to these requests in order to use resources efficiently. In future smart cities, huge amounts of job requests and data will be generated by Internet of Things (IoT) devices, which will shape the design of optimal resource management for smart cloud environments. The present paper analyses the performance efficiency of the data centre with and without job request consolidation. First, the workload performance of the data centre was analysed without job request consolidation, showing that the assignment of job requests to VMs was highly imbalanced and that only 5% of VMs were running with a load factor above 70%. Then, a Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) based VM selection algorithm was applied, which selects the best VM using parameters such as the provisioned or available central processing unit capacity, the provisioned or available memory capacity, and the machine state (running, hibernated, or available). The Bitbrains dataset of 1750 VMs was used to evaluate the proposed methodology. The analysis concluded that the proposed methodology can serve all job requests using less than 24% of the VMs with improved load efficiency. Serving the load with fewer, better-utilized VMs saves energy and increases the overall running efficiency of the smart data centre environment.
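TOPSIS ranks alternatives by their relative closeness to an ideal solution. A minimal sketch of TOPSIS-based VM ranking (the attribute set, weights, and sample values are illustrative assumptions, not the paper's configuration):

    import numpy as np

    # Rows: candidate VMs; columns: available CPU capacity (MHz) and
    # available memory (GB). Both are benefit criteria (higher is better).
    # Values and weights are illustrative, not taken from the paper.
    vms = np.array([
        [2000.0, 4.0],
        [ 500.0, 1.0],
        [3000.0, 2.0],
    ])
    weights = np.array([0.6, 0.4])

    norm = vms / np.linalg.norm(vms, axis=0)       # vector normalization
    weighted = norm * weights
    ideal_best = weighted.max(axis=0)              # benefit criteria only
    ideal_worst = weighted.min(axis=0)
    d_best = np.linalg.norm(weighted - ideal_best, axis=1)
    d_worst = np.linalg.norm(weighted - ideal_worst, axis=1)
    closeness = d_worst / (d_best + d_worst)       # 1.0 = ideal VM
    print("best VM index:", int(np.argmax(closeness)), closeness.round(3))

In the consolidation setting, the highest-closeness running VM receives the job request; hibernated or idle machines only win when no running VM has capacity.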


2021 ◽  
Vol 3 (1) ◽  
pp. 21
Author(s):  
Rio Widodo ◽  
Imam Riadi

The openness of access to information raises various problems, including maintaining the validity and integrity of data. A network security system is therefore needed that can deal with potential threats quickly and accurately, such as an IDS (intrusion detection system). One of the most frequently used IDS tools is Snort, which monitors the network in real time and raises warnings and information about potential threats such as DoS attacks. A DoS attack exhausts the packet path by issuing large, continuous streams of requests to a target, which drives up usage of the CPU (central processing unit), memory, and Ethernet or WiFi interfaces. A Snort IDS deployment helps provide accurate security information about the monitored network, because every communication, every event, and every potential attack that could paralyze the Internet connection is observed by Snort.
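Snort detects flooding with per-source rate thresholds in its rules. The sketch below illustrates the same detection idea in Python with scapy, counting TCP SYNs per source over a sliding window (interface-independent; the window and threshold are assumptions, and this is an illustration of the logic, not Snort's implementation):

    import time
    from collections import Counter, deque
    from scapy.all import sniff, IP, TCP  # sniffing requires root privileges

    WINDOW_S = 10       # sliding window length in seconds (assumed)
    THRESHOLD = 100     # SYNs per source per window that we flag (assumed)
    events = deque()    # (timestamp, source_ip)
    counts = Counter()

    def on_packet(pkt):
        if IP in pkt and TCP in pkt and pkt[TCP].flags & 0x02:  # SYN bit set
            now = time.time()
            src = pkt[IP].src
            events.append((now, src))
            counts[src] += 1
            # Expire events that fell out of the window.
            while events and now - events[0][0] > WINDOW_S:
                _, old = events.popleft()
                counts[old] -= 1
            if counts[src] > THRESHOLD:
                print(f"possible SYN flood from {src}: "
                      f"{counts[src]} SYNs in {WINDOW_S}s")

    sniff(filter="tcp", prn=on_packet, store=False)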


2017 ◽  
Vol 2017 ◽  
pp. 1-8 ◽  
Author(s):  
Sung-Woong Jo ◽  
Jong-Moon Chung

Video streaming is one of the most popular applications for mobile users. However, mobile video streaming consumes a lot of energy, reducing battery life; this is a critical problem that degrades the user's quality of experience (QoE). Therefore, in this paper, a joint optimization scheme is proposed that controls both the central processing unit (CPU) and the wireless networking of the video streaming process to improve energy efficiency on mobile devices. For this purpose, the energy consumption of the network interface and the CPU is analyzed, and based on the resulting energy consumption profile a joint optimization problem is formulated to maximize the energy efficiency of the mobile device. The proposed algorithm adaptively adjusts the number of chunks to be downloaded and decoded at each step. Simulation results show that the proposed algorithm effectively improves energy efficiency compared with existing algorithms.
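The intuition behind batching chunks is that the radio pays a roughly fixed tail-energy cost per wake-up, so fetching more chunks per burst amortizes it, while decode and buffering costs grow with the batch. A toy model of that trade-off (all constants are hypothetical placeholders, not the paper's measured profile):

    # Toy model: choose the number of chunks fetched per radio wake-up.
    # All constants are hypothetical placeholders, not the paper's profile.
    E_TAIL = 2.0             # fixed radio tail energy per wake-up (J)
    E_RX_PER_CHUNK = 0.5     # radio receive energy per chunk (J)
    E_DEC_PER_CHUNK = 0.3    # CPU decode energy per chunk (J)
    E_HOLD_PER_CHUNK = 0.05  # buffering cost growing with batch size (J)
    MAX_BATCH = 16           # bounded by buffer size / rebuffering risk

    def energy_per_chunk(batch: int) -> float:
        return (E_TAIL / batch               # tail energy amortized over the batch
                + E_RX_PER_CHUNK + E_DEC_PER_CHUNK
                + E_HOLD_PER_CHUNK * batch)  # larger batches sit in memory longer

    best = min(range(1, MAX_BATCH + 1), key=energy_per_chunk)
    print(f"best batch size: {best} chunks, "
          f"{energy_per_chunk(best):.2f} J/chunk")

With these placeholder constants the optimum is an interior batch size, which is the qualitative behaviour the joint CPU/network optimization exploits.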


Sensors ◽  
2018 ◽  
Vol 19 (1) ◽  
pp. 78 ◽  
Author(s):  
Mikel Izal ◽  
Daniel Morató ◽  
Eduardo Magaña ◽  
Santiago García-Jiménez

The Internet of Things (IoT) contains sets of hundreds of thousands of network-enabled devices communicating with central controlling nodes or information collectors. The correct behaviour of these devices can be monitored by inspecting the traffic that they create. This passive monitoring methodology allows the detection of device failures or security breaches. However, the creation of hundreds of thousands of traffic time series in real time is not achievable without highly optimised algorithms. We herein compare three algorithms for time-series extraction from traffic captured in real time. We demonstrate how a single-core central processing unit (CPU) can extract more than three bidirectional traffic time series for each one of more than 20,000 IoT devices in real time using the algorithm DStries with recursive search. This proposal also enables the fast reconfiguration of the analysis computer when new IoT devices are added to the network.
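The core data structure is a trie keyed on device addresses, so that each captured packet updates its device's counters in a bounded number of steps. A simplified sketch of that idea (a plain binary trie over IPv4 addresses; the paper's "DStries with recursive search" is more elaborate):

    import ipaddress

    class TrieNode:
        __slots__ = ("children", "counters")
        def __init__(self):
            self.children = [None, None]  # 0/1 branches on address bits
            self.counters = None          # per-device series, set on leaves

    class DeviceTrie:
        """Binary trie mapping IPv4 addresses to per-device traffic counters."""
        def __init__(self):
            self.root = TrieNode()

        def add_device(self, ip: str):
            node = self.root
            bits = int(ipaddress.IPv4Address(ip))
            for i in range(31, -1, -1):
                b = (bits >> i) & 1
                if node.children[b] is None:
                    node.children[b] = TrieNode()
                node = node.children[b]
            node.counters = {"packets": 0, "bytes": 0}

        def update(self, ip: str, nbytes: int):
            node = self.root
            bits = int(ipaddress.IPv4Address(ip))
            for i in range(31, -1, -1):
                node = node.children[(bits >> i) & 1]
                if node is None:
                    return                # not a monitored device
            node.counters["packets"] += 1
            node.counters["bytes"] += nbytes

Adding a new IoT device is a single add_device call, which matches the paper's point about fast reconfiguration when devices join the network.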


Electronics ◽  
2020 ◽  
Vol 9 (7) ◽  
pp. 1069
Author(s):  
Minseon Kang ◽  
Yongseok Lee ◽  
Moonju Park

Recently, the application of machine learning to embedded systems has drawn interest in both the research community and industry, because embedded systems located at the edge can respond faster and reduce network load. However, software implementation of neural networks on Central Processing Units (CPUs) is considered infeasible in embedded systems due to the limited power supply. To accelerate AI processing, the many-core Graphics Processing Unit (GPU) has been preferred over the CPU, but its energy efficiency is still not considered good enough for embedded systems. Among other approaches to machine learning on embedded systems, neuromorphic processing chips are expected to consume less power and to overcome the memory bottleneck. In this work, we implemented a pedestrian image detection system on an embedded device using a commercially available neuromorphic chip, NM500, which is based on NeuroMem technology. The NM500 processing time and power consumption were measured as the number of chips was increased from one to seven, and they were compared with those of a multicore CPU system and a GPU-accelerated embedded system. The results show that NM500 requires less energy to process data, for both learning and classification, than the GPU-accelerated system or the multicore CPU system. Additionally, limitations and possible improvements of the current NM500 are identified based on the experimental results.
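Comparisons of this kind usually reduce to energy per processed image, i.e. average power multiplied by elapsed time, divided by the number of images. A small sketch (all figures are hypothetical placeholders, not the paper's measurements):

    # Energy per image = average power (W) * elapsed time (s) / images processed.
    # The figures below are hypothetical placeholders, not measured values.
    systems = {
        "NM500 (multi-chip)": {"power_w": 1.2,  "time_s": 8.0},
        "embedded GPU":       {"power_w": 10.0, "time_s": 2.5},
        "multicore CPU":      {"power_w": 15.0, "time_s": 6.0},
    }
    N_IMAGES = 1000

    for name, s in systems.items():
        e = s["power_w"] * s["time_s"] / N_IMAGES
        print(f"{name}: {e * 1000:.2f} mJ/image")

Note that a slower device can still win on this metric if its power draw is low enough, which is the effect the NM500 results illustrate.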


Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 3068 ◽  
Author(s):  
Yen-Lin Chen ◽  
Ming-Feng Chang ◽  
Chao-Wei Yu ◽  
Xiu-Zhi Chen ◽  
Wen-Yew Liang

Dynamic voltage and frequency scaling (DVFS) is a well-known method for reducing energy consumption. Several DVFS studies have applied learning-based methods to implement the DVFS prediction model instead of complicated mathematical models. This paper proposes a lightweight learning-directed DVFS method that uses counter propagation networks to sense and classify task behavior and predict the best voltage/frequency setting for the system. An intelligent performance-adjustment mechanism is also provided to users under various performance requirements. The proposed algorithms and other competitive techniques are evaluated experimentally on the NVIDIA Jetson Tegra K1 multicore platform and the Intel PXA270 embedded platform. The results demonstrate that the learning-directed DVFS method can accurately predict the suitable central processing unit (CPU) frequency, given the runtime statistical information of a running program, and achieve energy savings of up to 42%. Through this method, users can easily trade off energy consumption against performance by specifying an acceptable performance-loss factor.
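A counter propagation network pairs a competitive (Kohonen) layer, which matches the runtime-statistics vector to its nearest stored prototype, with an output (Grossberg) layer that holds the frequency setting learned for each prototype. A minimal sketch of the prediction step (the feature set, prototypes, and frequency table are illustrative assumptions, not the paper's trained model):

    import numpy as np

    # Kohonen layer: prototype vectors of runtime statistics, e.g.
    # (cache miss rate, memory access rate, IPC). Grossberg layer: the
    # CPU frequency (MHz) stored for each prototype. Values are illustrative.
    prototypes = np.array([
        [0.02, 0.10, 1.8],   # compute-bound behaviour
        [0.15, 0.60, 0.6],   # memory-bound behaviour
        [0.08, 0.30, 1.1],   # mixed behaviour
    ])
    freq_mhz = np.array([2100, 900, 1500])

    def predict_frequency(stats: np.ndarray) -> int:
        """Winner-take-all lookup: the nearest prototype decides the frequency."""
        winner = int(np.argmin(np.linalg.norm(prototypes - stats, axis=1)))
        return int(freq_mhz[winner])

    print(predict_frequency(np.array([0.12, 0.55, 0.7])))  # memory-bound -> 900

Memory-bound phases tolerate a lower clock with little performance loss, which is where most of the reported energy savings come from.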

