Memory Size

2020, pp. 175-186
Author(s):  
A. Barkalov ◽  
M. Kolopienczyk ◽  
L. Titarenko

Author(s):  
Dharmarajan K ◽  
M. A. Dorairangaswamy

In this paper, student navigation paths and the pages that students or visitors are interested in are identified. Student navigation interest pattern mining covers both frequently followed navigation paths, based on webpage memory size, and session length. By comparing the access proportion of viewing time against the size of the selected page, preference can be used for mining student learning patterns rather than only the subject of interest. First, to identify preferred navigation paths, the paper introduces an efficient algorithm that builds a Visitor Access Matrix (VAM) from the page-to-page transition probability statistics of all visitor behavior. Second, we propose an efficient Selection and Time Preference (SATP) algorithm that identifies the preference for web pages by viewing time. Third, the pages a user is interested in are computed from both memory size and session; for this we propose an algorithm based on the preference for page content size and the session identifier. The performance of the proposed algorithms is evaluated, and they determine preferred navigation paths efficiently. The experimental results show the accuracy and scalability of the algorithms. This approach may be helpful in e-learning and e-business applications such as web personalization and website design.
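The abstract does not give implementation details, so the following is only a minimal sketch of the two core ideas: a Visitor Access Matrix built from page-to-page transition counts, and a Selection and Time Preference score that weighs viewing time against page size. The function names, data shapes, and the simple time-per-kilobyte scoring rule are assumptions, not the authors' algorithms.

```python
# Sketch only: VAM as transition probabilities, SATP as time/size preference.
from collections import defaultdict

def visitor_access_matrix(sessions):
    """sessions: list of page-visit sequences, e.g. [["home", "lesson1", "quiz"], ...].
    Returns estimated transition probabilities P(next_page | current_page)."""
    counts = defaultdict(lambda: defaultdict(int))
    for path in sessions:
        for src, dst in zip(path, path[1:]):
            counts[src][dst] += 1
    vam = {}
    for src, dsts in counts.items():
        total = sum(dsts.values())
        vam[src] = {dst: c / total for dst, c in dsts.items()}
    return vam

def satp_score(viewing_time_s, page_size_kb):
    """Assumed preference measure: viewing time normalized by page size, so a
    small page studied for a long time ranks above a large page that was skimmed."""
    return viewing_time_s / max(page_size_kb, 1)

# Example usage
sessions = [["home", "lesson1", "quiz"], ["home", "lesson1", "lesson2"], ["home", "quiz"]]
print(visitor_access_matrix(sessions)["home"])  # {'lesson1': 0.67, 'quiz': 0.33} approx.
print(satp_score(120, 40))                      # 3.0 -> strong interest relative to size
```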


2019, Vol. 2019, pp. 1-11
Author(s):  
Younghun Park ◽  
Minwoo Gu ◽  
Sungyong Park

Advances in virtualization technology have enabled multiple virtual machines (VMs) to share resources in a physical machine (PM). With the widespread use of graphics-intensive applications, such as two-dimensional (2D) or 3D rendering, many graphics processing unit (GPU) virtualization solutions have been proposed to provide high-performance GPU services in a virtualized environment. Although elasticity is one of the major benefits in this environment, the allocation of GPU memory is still static in the sense that after the GPU memory is allocated to a VM, it is not possible to change the memory size at runtime. This causes underutilization of GPU memory or performance degradation of a GPU application due to the lack of GPU memory when an application requires a large amount of GPU memory. In this paper, we propose a GPU memory ballooning solution called gBalloon that dynamically adjusts the GPU memory size at runtime according to the GPU memory requirement of each VM and the GPU memory sharing overhead. The gBalloon extends the GPU memory size of a VM by detecting performance degradation due to the lack of GPU memory. The gBalloon also reduces the GPU memory size when the overcommitted or underutilized GPU memory of a VM creates additional overhead for the GPU context switch or the CPU load due to GPU memory sharing among the VMs. We implemented the gBalloon by modifying the gVirt, a full GPU virtualization solution for Intel’s integrated GPUs. Benchmarking results show that the gBalloon dynamically adjusts the GPU memory size at runtime, which improves the performance by up to 8% against the gVirt with 384 MB of high global graphics memory and 32% against the gVirt with 1024 MB of high global graphics memory.
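gBalloon itself is implemented inside gVirt, so the sketch below is only a user-level illustration of the ballooning policy the abstract describes: inflate a VM's GPU memory when it appears starved, deflate it when overcommitment adds context-switch or CPU overhead. The thresholds, step size, and the metrics exposed by the hypothetical `vm` object are all assumptions.

```python
# Simplified ballooning policy sketch, not gBalloon's in-kernel implementation.
STEP_MB = 64
MIN_MB, MAX_MB = 128, 1024

def adjust_gpu_memory(vm):
    """vm is assumed to expose: gpu_mem_mb (current allocation), demand_mb
    (working-set estimate), stall_ratio (fraction of time blocked on GPU
    memory), and sharing_overhead (extra context-switch/CPU cost caused by
    overcommitted graphics memory)."""
    if vm.stall_ratio > 0.10 and vm.gpu_mem_mb < MAX_MB:
        # Performance degradation due to lack of GPU memory: grow the balloon.
        vm.gpu_mem_mb = min(vm.gpu_mem_mb + STEP_MB, MAX_MB)
    elif vm.sharing_overhead > 0.05 and vm.gpu_mem_mb - vm.demand_mb > STEP_MB:
        # Overcommitted or underutilized memory: shrink to reduce the GPU
        # context-switch and CPU load caused by sharing among VMs.
        vm.gpu_mem_mb = max(vm.gpu_mem_mb - STEP_MB, MIN_MB)
    return vm.gpu_mem_mb
```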

