Implementation of Meltdown Attack Simulation for Cybersecurity Awareness Material

2021 ◽  
Vol 7 (1) ◽  
pp. 6-13
Author(s):  
Eka Chattra ◽  
Obrin Candra Brillyant

One of the rising risks in cybersecurity is attacks on cyber-physical systems. Today's computer systems have evolved through the development of processor technology, notably through optimization techniques such as out-of-order execution. Using this technique, processors improve computing performance without changes to the manufacturing process. However, these optimization techniques introduce vulnerabilities, especially on Intel processors. The vulnerability takes the form of data exfiltration through the cache memory, which can be exploited by an attack. Meltdown is an exploit that takes advantage of this vulnerability in modern Intel processors. It can be used to extract data processed on a device using such processors, such as passwords, messages, or other credentials. In this paper, we use a qualitative approach that describes and demonstrates a Meltdown attack in a safe environment, applying a known Meltdown attack scheme and source code to simulate the attack on an Intel Core i7 platform running Linux. We then modified the source code to prove the concept that the Meltdown attack can extract data from devices using Intel processors without consent from the authorized user.
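The covert channel at the heart of Meltdown can be illustrated with a toy model: a transient load indexed by a secret byte pulls one page of a 256-page probe array into the cache, and the attacker then finds the fast (cached) page. The sketch below is purely conceptual, written against an imaginary `cache` set; a real attack requires transient execution and cycle-accurate timing, which Python cannot express.

```python
# Toy model of the Meltdown cache covert channel (illustrative only:
# real attacks rely on transient execution and timed memory accesses).

PAGE = 4096  # one probe-array slot per page, as in the published exploit


def transient_leak(secret_byte, cache):
    """Model the transient load: the secret byte selects which
    probe-array page is pulled into the cache before the fault."""
    cache.add(secret_byte * PAGE)


def recover_byte(cache):
    """Model FLUSH+RELOAD: the cached (fast) page reveals the byte."""
    for byte in range(256):
        if byte * PAGE in cache:  # stands in for a fast timed access
            return byte
    return None


def simulate(secret):
    """Leak a string one byte at a time through the modeled channel."""
    leaked = []
    for ch in secret:
        cache = set()  # attacker flushes the probe array
        transient_leak(ord(ch), cache)
        leaked.append(chr(recover_byte(cache)))
    return "".join(leaked)
```

The `cache`-as-a-set abstraction is the assumption here: it replaces the timing measurement that distinguishes cached from uncached pages on real hardware.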

2021 ◽  
Vol 11 (18) ◽  
pp. 8476
Author(s):  
June Choi ◽  
Jaehyun Lee ◽  
Jik-Soo Kim ◽  
Jaehwan Lee

In this paper, we present several optimization strategies that can improve the overall performance of the distributed in-memory computing system, "Apache Spark". Despite its distributed memory management capability for iterative jobs and intermediate data, Spark suffers significant performance degradation when the available amount of main memory (DRAM, typically used for data caching) is limited. To address this problem, we leverage an SSD (solid-state drive) to compensate for the limited main memory. Specifically, we present an effective optimization methodology for Apache Spark by collectively investigating the effects of changing the capacity fraction ratios of the shuffle and storage spaces in the "Spark JVM Heap Configuration" and applying different "RDD Caching Policies" (e.g., SSD-backed memory caching). Our extensive experimental results show that by utilizing the proposed optimization techniques, we can improve the overall performance by up to 42%.
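The fraction ratios the paper tunes correspond to Spark's unified memory model (Spark 1.6+), where `spark.memory.fraction` sets the share of the JVM heap (minus a reserved margin) usable by Spark, and `spark.memory.storageFraction` splits that between storage (RDD caching) and execution (shuffle). The helper below is our own illustration of that arithmetic, not the authors' code:

```python
def spark_memory_split(heap_mb, memory_fraction=0.6, storage_fraction=0.5):
    """Sketch of Spark's unified memory model (Spark >= 1.6).

    usable   = (heap - reserved) * spark.memory.fraction
    storage  = usable * spark.memory.storageFraction   (RDD caching)
    execution = usable - storage                       (shuffle, joins)
    Defaults mirror Spark's documented defaults.
    """
    RESERVED_MB = 300  # Spark reserves ~300 MB of heap for itself
    usable = (heap_mb - RESERVED_MB) * memory_fraction
    storage = usable * storage_fraction
    execution = usable - storage
    return storage, execution
```

For a 4 GB executor heap with the defaults, roughly 1.1 GB each goes to storage and execution; the paper's methodology searches over these ratios (together with SSD-backed `StorageLevel`s such as `MEMORY_AND_DISK`) for the best split.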


2011 ◽  
pp. 86-111
Author(s):  
Florin Pop

This chapter presents scheduling mechanisms in distributed systems, with direct application to grids. Resource heterogeneity, the size and number of tasks, the variety of policies, and the large number of constraints are some of the main characteristics that contribute to this complexity. The necessity of scheduling in grids is driven by the growing number of users and applications. The design of scheduling algorithms for a heterogeneous computing system interconnected by an arbitrary communication network is one of the current concerns in distributed-systems research. The main topics presented in the chapter are: a general presentation of scheduling for grid systems, specific requirements of scheduling in grids, a critical analysis of existing methods and algorithms for grid schedulers, scheduling policies, fault tolerance in the grid scheduling process, and scheduling models, algorithms, and optimization techniques for grid scheduling.
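A representative example of the heuristics surveyed in this literature is min-min scheduling for heterogeneous machines. The sketch below is a generic textbook version, not necessarily one of the algorithms analyzed in the chapter:

```python
def min_min_schedule(etc):
    """Min-min heuristic for heterogeneous scheduling.

    etc[t][m] is the estimated time to compute task t on machine m.
    Repeatedly pick the (task, machine) pair with the smallest
    completion time, assign it, and update machine availability.
    Returns the assignment and the resulting makespan.
    """
    n_machines = len(etc[0])
    ready = [0.0] * n_machines          # machine available times
    unassigned = set(range(len(etc)))
    schedule = {}
    while unassigned:
        task, machine, finish = min(
            ((t, m, ready[m] + etc[t][m])
             for t in unassigned for m in range(n_machines)),
            key=lambda x: x[2],
        )
        schedule[task] = machine
        ready[machine] = finish
        unassigned.remove(task)
    return schedule, max(ready)
```

On a 3-task, 2-machine estimated-time-to-compute matrix, the heuristic greedily fills the fastest machines first; its simplicity is why it is a common baseline against which grid schedulers are compared.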


2014 ◽  
Vol 13s7 ◽  
pp. CIN.S16349
Author(s):  
Sungyoung Lee ◽  
Min-Seok Kwon ◽  
Taesung Park

In genome-wide association studies (GWAS), regression analysis has been most commonly used to establish an association between a phenotype and genetic variants, such as single nucleotide polymorphisms (SNPs). However, most applications of regression analysis have been restricted to the investigation of single markers because of the large computational burden. Thus, there have been limited applications of regression analysis to multiple SNPs, including gene–gene interaction (GGI), in large-scale GWAS data. In order to overcome this limitation, we propose CARAT-GxG, a GPU computing system-oriented toolkit, for performing regression analysis with GGI using CUDA (compute unified device architecture). Compared to other methods, CARAT-GxG achieved an almost 700-fold execution speed-up and delivered highly reliable results through our GPU-specific optimization techniques. In addition, it was possible to achieve almost-linear speed acceleration with the application of a GPU computing system, which is implemented by the TORQUE Resource Manager. We expect that CARAT-GxG will enable large-scale regression analysis with GGI for GWAS data.
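The per-pair model such a toolkit evaluates in bulk is a regression with an interaction term, `phenotype ~ b0 + b1*snp1 + b2*snp2 + b3*(snp1*snp2)`. The sketch below fits one SNP pair by ordinary least squares with NumPy; it illustrates the statistical model only and is not CARAT-GxG's API:

```python
import numpy as np


def gxg_regression(snp1, snp2, phenotype):
    """Fit phenotype ~ b0 + b1*snp1 + b2*snp2 + b3*(snp1*snp2) by OLS.

    snp1, snp2 are genotype vectors coded 0/1/2 (minor-allele counts);
    b3 captures the gene-gene interaction (GGI) effect.
    """
    X = np.column_stack([
        np.ones(len(snp1)),  # intercept
        snp1,                # additive effect, SNP 1
        snp2,                # additive effect, SNP 2
        snp1 * snp2,         # gene-gene interaction term
    ])
    beta, *_ = np.linalg.lstsq(X, phenotype, rcond=None)
    return beta
```

The computational burden the paper targets comes from running this fit for every SNP pair (hundreds of billions of pairs for a million-SNP study), which is why pushing the batched linear algebra onto a GPU pays off.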


PETRO ◽  
2018 ◽  
Vol 5 (2) ◽  
Author(s):  
Kartika Fajarwati Hartono ◽  
Muhammad Taufiq Fatthadin ◽  
Reno Pratiwi

<p>Nowadays, one of the greatest challenges in gas development is transporting fluid, especially multiphase fluid, over long distances through a multiphase pipeline to the sales point. The challenge in transporting multiphase fluid is how to operate a long-distance, large-diameter, multiphase pipeline system. Operating the system includes managing high liquid holdup, built up mainly during low production-rate (turndown-rate) periods and especially during transient operations such as restart and ramp-up, so that the liquid surge arriving onshore does not exceed the liquid-handling capacity of the slug catcher. The objective of this research is to predict the liquid trapped in the pipeline network by analyzing the turndown rate, in order to determine the minimum gas production rate for stable operation. The research was carried out in two steps: a simulation approach and optimization techniques. The simulation approach included defining the fluid composition and building the pipeline network configuration, while the optimization technique consisted of constructing turndown-rate scenarios. The fluid composition from wellhead to manifold is wet gas. The first and second turndown-rate scenarios yield the minimum gas rate for stable operation: the pipeline has to be operated above 600 MMSCFD against a peak gas production rate of 1200 MMSCFD for the A-Manifold mainline, and above 60 MMSCFD against a peak of 150 MMSCFD for the D-Manifold mainline.</p>
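Taking turndown rate as the minimum stable rate expressed as a fraction of peak production (our reading of the abstract, not a definition the authors state), the quoted figures work out as follows:

```python
def turndown_rate(min_stable_rate, peak_rate):
    """Minimum stable gas rate as a fraction of peak production.

    Both rates in the same unit (MMSCFD here); this definition is an
    assumption made for illustration.
    """
    return min_stable_rate / peak_rate


# Figures quoted in the abstract:
#   A-Manifold mainline: 600 of 1200 MMSCFD  -> 50% turndown
#   D-Manifold mainline:  60 of  150 MMSCFD  -> 40% turndown
```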


Author(s):  
M. R. Hannan ◽  
G. E. Johnson

Abstract This paper considers the application of optimization to the design of structures that must sustain large wind loads. The investigation was an outgrowth of research intended to increase the capabilities of engineers who design structures such as highway signs and elevated steam lines. A second objective was to determine whether optimization techniques could be applied successfully to this class of problems. The objective was to minimize the mass of the footing needed to support the structure. A model of a reinforced concrete footing which supports the structure is given. This model includes constraints which ensure that the structure will not overturn and that stresses in the footing and soil do not exceed the strength of the respective materials. Optimization software (a conjugate gradient method) uses this model to find a design which minimizes the mass of the footing. Results for various loadings and configurations are presented. Analysis of the results indicates that for lightly loaded structures the optimal footing design tends toward a piling (large depth relative to length), while for relatively heavily loaded structures the optimal design is more of a plate (large length relative to depth). A copy of the Fortran source code used to model the system and the optimization algorithm is available from either author.
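The shape of this problem can be sketched with a deliberately simplified model: a square footing of side `L` and depth `d`, an overturning constraint (restoring moment must exceed the wind moment) and a soil-bearing constraint. The constraints and the grid search below are our own simplification for illustration; the paper's actual model is richer and is solved with a conjugate gradient method:

```python
def footing_mass(L, d, rho=2400.0):
    """Mass (kg) of a square concrete footing, side L (m), depth d (m)."""
    return rho * L * L * d


def feasible(L, d, M_wind, sigma_soil, rho=2400.0, g=9.81):
    """Toy constraints in the spirit of the paper's model:
    - overturning: restoring moment W*L/2 must exceed wind moment M_wind
    - bearing: soil pressure W/L^2 must not exceed soil strength."""
    W = footing_mass(L, d, rho) * g  # footing weight (N)
    return W * L / 2 >= M_wind and W / (L * L) <= sigma_soil


def optimize_footing(M_wind, sigma_soil, step=0.05):
    """Coarse grid search for the lightest feasible footing.
    (The paper used a conjugate-gradient method; this just enumerates.)"""
    best = None
    L = step
    while L <= 10.0:
        d = step
        while d <= 5.0:
            if feasible(L, d, M_wind, sigma_soil):
                m = footing_mass(L, d)
                if best is None or m < best[0]:
                    best = (m, L, d)
            d += step
        L += step
    return best  # (mass, L, d) or None
```

Even this toy version reproduces the paper's qualitative finding in one direction: with a light wind moment, the optimizer drives toward a wide, shallow plate because mass grows with `L*L*d` while the restoring moment grows with `L^3*d`.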


2018 ◽  
Vol 7 (2.19) ◽  
pp. 80
Author(s):  
G. D. Kesavan ◽  
P. N. Karthikayan

Using cache memory, the overall memory access time to fetch data is reduced. Since the use of cache memory affects a system's performance, the caching process should take as little time as possible. To speed up the caching process, many cache optimization techniques are available, such as reducing the miss rate, reducing the miss penalty, and reducing the time to hit in the cache. Recent advancements have paved the way for compressing data in the cache, exploiting recent data-use patterns, and so on. These techniques focus on increasing cache capacity or improving replacement policies, resulting in a higher hit ratio. Most existing cache compression and optimization techniques address only capacity- and replacement-related optimization and their related issues. This paper deals with scheduling cache-memory requests according to the compressed cache organization, so that cache searching and indexing time is reduced considerably and requests are serviced faster. For capacity and replacement improvements, dictionary-sharing-based caching is used. Under this scheme, multiple requests are foreseen using a prefetcher and searched according to the cache organization, promoting an easier indexing process. The benefit comes from both compressed storage and easier storage access.
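The core idea of dictionary-sharing compression can be modeled in a few lines: identical data blocks are stored once in a shared dictionary and cache lines hold only references, so effective capacity grows with redundancy. The class below is our own toy illustration (FIFO replacement, unbounded line table) rather than the paper's hardware design:

```python
class DictSharedCache:
    """Toy dictionary-sharing cache: identical blocks are stored once
    in a shared dictionary; cache lines reference dictionary entries."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.dictionary = {}  # block data -> reference count
        self.lines = {}       # address -> block data
        self.order = []       # FIFO of addresses for replacement

    def lookup(self, addr):
        """Return the cached block, or None on a miss."""
        return self.lines.get(addr)

    def insert(self, addr, block):
        if addr in self.lines:
            return
        # Evict (FIFO) until the new unique block fits the dictionary.
        while (block not in self.dictionary
               and len(self.dictionary) >= self.capacity):
            victim = self.order.pop(0)
            data = self.lines.pop(victim)
            self.dictionary[data] -= 1
            if self.dictionary[data] == 0:
                del self.dictionary[data]
        self.dictionary[block] = self.dictionary.get(block, 0) + 1
        self.lines[addr] = block
        self.order.append(addr)
```

With a two-block dictionary, three addresses holding only two distinct values all stay resident, which is the capacity win the paper builds on; its scheduling contribution then orders prefetched requests to match this organization.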


2010 ◽  
Vol 2010 ◽  
pp. 1-7
Author(s):  
Sudarshan K. Srinivasan

We develop two optimization techniques, flush-machine and collapsed flushing, to improve the efficiency of automatic refinement-based verification of out-of-order (ooo) processor models. Refinement is a notion of equivalence that can be used to check that an ooo processor correctly implements all behaviors of its instruction set architecture (ISA), including deadlock detection. The optimization techniques work by reducing the computational complexity of the refinement map, a function central to refinement proofs that maps ooo processor model states to ISA states. This has a direct impact on the efficiency of verification, which is studied using 23 ooo processor models. Flush-machine is a novel optimization technique. Collapsed flushing has been employed previously in the context of in-order processors; we show how to apply it to ooo processor models. Using both optimizations together, we can handle 9 ooo models that could not be verified using standard flushing. The optimizations also provided a speedup of 23.29× over standard flushing.
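What a flushing-style refinement map does can be shown on a miniature model: a one-deep "pipeline" whose refinement map completes in-flight work and projects to an ISA state, after which one checks that every pipeline step either stutters or matches one ISA step. This is our own tiny illustration of standard flushing, far simpler than the paper's ooo models and its flush-machine/collapsed-flushing optimizations:

```python
def isa_step(state, prog):
    """ISA semantics: execute the instruction at pc atomically."""
    pc, regs = state
    op, dst, val = prog[pc]          # only 'addi' in this toy ISA
    regs = dict(regs)
    regs[dst] = regs.get(dst, 0) + val
    return (pc + 1, regs)


def ma_step(ma, prog):
    """Micro-architectural model: fetch into a one-entry buffer on one
    cycle, retire the buffered instruction on the next."""
    pc, regs, buf = ma
    if buf is not None:              # retire the in-flight instruction
        op, dst, val = buf
        regs = dict(regs)
        regs[dst] = regs.get(dst, 0) + val
        return (pc, regs, None)
    return (pc + 1, regs, prog[pc])  # fetch


def flush(ma):
    """Refinement map: complete all in-flight work, project to ISA state."""
    pc, regs, buf = ma
    if buf is not None:
        op, dst, val = buf
        regs = dict(regs)
        regs[dst] = regs.get(dst, 0) + val
    return (pc, regs)
```

The refinement check is then: for every reachable `ma`, `flush(ma_step(ma))` equals either `flush(ma)` (a stutter) or `isa_step(flush(ma))`. The cost of computing `flush` on realistic ooo models is exactly what the paper's two optimizations attack.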

