memory layout
Recently Published Documents


TOTAL DOCUMENTS: 50 (five years: 7)
H-INDEX: 9 (five years: 1)

Electronics, 2021, Vol. 10 (12), pp. 1380
Author(s): Seungwon Jung, Seunghee Seo, Yeog Kim, Changhoon Lee

Physical memory acquisition is a prerequisite for memory forensics, the set of techniques for acquiring and analyzing traces left in physical RAM by user activity, malware, and cyber incidents. However, certain types of malware apply anti-memory-forensics techniques to evade memory analysis or to make acquisition impossible. To disturb the acquisition of physical memory, an attacker hooks the kernel API that returns a map of the physical memory space and modifies its return value, since this API is typically used by memory acquisition tools. An attacker may also modify the kernel object referenced by that API, causing the system to crash during acquisition or causing acquisition tools to proceed incorrectly. Even a modification of a single byte, called a one-byte modification attack, makes some tools fail to acquire memory. Specialized countermeasures are therefore needed against these anti-memory-forensics techniques. In this paper, we propose a memory layout acquisition method that is robust against kernel API hooking and against the one-byte modification attack on NumberOfRuns, the kernel object field used to construct the memory layout in Windows. The proposed method directly accesses memory, extracts the byte array, and parses it into a memory layout: when accessing memory, we extract the _PHYSICAL_MEMORY_DESCRIPTOR structure, which underlies the memory layout, without using the existing memory layout acquisition API. Furthermore, we propose a verification method that selects a reliable memory layout by comparing NumberOfRuns and the memory layouts acquired via the kernel API, the registry, and the proposed method. This verification guarantees the reliability of the memory layout and helps secure memory image acquisition through comparative verification against existing memory layout acquisition methods. We also conduct experiments showing that the proposed method is resistant to anti-memory-forensics techniques, with no significant difference in acquisition time compared to existing tools.
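As an illustration of the direct-parsing idea described in the abstract, the sketch below interprets a raw byte buffer copied from kernel memory as a physical-memory descriptor. It is a minimal sketch, assuming the commonly documented (unofficial) 64-bit layout of _PHYSICAL_MEMORY_DESCRIPTOR and _PHYSICAL_MEMORY_RUN; the buffer acquisition step and the consistency check are illustrative assumptions, not the authors' implementation.

```cpp
#include <cstdint>
#include <cstring>
#include <cstddef>
#include <vector>

// Layout assumed here: the commonly documented (unofficial) 64-bit definitions
// of _PHYSICAL_MEMORY_RUN and _PHYSICAL_MEMORY_DESCRIPTOR. How the byte buffer
// is read out of kernel memory is left out; this only shows the parsing step.
struct PhysicalMemoryRun {   // mirrors _PHYSICAL_MEMORY_RUN (x64)
    uint64_t BasePage;       // first page frame number of the run
    uint64_t PageCount;      // number of contiguous pages in the run
};

struct MemoryLayout {
    uint32_t NumberOfRuns;
    uint64_t NumberOfPages;
    std::vector<PhysicalMemoryRun> Runs;
};

// Parse a raw byte array copied from the kernel-resident descriptor instead of
// trusting the value returned by a (possibly hooked) memory-layout API.
bool ParseDescriptor(const uint8_t* bytes, std::size_t size, MemoryLayout& out) {
    // Assumed x64 offsets: NumberOfRuns at 0 (4 bytes), padding, NumberOfPages
    // at 8, and the Run array starting at 16 with 16 bytes per entry.
    if (size < 16) return false;
    std::memcpy(&out.NumberOfRuns, bytes, sizeof(uint32_t));
    std::memcpy(&out.NumberOfPages, bytes + 8, sizeof(uint64_t));

    const std::size_t needed =
        16 + std::size_t(out.NumberOfRuns) * sizeof(PhysicalMemoryRun);
    if (out.NumberOfRuns == 0 || needed > size) return false;  // implausible count

    out.Runs.resize(out.NumberOfRuns);
    std::memcpy(out.Runs.data(), bytes + 16,
                out.NumberOfRuns * sizeof(PhysicalMemoryRun));

    // Cross-check: the summed PageCount values must equal NumberOfPages, which
    // already catches a one-byte modification of either field.
    uint64_t total = 0;
    for (const auto& r : out.Runs) total += r.PageCount;
    return total == out.NumberOfPages;
}
```

The cross-check mirrors the comparative-verification idea in the abstract: a layout whose run list, run count, and page total disagree is rejected rather than passed on to the imaging step.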


Author(s): Muhammad Attahir Jibril, Philipp Götze, David Broneske, Kai-Uwe Sattler

Abstract: After the introduction of Persistent Memory in the form of Intel's Optane DC Persistent Memory on the market in 2019, it has found its way into manifold applications and systems. As Google and other cloud infrastructure providers start to incorporate Persistent Memory into their portfolios, it is only logical that cloud applications have to exploit its inherent properties. Persistent Memory can serve as a DRAM substitute, but it guarantees persistence at the cost of compromised read/write performance compared to standard DRAM. These properties particularly affect the performance of index structures, since they are subject to frequent updates and queries. However, adapting each and every index structure to exploit the properties of Persistent Memory is tedious. Hence, we require a general technique that hides this access gap, e.g., by using DRAM caching strategies. To exploit Persistent Memory properties for analytical index structures, we propose selective caching. It is based on a mixture of dynamic and static caching of tree nodes in DRAM to reach near-DRAM access speeds for index structures. In this paper, we evaluate selective caching on the OLAP-optimized main-memory index structure Elf, because its memory layout allows for easy caching. Our experiments show that, if configured well, selective caching with a suitable replacement strategy can keep pace with pure DRAM storage of Elf while guaranteeing persistence. These results also hold when selective caching is used for parallel workloads.
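A minimal sketch of the selective-caching idea under stated assumptions: Node, NodeId, and loadFromPMem below are hypothetical stand-ins, not Elf's actual node representation or Persistent Memory access path. A fixed set of nodes (typically the upper tree levels) is cached statically in DRAM, while the remaining nodes compete for a bounded DRAM budget under an LRU replacement policy.

```cpp
#include <cstdint>
#include <list>
#include <unordered_map>
#include <unordered_set>
#include <utility>
#include <vector>

using NodeId = uint64_t;
struct Node { std::vector<uint32_t> payload; };

// Placeholder: a real implementation would read the node from a PMem pool
// (e.g., via a PMDK-backed allocation); here it just fabricates a node.
Node loadFromPMem(NodeId id) { return Node{{static_cast<uint32_t>(id)}}; }

class SelectiveCache {
public:
    SelectiveCache(std::unordered_set<NodeId> pinned, std::size_t dynamicCapacity)
        : pinned_(std::move(pinned)), capacity_(dynamicCapacity) {}

    const Node& get(NodeId id) {
        if (auto it = static_.find(id); it != static_.end()) return it->second;
        if (pinned_.count(id))               // static part: cache once, keep forever
            return static_.emplace(id, loadFromPMem(id)).first->second;

        if (auto it = dynamic_.find(id); it != dynamic_.end()) {
            lru_.splice(lru_.begin(), lru_, it->second.second);  // mark recently used
            return it->second.first;
        }
        if (dynamic_.size() >= capacity_) {  // dynamic part: evict the LRU node
            dynamic_.erase(lru_.back());
            lru_.pop_back();
        }
        lru_.push_front(id);
        return dynamic_.emplace(id, std::make_pair(loadFromPMem(id), lru_.begin()))
                       .first->second.first;
    }

private:
    std::unordered_set<NodeId> pinned_;        // node ids to cache statically
    std::unordered_map<NodeId, Node> static_;  // pinned nodes, never evicted
    std::unordered_map<NodeId,
        std::pair<Node, std::list<NodeId>::iterator>> dynamic_;  // LRU-managed nodes
    std::list<NodeId> lru_;                    // recency order, front = newest
    std::size_t capacity_;                     // DRAM budget for the dynamic part
};
```

Which replacement strategy governs the dynamic part is exactly the knob the paper evaluates; LRU is used here only because it is the simplest to sketch.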


Author(s): Osamu Watanabe, Yuta Hougi, Kazuhiko Komatsu, Masayuki Sato, Akihiro Musa, ...

As technology nodes continue to scale and variation at small feature sizes grows, bit-level correction capabilities become more and more vital: primary memory faces increased energy consumption, output and timing impacts such as crosstalk, and reliability challenges. We suggest a sustainable error-correction strategy for deeply scaled memories to tackle the increasing failure rates caused by these issues. SRAM is frequently used for high-speed memory applications such as caches, and power consumption is a main parameter of an SRAM memory layout; traditional SRAM cell designs are power hungry. In this paper, low-power cell designs are analyzed with respect to power consumption, write delay, and the power-delay product. The designs target recent VLSI implementations of SRAM arrays built from PMOS and NMOS devices, as integrated into CPU caches and energy-constrained microcontrollers, and address the associated SRAM array challenges.


2018, Vol. 28 (03), pp. 1850009
Author(s): Stefan Kronawitter, Sebastian Kuckuk, Harald Köstler, Christian Lengauer

Performance optimizations should focus not only on the computations of an application but also on its internal data layout. A well-known question is whether a struct of arrays or an array of structs yields higher performance for a particular application. Even though switching from one to the other is fairly simple to implement, testing both transformations can become laborious and error-prone. Additionally, there are more complex data layout transformations, such as color splitting for multi-color kernels in the domain of stencil codes, that are difficult to apply manually. As a remedy, we propose new flexible layout transformation statements for our domain-specific language ExaSlang that support arbitrary affine transformations. Since our code generator applies them automatically to the generated code, these statements enable simple adaptation of the data layout without any other modifications of the application code. This constitutes a big advance in the ease of testing and evaluating different memory layout schemes in order to identify the best one.
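For illustration, here is the AoS-versus-SoA choice written out in plain C++ rather than ExaSlang: switching layouts by hand forces every field access in the application to change, which is the rewriting the generated transformation statements are meant to avoid.

```cpp
#include <cstddef>
#include <vector>

// Array of structs: all fields of one cell sit next to each other in memory.
struct CellAoS { double u, v, w; };
using GridAoS = std::vector<CellAoS>;

// Struct of arrays: each field is contiguous across all cells, which often
// vectorizes better for stencil-style sweeps over a single field.
struct GridSoA {
    std::vector<double> u, v, w;
    explicit GridSoA(std::size_t n) : u(n), v(n), w(n) {}
};

// The same kernel against both layouts: every access changes with the layout.
void scaleAoS(GridAoS& g, double a) { for (auto& c : g) c.u *= a; }
void scaleSoA(GridSoA& g, double a) { for (auto& x : g.u) x *= a; }

int main() {
    GridAoS aos(1024, CellAoS{1.0, 2.0, 3.0});
    GridSoA soa(1024);
    scaleAoS(aos, 0.5);
    scaleSoA(soa, 0.5);
}
```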


2018, Vol. 175, pp. 02001
Author(s): Stephan Dürr

Experiences with optimizing the matrix-times-vector application of the Brillouin operator on the Intel KNL processor are reported. Without adjustments to the memory layout, performance figures of 360 Gflop/s in single and 270 Gflop/s in double precision are observed. This is with Nc = 3 colors, Nv = 12 right-hand sides, Nthr = 256 threads, on lattices of size 32³ × 64, using exclusively OMP pragmas. Interestingly, the same routine performs quite well on Intel Core i7 architectures, too. Some observations on the much harder Wilson fermion matrix-times-vector optimization problem are added.
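A rough sketch of the multi-right-hand-side structure mentioned above (Nc = 3 colors, Nv = 12 right-hand sides, OpenMP threading over lattice sites): applying one small per-site color matrix to a block of vectors amortizes the matrix loads. This is a generic dense kernel for illustration, not the Brillouin stencil from the paper.

```cpp
#include <complex>
#include <cstddef>
#include <vector>

// Compile with OpenMP enabled (e.g., -fopenmp) for the pragma to take effect.
constexpr int Nc = 3;   // colors
constexpr int Nv = 12;  // right-hand sides processed together

using cplx = std::complex<float>;

// y[site][c][v] += sum_b A[site][c][b] * x[site][b][v], threaded over sites.
void applyColorMatrix(const std::vector<cplx>& A,   // nSites * Nc * Nc
                      const std::vector<cplx>& x,   // nSites * Nc * Nv
                      std::vector<cplx>& y,         // nSites * Nc * Nv
                      std::size_t nSites) {
    #pragma omp parallel for schedule(static)
    for (std::ptrdiff_t s = 0; s < static_cast<std::ptrdiff_t>(nSites); ++s) {
        const cplx* As = &A[s * Nc * Nc];
        const cplx* xs = &x[s * Nc * Nv];
        cplx*       ys = &y[s * Nc * Nv];
        for (int c = 0; c < Nc; ++c)
            for (int b = 0; b < Nc; ++b)
                for (int v = 0; v < Nv; ++v)   // innermost loop over right-hand sides
                    ys[c * Nv + v] += As[c * Nc + b] * xs[b * Nv + v];
    }
}
```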

