External Memory: Recently Published Documents

Total documents: 475 (last five years: 103) · H-index: 26 (last five years: 4)

2021 · Vol. 46 (4) · pp. 1-35
Author(s): Shikha Singh, Prashant Pandey, Michael A. Bender, Jonathan W. Berry, Martín Farach-Colton, et al.

Given an input stream S of size N, a ɸ-heavy hitter is an item that occurs at least ɸN times in S. The problem of finding heavy hitters is extensively studied in the database literature. We study a real-time heavy-hitters variant in which an element must be reported shortly after we see its T = ɸN-th occurrence (and hence it becomes a heavy hitter). We call this the Timely Event Detection (TED) Problem. The TED problem models the needs of many real-world monitoring systems, which demand accurate (i.e., no false negatives) and timely reporting of all events from large, high-speed streams with a low reporting threshold (high sensitivity). Like the classic heavy-hitters problem, solving the TED problem without false positives requires large space (Ω(N) words). Thus, in-RAM heavy-hitters algorithms typically sacrifice accuracy (i.e., allow false positives), sensitivity, or timeliness (i.e., use multiple passes). We show how to adapt heavy-hitters algorithms to external memory to solve the TED problem on large high-speed streams while guaranteeing accuracy, sensitivity, and timeliness. Our data structures are limited only by I/O bandwidth (not latency) and support a tunable tradeoff between reporting delay and I/O overhead. With a small bounded reporting delay, our algorithms incur only a logarithmic I/O overhead. We implement and validate our data structures empirically using the Firehose streaming benchmark. Multi-threaded versions of our structures can scale to process 11M observations per second before becoming CPU-bound. In comparison, a naive adaptation of the standard heavy-hitters algorithm to external memory would be limited by the storage device's random I/O throughput, i.e., ≈100K observations per second.
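For context, the following is a minimal sketch of Misra-Gries, a classic one-pass in-RAM heavy-hitters summary of the kind the abstract alludes to: it never misses a true ɸ-heavy hitter, but it may report false positives, which is exactly the accuracy sacrifice the authors avoid. The sketch is our illustration, not the paper's external-memory structure.

    def misra_gries(stream, k):
        """One-pass heavy-hitters summary using at most k - 1 counters.
        Guarantee: every item occurring more than len(stream)/k times
        survives in the result, but survivors may be false positives and
        their counts underestimate true frequencies by up to len(stream)/k.
        """
        counters = {}
        for item in stream:
            if item in counters:
                counters[item] += 1
            elif len(counters) < k - 1:
                counters[item] = 1
            else:
                # Decrement every counter; drop those that hit zero.
                for key in list(counters):
                    counters[key] -= 1
                    if counters[key] == 0:
                        del counters[key]
        return counters

Removing the false positives normally costs a second pass over the stream; the point of the paper is to keep exactness, sensitivity, and timeliness in a single pass by moving the counting state to external memory.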


2021 · Vol. 42 (11) · pp. 2493-2502
Author(s): O. S. Aladyshev, E. A. Kiselev, A. V. Zakharchenko, B. M. Shabanov, G. I. Savin

2021 · pp. STOC19-87-STOC19-111
Author(s): Alireza Farhadi, MohammadTaghi Hajiaghayi, Kasper Green Larsen, Elaine Shi

2021
Author(s): Dawa Dupont, Qianmeng Zhu, Sam Gilbert

Individuals have the option of remembering delayed intentions by storing them in internal memory or offloading them to an external store such as a diary or smartphone alert. How do we route intentions to the appropriate store, and what are the consequences of this? We report three experiments (two pre-registered) investigating the role of value. In Experiment 1, participants preferentially offloaded high-value intentions to the external environment. This improved memory for both high- and low-value content. Experiment 2 replicated the low-value memory enhancement even when only high-value intentions were offloaded. This suggests that internal memory is reallocated to low-value information once it is no longer required for high-value content. Experiment 3 showed that memory is better for low- than high-value content when external memory for high-value content fails. Therefore, individuals prioritize high-value information for external memory; consequently, they can be left with nothing but low-value information if it fails.


2021
Author(s): Jian Meng, Shreyas Kolala Venkataramanaiah, Chuteng Zhou, Patrick Hansen, Paul Whatmough, et al.

2021 · pp. 571-642
Author(s): Michael A. Arbib

The IBSEN model of Imagination in Brain Systems for Episodes and Navigation explores how the architect’s experience is brought to bear in the design of architecture by building on the VISIONS model of understanding a visual scene and the TAM-WGM model of navigation. IBSEN develops the idea that a building provides both views from various viewpoints and places where particular experiences can be felt, and actions can be performed. For this, the design must support a variety of scripts for both practical and contemplative action and the cognitive maps that relate places for them. Nodes from different maps may be combined as scripts are harmonized with respect to a specific embedding of places in three-dimensional space. The chapter examines the role of the hippocampus in episodic memory and imagination, and observes that memory and imagination, episodic or not, are construction processes. During design, long-term working memory links internal and external memory systems, providing priority access to (but not only to) memory fragments that have proved relevant to the current design process. The designer in some sense “inverts” imagined experiences and behaviors of users of the forthcoming building. As the book ends, the author notes that we are only at the beginning of new collaborative studies that take cog/neuroscience out of the lab and into the building and the street.


2021
Author(s): Nurul Anisa

Hardware comprises all of the physical parts of a computer. It is the equipment that can be seen and touched directly, and it delivers the tangible output of every process run by the computer's operating system. Nevertheless, software is still required to support and run the operating system so that the hardware can operate properly. Hardware can be classified into five categories: input devices, process devices, output devices, peripherals (add-on devices/accessories), and external memory (data storage).


2021 · Vol. 26 · pp. 1-67
Author(s): Patrick Dinklage, Jonas Ellert, Johannes Fischer, Florian Kurpicz, Marvin Löbel

We present new sequential and parallel algorithms for wavelet tree construction based on a new bottom-up technique. This technique makes use of the structure of the wavelet tree—refining the characters represented in a node of the tree with increasing depth—in the opposite direction, by first computing the leaves (most refined) and then propagating this information upwards to the root of the tree. We first describe new sequential algorithms, both in RAM and in external memory. Based on these results, we adapt these algorithms to parallel computers, addressing both shared-memory and distributed-memory settings. In practice, all our algorithms outperform previous ones in both time and memory efficiency, because we can compute all auxiliary information solely from the information obtained while computing the leaves. Most of our algorithms are also adapted to the wavelet matrix, a variant that is particularly suited for large alphabets.
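As a concrete illustration of the bottom-up idea, the sketch below (ours; the paper's algorithms are considerably more engineered) builds the level-wise wavelet tree of an integer string by first computing the leaf histogram and then, moving upwards level by level, merging sibling counts to locate each node's interval before filling its bits.

    def levelwise_wavelet_tree(text, height):
        """Bottom-up construction sketch.
        text   -- sequence of integer symbols in [0, 2**height)
        height -- bits per symbol (= number of levels)
        Returns one bit list of length len(text) per level, root first.
        """
        # Leaves first: histogram of full symbols (the most refined level).
        hist = [0] * (1 << height)
        for c in text:
            hist[c] += 1
        levels = []
        for l in range(height - 1, -1, -1):
            # Merging sibling counts gives the size of node k on level l
            # (the symbols whose top l bits equal k).
            hist = [hist[2 * k] + hist[2 * k + 1] for k in range(1 << l)]
            # Exclusive prefix sums locate each node's starting position.
            starts = [0] * (1 << l)
            for k in range(1, 1 << l):
                starts[k] = starts[k - 1] + hist[k - 1]
            # A single stable left-to-right scan fills the level's bits.
            bits = [0] * len(text)
            for c in text:
                node = c >> (height - l)                   # top l bits
                bits[starts[node]] = (c >> (height - l - 1)) & 1
                starts[node] += 1
            levels.append(bits)
        levels.reverse()  # deepest level was built first
        return levels

Each level here is computed from the text and the merged leaf counts alone, which hints at why the technique lends itself to parallel and external-memory variants.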


Electronics · 2021 · Vol. 10 (12) · pp. 1399
Author(s): Taepyeong Kim, Sangun Park, Yongbeom Cho

In this study, we propose a simple and effective memory system for implementing an AI chip. Internal or external memory is an essential component of an AI chip because data is read from and written to memory constantly. The memory systems currently in use are large in design size and complex to implement because they must handle high speeds and wide bandwidths; depending on the AI application, the circuit size of the memory system can therefore exceed that of the AI core itself. In this study, we used SDRAM, which has lower performance than currently used memory systems but is sufficient for running AI workloads, and implemented all circuits digitally for a simple and efficient design. In particular, we designed a delay controller that reduces errors caused by data skew on the memory bus, ensuring stable reads and writes. We first verified the memory system on an FPGA running the You Only Look Once (YOLO) algorithm to confirm that it operates efficiently for AI. Based on the proven memory system, we implemented a chip using Samsung Electronics' 65 nm process and tested it. As a result, we designed a simple and efficient memory system for AI chip implementation and verified it in hardware.
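The role of the delay controller can be pictured with a toy model (our illustration; the paper realizes this as a digital circuit): if each data line of the memory bus arrives with a different skew, adding delay to the early lines so that all of them align with the slowest one lets every bit be sampled in the same window.

    def compensating_delays(line_skews_ps):
        """Toy skew compensation (hypothetical illustration): given each
        data line's measured arrival skew in picoseconds, return the
        extra delay per line that aligns all lines with the slowest one.
        """
        slowest = max(line_skews_ps)
        return [slowest - skew for skew in line_skews_ps]

    # Example: four data lines with unequal trace delays.
    print(compensating_delays([120, 95, 140, 110]))  # -> [20, 45, 0, 30]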

