memory buffer
Recently Published Documents


TOTAL DOCUMENTS: 44 (five years: 14)
H-INDEX: 9 (five years: 2)

2021 ◽  
Author(s):  
Christine Soh ◽  
Charles Yang

A simple memory component is added to local ("Pursuit"; Stevens, Gleitman, Trueswell, and Yang (2017)) and global (e.g., Yu and Smith (2007); Fazly, Alishahi, and Stevenson (2010)) models of cross-situational word learning. Only a finite (and small) number of words can be concurrently learned; successfully learned words are removed from the memory buffer and stored in the lexicon. The memory buffer improves the empirical coverage of both local and global learning models. However, the more complex task of homophone learning (Yurovsky & Yu, 2008) reveals a decisive advantage for the local model (dubbed Memory Bound Pursuit; MBP). Implications and limitations of these results are discussed.
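The buffer mechanism described in the abstract can be sketched as follows. This is a hypothetical simplification, not the authors' implementation: `BUFFER_SIZE`, `THRESHOLD`, and the reward rate `GAMMA` are illustrative values, and the hypothesis update only loosely follows the Pursuit scheme.

```python
# Sketch of a memory-bounded cross-situational learner: at most
# BUFFER_SIZE words are tracked at once; a word whose best meaning
# hypothesis exceeds THRESHOLD graduates to the lexicon, freeing a slot.
BUFFER_SIZE = 3   # illustrative buffer capacity
THRESHOLD = 0.8   # illustrative learning criterion
GAMMA = 0.5       # illustrative reward/penalty rate

buffer = {}   # word -> {candidate meaning: confidence}
lexicon = {}  # word -> learned meaning

def observe(word, meanings):
    """One learning step: the word paired with candidate meanings in a scene."""
    if word in lexicon:
        return
    if word not in buffer:
        if len(buffer) >= BUFFER_SIZE:
            return  # buffer full: this word cannot be tracked yet
        buffer[word] = {m: 1.0 / len(meanings) for m in meanings}
    hyps = buffer[word]
    best = max(hyps, key=hyps.get)
    if best in meanings:
        hyps[best] += GAMMA * (1.0 - hyps[best])  # reward the confirmed guess
    else:
        hyps[best] *= 1.0 - GAMMA                 # penalize the failed guess
        for m in meanings:                        # and entertain new candidates
            hyps.setdefault(m, GAMMA / len(meanings))
    top = max(hyps, key=hyps.get)
    if hyps[top] >= THRESHOLD:                    # learned: move to the lexicon
        lexicon[word] = top
        del buffer[word]
```

Repeated consistent co-occurrences push a word over the threshold and out of the buffer, which is what lets further words enter the bounded store.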


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Damian K. F. Pang ◽  
Stamatis Elntib

A growing body of evidence indicates that information can be stored even in the absence of conscious awareness. Despite these findings, unconscious memory is still poorly understood, with limited evidence for unconscious iconic memory storage. Here we show that strongly masked visual data can be stored and accumulate to elicit clear perception. We used a repetition method across a wide range of conditions (Experiment 1) and a more focused follow-up experiment with enhanced masking conditions (Experiment 2). Information was stored despite being masked, demonstrating that masking did not erase or overwrite memory traces but limited perception. We examined the temporal properties and found that stored information followed a gradual but rapid decay. Extraction of meaningful information was severely impaired after 300 ms, and most data was lost after 700 ms. Our findings are congruent with theories of consciousness that are based on an integration of subliminal information and support theoretical predictions based on the global workspace theory of consciousness, especially the existence of an implicit iconic memory buffer store.


2021 ◽  
Vol 14 (6) ◽  
pp. 1019-1032
Author(s):  
Yuanyuan Sun ◽  
Sheng Wang ◽  
Huorong Li ◽  
Feifei Li

Data confidentiality is one of the biggest concerns that hinders enterprise customers from moving their workloads to the cloud. Thanks to the trusted execution environment (TEE), it is now feasible to build encrypted databases in the enclave that can process customers' data while keeping it confidential to the cloud. Though some enclave-based encrypted databases have emerged recently, a large design space remains unexplored: how confidentiality can be achieved in different ways, and what consequences each choice implies. In this paper, we first provide a broad exploration of possible design choices in building encrypted database storage engines, exposing trade-offs in security, performance, and functionality. We observe that choices on different dimensions can be independent, and their combination determines the overall trade-off of the entire storage engine. We then propose Enclage, an encrypted storage engine that makes practical trade-offs. It adopts many enclave-native designs, such as page-level encryption, reduced enclave interaction, and a hierarchical memory buffer, which offer a strong security guarantee and high performance at the same time. To make better use of the limited enclave memory, we derive the optimal page size in the enclave and adopt delta decryption to access large data pages at low cost. Our experiments show that Enclage outperforms the baseline, a common storage design in many encrypted databases, by over 13x in throughput and about 5x in storage savings.
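To illustrate the page-level encryption and delta-decryption ideas (this is not Enclage's actual code, which runs a real cipher inside an SGX enclave), here is a toy Python sketch. An HMAC-derived keystream stands in for a proper block cipher, and the page size and function names are assumptions; the point is that a byte range of a page can be decrypted without touching the rest of the page.

```python
import hashlib
import hmac

PAGE_SIZE = 4096  # assumed page size; the paper derives an optimal one

def _keystream(key, page_id, offset, length):
    """Deterministic per-page keystream (toy CTR-style stand-in; NOT secure)."""
    out = bytearray()
    block = offset // 32        # each HMAC-SHA256 digest yields 32 bytes
    skip = offset % 32
    while len(out) < length + skip:
        msg = page_id.to_bytes(8, "big") + block.to_bytes(8, "big")
        out += hmac.new(key, msg, hashlib.sha256).digest()
        block += 1
    return bytes(out[skip:skip + length])

def encrypt_page(key, page_id, plaintext):
    """Encrypt a whole page by XOR with its keystream."""
    ks = _keystream(key, page_id, 0, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

def delta_decrypt(key, page_id, ciphertext, offset, length):
    """Decrypt only the requested slot instead of the whole page."""
    ks = _keystream(key, page_id, offset, length)
    return bytes(a ^ b for a, b in zip(ciphertext[offset:offset + length], ks))
```

Because the keystream is addressable by offset, a record lookup inside a large encrypted page costs only the bytes it reads, which is the spirit of delta decryption under tight enclave memory.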


Author(s):  
Il'ya V. Artemov ◽  
Maksim N. Konnov ◽  
Dmitriy V. Patunin ◽  
...  
Keyword(s):  

2021 ◽  
pp. 1-1
Author(s):  
Cunlu Li ◽  
Dezun Dong ◽  
Xiangke Liao ◽  
John Kim

2020 ◽  
Author(s):  
Ruhai Zhang ◽  
Feifei Li ◽  
Shan Jiang ◽  
Kexin Zhao ◽  
Chi Zhang ◽  
...  

The current research investigated the role that prior knowledge plays in determining which structures can be implicitly learnt, and the nature of the memory buffer required for learning such structures. It is already established that people can implicitly learn to detect an inversion symmetry (i.e., a cross-serial dependency) based on linguistic tone types. The present study investigated the ability of the Simple Recurrent Network (SRN) to explain implicit learning of such recursive structures. We found that the SRN learnt the symmetry over tone types more effectively when given prior knowledge of the tone types (i.e., of the two categories the tones were grouped into). The role of prior knowledge of the tone types in learning the inversion symmetry was then tested on people: when an arbitrary classification of tones was used (i.e., in the absence of prior knowledge of categories), participants did not implicitly learn the inversion symmetry, unlike when they did have prior knowledge of the tone types. These results indicate the importance of prior knowledge in the implicit learning of symmetrical structures. We further contrasted the learning of inversion symmetry and retrograde symmetry and showed that inversion was learnt more easily than retrograde by the SRN, matching our previous findings with people and showing that the type of memory buffer used in the SRN is suitable for modeling the implicit learning of symmetry in people.
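A minimal Elman-style SRN forward pass illustrates how the recurrent context layer serves as the memory buffer over a tone sequence. This is a generic sketch, not the authors' model: layer sizes and random weights are illustrative, and training is omitted.

```python
import math
import random

# Elman SRN sketch: the context layer is a copy of the previous hidden
# state, so the network's output at each step depends on the whole
# sequence history, not just the current input (its "memory buffer").
N_IN, N_HID, N_OUT = 4, 8, 4  # illustrative sizes
random.seed(0)
W_ih = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)] for _ in range(N_HID)]
W_hh = [[random.uniform(-0.5, 0.5) for _ in range(N_HID)] for _ in range(N_HID)]
W_ho = [[random.uniform(-0.5, 0.5) for _ in range(N_HID)] for _ in range(N_OUT)]

def step(x, context):
    """One time step: combine current input with the copied context layer."""
    hidden = [math.tanh(sum(W_ih[j][i] * x[i] for i in range(N_IN)) +
                        sum(W_hh[j][k] * context[k] for k in range(N_HID)))
              for j in range(N_HID)]
    output = [sum(W_ho[o][j] * hidden[j] for j in range(N_HID))
              for o in range(N_OUT)]
    return output, hidden  # hidden becomes the next step's context

def run(sequence):
    """Feed a sequence of one-hot tone vectors through the network."""
    context = [0.0] * N_HID
    outputs = []
    for x in sequence:
        y, context = step(x, context)
        outputs.append(y)
    return outputs
```

Feeding the same tone after two different prefixes yields different outputs, which is exactly the history sensitivity needed to detect a cross-serial (inversion) dependency.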


2020 ◽  
Vol 10 (12) ◽  
pp. 4341
Author(s):  
Kyusik Kim ◽  
Seongmin Kim ◽  
Taeseok Kim

Differentiated I/O services for applications with their own requirements are very important for user satisfaction. Nonvolatile memory express (NVMe) solid-state drive (SSD) architecture can improve I/O bandwidth with its numerous submission queues, but the quality of service (QoS) of each I/O request is never guaranteed. In particular, if many I/O requests are pending in the submission queues due to a bursty I/O workload, urgent I/O requests can be delayed, and consequently, the QoS requirements of applications that need fast service cannot be met. This paper presents a scheme that handles urgent I/O requests without delay even when there are many pending I/O requests. Since the pending I/O requests in the submission queues cannot be controlled by the host, the host memory buffer (HMB), a region of host DRAM that the SSD controller can access, is used to process urgent I/O requests. Instead of sending urgent I/O requests to the SSD through the legacy I/O path, the scheme removes queuing latency by inserting them directly into the HMB. Emulator experiments demonstrated that the proposed scheme could reduce the average and tail latencies by up to 99% and 86%, respectively.
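The proposed path can be modeled as two request stores that the controller polls in priority order: the deep legacy submission queue, and an HMB slot that urgent requests bypass into. This is a toy sketch of the idea, not the NVMe driver interface; all names are illustrative.

```python
from collections import deque

# Toy model: urgent requests skip the (possibly deep) NVMe submission
# queue and go into a host-memory-buffer (HMB) area that the controller
# checks before draining the normal queue, so they are never stuck
# behind a burst of pending requests.
submission_queue = deque()  # legacy path: FIFO of pending requests
hmb_urgent = deque()        # HMB area reserved for urgent requests

def submit(request, urgent=False):
    """Host side: route a request to the HMB if urgent, else to the queue."""
    (hmb_urgent if urgent else submission_queue).append(request)

def controller_fetch():
    """Controller side: serve the HMB first, then the legacy queue."""
    if hmb_urgent:
        return hmb_urgent.popleft()
    if submission_queue:
        return submission_queue.popleft()
    return None
```

Even with many requests already pending, an urgent request submitted via the HMB is the next one fetched, which is the latency win the abstract reports.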

