memory models
Recently Published Documents

Total documents: 440 (last five years: 61)
H-index: 36 (last five years: 3)

2021 ◽ Author(s): Zhisheng (Edward) Wen

Given these obvious gaps in the research literature, we set out to compile this comprehensive handbook with the goal of filling the lacunae left by previous research. Furthermore, we aim for theoretical ingenuity and empirical robustness in the individual chapter reviews, and we devote independent sections to key foundational theories, including working memory models and measures in cognitive psychology, as well as the incorporation of working memory into well-established linguistic theories and processing frameworks. To our knowledge, much of this has not been done before. We therefore hope that the handbook's comprehensive coverage of key topics in all these essential areas will benefit not only researchers and students in psychology and linguistics, but also readers from related fields of the cognitive sciences, who may draw insights and inspiration from the chapters herein.


2021 ◽ Vol 39 (2) ◽ pp. 118-144 ◽ Author(s): Andrew Goldman, Peter M. C. Harrison, Tyreek Jackson, Marcus T. Pearce

Electroencephalographic responses to unexpected musical events allow researchers to test listeners’ internal models of syntax. One major challenge is dissociating cognitive syntactic violations—based on the abstract identity of a particular musical structure—from unexpected acoustic features. Despite careful controls in past studies, recent work by Bigand, Delbe, Poulin-Carronnat, Leman, and Tillmann (2014) has argued that ERP findings attributed to cognitive surprisal cannot be unequivocally separated from sensory surprisal. Here we report a novel EEG paradigm that uses three auditory short-term memory models and one cognitive model to predict surprisal as indexed by several ERP components (ERAN, N5, P600, and P3a), directly comparing sensory and cognitive contributions. Our paradigm parameterizes a large set of stimuli rather than using categorically “high” and “low” surprisal conditions, addressing issues with past work in which participants may learn where to expect violations and may be biased by local context. The cognitive model (Harrison & Pearce, 2018) predicted higher P3a amplitudes, as did Leman’s (2000) model, indicating both sensory and cognitive contributions to expectation violation. However, no model predicted ERAN, N5, or P600 amplitudes, raising questions about whether traditional interpretations of these ERP components generalize to broader collections of stimuli or rather are limited to less naturalistic stimuli.
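As a rough sketch of this kind of analysis (not the authors' code), one can correlate a model's per-stimulus surprisal estimates with trial-level ERP amplitudes for a given component, treating surprisal as a continuous predictor rather than a categorical high/low-surprisal condition; all names below are illustrative.

```c
/* Hedged sketch: Pearson correlation between per-stimulus model surprisal
 * and ERP amplitude for one component (e.g. P3a).  Illustrative only. */
#include <math.h>
#include <stddef.h>

double surprisal_erp_correlation(const double *surprisal,
                                 const double *amplitude, size_t n) {
    double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
    for (size_t i = 0; i < n; i++) {
        sx  += surprisal[i];
        sy  += amplitude[i];
        sxx += surprisal[i] * surprisal[i];
        syy += amplitude[i] * amplitude[i];
        sxy += surprisal[i] * amplitude[i];
    }
    double cov = sxy - sx * sy / (double)n;   /* co-variation term */
    double vx  = sxx - sx * sx / (double)n;   /* variance of surprisal */
    double vy  = syy - sy * sy / (double)n;   /* variance of amplitude */
    return cov / sqrt(vx * vy);
}
```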


Author(s): Sadegh Dalvandi, Brijesh Dongol, Simon Doherty, Heike Wehrheim

Weak memory presents a new challenge for program verification and has resulted in the development of a variety of specialised logics. For C11-style memory models, our previous work has shown that it is possible to extend Hoare logic and Owicki–Gries reasoning to verify the correctness of weak memory programs. The technique introduces a set of high-level assertions over C11 states together with a set of basic Hoare-style axioms over atomic weak memory statements (e.g. reads/writes), but retains all other standard proof obligations for compound statements. This paper takes this line of work further by introducing the first deductive verification environment in Isabelle/HOL for C11-like weak memory programs. This verification environment is built on Nipkow and Nieto's encoding of Owicki–Gries in the Isabelle theorem prover. We exemplify our techniques on several litmus tests from the literature and on two non-trivial examples: Peterson's algorithm and a read–copy–update algorithm adapted for C11. For the examples we consider, the proof outlines can be discharged automatically using the existing Isabelle tactics developed by Nipkow and Nieto. The benefit here is that programs can be written using a familiar pseudocode syntax, with assertions embedded directly in the program.
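To make the setting concrete, here is a minimal sketch, not taken from the paper, of the classic C11 message-passing litmus test: the kind of small weak-memory program such Hoare-style proof outlines target. The intended postcondition is that whenever the consumer observes flag == 1, it also reads msg == 42; variable and function names are illustrative.

```c
#include <stdatomic.h>
#include <pthread.h>
#include <assert.h>
#include <stddef.h>

atomic_int msg  = 0;   /* payload (illustrative name) */
atomic_int flag = 0;   /* "message ready" flag */

void *producer(void *arg) {
    atomic_store_explicit(&msg, 42, memory_order_relaxed);
    atomic_store_explicit(&flag, 1, memory_order_release);  /* publish */
    return NULL;
}

void *consumer(void *arg) {
    if (atomic_load_explicit(&flag, memory_order_acquire) == 1) {
        /* {flag == 1}: the release write to msg is now visible here */
        assert(atomic_load_explicit(&msg, memory_order_relaxed) == 42);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, producer, NULL);
    pthread_create(&t2, NULL, consumer, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```

The release store to flag synchronises with the acquire load that reads it, so the relaxed write to msg is guaranteed to be visible in the assert branch.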


2021 ◽ Vol 47 (6) ◽ pp. 439-456 ◽ Author(s): E. Moiseenko, A. Podkopaev, D. Koznov

2021 ◽ Vol 5 (OOPSLA) ◽ pp. 1-30 ◽ Author(s): Truc Lam Bui, Krishnendu Chatterjee, Tushar Gautam, Andreas Pavlogiannis, Viktor Toman

The verification of concurrent programs remains an open challenge due to the non-determinism in inter-process communication. One recurring algorithmic problem in this challenge is the consistency verification of concurrent executions. In particular, consistency verification under a reads-from map makes it possible to compute the reads-from (RF) equivalence between concurrent traces, with direct applications to areas such as Stateless Model Checking (SMC). Importantly, the RF equivalence was recently shown to be coarser than the standard Mazurkiewicz equivalence, leading to impressive scalability improvements for SMC under SC (sequential consistency). However, for the relaxed memory models TSO and PSO (total/partial store order), the algorithmic problem of deciding the RF equivalence, as well as its impact on SMC, has remained elusive. In this work we solve the algorithmic problem of consistency verification for the TSO and PSO memory models given a reads-from map, denoted VTSO-rf and VPSO-rf, respectively. For an execution of n events over k threads and d variables, we establish novel bounds that scale as n^(k+1) for TSO and as n^(k+1) · min(n^(k^2), 2^(k·d)) for PSO. Moreover, based on our solution to these problems, we develop an SMC algorithm under TSO and PSO that uses the RF equivalence. The algorithm is exploration-optimal, in the sense that it is guaranteed to explore each class of the RF partitioning exactly once and spends polynomial time per class when k is bounded. Finally, we implement all our algorithms in the SMC tool Nidhugg and perform a large number of experiments over benchmarks from the existing literature. Our experimental results show that our algorithms for VTSO-rf and VPSO-rf provide significant scalability improvements over standard alternatives. Moreover, when used for SMC, the RF partitioning is often much coarser than the standard Shasha-Snir partitioning for TSO/PSO, which yields a significant speedup in the model-checking task.
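As a hedged illustration (not an example from the paper), consider the store-buffering (SB) litmus test below, written with C11 relaxed atomics, which compile to plain loads and stores on x86. A reads-from map assigns to each load the store it reads from; VTSO-rf then asks whether some TSO execution realises that map. The map in which both loads read the initial value 0 is TSO-consistent, because each store may still be sitting in its thread's store buffer, but it has no witness under sequential consistency.

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>
#include <stddef.h>

/* Shared variables; relaxed atomics compile to plain x86 loads/stores. */
atomic_int x = 0, y = 0;
int r0, r1;

void *thread0(void *arg) {
    atomic_store_explicit(&x, 1, memory_order_relaxed);
    r0 = atomic_load_explicit(&y, memory_order_relaxed);  /* may read 0 */
    return NULL;
}

void *thread1(void *arg) {
    atomic_store_explicit(&y, 1, memory_order_relaxed);
    r1 = atomic_load_explicit(&x, memory_order_relaxed);  /* may read 0 */
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, thread0, NULL);
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    /* The reads-from map where both loads read the initial writes gives
     * r0 == 0 && r1 == 0: realisable under TSO/PSO, impossible under SC. */
    printf("r0 = %d, r1 = %d\n", r0, r1);
    return 0;
}
```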


2021 ◽ Vol 5 (OOPSLA) ◽ pp. 1-27 ◽ Author(s): Ori Lahav, Egor Namakonov, Jonas Oberhauser, Anton Podkopaev, Viktor Vafeiadis

Liveness properties, such as termination, of even the simplest shared-memory concurrent programs under sequential consistency typically require some fairness assumptions about the scheduler. Under weak memory models, we observe that the standard notions of thread fairness are insufficient, and an additional fairness property, which we call memory fairness, is needed. In this paper, we propose a uniform definition of memory fairness that can be integrated into any declarative memory model enforcing acyclicity of the union of the program order and the reads-from relation. For the well-known models SC, x86-TSO, RA, and StrongCOH, which have equivalent operational and declarative presentations, we show that our declarative memory-fairness condition is equivalent to an intuitive, model-specific operational notion of memory fairness, which requires the memory system to execute its internal propagation steps fairly. Our fairness condition preserves the correctness of local transformations and of the compilation scheme from RC11 to x86-TSO, and it also enables the first formal proofs of termination of mutual-exclusion lock implementations under declarative weak memory models.
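A minimal sketch of the phenomenon, under illustrative names and not taken from the paper: the busy-wait loop below terminates only if the memory system eventually propagates the worker's store to the waiting thread. Thread fairness alone guarantees that the waiter keeps getting scheduled, but it is a memory-fairness assumption of this kind that rules out executions in which the store never becomes visible.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stddef.h>

atomic_bool done = false;  /* illustrative completion flag */

void *worker(void *arg) {
    /* ... perform some finite amount of work ... */
    atomic_store_explicit(&done, true, memory_order_release);
    return NULL;
}

void *waiter(void *arg) {
    /* Terminates only if the store to `done` eventually becomes visible. */
    while (!atomic_load_explicit(&done, memory_order_acquire))
        ;  /* spin */
    return NULL;
}

int main(void) {
    pthread_t tw, ts;
    pthread_create(&tw, NULL, worker, NULL);
    pthread_create(&ts, NULL, waiter, NULL);
    pthread_join(tw, NULL);
    pthread_join(ts, NULL);
    return 0;
}
```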


Author(s): Tommi Sottinen, Elisa Alòs, Ehsan Azmoodeh, Giulia Di Nunno

2021 ◽ Author(s): Ricardo Salmon

It is shown that associative memory networks are capable of solving immediate and general reinforcement learning (RL) problems by combining techniques from associative neural networks with reinforcement learning, in particular Q-learning. The modified model is shown to outperform native RL techniques on a stochastic grid-world task by developing correct policies. In addition, we formulate an analogous method that adds feature extraction, in the form of dimensionality reduction, and eligibility traces as a further mechanism to help solve the credit assignment problem. Unlike pure RL methods, the network is based on associative memory principles such as distribution of information, pattern completion, Hebbian learning, and noise tolerance (limit cycles, one-to-many associations, chaos, etc.). It can therefore be argued that the model possesses more cognitive explanatory power than other RL or hybrid models, and it may be an effective tool for bridging the gap between biological memory models and computational memory models.
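For reference, here is a minimal sketch of the standard tabular Q-learning update that the model builds on; the associative-memory machinery (Hebbian storage, pattern completion, noise tolerance) described above is not reproduced here, and all names and constants are illustrative.

```c
#include <stddef.h>

#define N_STATES  25   /* e.g. a 5x5 grid world (illustrative) */
#define N_ACTIONS  4   /* up, down, left, right */

static double Q[N_STATES][N_ACTIONS];  /* action-value table */

/* One Q-learning step:
 * Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)) */
void q_update(int s, int a, double r, int s_next,
              double alpha, double gamma) {
    double best_next = Q[s_next][0];
    for (int b = 1; b < N_ACTIONS; b++)
        if (Q[s_next][b] > best_next)
            best_next = Q[s_next][b];
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a]);
}
```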


