What Do Cows Drink? A Systems Factorial Technology Account of Processing Architecture in Memory Intersection Problems

2019 ◽  
Author(s):  
Zachary L Howard ◽  
Bianca Belevski ◽  
Ami Eidels ◽  
Simon Dennis

It has long been known that cues can be used to improve performance on memory recall tasks. There is evidence to suggest that additional cues provide further benefit, presumably by narrowing the search space. Problems that require the integration of two or more cues, alternately referred to as memory intersections or multiply constrained memory problems, could be approached using several strategies, namely serial or parallel consideration of the cues. The type of strategy implicated is essential information for the development of theories of memory, yet evidence to date has been inconclusive. Using a novel application of the powerful Systems Factorial Technology (Townsend & Nozawa, 1995), we find strong evidence that participants use two cues in parallel in free recall tasks, a finding that contradicts two recent publications in this area. We then provide evidence from a related recognition task showing that, while most participants also used a parallel strategy in that paradigm, a reliable subset used a serial strategy. Our findings suggest a theoretically meaningful distinction between participants' strategies in recall-based and recognition-based intersection memory tasks, and also highlight the importance of tightly controlled methodological and analytic frameworks to overcome issues of serial/parallel model mimicry.
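
For readers unfamiliar with the double-factorial logic behind SFT, the following minimal sketch shows how the survivor interaction contrast (SIC) is commonly estimated from response times collected in the four salience conditions of such a design; the function and variable names are illustrative and are not taken from the paper.

```python
import numpy as np

def survivor(rts, t_grid):
    """Empirical survivor function S(t) = P(RT > t)."""
    rts = np.asarray(rts)
    return np.array([(rts > t).mean() for t in t_grid])

def sic(rt_ll, rt_lh, rt_hl, rt_hh, t_grid):
    """Survivor interaction contrast (Townsend & Nozawa, 1995):
    SIC(t) = [S_LL(t) - S_LH(t)] - [S_HL(t) - S_HH(t)],
    where H/L index high/low salience of the two cues."""
    return ((survivor(rt_ll, t_grid) - survivor(rt_lh, t_grid))
            - (survivor(rt_hl, t_grid) - survivor(rt_hh, t_grid)))
```

Under the usual selective-influence assumptions, the shape of SIC(t) over time separates architectures: for example, serial exhaustive processing predicts a curve that is negative early and positive late, whereas parallel exhaustive processing predicts an entirely non-positive curve.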

2018 ◽  
Author(s):  
Tim Schoof ◽  
Pamela Souza

Objective: Older hearing-impaired adults typically experience difficulties understanding speech in noise. Most hearing aids address this issue using digital noise reduction. While noise reduction does not necessarily improve speech recognition, it may reduce the resources required to process the speech signal. Those available resources may, in turn, aid the ability to perform another task while listening to speech (i.e., multitasking). This study examined to what extent changing the strength of digital noise reduction in hearing aids affects the ability to multitask. Design: Multitasking was measured using a dual-task paradigm combining a speech recognition task and a visual monitoring task. The speech recognition task involved sentence recognition in the presence of six-talker babble at signal-to-noise ratios (SNRs) of 2 and 7 dB. Participants were fitted with commercially available hearing aids programmed under three noise reduction settings: off, mild, and strong. Study sample: 18 hearing-impaired older adults. Results: There were no effects of noise reduction on the ability to multitask or on the ability to recognize speech in noise. Conclusions: Adjusting noise reduction settings in the clinic may therefore not reliably improve performance on tasks such as these.


2020 ◽  
Vol 34 (06) ◽  
pp. 9908-9915
Author(s):  
Sarah Keren ◽  
Haifeng Xu ◽  
Kofi Kwapong ◽  
David Parkes ◽  
Barbara Grosz

We extend goal recognition design to account for partially informed agents. In particular, we consider a two-agent setting in which one agent, the actor, seeks to achieve a goal but has only incomplete information about the environment. The second agent, the recognizer, has perfect information and aims to recognize the actor's goal from its behavior as quickly as possible. As a one-time offline intervention, and with the objective of facilitating the recognition task, the recognizer can selectively reveal information to the actor. The problem of selecting which information to reveal, which we call information shaping, is challenging not only because the space of information shaping options may be large, but also because revealing more information need not make it easier to recognize an agent's goal. We formally define this problem and suggest a pruning approach for searching the large space of shaping options efficiently. We demonstrate the effectiveness and efficiency of the suggested method on standard benchmarks.
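
The abstract does not spell out the paper's pruning rules, so as a rough illustration only, the sketch below enumerates subsets of revealable facts and prunes branches using a caller-supplied optimistic bound; `recognition_cost` and `lower_bound` are hypothetical placeholders. Because revealing more information need not reduce the recognition cost, a simple greedy search does not suffice, which is what motivates bounded enumeration of this kind.

```python
def best_revelation(facts, recognition_cost, lower_bound):
    """Illustrative search over subsets of revealable facts.
    recognition_cost(S): e.g. worst-case time to recognize the actor's
    goal when subset S is revealed. lower_bound(S): an optimistic bound
    valid for S and all of its supersets, used to prune branches."""
    facts = frozenset(facts)
    best_subset, best_cost = frozenset(), recognition_cost(frozenset())
    frontier, seen = [frozenset()], {frozenset()}
    while frontier:
        s = frontier.pop()
        for f in facts - s:
            child = s | {f}
            if child in seen:
                continue
            seen.add(child)
            if lower_bound(child) >= best_cost:
                continue  # prune: no extension of `child` can beat the incumbent
            cost = recognition_cost(child)
            if cost < best_cost:
                best_subset, best_cost = child, cost
            frontier.append(child)
    return best_subset, best_cost
```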


Cognition ◽  
2020 ◽  
Vol 202 ◽  
pp. 104294
Author(s):  
Zachary L. Howard ◽  
Bianca Belevski ◽  
Ami Eidels ◽  
Simon Dennis

2020 ◽  
Author(s):  
Haiyuan Yang ◽  
Daniel R. Little ◽  
Ami Eidels ◽  
James T. Townsend

Systems Factorial Technology (SFT) is a theoretically derived methodology that allows strong inferences to be made about the underlying processing architecture (e.g., whether processing occurs in a pooled, coactive fashion or independently, in serial or in parallel). Measures of mental architecture using SFT have so far been restricted to error-free response times. In this paper, through formal proofs and demonstrations, we extend the measure of architecture, the survivor interaction contrast (SIC), to response times conditioned on whether the response is correct or incorrect. We show that, so long as an ordering relation between stimulus conditions of different difficulty is preserved, unique conditional SIC predictions are found for several classes of processing models. We further prove that this ordering relation holds for the popular Wiener diffusion model for both correct and error RTs, but fails under some instantiations of a Poisson counter model.
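
As a companion to the diffusion-model result, the sketch below simulates first-passage times from a Wiener process, splits them by accuracy, and plugs the accuracy-conditioned survivor functions into the usual interaction contrast. The parameter values and function names are illustrative assumptions, not the paper's formal construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def wiener_rts(drift, bound=1.0, start=0.5, sigma=1.0, dt=1e-3, n=2000):
    """Euler simulation of a Wiener diffusion; hits of the upper bound
    are coded as correct responses, hits of the lower bound as errors."""
    rts, correct = np.empty(n), np.empty(n, dtype=bool)
    for i in range(n):
        x, t = start * bound, 0.0
        while 0.0 < x < bound:
            x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts[i], correct[i] = t, x >= bound
    return rts, correct

def conditional_survivor(rts, correct, t_grid, on_correct=True):
    """Survivor function of RTs conditioned on response accuracy."""
    sel = rts[correct] if on_correct else rts[~correct]
    return np.array([(sel > t).mean() for t in t_grid])

def conditional_sic(cells, t_grid, on_correct=True):
    """Interaction contrast over accuracy-conditioned survivor functions.
    `cells` maps ('L','L'), ('L','H'), ('H','L'), ('H','H') to the
    (rts, correct) arrays from the corresponding factorial condition."""
    S = {k: conditional_survivor(r, c, t_grid, on_correct)
         for k, (r, c) in cells.items()}
    return (S['L', 'L'] - S['L', 'H']) - (S['H', 'L'] - S['H', 'H'])
```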


Author(s):  
Zeqi Tan ◽  
Yongliang Shen ◽  
Shuai Zhang ◽  
Weiming Lu ◽  
Yueting Zhuang

Named entity recognition (NER) is a widely studied task in natural language processing. Recently, a growing number of studies have focused on nested NER. Span-based methods, which treat entity recognition as a span classification task, can handle nested entities naturally, but they suffer from a huge search space and a lack of interaction between entities. To address these issues, we propose a novel sequence-to-set neural network for nested NER. Instead of specifying candidate spans in advance, we provide a fixed set of learnable vectors to learn the patterns of valuable spans. We utilize a non-autoregressive decoder to predict the final set of entities in one pass, within which we are able to capture dependencies between entities. Compared with sequence-to-sequence methods, our model is better suited to this unordered recognition task, as it is insensitive to label order. In addition, we utilize a loss function based on bipartite matching to compute the overall training loss. Experimental results show that our proposed model achieves state-of-the-art results on three nested NER corpora: ACE 2004, ACE 2005, and KBP 2017. The code is available at https://github.com/zqtan1024/sequence-to-set.
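
As a hedged illustration of the bipartite-matching idea (not the paper's exact loss, which also handles unmatched slots with a "no entity" label), the snippet below matches a fixed set of predicted entity slots to gold entities with the Hungarian algorithm and sums the matched costs; the cost matrix here is a placeholder for whatever boundary and type terms the model produces.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def set_matching_loss(cost_matrix):
    """cost_matrix[i, j] is an assumed matching cost between predicted
    slot i and gold entity j (e.g. negative log-likelihood of the gold
    span boundaries and type under slot i). The Hungarian algorithm
    finds the assignment that minimises total cost, which makes the
    loss invariant to the order in which entities are predicted."""
    cost = np.asarray(cost_matrix, dtype=float)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum(), list(zip(rows.tolist(), cols.tolist()))
```

For a rectangular cost matrix (more slots than gold entities), only min(#slots, #gold) pairs are matched; in a full set-prediction model, the leftover slots would be trained toward the "no entity" label.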


Author(s):  
Dr. Sirisha Velampalli

Graphs are used to solve many problems in the real world. At the same time, the size of these graphs makes it complex to analyze the essential information they contain. Graph compression is used to understand the high-level structure of a graph through improved visualization. In this work, we introduce CRADLE (CompRessing grAph data with Domain-independent knowLEdge), a novel method based on a knowledge rule called netting, which reports the number of external networks for each instance of a substructure. By finding substructures with a larger number of external networks, we can judiciously improve the compression rate. We empirically evaluate our approach using synthetic as well as real-world datasets and compare CRADLE with baseline approaches. Our proposed approach is comparable in compression rate, search space, and runtime to other well-known graph mining approaches.
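
The abstract does not define netting formally. Under the simple reading that an instance's "external networks" relate to its connections with the rest of the graph, the sketch below counts boundary edges of a substructure instance; this is an assumption-laden illustration, not CRADLE's implementation.

```python
import networkx as nx

def external_connections(graph: nx.Graph, instance_nodes) -> int:
    """Count edges with exactly one endpoint inside the substructure
    instance. Instances that are highly connected to the rest of the
    graph would, by the intuition in the abstract, be favoured when
    ranking substructures for compression."""
    inside = set(instance_nodes)
    return sum(1 for u, v in graph.edges()
               if (u in inside) != (v in inside))
```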


2021 ◽  
Author(s):  
John P Grogan ◽  
Govind Randhawa ◽  
Minho Kim ◽  
Sanjay G Manohar

Motivation can improve performance when the potential rewards outweigh the cost of the effort expended. In working memory (WM), people can prioritise rewarded items at the expense of unrewarded items, suggesting a fixed memory capacity. But can capacity itself increase with motivation? Across four experiments (N = 30-34), we demonstrate motivational improvements in WM even when all items were rewarded. However, this was not due to better memory precision, but rather to better selection of the probed item within memory. Motivational improvements operated independently of encoding, maintenance, or attention shifts between items in memory. Moreover, motivation slowed responses. This contrasted with the benefits of rewarding items unequally, which allowed prioritisation of one item over another. We conclude that motivation can improve memory recall, not via precision or capacity, but via speed-accuracy trade-offs when selecting the item to retrieve.


1997 ◽  
Vol 5 (3) ◽  
pp. 277-302 ◽  
Author(s):  
Anne M. Raich ◽  
Jamshid Ghaboussi

A new representation combining redundancy and implicit fitness constraints is introduced that performs better than a simple genetic algorithm (GA) and a structured GA in experiments. The implicit redundant representation (IRR) consists of a string that is over-specified, allowing for sections of the string to remain inactive during function evaluation. The representation does not require the user to prespecify the number of parameters to evaluate or the location of these parameters within the string. This information is obtained implicitly by the fitness function during the GA operations. The good performance of the IRR can be attributed to several factors: less disruption of existing fit members due to the increased probability of crossovers and mutation affecting only redundant material; discovery of fit members through the conversion of redundant material into essential information; and the ability to enlarge or reduce the search space dynamically by varying the number of variables evaluated by the fitness function. The IRR GA provides a more biologically parallel representation that maintains a diverse population throughout the evolution process. In addition, the IRR provides the necessary flexibility to represent unstructured problem domains that do not have the explicit constraints required by fixed representations.
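
To make the mechanism concrete, here is a small, hypothetical decoder for an over-specified string in the spirit of the IRR: it scans for a gene-locator pattern and reads a fixed-length parameter field after each occurrence, leaving all other material inactive. The marker and field length are illustrative choices, not the encoding used by Raich and Ghaboussi.

```python
def decode_irr(bits, marker=(1, 1, 1), field_len=8):
    """Scan an over-specified bit string for a (hypothetical) gene-locator
    pattern and decode the fixed-length parameter field that follows each
    match; everything else is treated as redundant, inactive material."""
    params, i, m = [], 0, len(marker)
    while i + m + field_len <= len(bits):
        if tuple(bits[i:i + m]) == marker:
            field = bits[i + m:i + m + field_len]
            params.append(int("".join(map(str, field)), 2))
            i += m + field_len   # jump past the decoded gene
        else:
            i += 1               # redundant bit: remains inactive
    return params
```

Because crossover and mutation frequently land in the redundant stretches between genes, existing fit members are disrupted less often, and converting redundant material into a new gene (or overwriting an old locator) changes how many parameters the fitness function sees.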


Author(s):  
Tiago da Silva Almeida ◽  
André Luiz Gomes de Freitas

Asynchronous finite state machines are of great interest because they require fewer transistors to manufacture. However, finding the minimum resources in logic modeling is an NP-hard problem, since the search space grows exponentially. Although this problem has been studied for some time, there is still room for new research, particularly into computational methods that improve the performance of logic optimization for finite state machines. This paper therefore presents a study and evaluation of heuristic algorithms for the optimization of asynchronous finite state machines, with the goal of obtaining the smallest possible circuit. To that end, the Clause-Column Table and Quine-McCluskey algorithms are combined into an algorithm capable of minimizing asynchronous sequential circuits. Tests show that circuits can be synthesized in reasonable time, although some logical errors remain, indicating that further research is still needed in this long-established line of work.
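
For context, the Quine-McCluskey step referred to above repeatedly merges implicants that differ in a single bit; a minimal sketch is given below. It derives prime implicants only, leaving the cover-selection step (the role played by the Clause-Column Table) aside.

```python
from itertools import combinations

def combine(a, b):
    """Merge two implicants (strings over '0', '1', '-') that differ in
    exactly one cared-for bit, replacing that bit with a don't-care '-'."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
        i = diff[0]
        return a[:i] + '-' + a[i + 1:]
    return None

def prime_implicants(minterms, width):
    """Classic Quine-McCluskey merging pass: combine implicants until no
    further merge is possible; the survivors are the prime implicants."""
    terms = {format(m, f'0{width}b') for m in minterms}
    primes = set()
    while terms:
        merged, used = set(), set()
        for a, b in combinations(terms, 2):
            c = combine(a, b)
            if c is not None:
                merged.add(c)
                used.update((a, b))
        primes |= terms - used
        terms = merged
    return primes

# Example: prime_implicants([0, 1, 2, 5, 6, 7], 3)
# returns {'00-', '0-0', '-01', '-10', '1-1', '11-'}
```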

